Squashed 'third_party/ceres/' changes from 399cda773..7f9cc571b
7f9cc571b Set CMAKE_CUDA_ARCHITECTURES depending on CUDAToolkit_VERSION
6a74af202 Remove a level of indirection when using CellInfo
f8c2994da Drop Ubuntu 20.04 and add Ubuntu 24.04 support
20954e693 Eliminate CUDA set but unused variable warning
57aba3ed0 Enable Apple linker library deduplication
1f15197be Drop macos-11 runner and support macos-14 instead
5de0fda0f Update Github actions
308a5bb43 Add missing include
715865101 Remove 32-bit MinGW from CI matrix
f71181a92 Remove remaining references to CXSparse
522210a08 Reuse macro to format version string
cd2dd06e9 Fix clang16 compiler warnings
1f2e6313a Add missing std qualifiers
125c06882 Link static cuda libs when ceres is built static
62c03d6ff Revert "Update Eigen::BDCSVD usage to comply with Eigen 3.4"
4027f6997 Update Eigen::BDCSVD usage to comply with Eigen 3.4
1b2ebabf5 Typo fixes in the documentation
2ffeb943a Fix typo in AutoDiffManifold comment and docs
da34da3da Remove CreateFakeBundleAdjustmentPartitionedJacobian
85b2c418a ClangTidy fixes
91773746b Simplify instantiation of cost functions and their functors
8b88a9ab4 Use C++17 Bessel functions
84436f791 Use native CMake TBB package configuration
556a56f21 Do not assume Homebrew usage
3fd2a72cc MinGW no longer provides a 32-bit METIS package
e776a6f1a Unbreak the Bazel build.
095d48392 Skip structure detection on preprocessor failure
a0876309a Fix typos in comments
774973731 Fix search on ceres-solver.org
85331393d Update docs for 2.2.0.
2120eae67 Optimize the computation of the LM diagonal in TinySolver
611b139b1 Fix Solver::Options::callbacks type in documentation
b652d3b4f Unbreak the windows build
b79c4d350 ClangTidy fixes
76af132d0 Update docs for 2.2.0RC3
dc7a85975 Single-threaded operations on small vectors
b379ab768 Remove MaxNumThreadsAvailable
354002f98 Schedule task in ParallelFor from the previous task
a9b3fcff4 Minor update to docs
5ccab18be Drop use of POSIX M_PI_2 and M_PI_4
4519b8d77 Drop use of POSIX M_PI
b83abdcb1 Add a default value for Solver::Summary::linear_solver_ordering_type
8d875a312 Fix checks for CUDA memory pools support
94335e3b9 More ClangTidy fixes
399395c4f Miscellaneous ClangTidy fixes
c8bed4b93 Update version_history for 2.2.0rc2
489339219 Rework MSVC warning suppression
0cea191d4 Move stream-ordered memory allocations
8e3b7d89e Fix a copy-pasta error
dc0bb8508 Various cleanups to the documentation
4588b0fbb Add an example for EvaluationCallback
0fc3fd4fc Add documentation for examples
e6b2f532b Parallelize PSE preconditioner
41672dff8 Add end-to-end BA tests for SCHUR_POWER_SERIES_EXPANSION
83ee376d8 Add an example for IterationCallback
371265094 Cleanup example code
59182a42c Update documentation
d4db6e6fe Fix typos in the documentation for EvaluationCallback
dffd8cd71 Add an accessor for the CostFunctor in DynamicAutoDiffCostFunction
bea247701 Add a missing include dir to the cuda kernels target.
18ea7d1c2 Runtime check for cudaMallocAsync support
a227045be Remove cuda-memcheck based tests
d10e786ca Remove an unused variable from CudaSparseMatrix
5a30cae58 Preparing for 2.2.0rc1
9cca67127 Enable compatibility with SuiteSparse 7.2.0
a1c02e8d3 Rework the Sphinx find module
a57e35bba Require at least CMake 3.16
863db948f Eliminate macOS sprintf warning
6f5342db6 Export ceres_cuda_kernels to project's build tree
d864d146f Add macOS 13 runner to Github workflows
01a23504d Add a workaround for CMake Policy CMP0148
de9cbde95 Work around MinGW32 manifold_test segfault
5e4b22f7f Update CudaSparseMatrix class
ed9921fc2 Fix Solver::Options in documentation
de62bf220 Two minor fixes
5f97455be Fix typos in documentation
5ba62abec Add CUDA support to windows CI builds
a98fdf582 Update CMakeLists.txt to fix Windows CUDA build
ec4907399 Fix block-sparse to crs conversion on windows
799ee91bb Fix check in CompressedRowJacobianWriter::CreateJacobian()
ee90f6cbf Detect large Jacobians and return failure instead of crashing.
310a252fb Deal with infinite initial cost correctly.
357482db7 Add NumTraits::max_digits10 for Jets
908b5b1b5 Fix type mismatch in documentation
75bacedf7 CUDA partitioned matrix view
fd6197ce0 Fixed GCC 13.1 compilation errors
df97a8f05 Improve support of older CUDA toolkit versions
085214ea7 Fix test CompressedRowSparseMatrix.Transpose
d880df09f Match new[] with delete[] in BSM
bdee4d617 Block-sparse to CRS conversion using block-structure
0f9de3daf Use page locked memory in BlockSparseMatrix
e7bd72d41 Permutation-based conversion from block-sparse to crs
abbc4e797 Explicitly compute number of non-zeros in row
96fdfd2e7 Implement tests for Euler conversion with jets
db1ebd3ff Work around lack of constexpr constructors for Jet
16a4fa04e Further Jet conversion fixes
92ad18b8a Fix a Jet conversion bug in rotation.h
a5e745d4e ClangTidy fixes
77ad8bb4e Change storage in BlockRandomAccessSparseMatrix
d340f81bd Clang Tidy fixes
54ad3dd03 Reorganize ParallelFor source files
ba360ab07 Change the value of BlockRandomAccessSparseMatrix::kMaxRowBlocks
0315c6ca9 Provide DynamicAutoDiffCostFunction deduction guide
3cdfae110 Replace Hertzberg mentions with citations
0af38a9fc Fix typos in documentation
4c969a6c1 Improve image of loss functions shape
b54f05b8e Add missing TukeyLoss to documentation
8bf4a2f42 Inexact check for ParallelAssign test
9cddce73a Explicit conversions from long to int in benchmarks (for num_threads)
5e787ab70 Using const ints in jacobian writers
f9bffbb6f Removing -Wshorten-64-to-32 warnings from examples (part 1)
e269b64f5 More ClangTidy fixes
f4eb768e0 Using int64 in file.cc. Fixing compilation error in array_utils.cc
74a0f0d24 Using int64_t for sizes and indexes in array utils
749a442d9 Clang-Tidy fixes
c4ba975ae Fix rotation_test.cc to work with older versions of Eigen
9602ed7b7 ClangFormat changes
79a554ffc Fix a bug in QuaternionRotatePoint.
f1113c08a Commenting unused parameters for better readability
772d927e1 Replacing old style typedefs with new style usings
53df5ddcf Removing using std::...
e1ca3302a Increasing bazel timeout for heavy BA tests
f982d3071 Fixing bazel build
cb6b30662 Use hypot to compute the L^2 norm
1e2a24a8b Update Github actions to avoid deprecation warnings
285e5f9f4 Do not update brew formulae upon install
73d95b03f Clang-Tidy fixes
51d52c3ea Correct epsilon in CUDA QR test changed by last commit.
546f5337b Updates to CUDA dense linear algebra tests
19ab2c179 BlockRandomAccessMatrix Refactor
a3a062d72 Add a missing header
2b88bedb2 Remove unused variables
9beea728f Fix a bug in CoordinateDescentMinimizer
8e5d83f07 ClangFormat and ClangTidy changes
b15851508 Parallel operations on vectors
2fd81de12 Add build configuration with CUDA on Linux
06bfe6ffa Remove OpenMP and No threading backends.
352b320ab Fixed SuiteSparse 6.0 version parsing
8fd7828e3 ClangTidy fixes
0424615dc Fix PartitionedMatrixView usage in evaluation_benchmark
77a54dd3d Parallel updates to block-diagonal EtE FtF
946fa50de ClangTidy fixes
e4bef9505 Refactor PartitionedMatrixView to cache the partitions
d3201798e Clean up sparse_cholesky_test
addcd342f ClangTidy fixes
c2e7002d2 Remove an unused variable from evaluation_benchmark.cc
fef6d5875 Parallel left products for PartitionedMatrixView
37a3cb384 Update SuiteSparse in MSVC Github workflow
5d53d1ee3 Parallel for with iteration costs and left product
9aa52c6ff Use FindCUDAToolkit for CMake >= 3.17
47e03a6d8 Add const accessor for Problem::Options used by Problem
d6a931009 Clang Tidy Fixes
9364e31ee Fix a regression in SuiteSparse::AnalyzeCholesky
89b3e1f88 Remove unused includes of gflags and gtest
984079003 Fix missing regex dependency for gtest on QNX
6b296f27f Fix missing namespace qualification and docs for Manifold gtest macro
6685e629f AddBlockStructureTranspose to BlockSparseMatrix
699e3f3b3 Fix a link error in evaluation_benchmark.cc
19a3d07f9 Add multiplication benchmarks on BAL data
b221b1294 Format code with clang-format.
ccf32d70c Purge all remaining references to (defunct) LocalParameterization
5f8c406e2 struct ContextImpl -> class ContextImpl
9893c534c Several cleanups.
a78a57472 ClangTidy fixes
b1fe60330 Parallel right products for partitioned view
16668eedf Fix a memory leak in ContextImpl
d129938d5 Third time is the charm
4129b214a More fixes to cuda_dense_cholesky_test.cc
5c01d2573 Remove unused variables from cuda_dense_cholesky_test.cc
5e877ae69 Fix the Bazel build
d89290ffa Fix evaluation_benchmark compilability
ae7f456e3 ClangTidy fixes
9438c370f Restore the semantics of TrustRegionMinimizer
c964fce90 Fix bug in cuda_kernels_test
afaad5678 Fix a typo
b7116824b Evaluation benchmark
2b89ce66f Add generalized Euler Angle conversions
8230edc6c ClangTidy fixes
9a2894763 Speed up locking when num_threads = 1.
739f2a25a Parallelize block_jacobi_preconditioner
c0c4f9394 Change implementation of parallel for
fc826c578 CUDA Cleanup
660af905f Fix a bug in TrustRegionMinimizer.
4cd257cf4 Let NumericDiffFirstOrderFunction take a dynamically sized parameter vector
6c27ac6d5 Fix repeated SpMV Benchmark
430a292ac Add a missing include
858b4b89b More ClangTidy fixes
e9e995740 ClangTidy fixes
7f8e930a0 Fix lint errors
f86a3bdbe Unify Block handling across matrix types
5f1946879 clang-formatted source
00a05cf70 CUDA SDK Version-based SpMV Selection
de0f74e40 Optimize the BlockCRSJacobiPreconditioner
ba65ddd31 Improvements to benchmarks
42352e2e2 Added CUDA Jacobi Preconditioner
f802a09ff &foo[0] -> foo.data()
344929647 Add CUDA GPU and Runtime Detection
6085e45be Minor CUDA cleanup.
e15ec89f3 Speed up bundle_adjuster
058a72782 One more ClangTidy fix.
f6f2f0d16 ClangTidy cleanups Also some clang-format cleanups.
9d74c6913 Fix some more errant CATD warnings
a4f744095 Remove an unused variable from compressed_row_sparse_matrix.cc
388c14286 Fix GCC 12.1.1 LTO -Walloc-size-larger-than= warnings
9ec4f7e44 Refactor BlockJacobiPreconditioner
adda97acd Fixed a few more missing CERES_NO_CUDA guards
22aeb3584 Fix a missing string assignment in solver.cc
3b891f767 Insert missing CUDA guards.
829089053 CUDA CGNR, Part 4: CudaCgnrSolver
6ab435d77 Fix a missing CERES_NO_CUDA guard
c560bc2be CUDA CGNR, Part 3: CudaSparseMatrix
c914c7a2b CUDA CGNR, Part 2: CudaVector
3af3dee18 Simplify the implementation to convert from BlockSparseMatrix to CompressedRowSparseMatrix.
242fc0795 Remove unnecessary destructors
737200ac8 Add macos-12 to Github workflow runners
9c968d406 Silence Clang warning
2c78c5f33 Small naming fixups.
2a25d86b0 Integrate schur power series expansion options to bundle adjuster
4642e4b0c Use if constexpr and map SparseMatrix as const
92d837953 Enable usage of schur power series expansion preconditioner.
f1dfac8cd Reduce the number of individual PRNG instances
79e403b15 Expand vcpkg installation instructions
7b0bb0e3f ClangTidy cleanups
c5c2afcc9 Fix solver_test.cc for preconditioners and sparse linear algebra libraries
07d333fb6 Refactor options checking for linear solvers
ba7207b0b A number of small changes.
20e85bbe3 Add power series expansion preconditioner
04899645c LinearOperator::FooMultiply -> LinearOperator::FooMultiplyAndAccumulate
288a3fde6 Add missing virtual destructors to matrix adapters
6483a2b4c Add Sergiu's name to the list of maintainers
1cf49d688 Update FindGlog.cmake to create glog::glog target
1da72ac39 Refactor ConjugateGradientsSolver
f62dccdb3 Fix the Sphere and Line Manifold formulations
3e1cc89f6 A bunch of clang-tidy fixes.
80380538a One more CATD fix
560ef46fb A bunch of minor fixes.
67bae28c1 CUDA CGNR, Part 1: Misc. Cleanup
5d0bca14d Remove ceres/internal/random.h in favor of <random>
d881b5ccf Minor fixes in comments
37516c968 Fix a bug in InnerProductComputer.
d8dad14ee CUDA Cleanup
c9d2ec8a9 Updates to sparse block matrix structures to support new sparse linear solvers.
5fe0bd45a Added MinGW to Windows Github workflow
738c027c1 Fix a logic error in iterative_refiner_test
cb6ad463d Add mixed precision support for CPU based DenseCholesky
df55682ba Fix Eigen error in 2D sphere manifolds. Since Eigen does not allow a RowMajor column vector (see https://gitlab.com/libeigen/eigen/-/issues/416), the storage order must be set to ColMajor in that case. This fix adds that special case when generating 2D sphere manifolds.
1cf59f61e Set Github workflow NDK path explicitly
68c53bb39 Remove ceres::LocalParameterization
2f660464c Fix build issue with CUDA testing targets when compiling without gflags.
c801192d4 Minor fixes
ce9e902b8 Fix missing CERES_METIS_VERSION
d9a3dfbf2 Add a missing ifdef guard to dense_cholesky_test
5bd43a1fa Speed up DenseSparseMatrix::SquareColumnNorm.
cbc86f651 Fix the build when CUDA is not present
5af8e6449 Update year in solver.h
88e08cfe7 Mixed-precision Iterative Refinement Cholesky With CUDA
290b34ef0 Fix optional SuiteSparse + METIS test-suite names to be unique
d038e2d83 Fix use of NESDIS with SuiteSparse in tests if METIS is not found
027e741a1 Eliminated MinGW warning
4e5ea292b Fixed MSVC 2022 warning
83f6e0853 Fix use of conditional preprocessor checks within a macro in tests
70f1aac31 Fix fmin/fmax() when using Jets with float as their scalar type
5de77f399 Fix reporting of METIS version
11e637667 Fix #ifdef guards around METIS usage in EigenSparse backend and tests
0c88301e6 Provide optional METIS support
f11c25626 Fix fmin/fmax() to use Jet averaging on equality
b90053f1a Revert C++17 usage of std::exclusive_scan
dfce1e128 Link against threading library only if necessary
69eddfb6d Use find module to link against OpenMP
b4803778c Update documentation for linear_solver_ordering_type
2e764df06 Update Cuda memcheck test
443ae9ce2 Update Cuda memcheck test
55b4c3f44 Retain terminal formatting when building docs
786866d9f Generate version string at compile time
5bd83c4ac Unbreak the build when EIGENSPARSE is disabled
2335b5b4b Remove support for CXSparse
fbc2eea16 Nested dissection for ACCELERATE_SPARSE & EIGEN_SPARSE
d87fd551b Fix Ubuntu 20.04 workflow tests
71717f37c Use glog 0.6 release to run Windows Github workflow
66e0adfa7 Fix detection of sphinx-rtd-theme
d09f7e9d5 Enable postordering when computing the sparse factorization.
9b34ecef1 Unbreak the build on MacOS
8ba8fbb17 Remove Solver::Options::use_postordering
30b4d5df3 Fix the ceres.bzl to add missing cc files.
39ec5e8f9 Add Nested Dissection based fill reducing ordering
aa62dd86a Fix a build breakage
41c5fb1e8 Refactor suitesparse.h/cc
12263e283 Make the min. required version of SuiteSparse to be 4.5.6
c8493fc36 Convert internal enums to be class enums.
bb3a40c09 Add Nested Dissection ordering method to SuiteSparse
f1414cb5b Correct spelling in comments and docs.
fd2b0ceed Correct spelling (contiguous, BANS)
464abc198 Run Linux Github workflow on Ubuntu 22.04
caf614a6c Modernize code using c++17 constructs
be618133e Simplify some template metaprograms using fold expressions.
3b0096c1b Add the ability to specify the pivot threshold in Covariance::Options
40c1a7e18 Fix Github workflows
127474360 Ceres Solver now requires C++17
b5f1b7877 clang-format cleanup
32cd1115c Make the code in small_blas_generic.h more compiler friendly.
f68321e7d Update version history
b34280207 Fix MSVC small_blas_test failures
b246991b6 Update the citation instructions in the docs
c0c14abca Fix version history item numbering
d23dbac25 Update Windows install guide
e669c9fc7 Provide citation file
ff57c2e91 Update version history for 2.1.0rc2
ab9436cb9 Workaround MSVC STL deficiency in C++17 mode
97c232857 Update the included gtest to version 1.11.0
4eac7ddd2 Fix Jet lerp test regression
2ffbe126d Fix Jet test failures on ARMv8 with recent Xcode
0d6a0292c Fix unused arguments of Make1stOrderPerturbation
93511bfdc Fix SuiteSparse path and version reporting
bf329b30f Fix link to macOS badge
0133dada2 Add Github workflows
3d3d6ed71 Add missing includes
0a9c0df8a Fix path for cuda-memcheck tests
ee35ef66f ClangFormat cleanup via scripts/all_format.sh
470515985 Add missing includes for config.h
d3612c12c Set CMP0057 policy for IN_LIST operator in FindSuiteSparse.cmake
4bc100c13 Do not define unusable import targets
e91995cce Fix Ubuntu 18.04 shared library build
94af09186 Force C++ linker
a65e73885 Update installation docs
1a377d707 Fix Ubuntu 20.04 SPQR build
817f5a068 Switch to imported SuiteSparse, CXSparse, and METIS targets
b0f32a20d Hide remaining internal symbols
572395098 Add a missing include
b0aef211d Allow storing pointers in ProductManifold
9afe8cc45 Small compile fix to context_impl
284be88ca Allow ProductManifold default construction
f59059fff Bugfix to CUDA workspace handling
779634164 Fix MSVC linker error
7743d2e73 Store ProductManifold instances in a tuple
9f32c42ba Update Travis-CI status badge to .com from .org
eadfead69 Move LineManifold and SphereManifold into their own headers.
f0f8f93bb Fix docs inconsistencies
e40391efa Update version history in preparation for 2.1.0
6a37fbf9b Add static/compile time sizing to EuclideanManifold
4ad787ce1 Fix the bazel build
ae4d95df6 Two small clang-tidy fixes
f0851667b Fix MSVC compilation errors
c8658c899 Modernize more
46b3495a4 Standardize path handling using GNUInstallDirs
98bc3ca17 Fix shared library build due to missing compile features specification
8fe8ebc3a Add final specifier to public classes
84e1696f4 Add final specifier to internal classes.
518970f81 Context should be exported
09ec4997f Cleanup examples
90e58e10f Add missing #include.
15348abe9 Add CUDA based bundle adjustment tests.
57ec9dc92 Do not enforce a specific C++ standard
99698f053 Fix Apple Clang weak symbols warnings
8e0842162 Add support for dense CUDA solvers #3
aff51c907 Revert "Do not enforce a specific C++ standard"
527c3f7da Fixed gflags dependency
d839b7792 Do not enforce a specific C++ standard
f71167c62 Fixed missing include in minilog build
47502b833 Miscellaneous CUDA related changes.
bb2996681 Check CUDA is available in solver.cc
7d2e4152e Add support for dense CUDA solvers #2
f90833f5f Simplify symbol export
c6158e0ab Replace NULL by nullptr
7e4f5a51b Remove blas.h/cc as they are not used anymore.
e0fef6ef0 Add cmake option ENABLE_BITCODE for iOS builds
446487c54 Add <memory> header to all files using std::unique_ptr.
9c5f29d46 Use compiler attributes instead of [[deprecated]]
44039af2c Convert factory functions to return std::unique_ptrs.
708a2a723 Silence LocalParameterization deprecation warnings
de69e657a Fix another missing declaration warning
677711138 Fix segmentation fault in AVX2 builds
0141ca090 Deprecate LocalParameterizations
fdfa5184a Fix missing declaration warning
4742bf386 Add LineManifold.
c14f360e6 Drop trivial special members
ae65219e0 ClangTidy cleanups
a35bd1bf9 Use = default for trivial special members
db67e621e Fix dense_cholesky_test.cc comma handling.
484d3414e Replace virtual keyword by override
2092a720e Fix some nits.
36d6d8690 Add support for dense CUDA solvers #1
af5e48c71 Add SphereManifold.
182cb01c5 Normalize Jet classification and comparison
9dbd28989 Loosen tolerances in dense_qr_test.cc
cab853fd5 Add DenseQR Interface
8ae054ad9 Fix missing declaration warning in autodiff_manifold_test
408af7b1a Move the constructor and destructor for SchurComplementSolver
d51672d1c Move the constructor and destructor for DenseSchurComplementSolver
177b2f99d Use benchmark version 1.6 compatible syntax.
ce9669003 Add const accessor for functor wrapped by auto/numeric-diff objects
9367ec9cc Add a benchmark for dense linear solvers.
7d6524daf Support fma Jet
a0d81ad63 Fix a bug in AutoDiffManifold
5a99e42e1 ClangTidy fixes
40fb41355 Update .gitignore
e6e6ae087 Unbreak the bazel build
0572efc57 Fix a compilation warning in autodiff_manifold.h
475db73d0 Fix build breakage when LAPACK support is disabled.
095c9197f Fix iterative_refiner_test.cc
6d06e9b98 Add DenseCholesky
77c0c4d09 Migrate examples to use Manifolds
19eef54fc Rename Quaternion to QuaternionManifold. Also rename EigenQuaternion to EigenQuaternionManifold.
ca6d841c2 Add AutoDiffManifold
97d7e0737 Move the manifold testing matchers to manifold_test_utils.h
16436b34b Fix some more clang-tidy suggestions.
dcdefc216 Fix a bunch of clang-tidy suggestions.
d8a1b69ab Remove an unused variable from gradient_checker.cc
611b46b54 Remove the use of CHECK_NOTNULL.
125a0e9be LocalParameterization -> Manifold #1
00bfbae11 Add missing algorithm header to manifold.cc
1d5aff059 Refactor jet_tests.cc
fbd693091 Fix two unused variable warnings.
c0cb42e5f Add Problem::HasParameterization
7e2f9d9d4 Add EigenQuaternion manifold and tests for it.
b81a8bbb7 Add the Quaternion manifold and tests.
4a01dcb88 Add more invariants and documentation to manifold_test.cc
bdd80fcce Improve Manifold testing
ce1537030 Fixed missing headers in manifold.h
23b204d7e LocalParameterization -> Manifold #1
c2fab6502 Fix docs of supported sparse backends for mixed_precision_solves option
8cb441c49 Fix missing declaration warnings in GCC
d2b7f337c Remove split.cc from the bazel sources.
206061a6b Use standard c++ types in jet_test.cc
1f374a9fe Support promotion in comparison between Jet and scalars
06e68dbc5 Avoid midpoint overflow in the differential
276d24c73 Fix C++20 compilation
b1391e062 Support midpoint Jet
8426526df Support lerp Jet
57c279689 support 3-argument hypot jet
123fba61c Eigen::MappedSparseMatrix -> Eigen::Map<Eigen::SparseMatrix>
3f950c66d reworked copysign tests
48cb54d1b fix fmin and fmax NaN handling
552a4e517 support log10 jet
4e49c5422 reuse expm1 result for differential
8d3e64dd5 Use modern-style Eigen3 CMake variables
2fba61434 support log1p and expm1 jet
a668cabbc support norm jet
a3a4b6d77 support copysign jet
b75dac169 fix abs jet test comment
034bf566f use copysign for abs jet
31008453f Add example for BiCubicInterpolator
7ef4a1221 Add a section on implicit and inverse function theorems
686428f5c Move the further reading section to bibliography
e47d87fdd Add a note about Triggs' correction
d2852518d Fix the docs for Problem::RemoveResidualBlock & Problem::RemoveParameterBlock
06e02a173 Delete unused files split.h/cc
ac7268da5 Fix an 80cols issue in covariance_impl.cc
17dccef91 Add NumericDiffFirstOrderFunction
03d64141a Fix an incorrect check in reorder_program.cc
884111913 Two changes to TinySolver
4dff3ea2c Fix a number of typos in rotation.h
98719ced4 Fix a typo in interfacing_with_autodiff.html
0299ce944 Update conf.py to be compatible with Sphinx 4.1.2
2a2b9bd6f Fix a bug in covariance_impl.cc
27fade7b8 Fix a bug in system_test.cc
d4eb83ee5 Fix the Jacobian in trust_region_minimizer_test.cc
5f6071a1c Fix a bug in local_parameterization_test.cc
b2e732b1e Fix errors in comments from William Gandler.
42f1d6717 Add accessors to GradientProblem
aefd37b18 Refactor small_blas_gemm_benchmark
dc20db303 [docs] Fix `IterationSummary` section. Add missing `IterationCallback`
90ba7d1ef [docs] Fix typos
c3129c3d4 Fix tests not executing
7de561e8e Fix dependency check for building documentation
4fbe218f2 Refactor small_blas_test
3fdde8ede Remove an errant double link.
20ad431f7 Fixing a typo in the version history
0c85c4092 Revert "Reduce copies involved in Jet operations"
3a02d5aaf Fix typo in LossFunctionWrapper sample code
7b2c223be Add fmax/fmin overloads for scalars
c036c7819 Reduce copies involved in Jet operations
51945e061 Introduce benchmark for Jet operations
ec4f2995b Do not check MaxNumThreadsAvailable if the thread number is set to 1.
98f639f54 Add a macro CERES_GET_FLAG.
766f2cab5 Reduce log spam in covariance_impl.cc.
941ea1347 Fix FindTBB version detection with TBB >= 2021.1.1
323c350a6 fix Eigen3_VERSION
2b32b3212 Revert "Group specializations into groups of four"
313caf1ae Allow Unity build.
4ba244cdb Group specializations into groups of four
d77a8100a Make miniglog's InitGoogleLogging argument const.
863724994 Use portable expression for constant 2/sqrt(pi)
97873ea65 Add some missing includes for glog/logging.h
d15b1bcd3 Increase tolerance in small_blas_test.cc
17cf01831 Hide 'format not a string literal' error in examples
64029909b Fix -Wno-maybe-uninitialized error
21294123d Fix nonnull arg compared to NULL error.
1dd417410 Fix -Wno-format-nonliteral
6c106bf51 Fix -Wmissing-field-initializers error
c48a32792 Use cc_binary includes so examples build as external repo
e0e14a5cd Fix errors found by -Werror
e84cf10e1 Fix an errant double in TinySolver.
66b4c33e8 updated unit quaternion rotation
d45ec47b5 Fix a typo in schur_eliminator.h
Change-Id: I60db062e44d051d50dbb3a145eec2f74d5190481
git-subtree-dir: third_party/ceres
git-subtree-split: 7f9cc571b03632f1df93ea35725a1f5dfffe2c72
Signed-off-by: Austin Schuh <austin.linux@gmail.com>
diff --git a/.github/workflows/android.yml b/.github/workflows/android.yml
new file mode 100644
index 0000000..ee5e966
--- /dev/null
+++ b/.github/workflows/android.yml
@@ -0,0 +1,171 @@
+name: Android
+
+on: [push, pull_request]
+
+jobs:
+ build-android:
+ name: NDK-${{matrix.abi}}-${{matrix.build_type}}-${{matrix.lib}}
+ runs-on: ${{matrix.os}}
+ defaults:
+ run:
+ shell: bash -e -o pipefail {0}
+ env:
+ CCACHE_DIR: ${{github.workspace}}/ccache
+ CMAKE_GENERATOR: Ninja
+ DEBIAN_FRONTEND: noninteractive
+ strategy:
+ fail-fast: true
+ matrix:
+ os:
+ - ubuntu-20.04
+ abi:
+ - arm64-v8a
+ - armeabi-v7a
+ - x86
+ - x86_64
+ build_type:
+ - Release
+ lib:
+ - shared
+ - static
+ android_api_level:
+ - '28'
+
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Setup Dependencies
+ run: |
+ sudo apt-get update
+ sudo apt-get install -y \
+ ccache \
+ ninja-build
+
+ # Ensure the declared NDK version is always installed even if it's removed
+ # from the virtual environment.
+ - name: Setup NDK
+ env:
+ ANDROID_NDK_VERSION: 23.2.8568313
+ ANDROID_SDK_ROOT: /usr/local/lib/android/sdk
+ run: |
+ echo 'y' | ${{env.ANDROID_SDK_ROOT}}/cmdline-tools/latest/bin/sdkmanager --install 'ndk;${{env.ANDROID_NDK_VERSION}}'
+ echo "ANDROID_NDK_ROOT=${{env.ANDROID_SDK_ROOT}}/ndk/${{env.ANDROID_NDK_VERSION}}" >> $GITHUB_ENV
+
+ - name: Cache Eigen
+ id: cache-eigen
+ uses: actions/cache@v4
+ with:
+ path: eigen/
+ key: NDK-${{matrix.os}}-eigen-3.4.0-${{matrix.abi}}
+
+ - name: Download Eigen
+ if: steps.cache-eigen.outputs.cache-hit != 'true'
+ run: |
+ wget https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.zip
+ unzip eigen-3.4.0.zip
+
+ - name: Setup Eigen
+ if: steps.cache-eigen.outputs.cache-hit != 'true'
+ run: |
+ cmake -S eigen-3.4.0 -B build-eigen \
+ -DBUILD_TESTING=OFF \
+ -DCMAKE_ANDROID_API=${{matrix.android_api_level}} \
+ -DCMAKE_ANDROID_ARCH_ABI=${{matrix.abi}} \
+ -DCMAKE_ANDROID_STL_TYPE=c++_shared \
+ -DCMAKE_Fortran_COMPILER= \
+ -DCMAKE_INSTALL_PREFIX=${{github.workspace}}/eigen \
+ -DCMAKE_SYSTEM_NAME=Android \
+ -DEIGEN_BUILD_DOC=OFF
+ cmake --build build-eigen \
+ --config ${{matrix.build_type}} \
+ --target install
+
+ - name: Cache gflags
+ id: cache-gflags
+ uses: actions/cache@v4
+ with:
+ path: gflags/
+ key: NDK-${{matrix.os}}-gflags-2.2.2-${{matrix.abi}}-${{matrix.build_type}}-${{matrix.lib}}
+
+ - name: Download gflags
+ if: steps.cache-gflags.outputs.cache-hit != 'true'
+ run: |
+ wget https://github.com/gflags/gflags/archive/refs/tags/v2.2.2.zip
+ unzip v2.2.2.zip
+
+ - name: Setup gflags
+ if: steps.cache-gflags.outputs.cache-hit != 'true'
+ run: |
+ cmake -S gflags-2.2.2 -B build-gflags \
+ -DBUILD_SHARED_LIBS=${{matrix.lib == 'shared'}} \
+ -DBUILD_TESTING=OFF \
+ -DCMAKE_ANDROID_API=${{matrix.android_api_level}} \
+ -DCMAKE_ANDROID_ARCH_ABI=${{matrix.abi}} \
+ -DCMAKE_ANDROID_STL_TYPE=c++_shared \
+ -DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
+ -DCMAKE_INSTALL_PREFIX=${{github.workspace}}/gflags \
+ -DCMAKE_SYSTEM_NAME=Android
+ cmake --build build-gflags \
+ --config ${{matrix.build_type}} \
+ --target install
+
+ - name: Cache glog
+ id: cache-glog
+ uses: actions/cache@v4
+ with:
+ path: glog/
+ key: NDK-${{matrix.os}}-glog-0.5-${{matrix.abi}}-${{matrix.build_type}}-${{matrix.lib}}
+
+ - name: Download glog
+ if: steps.cache-glog.outputs.cache-hit != 'true'
+ run: |
+ wget https://github.com/google/glog/archive/refs/tags/v0.5.0.zip
+ unzip v0.5.0.zip
+
+ - name: Setup glog
+ if: steps.cache-glog.outputs.cache-hit != 'true'
+ run: |
+ cmake -S glog-0.5.0 -B build-glog \
+ -DBUILD_SHARED_LIBS=${{matrix.lib == 'shared'}} \
+ -DBUILD_TESTING=OFF \
+ -DCMAKE_ANDROID_API=${{matrix.android_api_level}} \
+ -DCMAKE_ANDROID_ARCH_ABI=${{matrix.abi}} \
+ -DCMAKE_ANDROID_STL_TYPE=c++_shared \
+ -DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
+ -DCMAKE_FIND_ROOT_PATH=${{github.workspace}}/gflags \
+ -DCMAKE_INSTALL_PREFIX=${{github.workspace}}/glog \
+ -DCMAKE_SYSTEM_NAME=Android
+ cmake --build build-glog \
+ --config ${{matrix.build_type}} \
+ --target install
+
+ - name: Cache Build
+ id: cache-build
+ uses: actions/cache@v4
+ with:
+ path: ${{env.CCACHE_DIR}}
+ key: NDK-${{matrix.os}}-ccache-${{matrix.abi}}-${{matrix.build_type}}-${{matrix.lib}}-${{github.run_id}}
+ restore-keys: NDK-${{matrix.os}}-ccache-${{matrix.abi}}-${{matrix.build_type}}-${{matrix.lib}}-
+
+ - name: Setup Environment
+ if: matrix.build_type == 'Release'
+ run: |
+ echo 'CXXFLAGS=-flto' >> $GITHUB_ENV
+
+ - name: Configure
+ run: |
+ cmake -S . -B build_${{matrix.abi}} \
+ -DBUILD_SHARED_LIBS=${{matrix.lib == 'shared'}} \
+ -DCMAKE_ANDROID_API=${{matrix.android_api_level}} \
+ -DCMAKE_ANDROID_ARCH_ABI=${{matrix.abi}} \
+ -DCMAKE_ANDROID_STL_TYPE=c++_shared \
+ -DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
+ -DCMAKE_C_COMPILER_LAUNCHER=$(which ccache) \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=$(which ccache) \
+ -DCMAKE_FIND_ROOT_PATH="${{github.workspace}}/eigen;${{github.workspace}}/gflags;${{github.workspace}}/glog" \
+ -DCMAKE_SYSTEM_NAME=Android
+
+ - name: Build
+ run: |
+ cmake --build build_${{matrix.abi}} \
+ --config ${{matrix.build_type}}
diff --git a/.github/workflows/linux.yml b/.github/workflows/linux.yml
new file mode 100644
index 0000000..8cd9d8e
--- /dev/null
+++ b/.github/workflows/linux.yml
@@ -0,0 +1,122 @@
+name: Linux
+
+on: [push, pull_request]
+
+jobs:
+ build:
+ name: ${{matrix.os}}-${{matrix.build_type}}-${{matrix.lib}}-${{matrix.gpu}}
+ runs-on: ubuntu-latest
+ container: ${{matrix.os}}
+ defaults:
+ run:
+ shell: bash -e -o pipefail {0}
+ env:
+ CCACHE_DIR: ${{github.workspace}}/ccache
+ CMAKE_GENERATOR: Ninja
+ DEBIAN_FRONTEND: noninteractive
+ strategy:
+ fail-fast: true
+ matrix:
+ os:
+ - ubuntu:22.04
+ - ubuntu:24.04
+ build_type:
+ - Release
+ lib:
+ - shared
+ - static
+ gpu:
+ - cuda
+ - no-cuda
+
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Setup Dependencies
+ run: |
+ apt-get update
+ apt-get install -y \
+ build-essential \
+ ccache \
+ cmake \
+ libbenchmark-dev \
+ libblas-dev \
+ libeigen3-dev \
+ libgflags-dev \
+ libgoogle-glog-dev \
+ liblapack-dev \
+ libmetis-dev \
+ libsuitesparse-dev \
+ ninja-build \
+ wget
+
+ # nvidia cuda toolkit + gcc combo shipped with 22.04LTS is broken
+ # and is not able to compile code that uses thrust
+ # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1006962
+ - name: Setup CUDA Toolkit Repositories (22.04)
+ if: matrix.gpu == 'cuda' && matrix.os == 'ubuntu:22.04'
+ run: |
+ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
+ dpkg -i cuda-keyring_1.0-1_all.deb
+
+ - name: Setup CUDA Toolkit Repositories (24.04)
+ if: matrix.gpu == 'cuda' && matrix.os == 'ubuntu:24.04'
+ run: |
+ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
+ dpkg -i cuda-keyring_1.1-1_all.deb
+
+ - name: Setup CUDA Toolkit (<24.04)
+ if: matrix.gpu == 'cuda' && matrix.os != 'ubuntu:24.04'
+ run: |
+ apt-get update
+ apt-get install -y cuda
+ echo "CUDACXX=/usr/local/cuda/bin/nvcc" >> $GITHUB_ENV
+
+ - name: Setup CUDA Toolkit (>=24.04)
+ if: matrix.gpu == 'cuda' && matrix.os == 'ubuntu:24.04'
+ run: |
+ apt-get update
+ apt-get install -y nvidia-cuda-toolkit
+ echo "CUDACXX=/usr/lib/nvidia-cuda-toolkit/bin/nvcc" >> $GITHUB_ENV
+
+ - name: Cache Build
+ id: cache-build
+ uses: actions/cache@v4
+ with:
+ path: ${{env.CCACHE_DIR}}
+ key: ${{matrix.os}}-ccache-${{matrix.build_type}}-${{matrix.lib}}-${{matrix.gpu}}-${{github.run_id}}
+ restore-keys: ${{matrix.os}}-ccache-${{matrix.build_type}}-${{matrix.lib}}-${{matrix.gpu}}-
+
+ - name: Setup Environment
+ if: matrix.build_type == 'Release'
+ run: |
+ echo 'CXXFLAGS=-flto' >> $GITHUB_ENV
+
+ - name: Configure
+ run: |
+ cmake -S . -B build_${{matrix.build_type}} \
+ -DBUILD_SHARED_LIBS=${{matrix.lib == 'shared'}} \
+ -DUSE_CUDA=${{matrix.gpu == 'cuda'}} \
+ -DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
+ -DCMAKE_C_COMPILER_LAUNCHER=$(which ccache) \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=$(which ccache) \
+ -DCMAKE_INSTALL_PREFIX=${{github.workspace}}/install
+
+ - name: Build
+ run: |
+ cmake --build build_${{matrix.build_type}} \
+ --config ${{matrix.build_type}}
+
+ - name: Test
+ if: matrix.gpu == 'no-cuda'
+ run: |
+ cd build_${{matrix.build_type}}/
+ ctest --config ${{matrix.build_type}} \
+ --output-on-failure \
+ -j$(nproc)
+
+ - name: Install
+ run: |
+ cmake --build build_${{matrix.build_type}}/ \
+ --config ${{matrix.build_type}} \
+ --target install
diff --git a/.github/workflows/macos.yml b/.github/workflows/macos.yml
new file mode 100644
index 0000000..89c3534
--- /dev/null
+++ b/.github/workflows/macos.yml
@@ -0,0 +1,108 @@
+name: macOS
+
+on: [push, pull_request]
+
+jobs:
+ build:
+ name: ${{matrix.os}}-${{matrix.build_type}}-${{matrix.lib}}-${{matrix.target}}
+ runs-on: ${{matrix.os}}
+ defaults:
+ run:
+ shell: bash -e -o pipefail {0}
+ env:
+ CCACHE_DIR: ${{github.workspace}}/ccache
+ CMAKE_GENERATOR: Ninja
+ strategy:
+ fail-fast: true
+ matrix:
+ os:
+ - macos-12
+ - macos-13
+ - macos-14
+ build_type:
+ - Release
+ lib:
+ - shared
+ - static
+ target:
+ - host
+ - ios
+
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Setup Dependencies (iOS)
+ if: matrix.target == 'ios'
+ run: |
+ brew install \
+ ccache \
+ eigen \
+ ninja
+
+ - name: Setup Dependencies (Host)
+ if: matrix.target == 'host'
+ run: |
+ brew install \
+ ccache \
+ eigen \
+ gflags \
+ glog \
+ google-benchmark \
+ metis \
+ ninja \
+ suite-sparse
+
+ - name: Cache Build
+ id: cache-build
+ uses: actions/cache@v4
+ with:
+ path: ${{env.CCACHE_DIR}}
+ key: ${{matrix.os}}-ccache-${{matrix.build_type}}-${{matrix.lib}}-${{matrix.target}}-${{github.run_id}}
+ restore-keys: ${{matrix.os}}-ccache-${{matrix.build_type}}-${{matrix.lib}}-${{matrix.target}}-
+
+ - name: Setup Environment
+ if: matrix.build_type == 'Release'
+ run: |
+ echo 'CXXFLAGS=-flto' >> $GITHUB_ENV
+
+ - name: Configure (iOS)
+ if: matrix.target == 'ios'
+ run: |
+ cmake -S . -B build_${{matrix.build_type}} \
+ -DBUILD_SHARED_LIBS=${{matrix.lib == 'shared'}} \
+ -DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
+ -DCMAKE_C_COMPILER_LAUNCHER=$(which ccache) \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=$(which ccache) \
+ -DCMAKE_TOOLCHAIN_FILE=${{github.workspace}}/cmake/iOS.cmake \
+ -DEigen3_DIR=$(brew --prefix)/share/eigen3/cmake \
+ -DIOS_PLATFORM=OS \
+ -DCMAKE_INSTALL_PREFIX=${{github.workspace}}/install
+
+ - name: Configure (Host)
+ if: matrix.target == 'host'
+ run: |
+ cmake -S . -B build_${{matrix.build_type}} \
+ -DBUILD_SHARED_LIBS=${{matrix.lib == 'shared'}} \
+ -DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
+ -DCMAKE_C_COMPILER_LAUNCHER=$(which ccache) \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=$(which ccache) \
+ -DCMAKE_INSTALL_PREFIX=${{github.workspace}}/install
+
+ - name: Build
+ run: |
+ cmake --build build_${{matrix.build_type}} \
+ --config ${{matrix.build_type}}
+
+ - name: Test
+ if: matrix.target == 'host'
+ run: |
+ ctest --test-dir build_${{matrix.build_type}} \
+ --config ${{matrix.build_type}} \
+ --output-on-failure \
+ -j$(sysctl -n hw.ncpu)
+
+ - name: Install
+ run: |
+ cmake --build build_${{matrix.build_type}}/ \
+ --config ${{matrix.build_type}} \
+ --target install
diff --git a/.github/workflows/windows.yml b/.github/workflows/windows.yml
new file mode 100644
index 0000000..c340f9f
--- /dev/null
+++ b/.github/workflows/windows.yml
@@ -0,0 +1,259 @@
+name: Windows
+
+on: [push, pull_request]
+
+jobs:
+ build-mingw:
+ name: ${{matrix.sys}}-${{matrix.env}}-${{matrix.build_type}}-${{matrix.lib}}
+ runs-on: windows-latest
+ defaults:
+ run:
+ shell: msys2 {0}
+ env:
+ CCACHE_DIR: ${{github.workspace}}/ccache
+ strategy:
+ fail-fast: true
+ matrix:
+ build_type: [Release]
+ sys: [mingw64]
+ lib: [shared, static]
+ include:
+ - sys: mingw64
+ env: x86_64
+
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Setup Dependencies
+ uses: msys2/setup-msys2@v2
+ with:
+ msystem: ${{matrix.sys}}
+ install: >-
+ mingw-w64-${{matrix.env}}-ccache
+ mingw-w64-${{matrix.env}}-cmake
+ mingw-w64-${{matrix.env}}-eigen3
+ mingw-w64-${{matrix.env}}-gcc
+ mingw-w64-${{matrix.env}}-gflags
+ mingw-w64-${{matrix.env}}-glog
+ ${{matrix.sys == 'mingw64' && format('mingw-w64-{0}-metis', matrix.env) || ''}}
+ mingw-w64-${{matrix.env}}-ninja
+ mingw-w64-${{matrix.env}}-suitesparse
+
+ - name: Setup Environment
+ if: ${{matrix.build_type == 'Release'}}
+ run: |
+ echo 'CFLAGS=-flto' >> ~/.bash_profile
+ echo 'CXXFLAGS=-flto' >> ~/.bash_profile
+
+ - name: Cache Build
+ id: cache-build
+ uses: actions/cache@v4
+ with:
+ path: ${{env.CCACHE_DIR}}
+ key: ${{runner.os}}-${{matrix.sys}}-${{matrix.env}}-${{matrix.build_type}}-${{matrix.lib}}-ccache-${{github.run_id}}
+ restore-keys: ${{runner.os}}-${{matrix.sys}}-${{matrix.env}}-${{matrix.build_type}}-${{matrix.lib}}-ccache-
+
+ - name: Configure
+ run: |
+ cmake -S . -B build_${{matrix.build_type}}/ \
+ -DBUILD_SHARED_LIBS=${{matrix.lib == 'shared'}} \
+ -DCMAKE_BUILD_TYPE=${{matrix.build_type}} \
+ -DCMAKE_C_COMPILER_LAUNCHER:FILEPATH=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER:FILEPATH=ccache \
+ -G Ninja
+
+ - name: Build
+ run: |
+ cmake --build build_${{matrix.build_type}}/ \
+ --config ${{matrix.build_type}}
+
+ - name: Test
+ run: |
+ cd build_${{matrix.build_type}}/
+ ctest --config ${{matrix.build_type}} \
+ --output-on-failure \
+ -j$(nproc)
+
+ - name: Install
+ run: |
+ cmake --build build_${{matrix.build_type}}/ \
+ --config ${{matrix.build_type}} \
+ --target install
+
+ build-msvc:
+ name: ${{matrix.msvc}}-${{matrix.arch}}-${{matrix.build_type}}-${{matrix.lib}}-${{matrix.gpu}}
+ runs-on: ${{matrix.os}}
+ defaults:
+ run:
+ shell: powershell
+ env:
+ CL: /MP
+ CMAKE_GENERATOR: ${{matrix.generator}}
+ CMAKE_GENERATOR_PLATFORM: ${{matrix.arch}}
+ strategy:
+ fail-fast: true
+ matrix:
+ arch:
+ - x64
+ build_type:
+ - Release
+ msvc:
+ - VS-16-2019
+ - VS-17-2022
+ lib:
+ - shared
+ gpu:
+ - cuda
+ - no-cuda
+ include:
+ - msvc: VS-16-2019
+ os: windows-2019
+ generator: 'Visual Studio 16 2019'
+ marker: vc16
+ - msvc: VS-17-2022
+ os: windows-2022
+ generator: 'Visual Studio 17 2022'
+ marker: vc17
+
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Download and install CUDA toolkit
+ if: matrix.gpu == 'cuda'
+ run: |
+ Invoke-WebRequest https://developer.download.nvidia.com/compute/cuda/12.2.1/network_installers/cuda_12.2.1_windows_network.exe -OutFile cuda_toolkit_windows.exe
+ Start-Process -Wait -FilePath .\cuda_toolkit_windows.exe -ArgumentList "-s cusolver_dev_12.2 cusparse_dev_12.2 cublas_dev_12.2 thrust_12.2 nvcc_12.2 cudart_12.2 nvrtc_dev_12.2 visual_studio_integration_12.2"
+ Remove-Item .\cuda_toolkit_windows.exe
+ $CUDA_PATH = "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.2"
+ echo "CUDA_PATH=$CUDA_PATH" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
+ echo "CUDA_PATH_V12_2=$CUDA_PATH" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
+ echo "$CUDA_PATH/bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
+
+ - name: Cache gflags
+ id: cache-gflags
+ uses: actions/cache@v4
+ with:
+ path: gflags/
+ key: ${{matrix.msvc}}-gflags-2.2.2-${{matrix.arch}}-${{matrix.build_type}}-${{matrix.lib}}
+
+ - name: Download gflags
+ if: steps.cache-gflags.outputs.cache-hit != 'true'
+ run: |
+ (New-Object System.Net.WebClient).DownloadFile("https://github.com/gflags/gflags/archive/refs/tags/v2.2.2.zip", "v2.2.2.zip");
+ Expand-Archive -Path v2.2.2.zip -DestinationPath .;
+
+ - name: Setup gflags
+ if: steps.cache-gflags.outputs.cache-hit != 'true'
+ run: |
+ cmake -S gflags-2.2.2 -B build-gflags `
+ -DBUILD_SHARED_LIBS=${{matrix.lib == 'shared'}} `
+ -DBUILD_TESTING=OFF `
+ -DCMAKE_INSTALL_PREFIX=${{github.workspace}}/gflags
+ cmake --build build-gflags `
+ --config ${{matrix.build_type}} `
+ --target install
+
+ - name: Cache glog
+ id: cache-glog
+ uses: actions/cache@v4
+ with:
+ path: glog/
+ key: ${{matrix.msvc}}-glog-0.6.0-${{matrix.arch}}-${{matrix.build_type}}-${{matrix.lib}}
+
+ - name: Download glog
+ if: steps.cache-glog.outputs.cache-hit != 'true'
+ run: |
+ (New-Object System.Net.WebClient).DownloadFile("https://github.com/google/glog/archive/refs/tags/v0.6.0.zip", "v0.6.0.zip");
+ Expand-Archive -Path v0.6.0.zip -DestinationPath .;
+
+ - name: Setup glog
+ if: steps.cache-glog.outputs.cache-hit != 'true'
+ run: |
+ cmake -S glog-0.6.0 -B build-glog `
+ -DBUILD_SHARED_LIBS=${{matrix.lib == 'shared'}} `
+ -DBUILD_TESTING=OFF `
+ -DCMAKE_INSTALL_PREFIX=${{github.workspace}}/glog `
+ -DCMAKE_PREFIX_PATH=${{github.workspace}}/gflags
+ cmake --build build-glog `
+ --config ${{matrix.build_type}} `
+ --target install
+
+ - name: Cache SuiteSparse
+ id: cache-suitesparse
+ uses: actions/cache@v4
+ with:
+ path: suitesparse/
+ key: ${{matrix.msvc}}-suitesparse-5.13.0-cmake.3-${{matrix.arch}}-${{matrix.build_type}}-${{matrix.lib}}
+
+ - name: Download SuiteSparse
+ if: steps.cache-suitesparse.outputs.cache-hit != 'true'
+ run: |
+ (New-Object System.Net.WebClient).DownloadFile("https://github.com/sergiud/SuiteSparse/releases/download/5.13.0-cmake.3/SuiteSparse-5.13.0-cmake.3-${{matrix.marker}}-Win64-${{matrix.build_type}}-${{matrix.lib}}-gpl-metis.zip", "suitesparse.zip");
+ Expand-Archive -Path suitesparse.zip -DestinationPath ${{github.workspace}}/suitesparse;
+
+ - name: Cache Eigen
+ id: cache-eigen
+ uses: actions/cache@v4
+ with:
+ path: eigen/
+ key: ${{runner.os}}-eigen-3.4.0
+
+ - name: Download Eigen
+ if: steps.cache-eigen.outputs.cache-hit != 'true'
+ run: |
+ (New-Object System.Net.WebClient).DownloadFile("https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.zip", "eigen-3.4.0.zip");
+ Expand-Archive -Path eigen-3.4.0.zip -DestinationPath .;
+
+ - name: Setup Eigen
+ if: steps.cache-eigen.outputs.cache-hit != 'true'
+ run: |
+ cmake -S eigen-3.4.0 -B build-eigen `
+ -DBUILD_TESTING=OFF `
+ -DCMAKE_Fortran_COMPILER= `
+ -DCMAKE_INSTALL_PREFIX=${{github.workspace}}/eigen `
+ -DEIGEN_BUILD_DOC=OFF
+ cmake --build build-eigen `
+ --config ${{matrix.build_type}} `
+ --target install
+
+ - name: Setup Build Environment
+ run: |
+ echo "Eigen3_ROOT=${{github.workspace}}/eigen" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
+ echo "gflags_ROOT=${{github.workspace}}/gflags" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
+ echo "glog_ROOT=${{github.workspace}}/glog" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
+ echo "CMAKE_PREFIX_PATH=${{github.workspace}}/suitesparse" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
+
+ - name: Setup Runtime Environment
+ run: |
+ echo '${{github.workspace}}\gflags\bin' | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
+ echo '${{github.workspace}}\glog\bin' | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
+ echo '${{github.workspace}}\suitesparse\bin' | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
+
+ - name: Configure
+ run: |
+ cmake -S . -B build_${{matrix.build_type}}/ `
+ -DBLAS_blas_LIBRARY=${{github.workspace}}/suitesparse/lib/libblas.lib `
+ -DBUILD_SHARED_LIBS=${{matrix.lib == 'shared'}} `
+ -DCMAKE_CONFIGURATION_TYPES=${{matrix.build_type}} `
+ -DCMAKE_INSTALL_PREFIX:PATH=${{github.workspace}}/install `
+ -DLAPACK_lapack_LIBRARY=${{github.workspace}}/suitesparse/lib/liblapack.lib
+
+ - name: Build
+ run: |
+ cmake --build build_${{matrix.build_type}}/ `
+ --config ${{matrix.build_type}}
+
+ - name: Test
+ if: matrix.gpu == 'no-cuda'
+ env:
+ CTEST_OUTPUT_ON_FAILURE: 1
+ run: |
+ cmake --build build_${{matrix.build_type}}/ `
+ --config ${{matrix.build_type}} `
+ --target RUN_TESTS
+
+ - name: Install
+ run: |
+ cmake --build build_${{matrix.build_type}}/ `
+ --config ${{matrix.build_type}} `
+ --target INSTALL
diff --git a/.gitignore b/.gitignore
index f6aae3f..ca55495 100644
--- a/.gitignore
+++ b/.gitignore
@@ -23,3 +23,7 @@
.buildinfo
bazel-*
*.pyc
+.idea*
+
+cmake-build*
+small-blas-benchmarks
\ No newline at end of file
diff --git a/.travis.yml b/.travis.yml
deleted file mode 100644
index 3e109b5..0000000
--- a/.travis.yml
+++ /dev/null
@@ -1,70 +0,0 @@
-language: cpp
-
-matrix:
- fast_finish: true
- include:
- - os: linux
- dist: bionic
- sudo: required
- compiler: gcc
- env: CERES_BUILD_TARGET=LINUX
- - os: linux
- dist: bionic
- sudo: required
- compiler: gcc
- env: CERES_BUILD_TARGET=ANDROID
- - os: osx
- osx_image: xcode11.2
- env: CERES_BUILD_TARGET=OSX
- - os: osx
- osx_image: xcode11.2
- env: CERES_BUILD_TARGET=IOS
-
-env:
- # As per http://docs.travis-ci.com/user/languages/cpp/#OpenMP-projects don't be greedy with OpenMP.
- - OMP_NUM_THREADS=4
-
-before_install:
- - if [ $TRAVIS_OS_NAME = linux ]; then sudo apt-get update -qq; fi
- - |
- if [[ "$CERES_BUILD_TARGET" == "ANDROID" ]]; then
- cd /tmp
- wget https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip
- unzip -qq android-ndk-r20b-linux-x86_64.zip
- fi
-
-install:
- - if [ $TRAVIS_OS_NAME = linux ]; then $TRAVIS_BUILD_DIR/travis/install_travis_linux_deps.sh; fi
- - if [ $TRAVIS_OS_NAME = osx ]; then $TRAVIS_BUILD_DIR/travis/install_travis_osx_deps.sh; fi
-
-before_script:
- - mkdir /tmp/ceres-build
- - cd /tmp/ceres-build
-
-script:
- # NOTE: TRAVIS_BUILD_DIR is actually the source directory for Ceres.
- - |
- if [[ "$CERES_BUILD_TARGET" == "LINUX" || "$CERES_BUILD_TARGET" == "OSX" ]]; then
- cmake $TRAVIS_BUILD_DIR
- fi
- - |
- if [[ "$CERES_BUILD_TARGET" == "ANDROID" ]]; then
- cmake -DCMAKE_TOOLCHAIN_FILE=/tmp/android-ndk-r20b/build/cmake/android.toolchain.cmake -DEigen3_DIR=/usr/lib/cmake/eigen3 -DANDROID_ABI=arm64-v8a -DANDROID_STL=c++_shared -DANDROID_NATIVE_API_LEVEL=android-29 -DMINIGLOG=ON -DBUILD_EXAMPLES=OFF $TRAVIS_BUILD_DIR
- fi
- - |
- if [[ "$CERES_BUILD_TARGET" == "IOS" ]]; then
- cmake -DCMAKE_TOOLCHAIN_FILE=$TRAVIS_BUILD_DIR/cmake/iOS.cmake -DEigen3_DIR=/usr/local/share/eigen3/cmake -DIOS_PLATFORM=OS $TRAVIS_BUILD_DIR
- fi
- - make -j 4
- - |
- if [[ "$CERES_BUILD_TARGET" == "LINUX" || "$CERES_BUILD_TARGET" == "OSX" ]]; then
- sudo make install
- ctest --output-on-failure -j 4
- fi
-
-notifications:
- email:
- - alexs.mac@gmail.com
- - sandwichmaker@gmail.com
- - keir@google.com
- - wjr@google.com
diff --git a/BUILD b/BUILD
index 5e04e03..4b18cb9 100644
--- a/BUILD
+++ b/BUILD
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2018 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -72,7 +72,7 @@
CERES_TESTS = [
"array_utils",
"autodiff_cost_function",
- "autodiff_local_parameterization",
+ "autodiff_manifold",
"autodiff",
"block_jacobi_preconditioner",
"block_random_access_dense_matrix",
@@ -90,6 +90,7 @@
"cost_function_to_functor",
"covariance",
"cubic_interpolation",
+ "dense_cholesky",
"dense_linear_solver",
"dense_sparse_matrix",
"detect_structure",
@@ -118,7 +119,6 @@
"levenberg_marquardt_strategy",
"line_search_minimizer",
"line_search_preprocessor",
- "local_parameterization",
"loss_function",
"minimizer",
"normal_prior",
@@ -191,7 +191,7 @@
# dependency that we'd prefer to avoid.
[cc_test(
name = test_filename.split("/")[-1][:-3], # Remove .cc.
- timeout = "moderate",
+ timeout = "long",
srcs = [test_filename],
copts = TEST_COPTS,
diff --git a/CITATION.cff b/CITATION.cff
new file mode 100644
index 0000000..08848f8
--- /dev/null
+++ b/CITATION.cff
@@ -0,0 +1,15 @@
+cff-version: 1.2.0
+message: If you use Ceres Solver for a publication, please cite it as below.
+title: Ceres Solver
+abstract: A large scale non-linear optimization library
+authors:
+- family-names: Agarwal
+ given-names: Sameer
+- family-names: Mierle
+ given-names: Keir
+- name: The Ceres Solver Team
+version: 2.2
+date-released: 2023-10-13
+license: Apache-2.0
+repository-code: https://github.com/ceres-solver/ceres-solver
+url: http://ceres-solver.org
diff --git a/CMakeLists.txt b/CMakeLists.txt
index ea7e9b8..6cbc942 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2024 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -29,69 +29,24 @@
# Authors: keir@google.com (Keir Mierle)
# alexs.mac@gmail.com (Alex Stewart)
-cmake_minimum_required(VERSION 3.5)
-cmake_policy(VERSION 3.5)
-if (POLICY CMP0074)
- # FindTBB.cmake uses TBB_ROOT in a way that is historical, but also compliant
- # with CMP0074 so suppress the legacy compatibility warning and allow its use.
- cmake_policy(SET CMP0074 NEW)
-endif()
-
-# Set the C++ version (must be >= C++14) when compiling Ceres.
-#
-# Reflect a user-specified (via -D) CMAKE_CXX_STANDARD if present, otherwise
-# default to C++14.
-set(DEFAULT_CXX_STANDARD ${CMAKE_CXX_STANDARD})
-if (NOT DEFAULT_CXX_STANDARD)
- set(DEFAULT_CXX_STANDARD 14)
-endif()
-set(CMAKE_CXX_STANDARD ${DEFAULT_CXX_STANDARD} CACHE STRING
- "C++ standard (minimum 14)" FORCE)
-# Restrict CMAKE_CXX_STANDARD to the valid versions permitted and ensure that
-# if one was forced via -D that it is in the valid set.
-set(ALLOWED_CXX_STANDARDS 14 17 20)
-set_property(CACHE CMAKE_CXX_STANDARD PROPERTY STRINGS ${ALLOWED_CXX_STANDARDS})
-list(FIND ALLOWED_CXX_STANDARDS ${CMAKE_CXX_STANDARD} POSITION)
-if (POSITION LESS 0)
- message(FATAL_ERROR "Invalid CMAKE_CXX_STANDARD: ${CMAKE_CXX_STANDARD}. "
- "Must be one of: ${ALLOWED_CXX_STANDARDS}")
-endif()
-# Specify the standard as a hard requirement, otherwise CMAKE_CXX_STANDARD is
-# interpreted as a suggestion that can decay *back* to lower versions.
-set(CMAKE_CXX_STANDARD_REQUIRED ON CACHE BOOL "")
-mark_as_advanced(CMAKE_CXX_STANDARD_REQUIRED)
-
-# MSVC versions < 2015 did not fully support >= C++14, and technically even
-# 2015 did not support a couple of smaller features
-if (CMAKE_CXX_COMPILER_ID MATCHES MSVC AND
- CMAKE_CXX_COMPILER_VERSION VERSION_LESS 14.0)
- message(FATAL_ERROR "Invalid CMAKE_CXX_COMPILER_VERSION: "
- "${CMAKE_CXX_COMPILER_VERSION}. Ceres requires at least MSVC 2015 for "
- "C++14 support.")
-endif()
-
-# On macOS, add the Homebrew prefix (with appropriate suffixes) to the
-# respective HINTS directories (after any user-specified locations). This
-# handles Homebrew installations into non-standard locations (not /usr/local).
-# We do not use CMAKE_PREFIX_PATH for this as given the search ordering of
-# find_xxx(), doing so would override any user-specified HINTS locations with
-# the Homebrew version if it exists.
-if (CMAKE_SYSTEM_NAME MATCHES "Darwin")
- find_program(HOMEBREW_EXECUTABLE brew)
- mark_as_advanced(FORCE HOMEBREW_EXECUTABLE)
- if (HOMEBREW_EXECUTABLE)
- # Detected a Homebrew install, query for its install prefix.
- execute_process(COMMAND ${HOMEBREW_EXECUTABLE} --prefix
- OUTPUT_VARIABLE HOMEBREW_INSTALL_PREFIX
- OUTPUT_STRIP_TRAILING_WHITESPACE)
- message(STATUS "Detected Homebrew with install prefix: "
- "${HOMEBREW_INSTALL_PREFIX}, adding to CMake search paths.")
- list(APPEND CMAKE_PREFIX_PATH "${HOMEBREW_INSTALL_PREFIX}")
- endif()
-endif()
-
+cmake_minimum_required(VERSION 3.16...3.29)
project(Ceres C CXX)
+# NOTE: The following CMake variables must be applied consistently to all
+# targets in the project to avoid visibility warnings, hence they are set at
+# the top of the project.
+
+# Always build position-independent code (PIC), even when building Ceres as a
+# static library so that shared libraries can link against it, not just
+# executables (PIC does not apply on Windows). The global variable can be
+# overridden by the user, whereas target properties cannot.
+set(CMAKE_POSITION_INDEPENDENT_CODE ON)
+# Set the default symbol visibility to hidden to unify the behavior among
+# the various compilers and to get smaller binaries
+set(CMAKE_C_VISIBILITY_PRESET hidden)
+set(CMAKE_CXX_VISIBILITY_PRESET hidden)
+set(CMAKE_VISIBILITY_INLINES_HIDDEN ON)
+
# NOTE: The 'generic' CMake variables CMAKE_[SOURCE/BINARY]_DIR should not be
# used. Always use the project-specific variants (generated by CMake):
# <PROJECT_NAME_MATCHING_CASE>_[SOURCE/BINARY]_DIR, e.g.
@@ -106,8 +61,14 @@
# additional paths via -D.
list(APPEND CMAKE_MODULE_PATH "${Ceres_SOURCE_DIR}/cmake")
include(AddCompileFlagsIfSupported)
+include(CheckCXXCompilerFlag)
+include(CheckLibraryExists)
+include(GNUInstallDirs)
include(UpdateCacheVariable)
+check_cxx_compiler_flag(/bigobj HAVE_BIGOBJ)
+check_library_exists(m pow "" HAVE_LIBM)
+
# Xcode 11.0-1 with macOS 10.15 (Catalina) broke alignment.
include(DetectBrokenStackCheckMacOSXcodePairing)
detect_broken_stack_check_macos_xcode_pairing()
@@ -130,30 +91,21 @@
enable_testing()
-include(CeresThreadingModels)
+include(CMakeDependentOption)
include(PrettyPrintCMakeList)
-find_available_ceres_threading_models(CERES_THREADING_MODELS_AVAILABLE)
-pretty_print_cmake_list(PRETTY_CERES_THREADING_MODELS_AVAILABLE
- ${CERES_THREADING_MODELS_AVAILABLE})
-message("-- Detected available Ceres threading models: "
- "${PRETTY_CERES_THREADING_MODELS_AVAILABLE}")
-set(CERES_THREADING_MODEL "${CERES_THREADING_MODEL}" CACHE STRING
- "Ceres threading back-end" FORCE)
-if (NOT CERES_THREADING_MODEL)
- list(GET CERES_THREADING_MODELS_AVAILABLE 0 DEFAULT_THREADING_MODEL)
- update_cache_variable(CERES_THREADING_MODEL ${DEFAULT_THREADING_MODEL})
-endif()
-set_property(CACHE CERES_THREADING_MODEL PROPERTY STRINGS
- ${CERES_THREADING_MODELS_AVAILABLE})
option(MINIGLOG "Use a stripped down version of glog." OFF)
option(GFLAGS "Enable Google Flags." ON)
option(SUITESPARSE "Enable SuiteSparse." ON)
-option(CXSPARSE "Enable CXSparse." ON)
if (APPLE)
option(ACCELERATESPARSE
"Enable use of sparse solvers in Apple's Accelerate framework." ON)
+ option(ENABLE_BITCODE
+ "Enable bitcode for iOS builds (disables inline optimizations for Eigen)." OFF)
endif()
+# We can't have an option called 'CUDA' since that is a reserved word -- a
+# language definition.
+set(USE_CUDA "default" CACHE STRING "Enable use of CUDA linear algebra solvers.")
option(LAPACK "Enable use of LAPACK directly within Ceres." ON)
# Template specializations for the Schur complement based solvers. If
# compile time, binary size or compiler performance is an issue, you
@@ -165,6 +117,7 @@
# Enable the use of Eigen as a sparse linear algebra library for
# solving the nonlinear least squares problems.
option(EIGENSPARSE "Enable Eigen as a sparse linear algebra library." ON)
+cmake_dependent_option(EIGENMETIS "Enable Eigen METIS support." ON EIGENSPARSE OFF)
option(EXPORT_BUILD_DIR
"Export build directory using CMake (enables external use without install)." OFF)
option(BUILD_TESTING "Enable tests" ON)
@@ -179,35 +132,6 @@
if (ANDROID)
option(ANDROID_STRIP_DEBUG_SYMBOLS "Strip debug symbols from Android builds (reduces file sizes)" ON)
endif()
-if (MSVC)
- option(MSVC_USE_STATIC_CRT
- "MS Visual Studio: Use static C-Run Time Library in place of shared." OFF)
-endif()
-
-# Allow user to specify a suffix for the library install directory, the only
-# really sensible option (other than "") being "64", such that:
-# ${CMAKE_INSTALL_PREFIX}/lib -> ${CMAKE_INSTALL_PREFIX}/lib64.
-#
-# Heuristic for determining LIB_SUFFIX. FHS recommends that 64-bit systems
-# install native libraries to lib64 rather than lib. Most distros seem to
-# follow this convention with a couple notable exceptions (Debian-based and
-# Arch-based distros) which we try to detect here.
-if (CMAKE_SYSTEM_NAME MATCHES "Linux" AND
- NOT DEFINED LIB_SUFFIX AND
- NOT CMAKE_CROSSCOMPILING AND
- CMAKE_SIZEOF_VOID_P EQUAL "8" AND
- NOT EXISTS "/etc/debian_version" AND
- NOT EXISTS "/etc/arch-release")
- message("-- Detected non-Debian/Arch-based 64-bit Linux distribution. "
- "Defaulting to library install directory: lib${LIB_SUFFIX}. You can "
- "override this by specifying LIB_SUFFIX.")
- set(LIB_SUFFIX "64")
-endif ()
-# Only create the cache variable (for the CMake GUI) after attempting to detect
-# the suffix *if not specified by the user* (NOT DEFINED LIB_SUFFIX in if())
-# s/t the user could override our autodetected suffix with "" if desired.
-set(LIB_SUFFIX "${LIB_SUFFIX}" CACHE STRING
- "Suffix of library install directory (to support lib/lib64)." FORCE)
# IOS is defined iff using the iOS.cmake CMake toolchain to build a static
# library for iOS.
@@ -226,27 +150,26 @@
# Apple claims that the BLAS call dsyrk_ is a private API, and will not allow
# you to submit to the Apple Store if the symbol is present.
update_cache_variable(LAPACK OFF)
- message(STATUS "Building for iOS: SuiteSparse, CXSparse, LAPACK, gflags, "
- "and OpenMP are not available.")
+ message(STATUS "Building for iOS: SuiteSparse, LAPACK, gflags "
+ "are not available.")
update_cache_variable(BUILD_EXAMPLES OFF)
message(STATUS "Building for iOS: Will not build examples.")
endif (IOS)
unset(CERES_COMPILE_OPTIONS)
-message("-- Building with C++${CMAKE_CXX_STANDARD}")
# Eigen.
# Eigen delivers Eigen3Config.cmake since v3.3.3
find_package(Eigen3 3.3 REQUIRED)
-if (EIGEN3_FOUND)
- message("-- Found Eigen version ${EIGEN3_VERSION_STRING}: ${EIGEN3_INCLUDE_DIRS}")
+if (Eigen3_FOUND)
+ message("-- Found Eigen version ${Eigen3_VERSION}: ${Eigen3_DIR}")
if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*)" AND
- EIGEN3_VERSION_STRING VERSION_LESS 3.3.4)
+ Eigen3_VERSION VERSION_LESS 3.3.4)
# As per issue #289: https://github.com/ceres-solver/ceres-solver/issues/289
# the bundle_adjustment_test will fail for Eigen < 3.3.4 on aarch64.
message(FATAL_ERROR "-- Ceres requires Eigen version >= 3.3.4 on aarch64. "
- "Detected version of Eigen is: ${EIGEN3_VERSION_STRING}.")
+ "Detected version of Eigen is: ${Eigen3_VERSION}.")
endif()
if (EIGENSPARSE)
@@ -258,7 +181,109 @@
message(" which can still use the EIGEN_SPARSE_QR algorithm.")
add_definitions(-DEIGEN_MPL2_ONLY)
endif (EIGENSPARSE)
-endif (EIGEN3_FOUND)
+endif (Eigen3_FOUND)
+
+if (CMAKE_VERSION VERSION_LESS 3.17)
+ set_property(CACHE USE_CUDA PROPERTY STRINGS OFF default)
+else (CMAKE_VERSION VERSION_LESS 3.17)
+ set_property(CACHE USE_CUDA PROPERTY STRINGS OFF default static)
+endif (CMAKE_VERSION VERSION_LESS 3.17)
+
+if (USE_CUDA)
+ if (CMAKE_VERSION VERSION_LESS 3.17)
+    # On older versions of CMake (e.g. the Ubuntu 20.04 default is 3.16)
+    # FindCUDAToolkit was not available and only the deprecated FindCUDA module
+    # existed. To avoid special-case handling elsewhere, emulate the effects of
+    # FindCUDAToolkit locally in terms of the expected CMake imported targets
+    # and defined variables. This can be removed as soon as the minimum CMake
+    # version is >= 3.17.
+ find_package(CUDA QUIET)
+ if (CUDA_FOUND)
+ message("-- Found CUDA version ${CUDA_VERSION} installed in: "
+ "${CUDA_TOOLKIT_ROOT_DIR} via legacy (< 3.17) CMake module. "
+ "Using the legacy CMake module means that any installation of "
+ "Ceres will require that the CUDA libraries be installed in a "
+ "location included in the LD_LIBRARY_PATH.")
+ enable_language(CUDA)
+
+ macro(DECLARE_IMPORTED_CUDA_TARGET COMPONENT)
+ add_library(CUDA::${COMPONENT} INTERFACE IMPORTED)
+ target_include_directories(
+ CUDA::${COMPONENT} INTERFACE ${CUDA_INCLUDE_DIRS})
+ target_link_libraries(
+ CUDA::${COMPONENT} INTERFACE ${CUDA_${COMPONENT}_LIBRARY} ${ARGN})
+ endmacro()
+
+ declare_imported_cuda_target(cublas)
+ declare_imported_cuda_target(cusolver)
+ declare_imported_cuda_target(cusparse)
+ declare_imported_cuda_target(cudart ${CUDA_LIBRARIES})
+
+ set(CERES_CUDA_TARGET_SUFFIX "")
+ set(CUDAToolkit_BIN_DIR ${CUDA_TOOLKIT_ROOT_DIR}/bin)
+
+ else (CUDA_FOUND)
+ message("-- Did not find CUDA, disabling CUDA support.")
+ update_cache_variable(USE_CUDA OFF)
+ endif (CUDA_FOUND)
+ else (CMAKE_VERSION VERSION_LESS 3.17)
+ find_package(CUDAToolkit QUIET)
+ if (CUDAToolkit_FOUND)
+ message("-- Found CUDA version ${CUDAToolkit_VERSION} installed in: "
+ "${CUDAToolkit_TARGET_DIR}")
+ set(CUDAToolkit_DEPENDENCY
+ "find_dependency(CUDAToolkit ${CUDAToolkit_VERSION})")
+ enable_language(CUDA)
+ if (CMAKE_VERSION VERSION_GREATER_EQUAL "3.18")
+ # Support Maxwell GPUs (Default).
+ set(CMAKE_CUDA_ARCHITECTURES "50")
+ # Support other architectures depending on CUDA toolkit version.
+ if (CUDAToolkit_VERSION VERSION_GREATER_EQUAL "8.0")
+ # Support Pascal GPUs.
+ list(APPEND CMAKE_CUDA_ARCHITECTURES "60")
+ endif(CUDAToolkit_VERSION VERSION_GREATER_EQUAL "8.0")
+ if (CUDAToolkit_VERSION VERSION_GREATER_EQUAL "9.0")
+ # Support Volta GPUs.
+ list(APPEND CMAKE_CUDA_ARCHITECTURES "70")
+ endif(CUDAToolkit_VERSION VERSION_GREATER_EQUAL "9.0")
+ if (CUDAToolkit_VERSION VERSION_GREATER_EQUAL "10.0")
+ # Support Turing GPUs.
+ list(APPEND CMAKE_CUDA_ARCHITECTURES "75")
+ endif(CUDAToolkit_VERSION VERSION_GREATER_EQUAL "10.0")
+ if (CUDAToolkit_VERSION VERSION_GREATER_EQUAL "11.0")
+ # Support Ampere GPUs.
+ list(APPEND CMAKE_CUDA_ARCHITECTURES "80")
+ endif(CUDAToolkit_VERSION VERSION_GREATER_EQUAL "11.0")
+ if (CUDAToolkit_VERSION VERSION_GREATER_EQUAL "11.8")
+ # Support Hopper GPUs.
+ list(APPEND CMAKE_CUDA_ARCHITECTURES "90")
+ endif(CUDAToolkit_VERSION VERSION_GREATER_EQUAL "11.8")
+ message("-- Setting CUDA Architecture to ${CMAKE_CUDA_ARCHITECTURES}")
+ endif()
+
+ if (USE_CUDA STREQUAL "static")
+ set(CERES_CUDA_TARGET_SUFFIX "_static")
+ else (USE_CUDA STREQUAL "static")
+ set(CERES_CUDA_TARGET_SUFFIX "")
+ endif (USE_CUDA STREQUAL "static")
+ else (CUDAToolkit_FOUND)
+ message("-- Did not find CUDA, disabling CUDA support.")
+ update_cache_variable(USE_CUDA OFF)
+ endif (CUDAToolkit_FOUND)
+ endif (CMAKE_VERSION VERSION_LESS 3.17)
+endif (USE_CUDA)
+
+if (USE_CUDA)
+ list(APPEND CERES_CUDA_LIBRARIES
+ CUDA::cublas${CERES_CUDA_TARGET_SUFFIX}
+ CUDA::cudart${CERES_CUDA_TARGET_SUFFIX}
+ CUDA::cusolver${CERES_CUDA_TARGET_SUFFIX}
+ CUDA::cusparse${CERES_CUDA_TARGET_SUFFIX})
+ unset (CERES_CUDA_TARGET_SUFFIX)
+ set(CMAKE_CUDA_RUNTIME_LIBRARY NONE)
+else (USE_CUDA)
+ message("-- Building without CUDA.")
+ list(APPEND CERES_COMPILE_OPTIONS CERES_NO_CUDA)
+endif (USE_CUDA)
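
The USE_CUDA cache entry above accepts OFF, default, and static; static is only
offered on CMake >= 3.17, where FindCUDAToolkit provides the *_static imported
targets. A minimal, purely illustrative superbuild fragment that pre-seeds the
option before adding the Ceres source tree (the path is a placeholder) might
look like:

    # Hypothetical superbuild fragment: choose the CUDA mode before Ceres is
    # configured. "default" probes for a CUDA toolkit, "static" additionally
    # links the static CUDA runtime libraries, and OFF disables CUDA entirely.
    set(USE_CUDA "static" CACHE STRING "Enable use of CUDA linear algebra solvers.")
    add_subdirectory(third_party/ceres)
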
if (LAPACK)
find_package(LAPACK QUIET)
@@ -274,70 +299,67 @@
list(APPEND CERES_COMPILE_OPTIONS CERES_NO_LAPACK)
endif (LAPACK)
+# Set the install path for the installed CeresConfig.cmake configuration file
+# relative to CMAKE_INSTALL_PREFIX.
+set(RELATIVE_CMAKECONFIG_INSTALL_DIR ${CMAKE_INSTALL_LIBDIR}/cmake/Ceres)
+
if (SUITESPARSE)
# By default, if SuiteSparse and all dependencies are found, Ceres is
# built with SuiteSparse support.
# Check for SuiteSparse and dependencies.
- find_package(SuiteSparse)
- if (SUITESPARSE_FOUND)
- # On Ubuntu the system install of SuiteSparse (v3.4.0) up to at least
- # Ubuntu 13.10 cannot be used to link shared libraries.
- if (BUILD_SHARED_LIBS AND
- SUITESPARSE_IS_BROKEN_SHARED_LINKING_UBUNTU_SYSTEM_VERSION)
- message(FATAL_ERROR "You are attempting to build Ceres as a shared "
- "library on Ubuntu using a system package install of SuiteSparse "
- "3.4.0. This package is broken and does not support the "
- "construction of shared libraries (you can still build Ceres as "
- "a static library). If you wish to build a shared version of Ceres "
- "you should uninstall the system install of SuiteSparse "
- "(libsuitesparse-dev) and perform a source install of SuiteSparse "
- "(we recommend that you use the latest version), "
- "see http://ceres-solver.org/building.html for more information.")
- endif (BUILD_SHARED_LIBS AND
- SUITESPARSE_IS_BROKEN_SHARED_LINKING_UBUNTU_SYSTEM_VERSION)
-
+ find_package(SuiteSparse 4.5.6 COMPONENTS CHOLMOD SPQR
+ OPTIONAL_COMPONENTS Partition)
+ if (SuiteSparse_FOUND)
+ set(SuiteSparse_DEPENDENCY "find_dependency(SuiteSparse ${SuiteSparse_VERSION})")
# By default, if all of SuiteSparse's dependencies are found, Ceres is
# built with SuiteSparse support.
- message("-- Found SuiteSparse ${SUITESPARSE_VERSION}, "
+ message("-- Found SuiteSparse ${SuiteSparse_VERSION}, "
"building with SuiteSparse.")
- else (SUITESPARSE_FOUND)
+
+ if (SuiteSparse_NO_CMAKE OR NOT SuiteSparse_DIR)
+ install(FILES ${Ceres_SOURCE_DIR}/cmake/FindSuiteSparse.cmake
+ ${Ceres_SOURCE_DIR}/cmake/FindMETIS.cmake
+ DESTINATION ${RELATIVE_CMAKECONFIG_INSTALL_DIR})
+ endif (SuiteSparse_NO_CMAKE OR NOT SuiteSparse_DIR)
+ else (SuiteSparse_FOUND)
# Disable use of SuiteSparse if it cannot be found and continue.
message("-- Did not find all SuiteSparse dependencies, disabling "
"SuiteSparse support.")
update_cache_variable(SUITESPARSE OFF)
list(APPEND CERES_COMPILE_OPTIONS CERES_NO_SUITESPARSE)
- endif (SUITESPARSE_FOUND)
+ endif (SuiteSparse_FOUND)
else (SUITESPARSE)
message("-- Building without SuiteSparse.")
list(APPEND CERES_COMPILE_OPTIONS CERES_NO_SUITESPARSE)
endif (SUITESPARSE)
-# CXSparse.
-if (CXSPARSE)
- # Don't search with REQUIRED as we can continue without CXSparse.
- find_package(CXSparse)
- if (CXSPARSE_FOUND)
- # By default, if CXSparse and all dependencies are found, Ceres is
- # built with CXSparse support.
- message("-- Found CXSparse version: ${CXSPARSE_VERSION}, "
- "building with CXSparse.")
- else (CXSPARSE_FOUND)
- # Disable use of CXSparse if it cannot be found and continue.
- message("-- Did not find CXSparse, Building without CXSparse.")
- update_cache_variable(CXSPARSE OFF)
- list(APPEND CERES_COMPILE_OPTIONS CERES_NO_CXSPARSE)
- endif (CXSPARSE_FOUND)
-else (CXSPARSE)
- message("-- Building without CXSparse.")
- list(APPEND CERES_COMPILE_OPTIONS CERES_NO_CXSPARSE)
- # Mark as advanced (remove from default GUI view) the CXSparse search
- # variables in case user enabled CXSPARSE, FindCXSparse did not find it, so
- # made search variables visible in GUI for user to set, but then user disables
- # CXSPARSE instead of setting them.
- mark_as_advanced(FORCE CXSPARSE_INCLUDE_DIR
- CXSPARSE_LIBRARY)
-endif (CXSPARSE)
+if (NOT SuiteSparse_Partition_FOUND)
+ list (APPEND CERES_COMPILE_OPTIONS CERES_NO_CHOLMOD_PARTITION)
+endif (NOT SuiteSparse_Partition_FOUND)
+
+if (EIGENMETIS)
+ find_package (METIS)
+ if (METIS_FOUND)
+    # METIS is a private dependency of Ceres; nevertheless, projects relying on
+    # Ceres need access to the link-only METIS::METIS target to avoid undefined
+    # symbols at link time. We do not need to propagate anything besides the
+    # link libraries (in particular, not the include directories).
+ set(METIS_DEPENDENCY "find_dependency(METIS ${METIS_VERSION})")
+    # The METIS find module must be installed unless a package config is being used.
+ if (NOT METIS_DIR)
+ install(FILES ${Ceres_SOURCE_DIR}/cmake/FindMETIS.cmake
+ DESTINATION ${RELATIVE_CMAKECONFIG_INSTALL_DIR})
+ endif (NOT METIS_DIR)
+ else (METIS_FOUND)
+ message("-- Did not find METIS, disabling Eigen METIS support.")
+ update_cache_variable(EIGENMETIS OFF)
+ list (APPEND CERES_COMPILE_OPTIONS CERES_NO_EIGEN_METIS)
+ endif (METIS_FOUND)
+else (EIGENMETIS)
+ message("-- Building without Eigen METIS support.")
+ list (APPEND CERES_COMPILE_OPTIONS CERES_NO_EIGEN_METIS)
+endif (EIGENMETIS)
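
EIGENMETIS above is declared with cmake_dependent_option, so it only behaves as
a real, user-settable option while EIGENSPARSE is ON and is silently forced to
OFF otherwise. A stripped-down sketch of that behaviour (option names taken
from above, everything else illustrative):

    include(CMakeDependentOption)
    option(EIGENSPARSE "Enable Eigen as a sparse linear algebra library." ON)
    # Shown as an ON-by-default option only while EIGENSPARSE is ON; hidden and
    # forced to OFF when EIGENSPARSE is OFF.
    cmake_dependent_option(EIGENMETIS "Enable Eigen METIS support." ON EIGENSPARSE OFF)
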
if (ACCELERATESPARSE)
find_package(AccelerateSparse)
@@ -358,9 +380,9 @@
endif()
# Ensure that the user understands they have disabled all sparse libraries.
-if (NOT SUITESPARSE AND NOT CXSPARSE AND NOT EIGENSPARSE AND NOT ACCELERATESPARSE)
+if (NOT SUITESPARSE AND NOT EIGENSPARSE AND NOT ACCELERATESPARSE)
message(" ===============================================================")
- message(" Compiling without any sparse library: SuiteSparse, CXSparse ")
+ message(" Compiling without any sparse library: SuiteSparse, ")
message(" EigenSparse & Apple's Accelerate are all disabled or unavailable. ")
message(" No sparse linear solvers (SPARSE_NORMAL_CHOLESKY & SPARSE_SCHUR)")
message(" will be available when Ceres is used.")
@@ -458,10 +480,9 @@
message("-- Disabling custom blas")
endif (NOT CUSTOM_BLAS)
-set_ceres_threading_model("${CERES_THREADING_MODEL}")
-
if (BUILD_BENCHMARKS)
- find_package(benchmark QUIET)
+    # Version 1.3 was the first to provide imported targets.
+ find_package(benchmark 1.3 QUIET)
if (benchmark_FOUND)
message("-- Found Google benchmark library. Building Ceres benchmarks.")
else()
@@ -471,14 +492,9 @@
mark_as_advanced(benchmark_DIR)
endif()
+# TODO Report features using the FeatureSummary CMake module
if (BUILD_SHARED_LIBS)
message("-- Building Ceres as a shared library.")
- # The CERES_BUILDING_SHARED_LIBRARY compile definition is NOT stored in
- # CERES_COMPILE_OPTIONS as it must only be defined when Ceres is compiled
- # not when it is used as it controls the CERES_EXPORT macro which provides
- # symbol import/export support.
- add_definitions(-DCERES_BUILDING_SHARED_LIBRARY)
- list(APPEND CERES_COMPILE_OPTIONS CERES_USING_SHARED_LIBRARY)
else (BUILD_SHARED_LIBS)
message("-- Building Ceres as a static library.")
endif (BUILD_SHARED_LIBS)
@@ -516,76 +532,42 @@
# After the tweaks for the compile settings, disable some warnings on MSVC.
if (MSVC)
- # On MSVC, math constants are not included in <cmath> or <math.h> unless
- # _USE_MATH_DEFINES is defined [1]. As we use M_PI in the examples, ensure
- # that _USE_MATH_DEFINES is defined before the first inclusion of <cmath>.
- #
- # [1] https://msdn.microsoft.com/en-us/library/4hwaceh6.aspx
- add_definitions("-D_USE_MATH_DEFINES")
+  # Silence warnings about insecure standard library functions
+ add_compile_definitions(_CRT_SECURE_NO_WARNINGS)
+ # std::numeric_limits<T>::has_denorm is deprecated in C++23
+ add_compile_definitions($<$<COMPILE_LANGUAGE:CXX>:_SILENCE_CXX23_DENORM_DEPRECATION_WARNING>)
+ # std::aligned_storage is deprecated in C++23
+ add_compile_definitions($<$<COMPILE_LANGUAGE:CXX>:_SILENCE_CXX23_ALIGNED_STORAGE_DEPRECATION_WARNING>)
# Disable signed/unsigned int conversion warnings.
- add_compile_options("/wd4018" "/wd4267")
- # Disable warning about using struct/class for the same symobl.
- add_compile_options("/wd4099")
- # Disable warning about the insecurity of using "std::copy".
- add_compile_options("/wd4996")
+ add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/wd4018>)
+ add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/wd4267>)
+ # Disable warning about using struct/class for the same symbol.
+ add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/wd4099>)
# Disable performance warning about int-to-bool conversion.
- add_compile_options("/wd4800")
- # Disable performance warning about fopen insecurity.
- add_compile_options("/wd4996")
+ add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/wd4800>)
# Disable warning about int64 to int32 conversion. Disabling
# this warning may not be correct; needs investigation.
# TODO(keir): Investigate these warnings in more detail.
- add_compile_options("/wd4244")
+ add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/wd4244>)
# It's not possible to use STL types in DLL interfaces in a portable and
# reliable way. However, that's what happens with Google Log and Google Flags
# on Windows. MSVC gets upset about this and throws warnings that we can't do
# much about. The real solution is to link static versions of Google Log and
# Google Test, but that seems tricky on Windows. So, disable the warning.
- add_compile_options("/wd4251")
+ add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/wd4251>)
# Add bigobj flag otherwise the build would fail due to large object files
# probably resulting from generated headers (like the fixed-size schur
# specializations).
- add_compile_options("/bigobj")
+ add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/bigobj>)
# Google Flags doesn't have their DLL import/export stuff set up correctly,
# which results in linker warnings. This is irrelevant for Ceres, so ignore
# the warnings.
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /ignore:4049")
- # Update the C/CXX flags for MSVC to use either the static or shared
- # C-Run Time (CRT) library based on the user option: MSVC_USE_STATIC_CRT.
- list(APPEND C_CXX_FLAGS
- CMAKE_CXX_FLAGS
- CMAKE_CXX_FLAGS_DEBUG
- CMAKE_CXX_FLAGS_RELEASE
- CMAKE_CXX_FLAGS_MINSIZEREL
- CMAKE_CXX_FLAGS_RELWITHDEBINFO)
-
- foreach(FLAG_VAR ${C_CXX_FLAGS})
- if (MSVC_USE_STATIC_CRT)
- # Use static CRT.
- if (${FLAG_VAR} MATCHES "/MD")
- string(REGEX REPLACE "/MD" "/MT" ${FLAG_VAR} "${${FLAG_VAR}}")
- endif (${FLAG_VAR} MATCHES "/MD")
- else (MSVC_USE_STATIC_CRT)
- # Use shared, not static, CRT.
- if (${FLAG_VAR} MATCHES "/MT")
- string(REGEX REPLACE "/MT" "/MD" ${FLAG_VAR} "${${FLAG_VAR}}")
- endif (${FLAG_VAR} MATCHES "/MT")
- endif (MSVC_USE_STATIC_CRT)
- endforeach()
-
# Tuple sizes of 10 are used by Gtest.
add_definitions("-D_VARIADIC_MAX=10")
-
- include(CheckIfUnderscorePrefixedBesselFunctionsExist)
- check_if_underscore_prefixed_bessel_functions_exist(
- HAVE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS)
- if (HAVE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS)
- list(APPEND CERES_COMPILE_OPTIONS
- CERES_MSVC_USE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS)
- endif()
endif (MSVC)
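
The MSVC options above are now wrapped in $<$<COMPILE_LANGUAGE:CXX>:...>
generator expressions so they are attached only to C++ compile lines; without
the guard, directory-wide flags such as /wd4251 can also end up on non-C++
compile lines (for example CUDA sources handled by nvcc), where they may not be
understood. A small illustrative sketch using placeholder target and file names:

    cmake_minimum_required(VERSION 3.18)
    project(genex_demo CXX CUDA)
    add_library(example_lib foo.cc bar.cu)
    # Only the C++ translation unit receives the MSVC flag; bar.cu, compiled as
    # CUDA, is left untouched.
    target_compile_options(example_lib PRIVATE $<$<COMPILE_LANGUAGE:CXX>:/wd4251>)
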
if (UNIX)
@@ -605,17 +587,22 @@
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${CERES_STRICT_CXX_FLAGS}")
endif (UNIX)
-# Use a larger inlining threshold for Clang, since it hobbles Eigen,
-# resulting in an unreasonably slow version of the blas routines. The
-# -Qunused-arguments is needed because CMake passes the inline
-# threshold to the linker and clang complains about it and dies.
if (CMAKE_CXX_COMPILER_ID MATCHES "Clang") # Matches Clang & AppleClang.
- set(CMAKE_CXX_FLAGS
- "${CMAKE_CXX_FLAGS} -Qunused-arguments -mllvm -inline-threshold=600")
-
+ # Optimize for Eigen OR enable bitcode; you cannot do both since bitcode is an
+ # intermediate representation.
+ if (ENABLE_BITCODE)
+ set(CMAKE_CXX_FLAGS
+ "${CMAKE_CXX_FLAGS} -fembed-bitcode")
+ else ()
+ # Use a larger inlining threshold for Clang, since it hobbles Eigen,
+ # resulting in an unreasonably slow version of the blas routines. The
+ # -Qunused-arguments is needed because CMake passes the inline
+ # threshold to the linker and clang complains about it and dies.
+ set(CMAKE_CXX_FLAGS
+ "${CMAKE_CXX_FLAGS} -Qunused-arguments -mllvm -inline-threshold=600")
+ endif ()
# Older versions of Clang (<= 2.9) do not support the 'return-type-c-linkage'
# option, so check for its presence before adding it to the default flags set.
- include(CheckCXXCompilerFlag)
check_cxx_compiler_flag("-Wno-return-type-c-linkage"
HAVE_RETURN_TYPE_C_LINKAGE)
if (HAVE_RETURN_TYPE_C_LINKAGE)
@@ -623,6 +610,8 @@
endif(HAVE_RETURN_TYPE_C_LINKAGE)
endif ()
+add_compile_definitions($<$<BOOL:${WIN32}>:NOMINMAX>)
+
# Configure the Ceres config.h compile options header using the current
# compile options and put the configured header into the Ceres build
# directory. Note that the ceres/internal subdir in <build>/config where
@@ -632,17 +621,14 @@
list(REMOVE_DUPLICATES CERES_COMPILE_OPTIONS)
include(CreateCeresConfig)
create_ceres_config("${CERES_COMPILE_OPTIONS}"
- ${Ceres_BINARY_DIR}/config/ceres/internal)
+ ${Ceres_BINARY_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/ceres/internal)
add_subdirectory(internal/ceres)
if (BUILD_DOCUMENTATION)
- set(CERES_DOCS_INSTALL_DIR "share/doc/ceres" CACHE STRING
- "Ceres docs install path relative to CMAKE_INSTALL_PREFIX")
-
- find_package(Sphinx QUIET)
- if (NOT SPHINX_FOUND)
- message("-- Failed to find Sphinx, disabling build of documentation.")
+ find_package (Sphinx REQUIRED COMPONENTS sphinx_rtd_theme)
+ if (NOT Sphinx_FOUND)
+ message("-- Failed to find Sphinx and/or its dependencies, disabling build of documentation.")
update_cache_variable(BUILD_DOCUMENTATION OFF)
else()
# Generate the User's Guide (html).
@@ -661,21 +647,22 @@
# Setup installation of Ceres public headers.
file(GLOB CERES_HDRS ${Ceres_SOURCE_DIR}/include/ceres/*.h)
-install(FILES ${CERES_HDRS} DESTINATION include/ceres)
+install(FILES ${CERES_HDRS} DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/ceres)
file(GLOB CERES_PUBLIC_INTERNAL_HDRS ${Ceres_SOURCE_DIR}/include/ceres/internal/*.h)
-install(FILES ${CERES_PUBLIC_INTERNAL_HDRS} DESTINATION include/ceres/internal)
+install(FILES ${CERES_PUBLIC_INTERNAL_HDRS} DESTINATION
+ ${CMAKE_INSTALL_INCLUDEDIR}/ceres/internal)
# Also setup installation of Ceres config.h configured with the current
-# build options into the installed headers directory.
-install(FILES ${Ceres_BINARY_DIR}/config/ceres/internal/config.h
- DESTINATION include/ceres/internal)
+# build options and export.h into the installed headers directory.
+install(DIRECTORY ${Ceres_BINARY_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/
+ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
if (MINIGLOG)
# Install miniglog header if being used as logging #includes appear in
# installed public Ceres headers.
install(FILES ${Ceres_SOURCE_DIR}/internal/ceres/miniglog/glog/logging.h
- DESTINATION include/ceres/internal/miniglog/glog)
+ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/ceres/internal/miniglog/glog)
endif (MINIGLOG)
# Ceres supports two mechanisms by which it can be detected & imported into
@@ -715,14 +702,6 @@
# Install method #1: Put Ceres in CMAKE_INSTALL_PREFIX: /usr/local or equivalent.
-# Set the install path for the installed CeresConfig.cmake configuration file
-# relative to CMAKE_INSTALL_PREFIX.
-if (WIN32)
- set(RELATIVE_CMAKECONFIG_INSTALL_DIR CMake)
-else ()
- set(RELATIVE_CMAKECONFIG_INSTALL_DIR lib${LIB_SUFFIX}/cmake/Ceres)
-endif ()
-
# This "exports" for installation all targets which have been put into the
# export set "CeresExport". This generates a CeresTargets.cmake file which,
# when read in by a client project as part of find_package(Ceres) creates
@@ -788,13 +767,23 @@
${Ceres_BINARY_DIR}
${Ceres_SOURCE_DIR})
+ set (Ceres_EXPORT_TARGETS ceres)
+
+ if (TARGET ceres_cuda_kernels)
+ # The target ceres depends on ceres_cuda_kernels requiring the latter to be
+ # exported as part of the same export set.
+ list (APPEND Ceres_EXPORT_TARGETS ceres_cuda_kernels)
+ endif (TARGET ceres_cuda_kernels)
+
# Analogously to install(EXPORT ...), export the Ceres target from the build
# directory as a package called Ceres into the local CMake package registry.
- export(TARGETS ceres
+ export(TARGETS ${Ceres_EXPORT_TARGETS}
NAMESPACE Ceres::
FILE ${Ceres_BINARY_DIR}/CeresTargets.cmake)
export(PACKAGE ${CMAKE_PROJECT_NAME})
+ unset (Ceres_EXPORT_TARGETS)
+
# Configure a CeresConfig.cmake file for the export of the Ceres build
# directory from the template, reflecting the current build options.
set(SETUP_CERES_CONFIG_FOR_INSTALLATION FALSE)
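
With the reworked export logic, a downstream project picks all of the above up
through a single find_package() call: the find_dependency() entries recorded at
configure time (SuiteSparse, METIS, CUDAToolkit, Threads) are replayed from the
installed CeresConfig.cmake so the required imported targets exist in the
consumer. A minimal consumer sketch (project and source names are placeholders):

    cmake_minimum_required(VERSION 3.16)
    project(uses_ceres CXX)
    # Resolves Ceres and, transitively, the optional dependencies it was built with.
    find_package(Ceres REQUIRED)
    add_executable(solve_something main.cc)
    target_link_libraries(solve_something PRIVATE Ceres::ceres)
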
diff --git a/LICENSE b/LICENSE
index cf69df2..b5d967c 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,5 +1,5 @@
Ceres Solver - A fast non-linear least squares minimizer
-Copyright 2015 Google Inc. All rights reserved.
+Copyright 2023 Google Inc. All rights reserved.
http://ceres-solver.org/
Redistribution and use in source and binary forms, with or without
diff --git a/README.md b/README.md
index b091574..7de8f0d 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,7 @@
-[](https://travis-ci.org/ceres-solver/ceres-solver)
+[](https://github.com/ceres-solver/ceres-solver/actions/workflows/android.yml)
+[](https://github.com/ceres-solver/ceres-solver/actions/workflows/linux.yml)
+[](https://github.com/ceres-solver/ceres-solver/actions/workflows/macos.yml)
+[](https://github.com/ceres-solver/ceres-solver/actions/workflows/windows.yml)
Ceres Solver
============
diff --git a/WORKSPACE b/WORKSPACE
index e1e5eca..40a84a3 100644
--- a/WORKSPACE
+++ b/WORKSPACE
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2018 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/bazel/ceres.bzl b/bazel/ceres.bzl
index ce170b2..2e5759e 100644
--- a/bazel/ceres.bzl
+++ b/bazel/ceres.bzl
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2018 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -31,10 +31,9 @@
CERES_SRCS = ["internal/ceres/" + filename for filename in [
"accelerate_sparse.cc",
"array_utils.cc",
- "blas.cc",
"block_evaluate_preparer.cc",
- "block_jacobian_writer.cc",
"block_jacobi_preconditioner.cc",
+ "block_jacobian_writer.cc",
"block_random_access_dense_matrix.cc",
"block_random_access_diagonal_matrix.cc",
"block_random_access_matrix.cc",
@@ -49,14 +48,16 @@
"compressed_row_jacobian_writer.cc",
"compressed_row_sparse_matrix.cc",
"conditioned_cost_function.cc",
- "conjugate_gradients_solver.cc",
"context.cc",
"context_impl.cc",
"coordinate_descent_minimizer.cc",
"corrector.cc",
+ "cost_function.cc",
"covariance.cc",
"covariance_impl.cc",
+ "dense_cholesky.cc",
"dense_normal_cholesky_solver.cc",
+ "dense_qr.cc",
"dense_qr_solver.cc",
"dense_sparse_matrix.cc",
"detect_structure.cc",
@@ -65,38 +66,42 @@
"dynamic_compressed_row_sparse_matrix.cc",
"dynamic_sparse_normal_cholesky_solver.cc",
"eigensparse.cc",
+ "evaluation_callback.cc",
"evaluator.cc",
"file.cc",
+ "first_order_function.cc",
+ "float_suitesparse.cc",
"function_sample.cc",
"gradient_checker.cc",
"gradient_checking_cost_function.cc",
"gradient_problem.cc",
"gradient_problem_solver.cc",
- "is_close.cc",
"implicit_schur_complement.cc",
"inner_product_computer.cc",
+ "is_close.cc",
+ "iteration_callback.cc",
"iterative_refiner.cc",
"iterative_schur_complement_solver.cc",
- "lapack.cc",
"levenberg_marquardt_strategy.cc",
"line_search.cc",
"line_search_direction.cc",
"line_search_minimizer.cc",
+ "line_search_preprocessor.cc",
"linear_least_squares_problems.cc",
"linear_operator.cc",
- "line_search_preprocessor.cc",
"linear_solver.cc",
- "local_parameterization.cc",
"loss_function.cc",
"low_rank_inverse_hessian.cc",
+ "manifold.cc",
"minimizer.cc",
"normal_prior.cc",
- "parallel_for_cxx.cc",
- "parallel_for_openmp.cc",
+ "parallel_invoke.cc",
"parallel_utils.cc",
+ "parallel_vector_ops.cc",
"parameter_block_ordering.cc",
"partitioned_matrix_view.cc",
"polynomial.cc",
+ "power_series_expansion_preconditioner.cc",
"preconditioner.cc",
"preprocessor.cc",
"problem.cc",
@@ -116,7 +121,6 @@
"sparse_cholesky.cc",
"sparse_matrix.cc",
"sparse_normal_cholesky_solver.cc",
- "split.cc",
"stringprintf.cc",
"subset_preconditioner.cc",
"suitesparse.cc",
@@ -173,14 +177,17 @@
"include/ceres/internal/*.h",
]) +
- # This is an empty config, since the Bazel-based build does not
- # generate a config.h from config.h.in. This is fine, since Bazel
- # properly handles propagating -D defines to dependent targets.
+ # This is an empty config and export, since the
+ # Bazel-based build does not generate a
+ # config.h/export.h. This is fine, since Bazel properly
+ # handles propagating -D defines to dependent targets.
native.glob([
"config/ceres/internal/config.h",
+ "config/ceres/internal/export.h",
]),
copts = [
"-I" + internal,
+ "-Wunused-parameter",
"-Wno-sign-compare",
] + schur_eliminator_copts,
@@ -191,12 +198,15 @@
# part of a Skylark Ceres target macro.
# https://github.com/ceres-solver/ceres-solver/issues/396
defines = [
- "CERES_NO_SUITESPARSE",
- "CERES_NO_CXSPARSE",
+ "CERES_EXPORT=",
"CERES_NO_ACCELERATE_SPARSE",
+ "CERES_NO_CHOLMOD_PARTITION",
+ "CERES_NO_CUDA",
+ "CERES_NO_EIGEN_METIS",
+ "CERES_NO_EXPORT=",
"CERES_NO_LAPACK",
+ "CERES_NO_SUITESPARSE",
"CERES_USE_EIGEN_SPARSE",
- "CERES_USE_CXX_THREADS",
],
includes = [
"config",
diff --git a/cmake/AddCompileFlagsIfSupported.cmake b/cmake/AddCompileFlagsIfSupported.cmake
index 1af9ee8..d947fdf 100644
--- a/cmake/AddCompileFlagsIfSupported.cmake
+++ b/cmake/AddCompileFlagsIfSupported.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2017 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/cmake/AddGerritCommitHook.cmake b/cmake/AddGerritCommitHook.cmake
index 65b2fab..070158c 100644
--- a/cmake/AddGerritCommitHook.cmake
+++ b/cmake/AddGerritCommitHook.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/cmake/AppendTargetProperty.cmake b/cmake/AppendTargetProperty.cmake
deleted file mode 100644
index e0bc3a4..0000000
--- a/cmake/AppendTargetProperty.cmake
+++ /dev/null
@@ -1,61 +0,0 @@
-# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
-# http://ceres-solver.org/
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice,
-# this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright notice,
-# this list of conditions and the following disclaimer in the documentation
-# and/or other materials provided with the distribution.
-# * Neither the name of Google Inc. nor the names of its contributors may be
-# used to endorse or promote products derived from this software without
-# specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-#
-# Author: alexs.mac@gmail.com (Alex Stewart)
-
-# Append item(s) to a property on a declared CMake target:
-#
-# append_target_property(target property item_to_append1
-# [... item_to_appendN])
-#
-# The set_target_properties() CMake function will overwrite the contents of the
-# specified target property. This function instead appends to it, so can
-# be called multiple times with the same target & property to iteratively
-# populate it.
-function(append_target_property TARGET PROPERTY)
- if (NOT TARGET ${TARGET})
- message(FATAL_ERROR "Invalid target: ${TARGET} cannot append: ${ARGN} "
- "to property: ${PROPERTY}")
- endif()
- if (NOT PROPERTY)
- message(FATAL_ERROR "Invalid property to update for target: ${TARGET}")
- endif()
- # Get the initial state of the specified property for the target s/t
- # we can append to it (not overwrite it).
- get_target_property(INITIAL_PROPERTY_STATE ${TARGET} ${PROPERTY})
- if (NOT INITIAL_PROPERTY_STATE)
- # Ensure that if the state is unset, we do not insert the XXX-NOTFOUND
- # returned by CMake into the property.
- set(INITIAL_PROPERTY_STATE "")
- endif()
- # Delistify (remove ; separators) the potentially set of items to append
- # to the specified target property.
- string(REPLACE ";" " " ITEMS_TO_APPEND "${ARGN}")
- set_target_properties(${TARGET} PROPERTIES ${PROPERTY}
- "${INITIAL_PROPERTY_STATE} ${ITEMS_TO_APPEND}")
-endfunction()
diff --git a/cmake/CeresCompileOptionsToComponents.cmake b/cmake/CeresCompileOptionsToComponents.cmake
index 5be0fb2..64634d5 100644
--- a/cmake/CeresCompileOptionsToComponents.cmake
+++ b/cmake/CeresCompileOptionsToComponents.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2016 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -77,15 +77,9 @@
add_to_output_if_not_found(CURRENT_CERES_COMPILE_OPTIONS ${CERES_COMPONENTS_VAR}
CERES_NO_SUITESPARSE "SuiteSparse;SparseLinearAlgebraLibrary")
add_to_output_if_not_found(CURRENT_CERES_COMPILE_OPTIONS ${CERES_COMPONENTS_VAR}
- CERES_NO_CXSPARSE "CXSparse;SparseLinearAlgebraLibrary")
- add_to_output_if_not_found(CURRENT_CERES_COMPILE_OPTIONS ${CERES_COMPONENTS_VAR}
CERES_NO_ACCELERATE_SPARSE "AccelerateSparse;SparseLinearAlgebraLibrary")
add_to_output_if_not_found(CURRENT_CERES_COMPILE_OPTIONS ${CERES_COMPONENTS_VAR}
CERES_RESTRICT_SCHUR_SPECIALIZATION "SchurSpecializations")
- add_to_output_if_found(CURRENT_CERES_COMPILE_OPTIONS ${CERES_COMPONENTS_VAR}
- CERES_USE_OPENMP "OpenMP;Multithreading")
- add_to_output_if_found(CURRENT_CERES_COMPILE_OPTIONS ${CERES_COMPONENTS_VAR}
- CERES_USE_CXX_THREADS "Multithreading")
# Remove duplicates of SparseLinearAlgebraLibrary if multiple sparse backends
# are present.
list(REMOVE_DUPLICATES ${CERES_COMPONENTS_VAR})
diff --git a/cmake/CeresConfig.cmake.in b/cmake/CeresConfig.cmake.in
index e5e2976..ceb7e26 100644
--- a/cmake/CeresConfig.cmake.in
+++ b/cmake/CeresConfig.cmake.in
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2022 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -173,10 +173,14 @@
endif(CERES_WAS_INSTALLED)
# Set the version.
-set(CERES_VERSION @CERES_VERSION@ )
+set(CERES_VERSION @CERES_VERSION@)
include(CMakeFindDependencyMacro)
-find_dependency(Threads)
+# Optional dependencies
+@METIS_DEPENDENCY@
+@SuiteSparse_DEPENDENCY@
+@CUDAToolkit_DEPENDENCY@
+@Threads_DEPENDENCY@
# As imported CMake targets are not re-exported when a dependent target is
# exported, we must invoke find_package(XXX) here to reload the definition
@@ -187,30 +191,30 @@
# Eigen.
# Flag set during configuration and build of Ceres.
-set(CERES_EIGEN_VERSION @EIGEN3_VERSION_STRING@)
+set(CERES_EIGEN_VERSION @Eigen3_VERSION@)
# Search quietly to control the timing of the error message if not found. The
# search should be for an exact match, but for usability reasons do a soft
# match and reject with an explanation below.
find_package(Eigen3 ${CERES_EIGEN_VERSION} QUIET)
-if (EIGEN3_FOUND)
- if (NOT EIGEN3_VERSION_STRING VERSION_EQUAL CERES_EIGEN_VERSION)
+if (Eigen3_FOUND)
+ if (NOT Eigen3_VERSION VERSION_EQUAL CERES_EIGEN_VERSION)
# CMake's VERSION check in FIND_PACKAGE() will accept any version >= the
# specified version. However, only version = is supported. Improve
# usability by explaining why we don't accept non-exact version matching.
ceres_report_not_found("Found Eigen dependency, but the version of Eigen "
- "found (${EIGEN3_VERSION_STRING}) does not exactly match the version of Eigen "
+ "found (${Eigen3_VERSION}) does not exactly match the version of Eigen "
"Ceres was compiled with (${CERES_EIGEN_VERSION}). This can cause subtle "
"bugs by triggering violations of the One Definition Rule. See the "
"Wikipedia article http://en.wikipedia.org/wiki/One_Definition_Rule "
"for more details")
endif ()
ceres_message(STATUS "Found required Ceres dependency: "
- "Eigen version ${CERES_EIGEN_VERSION} in ${EIGEN3_INCLUDE_DIRS}")
-else (EIGEN3_FOUND)
+ "Eigen version ${CERES_EIGEN_VERSION} in ${Eigen3_DIR}")
+else (Eigen3_FOUND)
ceres_report_not_found("Missing required Ceres "
"dependency: Eigen version ${CERES_EIGEN_VERSION}, please set "
"Eigen3_DIR.")
-endif (EIGEN3_FOUND)
+endif (Eigen3_FOUND)
# glog (and maybe gflags).
#
diff --git a/cmake/CeresThreadingModels.cmake b/cmake/CeresThreadingModels.cmake
deleted file mode 100644
index 571dd7d..0000000
--- a/cmake/CeresThreadingModels.cmake
+++ /dev/null
@@ -1,82 +0,0 @@
-# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2018 Google Inc. All rights reserved.
-# http://ceres-solver.org/
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice,
-# this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright notice,
-# this list of conditions and the following disclaimer in the documentation
-# and/or other materials provided with the distribution.
-# * Neither the name of Google Inc. nor the names of its contributors may be
-# used to endorse or promote products derived from this software without
-# specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-#
-# Author: alexs.mac@gmail.com (Alex Stewart)
-
-# Ordered by expected preference.
-set(CERES_THREADING_MODELS "CXX_THREADS;OPENMP;NO_THREADS")
-
-function(find_available_ceres_threading_models CERES_THREADING_MODELS_AVAILABLE_VAR)
- set(CERES_THREADING_MODELS_AVAILABLE ${CERES_THREADING_MODELS})
- # Remove any threading models for which the dependencies are not available.
- find_package(OpenMP QUIET)
- if (NOT OPENMP_FOUND)
- list(REMOVE_ITEM CERES_THREADING_MODELS_AVAILABLE "OPENMP")
- endif()
- if (NOT CERES_THREADING_MODELS_AVAILABLE)
- # At least NO_THREADS should never be removed. This check is purely
- # protective against future threading model updates.
- message(FATAL_ERROR "Ceres bug: Removed all threading models.")
- endif()
- set(${CERES_THREADING_MODELS_AVAILABLE_VAR}
- ${CERES_THREADING_MODELS_AVAILABLE} PARENT_SCOPE)
-endfunction()
-
-macro(set_ceres_threading_model_to_cxx11_threads)
- list(APPEND CERES_COMPILE_OPTIONS CERES_USE_CXX_THREADS)
-endmacro()
-
-macro(set_ceres_threading_model_to_openmp)
- find_package(OpenMP REQUIRED)
- list(APPEND CERES_COMPILE_OPTIONS CERES_USE_OPENMP)
- set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
- set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
-endmacro()
-
-macro(set_ceres_threading_model_to_no_threads)
- list(APPEND CERES_COMPILE_OPTIONS CERES_NO_THREADS)
-endmacro()
-
-macro(set_ceres_threading_model CERES_THREADING_MODEL_TO_SET)
- if ("${CERES_THREADING_MODEL_TO_SET}" STREQUAL "CXX_THREADS")
- set_ceres_threading_model_to_cxx11_threads()
- elseif ("${CERES_THREADING_MODEL_TO_SET}" STREQUAL "OPENMP")
- set_ceres_threading_model_to_openmp()
- elseif ("${CERES_THREADING_MODEL_TO_SET}" STREQUAL "NO_THREADS")
- set_ceres_threading_model_to_no_threads()
- else()
- include(PrettyPrintCMakeList)
- find_available_ceres_threading_models(_AVAILABLE_THREADING_MODELS)
- pretty_print_cmake_list(
- _AVAILABLE_THREADING_MODELS ${_AVAILABLE_THREADING_MODELS})
- message(FATAL_ERROR "Unknown threading model specified: "
- "'${CERES_THREADING_MODEL_TO_SET}'. Available threading models for "
- "this platform are: ${_AVAILABLE_THREADING_MODELS}")
- endif()
- message("-- Using Ceres threading model: ${CERES_THREADING_MODEL_TO_SET}")
-endmacro()
diff --git a/cmake/CheckIfUnderscorePrefixedBesselFunctionsExist.cmake b/cmake/CheckIfUnderscorePrefixedBesselFunctionsExist.cmake
deleted file mode 100644
index a05721c..0000000
--- a/cmake/CheckIfUnderscorePrefixedBesselFunctionsExist.cmake
+++ /dev/null
@@ -1,54 +0,0 @@
-# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2017 Google Inc. All rights reserved.
-# http://ceres-solver.org/
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice,
-# this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright notice,
-# this list of conditions and the following disclaimer in the documentation
-# and/or other materials provided with the distribution.
-# * Neither the name of Google Inc. nor the names of its contributors may be
-# used to endorse or promote products derived from this software without
-# specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-#
-# Author: alexs.mac@gmail.com (Alex Stewart)
-
-# Microsoft deprecated the POSIX Bessel functions: j[0,1,n]() in favour
-# of _j[0,1,n](), it appears since at least MSVC 2005 [1]. This function
-# checks if the underscore prefixed versions of the Bessel functions are
-# defined, and sets ${HAVE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS_VAR} to
-# TRUE if they do.
-#
-# [1] https://msdn.microsoft.com/en-us/library/ms235384(v=vs.100).aspx
-function(check_if_underscore_prefixed_bessel_functions_exist
- HAVE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS_VAR)
- include(CheckCXXSourceCompiles)
- check_cxx_source_compiles(
- "#include <math.h>
- int main(int argc, char * argv[]) {
- double result;
- result = _j0(1.2345);
- result = _j1(1.2345);
- result = _jn(2, 1.2345);
- return 0;
- }"
- HAVE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS)
- set(${HAVE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS_VAR}
- ${HAVE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS}
- PARENT_SCOPE)
-endfunction()
diff --git a/cmake/CreateCeresConfig.cmake b/cmake/CreateCeresConfig.cmake
index 89db68c..f0037cc 100644
--- a/cmake/CreateCeresConfig.cmake
+++ b/cmake/CreateCeresConfig.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/cmake/DetectBrokenStackCheckMacOSXcodePairing.cmake b/cmake/DetectBrokenStackCheckMacOSXcodePairing.cmake
index 151e28c..f333ed9 100644
--- a/cmake/DetectBrokenStackCheckMacOSXcodePairing.cmake
+++ b/cmake/DetectBrokenStackCheckMacOSXcodePairing.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2019 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/cmake/EnableSanitizer.cmake b/cmake/EnableSanitizer.cmake
index 1ef68c3..9a8d484 100644
--- a/cmake/EnableSanitizer.cmake
+++ b/cmake/EnableSanitizer.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2019 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/cmake/FindAccelerateSparse.cmake b/cmake/FindAccelerateSparse.cmake
index f2f4340..3a2e431 100644
--- a/cmake/FindAccelerateSparse.cmake
+++ b/cmake/FindAccelerateSparse.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2018 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/cmake/FindCXSparse.cmake b/cmake/FindCXSparse.cmake
deleted file mode 100644
index 8b380c9..0000000
--- a/cmake/FindCXSparse.cmake
+++ /dev/null
@@ -1,261 +0,0 @@
-# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
-# http://ceres-solver.org/
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice,
-# this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright notice,
-# this list of conditions and the following disclaimer in the documentation
-# and/or other materials provided with the distribution.
-# * Neither the name of Google Inc. nor the names of its contributors may be
-# used to endorse or promote products derived from this software without
-# specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-#
-# Author: alexs.mac@gmail.com (Alex Stewart)
-#
-
-# FindCXSparse.cmake - Find CXSparse libraries & dependencies.
-#
-# This module defines the following variables which should be referenced
-# by the caller to use the library.
-#
-# CXSPARSE_FOUND: TRUE iff CXSparse and all dependencies have been found.
-# CXSPARSE_INCLUDE_DIRS: Include directories for CXSparse.
-# CXSPARSE_LIBRARIES: Libraries for CXSparse and all dependencies.
-#
-# CXSPARSE_VERSION: Extracted from cs.h.
-# CXSPARSE_MAIN_VERSION: Equal to 3 if CXSPARSE_VERSION = 3.1.2
-# CXSPARSE_SUB_VERSION: Equal to 1 if CXSPARSE_VERSION = 3.1.2
-# CXSPARSE_SUBSUB_VERSION: Equal to 2 if CXSPARSE_VERSION = 3.1.2
-#
-# The following variables control the behaviour of this module:
-#
-# CXSPARSE_INCLUDE_DIR_HINTS: List of additional directories in which to
-# search for CXSparse includes,
-# e.g: /timbuktu/include.
-# CXSPARSE_LIBRARY_DIR_HINTS: List of additional directories in which to
-# search for CXSparse libraries, e.g: /timbuktu/lib.
-#
-# The following variables are also defined by this module, but in line with
-# CMake recommended FindPackage() module style should NOT be referenced directly
-# by callers (use the plural variables detailed above instead). These variables
-# do however affect the behaviour of the module via FIND_[PATH/LIBRARY]() which
-# are NOT re-called (i.e. search for library is not repeated) if these variables
-# are set with valid values _in the CMake cache_. This means that if these
-# variables are set directly in the cache, either by the user in the CMake GUI,
-# or by the user passing -DVAR=VALUE directives to CMake when called (which
-# explicitly defines a cache variable), then they will be used verbatim,
-# bypassing the HINTS variables and other hard-coded search locations.
-#
-# CXSPARSE_INCLUDE_DIR: Include directory for CXSparse, not including the
-# include directory of any dependencies.
-# CXSPARSE_LIBRARY: CXSparse library, not including the libraries of any
-# dependencies.
-
-# Reset CALLERS_CMAKE_FIND_LIBRARY_PREFIXES to its value when
-# FindCXSparse was invoked.
-macro(CXSPARSE_RESET_FIND_LIBRARY_PREFIX)
- if (MSVC)
- set(CMAKE_FIND_LIBRARY_PREFIXES "${CALLERS_CMAKE_FIND_LIBRARY_PREFIXES}")
- endif (MSVC)
-endmacro(CXSPARSE_RESET_FIND_LIBRARY_PREFIX)
-
-# Called if we failed to find CXSparse or any of it's required dependencies,
-# unsets all public (designed to be used externally) variables and reports
-# error message at priority depending upon [REQUIRED/QUIET/<NONE>] argument.
-macro(CXSPARSE_REPORT_NOT_FOUND REASON_MSG)
- unset(CXSPARSE_FOUND)
- unset(CXSPARSE_INCLUDE_DIRS)
- unset(CXSPARSE_LIBRARIES)
- # Make results of search visible in the CMake GUI if CXSparse has not
- # been found so that user does not have to toggle to advanced view.
- mark_as_advanced(CLEAR CXSPARSE_INCLUDE_DIR
- CXSPARSE_LIBRARY)
-
- cxsparse_reset_find_library_prefix()
-
- # Note <package>_FIND_[REQUIRED/QUIETLY] variables defined by FindPackage()
- # use the camelcase library name, not uppercase.
- if (CXSparse_FIND_QUIETLY)
- message(STATUS "Failed to find CXSparse - " ${REASON_MSG} ${ARGN})
- elseif (CXSparse_FIND_REQUIRED)
- message(FATAL_ERROR "Failed to find CXSparse - " ${REASON_MSG} ${ARGN})
- else()
- # Neither QUIETLY nor REQUIRED, use no priority which emits a message
- # but continues configuration and allows generation.
- message("-- Failed to find CXSparse - " ${REASON_MSG} ${ARGN})
- endif ()
- return()
-endmacro(CXSPARSE_REPORT_NOT_FOUND)
-
-# Protect against any alternative find_package scripts for this library having
-# been called previously (in a client project) which set CXSPARSE_FOUND, but not
-# the other variables we require / set here which could cause the search logic
-# here to fail.
-unset(CXSPARSE_FOUND)
-
-# Handle possible presence of lib prefix for libraries on MSVC, see
-# also CXSPARSE_RESET_FIND_LIBRARY_PREFIX().
-if (MSVC)
- # Preserve the caller's original values for CMAKE_FIND_LIBRARY_PREFIXES
- # s/t we can set it back before returning.
- set(CALLERS_CMAKE_FIND_LIBRARY_PREFIXES "${CMAKE_FIND_LIBRARY_PREFIXES}")
- # The empty string in this list is important, it represents the case when
- # the libraries have no prefix (shared libraries / DLLs).
- set(CMAKE_FIND_LIBRARY_PREFIXES "lib" "" "${CMAKE_FIND_LIBRARY_PREFIXES}")
-endif (MSVC)
-
-# On macOS, add the Homebrew prefix (with appropriate suffixes) to the
-# respective HINTS directories (after any user-specified locations). This
-# handles Homebrew installations into non-standard locations (not /usr/local).
-# We do not use CMAKE_PREFIX_PATH for this as given the search ordering of
-# find_xxx(), doing so would override any user-specified HINTS locations with
-# the Homebrew version if it exists.
-if (CMAKE_SYSTEM_NAME MATCHES "Darwin")
- find_program(HOMEBREW_EXECUTABLE brew)
- mark_as_advanced(FORCE HOMEBREW_EXECUTABLE)
- if (HOMEBREW_EXECUTABLE)
- # Detected a Homebrew install, query for its install prefix.
- execute_process(COMMAND ${HOMEBREW_EXECUTABLE} --prefix
- OUTPUT_VARIABLE HOMEBREW_INSTALL_PREFIX
- OUTPUT_STRIP_TRAILING_WHITESPACE)
- message(STATUS "Detected Homebrew with install prefix: "
- "${HOMEBREW_INSTALL_PREFIX}, adding to CMake search paths.")
- list(APPEND CXSPARSE_INCLUDE_DIR_HINTS "${HOMEBREW_INSTALL_PREFIX}/include")
- list(APPEND CXSPARSE_LIBRARY_DIR_HINTS "${HOMEBREW_INSTALL_PREFIX}/lib")
- endif()
-endif()
-
-# Search user-installed locations first, so that we prefer user installs
-# to system installs where both exist.
-#
-# TODO: Add standard Windows search locations for CXSparse.
-list(APPEND CXSPARSE_CHECK_INCLUDE_DIRS
- /usr/local/include
- /usr/local/homebrew/include # Mac OS X
- /opt/local/var/macports/software # Mac OS X.
- /opt/local/include
- /usr/include)
-list(APPEND CXSPARSE_CHECK_LIBRARY_DIRS
- /usr/local/lib
- /usr/local/homebrew/lib # Mac OS X.
- /opt/local/lib
- /usr/lib)
-# Additional suffixes to try appending to each search path.
-list(APPEND CXSPARSE_CHECK_PATH_SUFFIXES
- suitesparse) # Linux/Windows
-
-# Search supplied hint directories first if supplied.
-find_path(CXSPARSE_INCLUDE_DIR
- NAMES cs.h
- HINTS ${CXSPARSE_INCLUDE_DIR_HINTS}
- PATHS ${CXSPARSE_CHECK_INCLUDE_DIRS}
- PATH_SUFFIXES ${CXSPARSE_CHECK_PATH_SUFFIXES})
-if (NOT CXSPARSE_INCLUDE_DIR OR
- NOT EXISTS ${CXSPARSE_INCLUDE_DIR})
- cxsparse_report_not_found(
- "Could not find CXSparse include directory, set CXSPARSE_INCLUDE_DIR "
- "to directory containing cs.h")
-endif (NOT CXSPARSE_INCLUDE_DIR OR
- NOT EXISTS ${CXSPARSE_INCLUDE_DIR})
-
-find_library(CXSPARSE_LIBRARY NAMES cxsparse
- HINTS ${CXSPARSE_LIBRARY_DIR_HINTS}
- PATHS ${CXSPARSE_CHECK_LIBRARY_DIRS}
- PATH_SUFFIXES ${CXSPARSE_CHECK_PATH_SUFFIXES})
-if (NOT CXSPARSE_LIBRARY OR
- NOT EXISTS ${CXSPARSE_LIBRARY})
- cxsparse_report_not_found(
- "Could not find CXSparse library, set CXSPARSE_LIBRARY "
- "to full path to libcxsparse.")
-endif (NOT CXSPARSE_LIBRARY OR
- NOT EXISTS ${CXSPARSE_LIBRARY})
-
-# Mark internally as found, then verify. CXSPARSE_REPORT_NOT_FOUND() unsets
-# if called.
-set(CXSPARSE_FOUND TRUE)
-
-# Extract CXSparse version from cs.h
-if (CXSPARSE_INCLUDE_DIR)
- set(CXSPARSE_VERSION_FILE ${CXSPARSE_INCLUDE_DIR}/cs.h)
- if (NOT EXISTS ${CXSPARSE_VERSION_FILE})
- cxsparse_report_not_found(
- "Could not find file: ${CXSPARSE_VERSION_FILE} "
- "containing version information in CXSparse install located at: "
- "${CXSPARSE_INCLUDE_DIR}.")
- else (NOT EXISTS ${CXSPARSE_VERSION_FILE})
- file(READ ${CXSPARSE_INCLUDE_DIR}/cs.h CXSPARSE_VERSION_FILE_CONTENTS)
-
- string(REGEX MATCH "#define CS_VER [0-9]+"
- CXSPARSE_MAIN_VERSION "${CXSPARSE_VERSION_FILE_CONTENTS}")
- string(REGEX REPLACE "#define CS_VER ([0-9]+)" "\\1"
- CXSPARSE_MAIN_VERSION "${CXSPARSE_MAIN_VERSION}")
-
- string(REGEX MATCH "#define CS_SUBVER [0-9]+"
- CXSPARSE_SUB_VERSION "${CXSPARSE_VERSION_FILE_CONTENTS}")
- string(REGEX REPLACE "#define CS_SUBVER ([0-9]+)" "\\1"
- CXSPARSE_SUB_VERSION "${CXSPARSE_SUB_VERSION}")
-
- string(REGEX MATCH "#define CS_SUBSUB [0-9]+"
- CXSPARSE_SUBSUB_VERSION "${CXSPARSE_VERSION_FILE_CONTENTS}")
- string(REGEX REPLACE "#define CS_SUBSUB ([0-9]+)" "\\1"
- CXSPARSE_SUBSUB_VERSION "${CXSPARSE_SUBSUB_VERSION}")
-
- # This is on a single line s/t CMake does not interpret it as a list of
- # elements and insert ';' separators which would result in 3.;1.;2 nonsense.
- set(CXSPARSE_VERSION "${CXSPARSE_MAIN_VERSION}.${CXSPARSE_SUB_VERSION}.${CXSPARSE_SUBSUB_VERSION}")
- endif (NOT EXISTS ${CXSPARSE_VERSION_FILE})
-endif (CXSPARSE_INCLUDE_DIR)
-
-# Catch the case when the caller has set CXSPARSE_LIBRARY in the cache / GUI and
-# thus FIND_LIBRARY was not called, but specified library is invalid, otherwise
-# we would report CXSparse as found.
-# TODO: This regex for CXSparse library is pretty primitive, we use lowercase
-# for comparison to handle Windows using CamelCase library names, could
-# this check be better?
-string(TOLOWER "${CXSPARSE_LIBRARY}" LOWERCASE_CXSPARSE_LIBRARY)
-if (CXSPARSE_LIBRARY AND
- EXISTS ${CXSPARSE_LIBRARY} AND
- NOT "${LOWERCASE_CXSPARSE_LIBRARY}" MATCHES ".*cxsparse[^/]*")
- cxsparse_report_not_found(
- "Caller defined CXSPARSE_LIBRARY: "
- "${CXSPARSE_LIBRARY} does not match CXSparse.")
-endif (CXSPARSE_LIBRARY AND
- EXISTS ${CXSPARSE_LIBRARY} AND
- NOT "${LOWERCASE_CXSPARSE_LIBRARY}" MATCHES ".*cxsparse[^/]*")
-
-# Set standard CMake FindPackage variables if found.
-if (CXSPARSE_FOUND)
- set(CXSPARSE_INCLUDE_DIRS ${CXSPARSE_INCLUDE_DIR})
- set(CXSPARSE_LIBRARIES ${CXSPARSE_LIBRARY})
-endif (CXSPARSE_FOUND)
-
-cxsparse_reset_find_library_prefix()
-
-# Handle REQUIRED / QUIET optional arguments and version.
-include(FindPackageHandleStandardArgs)
-find_package_handle_standard_args(CXSparse
- REQUIRED_VARS CXSPARSE_INCLUDE_DIRS CXSPARSE_LIBRARIES
- VERSION_VAR CXSPARSE_VERSION)
-
-# Only mark internal variables as advanced if we found CXSparse, otherwise
-# leave them visible in the standard GUI for the user to set manually.
-if (CXSPARSE_FOUND)
- mark_as_advanced(FORCE CXSPARSE_INCLUDE_DIR
- CXSPARSE_LIBRARY)
-endif (CXSPARSE_FOUND)
diff --git a/cmake/FindGlog.cmake b/cmake/FindGlog.cmake
index 1a7b6c0..2ef6914 100644
--- a/cmake/FindGlog.cmake
+++ b/cmake/FindGlog.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -345,6 +345,11 @@
endif (GLOG_LIBRARY AND
NOT "${LOWERCASE_GLOG_LIBRARY}" MATCHES ".*glog[^/]*")
+ # add glog::glog target
+ add_library(glog::glog INTERFACE IMPORTED)
+ target_include_directories(glog::glog INTERFACE ${GLOG_INCLUDE_DIRS})
+ target_link_libraries(glog::glog INTERFACE ${GLOG_LIBRARY})
+
glog_reset_find_library_prefix()
endif(NOT GLOG_FOUND)
diff --git a/cmake/FindMETIS.cmake b/cmake/FindMETIS.cmake
new file mode 100644
index 0000000..5f41792
--- /dev/null
+++ b/cmake/FindMETIS.cmake
@@ -0,0 +1,110 @@
+#
+# Copyright (c) 2022 Sergiu Deitsch
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in all
+# copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+#
+#[=======================================================================[.rst:
+Module for locating METIS
+=========================
+
+Read-only variables:
+
+``METIS_FOUND``
+ Indicates whether the library has been found.
+
+``METIS_VERSION``
+ Indicates library version.
+
+Targets
+-------
+
+``METIS::METIS``
+  Specifies the target that should be passed to target_link_libraries.
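+
+A minimal usage sketch; the consuming target ``myapp`` is hypothetical and only
+serves to illustrate linking against the imported target:
+
+.. code-block:: cmake
+
+  find_package (METIS)
+  if (METIS_FOUND)
+    target_link_libraries (myapp PRIVATE METIS::METIS)
+  endif (METIS_FOUND)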
+]=======================================================================]
+
+include (FindPackageHandleStandardArgs)
+
+find_path (METIS_INCLUDE_DIR NAMES metis.h
+ PATH_SUFFIXES include
+ DOC "METIS include directory")
+find_library (METIS_LIBRARY_DEBUG NAMES metis
+ PATH_SUFFIXES Debug
+ DOC "METIS debug library")
+find_library (METIS_LIBRARY_RELEASE NAMES metis
+ PATH_SUFFIXES Release
+ DOC "METIS release library")
+
+if (METIS_LIBRARY_RELEASE)
+ if (METIS_LIBRARY_DEBUG)
+ set (METIS_LIBRARY debug ${METIS_LIBRARY_DEBUG} optimized
+ ${METIS_LIBRARY_RELEASE} CACHE STRING "METIS library")
+ else (METIS_LIBRARY_DEBUG)
+ set (METIS_LIBRARY ${METIS_LIBRARY_RELEASE} CACHE FILEPATH "METIS library")
+ endif (METIS_LIBRARY_DEBUG)
+elseif (METIS_LIBRARY_DEBUG)
+ set (METIS_LIBRARY ${METIS_LIBRARY_DEBUG} CACHE FILEPATH "METIS library")
+endif (METIS_LIBRARY_RELEASE)
+
+set (_METIS_VERSION_HEADER ${METIS_INCLUDE_DIR}/metis.h)
+
+if (EXISTS ${_METIS_VERSION_HEADER})
+ file (READ ${_METIS_VERSION_HEADER} _METIS_VERSION_CONTENTS)
+
+ string (REGEX REPLACE ".*#define METIS_VER_MAJOR[ \t]+([0-9]+).*" "\\1"
+ METIS_VERSION_MAJOR "${_METIS_VERSION_CONTENTS}")
+ string (REGEX REPLACE ".*#define METIS_VER_MINOR[ \t]+([0-9]+).*" "\\1"
+ METIS_VERSION_MINOR "${_METIS_VERSION_CONTENTS}")
+ string (REGEX REPLACE ".*#define METIS_VER_SUBMINOR[ \t]+([0-9]+).*" "\\1"
+ METIS_VERSION_PATCH "${_METIS_VERSION_CONTENTS}")
+
+ set (METIS_VERSION
+ ${METIS_VERSION_MAJOR}.${METIS_VERSION_MINOR}.${METIS_VERSION_PATCH})
+ set (METIS_VERSION_COMPONENTS 3)
+endif (EXISTS ${_METIS_VERSION_HEADER})
+
+mark_as_advanced (METIS_INCLUDE_DIR METIS_LIBRARY_DEBUG METIS_LIBRARY_RELEASE
+ METIS_LIBRARY)
+
+if (NOT TARGET METIS::METIS)
+ if (METIS_INCLUDE_DIR OR METIS_LIBRARY)
+ add_library (METIS::METIS IMPORTED UNKNOWN)
+ endif (METIS_INCLUDE_DIR OR METIS_LIBRARY)
+endif (NOT TARGET METIS::METIS)
+
+if (METIS_INCLUDE_DIR)
+ set_property (TARGET METIS::METIS PROPERTY INTERFACE_INCLUDE_DIRECTORIES
+ ${METIS_INCLUDE_DIR})
+endif (METIS_INCLUDE_DIR)
+
+if (METIS_LIBRARY_RELEASE)
+ set_property (TARGET METIS::METIS PROPERTY IMPORTED_LOCATION_RELEASE
+ ${METIS_LIBRARY_RELEASE})
+ set_property (TARGET METIS::METIS APPEND PROPERTY IMPORTED_CONFIGURATIONS
+ RELEASE)
+endif (METIS_LIBRARY_RELEASE)
+
+if (METIS_LIBRARY_DEBUG)
+ set_property (TARGET METIS::METIS PROPERTY IMPORTED_LOCATION_DEBUG
+ ${METIS_LIBRARY_DEBUG})
+ set_property (TARGET METIS::METIS APPEND PROPERTY IMPORTED_CONFIGURATIONS
+ DEBUG)
+endif (METIS_LIBRARY_DEBUG)
+
+find_package_handle_standard_args (METIS REQUIRED_VARS
+ METIS_INCLUDE_DIR METIS_LIBRARY VERSION_VAR METIS_VERSION)
diff --git a/cmake/FindSphinx.cmake b/cmake/FindSphinx.cmake
index 220108d..d1488eb 100644
--- a/cmake/FindSphinx.cmake
+++ b/cmake/FindSphinx.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -29,38 +29,96 @@
# Author: pablo.speciale@gmail.com (Pablo Speciale)
#
-# Find the Sphinx documentation generator
-#
-# This modules defines
-# SPHINX_EXECUTABLE
-# SPHINX_FOUND
+#[=======================================================================[.rst:
+FindSphinx
+==========
-find_program(SPHINX_EXECUTABLE
- NAMES sphinx-build
- PATHS
- /usr/bin
- /usr/local/bin
- /opt/local/bin
- DOC "Sphinx documentation generator")
+Module for locating Sphinx and its components.
-if (NOT SPHINX_EXECUTABLE)
- set(_Python_VERSIONS 2.7 2.6 2.5 2.4 2.3 2.2 2.1 2.0 1.6 1.5)
+This module defines the following variables:
- foreach (_version ${_Python_VERSIONS})
- set(_sphinx_NAMES sphinx-build-${_version})
+``Sphinx_FOUND``
+ ``TRUE`` iff Sphinx and all of its components have been found.
- find_program(SPHINX_EXECUTABLE
- NAMES ${_sphinx_NAMES}
- PATHS
- /usr/bin
- /usr/local/bin
- /opt/local/bin
- DOC "Sphinx documentation generator")
- endforeach ()
-endif ()
+``Sphinx_BUILD_EXECUTABLE``
+ Path to the ``sphinx-build`` tool.
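+
+A minimal usage sketch; the theme component ``sphinx_rtd_theme`` is only an
+example and assumes the corresponding Python package is installed:
+
+.. code-block:: cmake
+
+  find_package (Sphinx COMPONENTS sphinx_rtd_theme)
+  if (Sphinx_FOUND)
+    message (STATUS "Using sphinx-build: ${Sphinx_BUILD_EXECUTABLE}")
+  endif (Sphinx_FOUND)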
+]=======================================================================]
-include(FindPackageHandleStandardArgs)
+include (FindPackageHandleStandardArgs)
-find_package_handle_standard_args(Sphinx DEFAULT_MSG SPHINX_EXECUTABLE)
+find_program (Sphinx_BUILD_EXECUTABLE
+ NAMES sphinx-build
+ PATHS /opt/local/bin
+ DOC "Sphinx documentation generator"
+)
-mark_as_advanced(SPHINX_EXECUTABLE)
+mark_as_advanced (Sphinx_BUILD_EXECUTABLE)
+
+if (Sphinx_BUILD_EXECUTABLE)
+ execute_process (
+ COMMAND ${Sphinx_BUILD_EXECUTABLE} --version
+ ERROR_STRIP_TRAILING_WHITESPACE
+ ERROR_VARIABLE _Sphinx_BUILD_ERROR
+ OUTPUT_STRIP_TRAILING_WHITESPACE
+ OUTPUT_VARIABLE _Sphinx_VERSION_STRING
+ RESULT_VARIABLE _Sphinx_BUILD_RESULT
+ )
+
+ if (_Sphinx_BUILD_RESULT EQUAL 0)
+ string (REGEX REPLACE "^sphinx-build[ \t]+([^ \t]+)$" "\\1" Sphinx_VERSION
+ "${_Sphinx_VERSION_STRING}")
+
+    if (Sphinx_VERSION MATCHES "([0-9]+)\\.([0-9]+)\\.([0-9]+)")
+      set (Sphinx_VERSION_COMPONENTS 3)
+      set (Sphinx_VERSION_MAJOR ${CMAKE_MATCH_1})
+      set (Sphinx_VERSION_MINOR ${CMAKE_MATCH_2})
+      set (Sphinx_VERSION_PATCH ${CMAKE_MATCH_3})
+    endif (Sphinx_VERSION MATCHES "([0-9]+)\\.([0-9]+)\\.([0-9]+)")
+ else (_Sphinx_BUILD_RESULT EQUAL 0)
+ message (WARNING "Could not determine sphinx-build version: ${_Sphinx_BUILD_ERROR}")
+ endif (_Sphinx_BUILD_RESULT EQUAL 0)
+
+ unset (_Sphinx_BUILD_ERROR)
+ unset (_Sphinx_BUILD_RESULT)
+ unset (_Sphinx_VERSION_STRING)
+
+ find_package (Python COMPONENTS Interpreter)
+ set (_Sphinx_BUILD_RESULT FALSE)
+
+ if (Python_Interpreter_FOUND)
+ # Check for Sphinx theme dependency for documentation
+ foreach (component IN LISTS Sphinx_FIND_COMPONENTS)
+ string (REGEX MATCH "^(.+_theme)$" theme_component "${component}")
+
+ if (NOT theme_component STREQUAL component)
+ continue ()
+ endif (NOT theme_component STREQUAL component)
+
+ execute_process (
+ COMMAND ${Python_EXECUTABLE} -c "import ${theme_component}"
+ ERROR_STRIP_TRAILING_WHITESPACE
+ ERROR_VARIABLE _Sphinx_BUILD_ERROR
+ OUTPUT_QUIET
+ RESULT_VARIABLE _Sphinx_BUILD_RESULT
+ )
+
+ if (_Sphinx_BUILD_RESULT EQUAL 0)
+ set (Sphinx_${component}_FOUND TRUE)
+      else (_Sphinx_BUILD_RESULT EQUAL 0)
+ message (WARNING "Could not determine whether Sphinx component '${theme_component}' is available: ${_Sphinx_BUILD_ERROR}")
+ set (Sphinx_${component}_FOUND FALSE)
+ endif (_Sphinx_BUILD_RESULT EQUAL 0)
+
+ unset (_Sphinx_BUILD_ERROR)
+ unset (_Sphinx_BUILD_RESULT)
+ endforeach (component)
+
+ unset (theme_component)
+ endif (Python_Interpreter_FOUND)
+endif (Sphinx_BUILD_EXECUTABLE)
+
+find_package_handle_standard_args (Sphinx
+ REQUIRED_VARS Sphinx_BUILD_EXECUTABLE
+ VERSION_VAR Sphinx_VERSION
+ HANDLE_COMPONENTS
+)
diff --git a/cmake/FindSuiteSparse.cmake b/cmake/FindSuiteSparse.cmake
index aad8904..49c089c 100644
--- a/cmake/FindSuiteSparse.cmake
+++ b/cmake/FindSuiteSparse.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -29,103 +29,146 @@
# Author: alexs.mac@gmail.com (Alex Stewart)
#
-# FindSuiteSparse.cmake - Find SuiteSparse libraries & dependencies.
-#
-# This module defines the following variables:
-#
-# SUITESPARSE_FOUND: TRUE iff SuiteSparse and all dependencies have been found.
-# SUITESPARSE_INCLUDE_DIRS: Include directories for all SuiteSparse components.
-# SUITESPARSE_LIBRARIES: Libraries for all SuiteSparse component libraries and
-# dependencies.
-# SUITESPARSE_VERSION: Extracted from UFconfig.h (<= v3) or
-# SuiteSparse_config.h (>= v4).
-# SUITESPARSE_MAIN_VERSION: Equal to 4 if SUITESPARSE_VERSION = 4.2.1
-# SUITESPARSE_SUB_VERSION: Equal to 2 if SUITESPARSE_VERSION = 4.2.1
-# SUITESPARSE_SUBSUB_VERSION: Equal to 1 if SUITESPARSE_VERSION = 4.2.1
-#
-# SUITESPARSE_IS_BROKEN_SHARED_LINKING_UBUNTU_SYSTEM_VERSION: TRUE iff running
-# on Ubuntu, SUITESPARSE_VERSION is 3.4.0 and found SuiteSparse is a system
-# install, in which case found version of SuiteSparse cannot be used to link
-# a shared library due to a bug (static linking is unaffected).
-#
-# The following variables control the behaviour of this module:
-#
-# SUITESPARSE_INCLUDE_DIR_HINTS: List of additional directories in which to
-# search for SuiteSparse includes,
-# e.g: /timbuktu/include.
-# SUITESPARSE_LIBRARY_DIR_HINTS: List of additional directories in which to
-# search for SuiteSparse libraries,
-# e.g: /timbuktu/lib.
-#
-# The following variables define the presence / includes & libraries for the
-# SuiteSparse components searched for, the SUITESPARSE_XX variables are the
-# union of the variables for all components.
-#
-# == Symmetric Approximate Minimum Degree (AMD)
-# AMD_FOUND
-# AMD_INCLUDE_DIR
-# AMD_LIBRARY
-#
-# == Constrained Approximate Minimum Degree (CAMD)
-# CAMD_FOUND
-# CAMD_INCLUDE_DIR
-# CAMD_LIBRARY
-#
-# == Column Approximate Minimum Degree (COLAMD)
-# COLAMD_FOUND
-# COLAMD_INCLUDE_DIR
-# COLAMD_LIBRARY
-#
-# Constrained Column Approximate Minimum Degree (CCOLAMD)
-# CCOLAMD_FOUND
-# CCOLAMD_INCLUDE_DIR
-# CCOLAMD_LIBRARY
-#
-# == Sparse Supernodal Cholesky Factorization and Update/Downdate (CHOLMOD)
-# CHOLMOD_FOUND
-# CHOLMOD_INCLUDE_DIR
-# CHOLMOD_LIBRARY
-#
-# == Multifrontal Sparse QR (SuiteSparseQR)
-# SUITESPARSEQR_FOUND
-# SUITESPARSEQR_INCLUDE_DIR
-# SUITESPARSEQR_LIBRARY
-#
-# == Common configuration for all but CSparse (SuiteSparse version >= 4).
-# SUITESPARSE_CONFIG_FOUND
-# SUITESPARSE_CONFIG_INCLUDE_DIR
-# SUITESPARSE_CONFIG_LIBRARY
-#
-# == Common configuration for all but CSparse (SuiteSparse version < 4).
-# UFCONFIG_FOUND
-# UFCONFIG_INCLUDE_DIR
-#
-# Optional SuiteSparse Dependencies:
-#
-# == Serial Graph Partitioning and Fill-reducing Matrix Ordering (METIS)
-# METIS_FOUND
-# METIS_LIBRARY
+#[=======================================================================[.rst:
+FindSuiteSparse
+===============
+
+Module for locating SuiteSparse and its dependencies.
+
+This module defines the following variables:
+
+``SuiteSparse_FOUND``
+ ``TRUE`` iff SuiteSparse and all dependencies have been found.
+
+``SuiteSparse_VERSION``
+ Extracted from ``SuiteSparse_config.h`` (>= v4).
+
+``SuiteSparse_VERSION_MAJOR``
+ Equal to 4 if ``SuiteSparse_VERSION`` = 4.2.1
+
+``SuiteSparse_VERSION_MINOR``
+ Equal to 2 if ``SuiteSparse_VERSION`` = 4.2.1
+
+``SuiteSparse_VERSION_PATCH``
+ Equal to 1 if ``SuiteSparse_VERSION`` = 4.2.1
+
+The following variables control the behaviour of this module:
+
+``SuiteSparse_NO_CMAKE``
+ Do not attempt to use the native SuiteSparse CMake package configuration.
+
+
+Targets
+-------
+
+The following targets define the SuiteSparse components searched for.
+
+``SuiteSparse::AMD``
+ Symmetric Approximate Minimum Degree (AMD)
+
+``SuiteSparse::CAMD``
+ Constrained Approximate Minimum Degree (CAMD)
+
+``SuiteSparse::COLAMD``
+ Column Approximate Minimum Degree (COLAMD)
+
+``SuiteSparse::CCOLAMD``
+ Constrained Column Approximate Minimum Degree (CCOLAMD)
+
+``SuiteSparse::CHOLMOD``
+ Sparse Supernodal Cholesky Factorization and Update/Downdate (CHOLMOD)
+
+``SuiteSparse::Partition``
+ CHOLMOD with METIS support
+
+``SuiteSparse::SPQR``
+ Multifrontal Sparse QR (SuiteSparseQR)
+
+``SuiteSparse::Config``
+ Common configuration for all but CSparse (SuiteSparse version >= 4).
+
+Optional SuiteSparse dependencies:
+
+``METIS::METIS``
+ Serial Graph Partitioning and Fill-reducing Matrix Ordering (METIS)
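+
+A minimal usage sketch; the consuming target ``myapp`` is hypothetical:
+
+.. code-block:: cmake
+
+  find_package (SuiteSparse COMPONENTS CHOLMOD OPTIONAL_COMPONENTS Partition)
+  if (SuiteSparse_FOUND)
+    target_link_libraries (myapp PRIVATE SuiteSparse::CHOLMOD)
+  endif (SuiteSparse_FOUND)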
+]=======================================================================]
+
+if (NOT SuiteSparse_NO_CMAKE)
+ find_package (SuiteSparse NO_MODULE QUIET)
+endif (NOT SuiteSparse_NO_CMAKE)
+
+if (SuiteSparse_FOUND)
+ return ()
+endif (SuiteSparse_FOUND)
+
+# Push CMP0057 to enable support for IN_LIST in case the calling project sets
+# cmake_minimum_required to a version below 3.3.
+cmake_policy (PUSH)
+cmake_policy (SET CMP0057 NEW)
+
+if (NOT SuiteSparse_FIND_COMPONENTS)
+ set (SuiteSparse_FIND_COMPONENTS
+ AMD
+ CAMD
+ CCOLAMD
+ CHOLMOD
+ COLAMD
+ SPQR
+ )
+
+ foreach (component IN LISTS SuiteSparse_FIND_COMPONENTS)
+ set (SuiteSparse_FIND_REQUIRED_${component} TRUE)
+ endforeach (component IN LISTS SuiteSparse_FIND_COMPONENTS)
+endif (NOT SuiteSparse_FIND_COMPONENTS)
+
+# Assume SuiteSparse was found and set SuiteSparse_FOUND to FALSE only if
+# third-party dependencies could not be located. The SuiteSparse components
+# themselves are handled by the FindPackageHandleStandardArgs HANDLE_COMPONENTS
+# option.
+set (SuiteSparse_FOUND TRUE)
+
+include (CheckLibraryExists)
+include (CheckSymbolExists)
+include (CMakePushCheckState)
+
+# Config is a base component and thus always required
+set (SuiteSparse_IMPLICIT_COMPONENTS Config)
+
+# CHOLMOD depends on AMD, CAMD, CCOLAMD, and COLAMD.
+if (CHOLMOD IN_LIST SuiteSparse_FIND_COMPONENTS)
+ list (APPEND SuiteSparse_IMPLICIT_COMPONENTS AMD CAMD CCOLAMD COLAMD)
+endif (CHOLMOD IN_LIST SuiteSparse_FIND_COMPONENTS)
+
+# SPQR depends on CHOLMOD.
+if (SPQR IN_LIST SuiteSparse_FIND_COMPONENTS)
+ list (APPEND SuiteSparse_IMPLICIT_COMPONENTS CHOLMOD)
+endif (SPQR IN_LIST SuiteSparse_FIND_COMPONENTS)
+
+# Implicit components are always required
+foreach (component IN LISTS SuiteSparse_IMPLICIT_COMPONENTS)
+ set (SuiteSparse_FIND_REQUIRED_${component} TRUE)
+endforeach (component IN LISTS SuiteSparse_IMPLICIT_COMPONENTS)
+
+list (APPEND SuiteSparse_FIND_COMPONENTS ${SuiteSparse_IMPLICIT_COMPONENTS})
+
+# Do not list components multiple times.
+list (REMOVE_DUPLICATES SuiteSparse_FIND_COMPONENTS)
# Reset CALLERS_CMAKE_FIND_LIBRARY_PREFIXES to its value when
# FindSuiteSparse was invoked.
-macro(SUITESPARSE_RESET_FIND_LIBRARY_PREFIX)
+macro(SuiteSparse_RESET_FIND_LIBRARY_PREFIX)
if (MSVC)
set(CMAKE_FIND_LIBRARY_PREFIXES "${CALLERS_CMAKE_FIND_LIBRARY_PREFIXES}")
endif (MSVC)
-endmacro(SUITESPARSE_RESET_FIND_LIBRARY_PREFIX)
+endmacro(SuiteSparse_RESET_FIND_LIBRARY_PREFIX)
# Called if we failed to find SuiteSparse or any of it's required dependencies,
# unsets all public (designed to be used externally) variables and reports
# error message at priority depending upon [REQUIRED/QUIET/<NONE>] argument.
-macro(SUITESPARSE_REPORT_NOT_FOUND REASON_MSG)
- unset(SUITESPARSE_FOUND)
- unset(SUITESPARSE_INCLUDE_DIRS)
- unset(SUITESPARSE_LIBRARIES)
- unset(SUITESPARSE_VERSION)
- unset(SUITESPARSE_MAIN_VERSION)
- unset(SUITESPARSE_SUB_VERSION)
- unset(SUITESPARSE_SUBSUB_VERSION)
- # Do NOT unset SUITESPARSE_FOUND_REQUIRED_VARS here, as it is used by
+macro(SuiteSparse_REPORT_NOT_FOUND REASON_MSG)
+ # Will be set to FALSE by find_package_handle_standard_args
+ unset (SuiteSparse_FOUND)
+
+ # Do NOT unset SuiteSparse_REQUIRED_VARS here, as it is used by
# FindPackageHandleStandardArgs() to generate the automatic error message on
# failure which highlights which components are missing.
@@ -146,16 +189,10 @@
# Do not call return(), s/t we keep processing if not called with REQUIRED
# and report all missing components, rather than bailing after failing to find
# the first.
-endmacro(SUITESPARSE_REPORT_NOT_FOUND)
-
-# Protect against any alternative find_package scripts for this library having
-# been called previously (in a client project) which set SUITESPARSE_FOUND, but
-# not the other variables we require / set here which could cause the search
-# logic here to fail.
-unset(SUITESPARSE_FOUND)
+endmacro(SuiteSparse_REPORT_NOT_FOUND)
# Handle possible presence of lib prefix for libraries on MSVC, see
-# also SUITESPARSE_RESET_FIND_LIBRARY_PREFIX().
+# also SuiteSparse_RESET_FIND_LIBRARY_PREFIX().
if (MSVC)
# Preserve the caller's original values for CMAKE_FIND_LIBRARY_PREFIXES
# s/t we can set it back before returning.
@@ -165,119 +202,92 @@
set(CMAKE_FIND_LIBRARY_PREFIXES "lib" "" "${CMAKE_FIND_LIBRARY_PREFIXES}")
endif (MSVC)
-# On macOS, add the Homebrew prefix (with appropriate suffixes) to the
-# respective HINTS directories (after any user-specified locations). This
-# handles Homebrew installations into non-standard locations (not /usr/local).
-# We do not use CMAKE_PREFIX_PATH for this as given the search ordering of
-# find_xxx(), doing so would override any user-specified HINTS locations with
-# the Homebrew version if it exists.
-if (CMAKE_SYSTEM_NAME MATCHES "Darwin")
- find_program(HOMEBREW_EXECUTABLE brew)
- mark_as_advanced(FORCE HOMEBREW_EXECUTABLE)
- if (HOMEBREW_EXECUTABLE)
- # Detected a Homebrew install, query for its install prefix.
- execute_process(COMMAND ${HOMEBREW_EXECUTABLE} --prefix
- OUTPUT_VARIABLE HOMEBREW_INSTALL_PREFIX
- OUTPUT_STRIP_TRAILING_WHITESPACE)
- message(STATUS "Detected Homebrew with install prefix: "
- "${HOMEBREW_INSTALL_PREFIX}, adding to CMake search paths.")
- list(APPEND SUITESPARSE_INCLUDE_DIR_HINTS "${HOMEBREW_INSTALL_PREFIX}/include")
- list(APPEND SUITESPARSE_LIBRARY_DIR_HINTS "${HOMEBREW_INSTALL_PREFIX}/lib")
- endif()
-endif()
-
-# Specify search directories for include files and libraries (this is the union
-# of the search directories for all OSs). Search user-specified hint
-# directories first if supplied, and search user-installed locations first
-# so that we prefer user installs to system installs where both exist.
-list(APPEND SUITESPARSE_CHECK_INCLUDE_DIRS
- /opt/local/include
- /opt/local/include/ufsparse # Mac OS X
- /usr/local/homebrew/include # Mac OS X
- /usr/local/include
- /usr/include)
-list(APPEND SUITESPARSE_CHECK_LIBRARY_DIRS
- /opt/local/lib
- /opt/local/lib/ufsparse # Mac OS X
- /usr/local/homebrew/lib # Mac OS X
- /usr/local/lib
- /usr/lib)
# Additional suffixes to try appending to each search path.
-list(APPEND SUITESPARSE_CHECK_PATH_SUFFIXES
+list(APPEND SuiteSparse_CHECK_PATH_SUFFIXES
suitesparse) # Windows/Ubuntu
# Wrappers to find_path/library that pass the SuiteSparse search hints/paths.
#
# suitesparse_find_component(<component> [FILES name1 [name2 ...]]
-# [LIBRARIES name1 [name2 ...]]
-# [REQUIRED])
+# [LIBRARIES name1 [name2 ...]])
macro(suitesparse_find_component COMPONENT)
include(CMakeParseArguments)
- set(OPTIONS REQUIRED)
set(MULTI_VALUE_ARGS FILES LIBRARIES)
- cmake_parse_arguments(SUITESPARSE_FIND_${COMPONENT}
- "${OPTIONS}" "" "${MULTI_VALUE_ARGS}" ${ARGN})
+ cmake_parse_arguments(SuiteSparse_FIND_COMPONENT_${COMPONENT}
+ "" "" "${MULTI_VALUE_ARGS}" ${ARGN})
- if (SUITESPARSE_FIND_${COMPONENT}_REQUIRED)
- list(APPEND SUITESPARSE_FOUND_REQUIRED_VARS ${COMPONENT}_FOUND)
- endif()
-
- set(${COMPONENT}_FOUND TRUE)
- if (SUITESPARSE_FIND_${COMPONENT}_FILES)
- find_path(${COMPONENT}_INCLUDE_DIR
- NAMES ${SUITESPARSE_FIND_${COMPONENT}_FILES}
- HINTS ${SUITESPARSE_INCLUDE_DIR_HINTS}
- PATHS ${SUITESPARSE_CHECK_INCLUDE_DIRS}
- PATH_SUFFIXES ${SUITESPARSE_CHECK_PATH_SUFFIXES})
- if (${COMPONENT}_INCLUDE_DIR)
+ set(SuiteSparse_${COMPONENT}_FOUND TRUE)
+ if (SuiteSparse_FIND_COMPONENT_${COMPONENT}_FILES)
+ find_path(SuiteSparse_${COMPONENT}_INCLUDE_DIR
+ NAMES ${SuiteSparse_FIND_COMPONENT_${COMPONENT}_FILES}
+ PATH_SUFFIXES ${SuiteSparse_CHECK_PATH_SUFFIXES})
+ if (SuiteSparse_${COMPONENT}_INCLUDE_DIR)
message(STATUS "Found ${COMPONENT} headers in: "
- "${${COMPONENT}_INCLUDE_DIR}")
- mark_as_advanced(${COMPONENT}_INCLUDE_DIR)
+ "${SuiteSparse_${COMPONENT}_INCLUDE_DIR}")
+ mark_as_advanced(SuiteSparse_${COMPONENT}_INCLUDE_DIR)
else()
# Specified headers not found.
- set(${COMPONENT}_FOUND FALSE)
- if (SUITESPARSE_FIND_${COMPONENT}_REQUIRED)
+ set(SuiteSparse_${COMPONENT}_FOUND FALSE)
+ if (SuiteSparse_FIND_REQUIRED_${COMPONENT})
suitesparse_report_not_found(
"Did not find ${COMPONENT} header (required SuiteSparse component).")
else()
message(STATUS "Did not find ${COMPONENT} header (optional "
"SuiteSparse component).")
# Hide optional vars from CMake GUI even if not found.
- mark_as_advanced(${COMPONENT}_INCLUDE_DIR)
+ mark_as_advanced(SuiteSparse_${COMPONENT}_INCLUDE_DIR)
endif()
endif()
endif()
- if (SUITESPARSE_FIND_${COMPONENT}_LIBRARIES)
- find_library(${COMPONENT}_LIBRARY
- NAMES ${SUITESPARSE_FIND_${COMPONENT}_LIBRARIES}
- HINTS ${SUITESPARSE_LIBRARY_DIR_HINTS}
- PATHS ${SUITESPARSE_CHECK_LIBRARY_DIRS}
- PATH_SUFFIXES ${SUITESPARSE_CHECK_PATH_SUFFIXES})
- if (${COMPONENT}_LIBRARY)
- message(STATUS "Found ${COMPONENT} library: ${${COMPONENT}_LIBRARY}")
- mark_as_advanced(${COMPONENT}_LIBRARY)
+ if (SuiteSparse_FIND_COMPONENT_${COMPONENT}_LIBRARIES)
+ find_library(SuiteSparse_${COMPONENT}_LIBRARY
+ NAMES ${SuiteSparse_FIND_COMPONENT_${COMPONENT}_LIBRARIES}
+ PATH_SUFFIXES ${SuiteSparse_CHECK_PATH_SUFFIXES})
+ if (SuiteSparse_${COMPONENT}_LIBRARY)
+ message(STATUS "Found ${COMPONENT} library: ${SuiteSparse_${COMPONENT}_LIBRARY}")
+ mark_as_advanced(SuiteSparse_${COMPONENT}_LIBRARY)
else ()
# Specified libraries not found.
- set(${COMPONENT}_FOUND FALSE)
- if (SUITESPARSE_FIND_${COMPONENT}_REQUIRED)
+ set(SuiteSparse_${COMPONENT}_FOUND FALSE)
+ if (SuiteSparse_FIND_REQUIRED_${COMPONENT})
suitesparse_report_not_found(
"Did not find ${COMPONENT} library (required SuiteSparse component).")
else()
message(STATUS "Did not find ${COMPONENT} library (optional SuiteSparse "
"dependency)")
# Hide optional vars from CMake GUI even if not found.
- mark_as_advanced(${COMPONENT}_LIBRARY)
+ mark_as_advanced(SuiteSparse_${COMPONENT}_LIBRARY)
endif()
endif()
endif()
+
+  # A component can be optional (given to OPTIONAL_COMPONENTS). However, if the
+  # component is implicit (i.e., must always be present, such as the Config
+  # component), treat it as required as well.
+ if (SuiteSparse_FIND_REQUIRED_${COMPONENT})
+ list (APPEND SuiteSparse_REQUIRED_VARS SuiteSparse_${COMPONENT}_INCLUDE_DIR)
+ list (APPEND SuiteSparse_REQUIRED_VARS SuiteSparse_${COMPONENT}_LIBRARY)
+ endif (SuiteSparse_FIND_REQUIRED_${COMPONENT})
+
+ # Define the target only if the include directory and the library were found
+ if (SuiteSparse_${COMPONENT}_INCLUDE_DIR AND SuiteSparse_${COMPONENT}_LIBRARY)
+ if (NOT TARGET SuiteSparse::${COMPONENT})
+ add_library(SuiteSparse::${COMPONENT} IMPORTED UNKNOWN)
+ endif (NOT TARGET SuiteSparse::${COMPONENT})
+
+ set_property(TARGET SuiteSparse::${COMPONENT} PROPERTY
+ INTERFACE_INCLUDE_DIRECTORIES ${SuiteSparse_${COMPONENT}_INCLUDE_DIR})
+ set_property(TARGET SuiteSparse::${COMPONENT} PROPERTY
+ IMPORTED_LOCATION ${SuiteSparse_${COMPONENT}_LIBRARY})
+ endif (SuiteSparse_${COMPONENT}_INCLUDE_DIR AND SuiteSparse_${COMPONENT}_LIBRARY)
endmacro()
# Given the number of components of SuiteSparse, and to ensure that the
# automatic failure message generated by FindPackageHandleStandardArgs()
# when not all required components are found is helpful, we maintain a list
# of all variables that must be defined for SuiteSparse to be considered found.
-unset(SUITESPARSE_FOUND_REQUIRED_VARS)
+unset(SuiteSparse_REQUIRED_VARS)
# BLAS.
find_package(BLAS QUIET)
@@ -285,7 +295,6 @@
suitesparse_report_not_found(
"Did not find BLAS library (required for SuiteSparse).")
endif (NOT BLAS_FOUND)
-list(APPEND SUITESPARSE_FOUND_REQUIRED_VARS BLAS_FOUND)
# LAPACK.
find_package(LAPACK QUIET)
@@ -293,239 +302,226 @@
suitesparse_report_not_found(
"Did not find LAPACK library (required for SuiteSparse).")
endif (NOT LAPACK_FOUND)
-list(APPEND SUITESPARSE_FOUND_REQUIRED_VARS LAPACK_FOUND)
-suitesparse_find_component(AMD REQUIRED FILES amd.h LIBRARIES amd)
-suitesparse_find_component(CAMD REQUIRED FILES camd.h LIBRARIES camd)
-suitesparse_find_component(COLAMD REQUIRED FILES colamd.h LIBRARIES colamd)
-suitesparse_find_component(CCOLAMD REQUIRED FILES ccolamd.h LIBRARIES ccolamd)
-suitesparse_find_component(CHOLMOD REQUIRED FILES cholmod.h LIBRARIES cholmod)
-suitesparse_find_component(
- SUITESPARSEQR REQUIRED FILES SuiteSparseQR.hpp LIBRARIES spqr)
-if (SUITESPARSEQR_FOUND)
+foreach (component IN LISTS SuiteSparse_FIND_COMPONENTS)
+ if (component STREQUAL Partition)
+    # Partition is a meta component that provides neither additional headers
+    # nor a separate library. It is strictly part of CHOLMOD.
+ continue ()
+ endif (component STREQUAL Partition)
+ string (TOLOWER ${component} component_library)
+
+ if (component STREQUAL "Config")
+ set (component_header SuiteSparse_config.h)
+ set (component_library suitesparseconfig)
+ elseif (component STREQUAL "SPQR")
+ set (component_header SuiteSparseQR.hpp)
+ else (component STREQUAL "SPQR")
+ set (component_header ${component_library}.h)
+ endif (component STREQUAL "Config")
+
+ suitesparse_find_component(${component}
+ FILES ${component_header}
+ LIBRARIES ${component_library})
+endforeach (component IN LISTS SuiteSparse_FIND_COMPONENTS)
+
+if (TARGET SuiteSparse::SPQR)
# SuiteSparseQR may be compiled with Intel Threading Building Blocks,
# we assume that if TBB is installed, SuiteSparseQR was compiled with
# support for it, this will do no harm if it wasn't.
- find_package(TBB QUIET)
+ find_package(TBB QUIET NO_MODULE)
if (TBB_FOUND)
message(STATUS "Found Intel Thread Building Blocks (TBB) library "
- "(${TBB_VERSION_MAJOR}.${TBB_VERSION_MINOR} / ${TBB_INTERFACE_VERSION}) "
- "include location: ${TBB_INCLUDE_DIRS}. Assuming SuiteSparseQR was "
- "compiled with TBB.")
+ "(${TBB_VERSION_MAJOR}.${TBB_VERSION_MINOR} / ${TBB_INTERFACE_VERSION}). "
+ "Assuming SuiteSparseQR was compiled with TBB.")
# Add the TBB libraries to the SuiteSparseQR libraries (the only
# libraries to optionally depend on TBB).
- list(APPEND SUITESPARSEQR_LIBRARY ${TBB_LIBRARIES})
- else()
+ set_property (TARGET SuiteSparse::SPQR APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES TBB::tbb)
+ else (TBB_FOUND)
message(STATUS "Did not find Intel TBB library, assuming SuiteSparseQR was "
"not compiled with TBB.")
- endif()
-endif(SUITESPARSEQR_FOUND)
+ endif (TBB_FOUND)
+endif (TARGET SuiteSparse::SPQR)
-# UFconfig / SuiteSparse_config.
-#
-# If SuiteSparse version is >= 4 then SuiteSparse_config is required.
-# For SuiteSparse 3, UFconfig.h is required.
-suitesparse_find_component(
- SUITESPARSE_CONFIG FILES SuiteSparse_config.h LIBRARIES suitesparseconfig)
+check_library_exists(rt shm_open "" HAVE_LIBRT)
-if (SUITESPARSE_CONFIG_FOUND)
+if (TARGET SuiteSparse::Config)
# SuiteSparse_config (SuiteSparse version >= 4) requires librt library for
# timing by default when compiled on Linux or Unix, but not on OSX (which
# does not have librt).
- if (CMAKE_SYSTEM_NAME MATCHES "Linux" OR UNIX AND NOT APPLE)
- suitesparse_find_component(LIBRT LIBRARIES rt)
- if (LIBRT_FOUND)
- message(STATUS "Adding librt: ${LIBRT_LIBRARY} to "
- "SuiteSparse_config libraries (required on Linux & Unix [not OSX] if "
- "SuiteSparse is compiled with timing).")
- list(APPEND SUITESPARSE_CONFIG_LIBRARY ${LIBRT_LIBRARY})
- else()
- message(STATUS "Could not find librt, but found SuiteSparse_config, "
- "assuming that SuiteSparse was compiled without timing.")
- endif ()
- endif (CMAKE_SYSTEM_NAME MATCHES "Linux" OR UNIX AND NOT APPLE)
-else()
- # Failed to find SuiteSparse_config (>= v4 installs), instead look for
- # UFconfig header which should be present in < v4 installs.
- suitesparse_find_component(UFCONFIG FILES UFconfig.h)
-endif ()
+ if (HAVE_LIBRT)
+ message(STATUS "Adding librt to "
+ "SuiteSparse_config libraries (required on Linux & Unix [not OSX] if "
+ "SuiteSparse is compiled with timing).")
+ set_property (TARGET SuiteSparse::Config APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES $<LINK_ONLY:rt>)
+ else (HAVE_LIBRT)
+ message(STATUS "Could not find librt, but found SuiteSparse_config, "
+ "assuming that SuiteSparse was compiled without timing.")
+ endif (HAVE_LIBRT)
-if (NOT SUITESPARSE_CONFIG_FOUND AND
- NOT UFCONFIG_FOUND)
- suitesparse_report_not_found(
- "Failed to find either: SuiteSparse_config header & library (should be "
- "present in all SuiteSparse >= v4 installs), or UFconfig header (should "
- "be present in all SuiteSparse < v4 installs).")
-endif()
+ # Add BLAS and LAPACK as dependencies of SuiteSparse::Config for convenience
+ # given that all components depend on it.
+ if (BLAS_FOUND)
+ if (TARGET BLAS::BLAS)
+ set_property (TARGET SuiteSparse::Config APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES $<LINK_ONLY:BLAS::BLAS>)
+ else (TARGET BLAS::BLAS)
+ set_property (TARGET SuiteSparse::Config APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES ${BLAS_LIBRARIES})
+ endif (TARGET BLAS::BLAS)
+ endif (BLAS_FOUND)
-# Extract the SuiteSparse version from the appropriate header (UFconfig.h for
-# <= v3, SuiteSparse_config.h for >= v4).
-list(APPEND SUITESPARSE_FOUND_REQUIRED_VARS SUITESPARSE_VERSION)
+ if (LAPACK_FOUND)
+ if (TARGET LAPACK::LAPACK)
+ set_property (TARGET SuiteSparse::Config APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES $<LINK_ONLY:LAPACK::LAPACK>)
+ else (TARGET LAPACK::LAPACK)
+ set_property (TARGET SuiteSparse::Config APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES ${LAPACK_LIBRARIES})
+ endif (TARGET LAPACK::LAPACK)
+ endif (LAPACK_FOUND)
-if (UFCONFIG_FOUND)
- # SuiteSparse version <= 3.
- set(SUITESPARSE_VERSION_FILE ${UFCONFIG_INCLUDE_DIR}/UFconfig.h)
- if (NOT EXISTS ${SUITESPARSE_VERSION_FILE})
- suitesparse_report_not_found(
- "Could not find file: ${SUITESPARSE_VERSION_FILE} containing version "
- "information for <= v3 SuiteSparse installs, but UFconfig was found "
- "(only present in <= v3 installs).")
- else (NOT EXISTS ${SUITESPARSE_VERSION_FILE})
- file(READ ${SUITESPARSE_VERSION_FILE} UFCONFIG_CONTENTS)
-
- string(REGEX MATCH "#define SUITESPARSE_MAIN_VERSION [0-9]+"
- SUITESPARSE_MAIN_VERSION "${UFCONFIG_CONTENTS}")
- string(REGEX REPLACE "#define SUITESPARSE_MAIN_VERSION ([0-9]+)" "\\1"
- SUITESPARSE_MAIN_VERSION "${SUITESPARSE_MAIN_VERSION}")
-
- string(REGEX MATCH "#define SUITESPARSE_SUB_VERSION [0-9]+"
- SUITESPARSE_SUB_VERSION "${UFCONFIG_CONTENTS}")
- string(REGEX REPLACE "#define SUITESPARSE_SUB_VERSION ([0-9]+)" "\\1"
- SUITESPARSE_SUB_VERSION "${SUITESPARSE_SUB_VERSION}")
-
- string(REGEX MATCH "#define SUITESPARSE_SUBSUB_VERSION [0-9]+"
- SUITESPARSE_SUBSUB_VERSION "${UFCONFIG_CONTENTS}")
- string(REGEX REPLACE "#define SUITESPARSE_SUBSUB_VERSION ([0-9]+)" "\\1"
- SUITESPARSE_SUBSUB_VERSION "${SUITESPARSE_SUBSUB_VERSION}")
-
- # This is on a single line s/t CMake does not interpret it as a list of
- # elements and insert ';' separators which would result in 4.;2.;1 nonsense.
- set(SUITESPARSE_VERSION
- "${SUITESPARSE_MAIN_VERSION}.${SUITESPARSE_SUB_VERSION}.${SUITESPARSE_SUBSUB_VERSION}")
- endif (NOT EXISTS ${SUITESPARSE_VERSION_FILE})
-endif (UFCONFIG_FOUND)
-
-if (SUITESPARSE_CONFIG_FOUND)
# SuiteSparse version >= 4.
- set(SUITESPARSE_VERSION_FILE
- ${SUITESPARSE_CONFIG_INCLUDE_DIR}/SuiteSparse_config.h)
- if (NOT EXISTS ${SUITESPARSE_VERSION_FILE})
+ set(SuiteSparse_VERSION_FILE
+ ${SuiteSparse_Config_INCLUDE_DIR}/SuiteSparse_config.h)
+ if (NOT EXISTS ${SuiteSparse_VERSION_FILE})
suitesparse_report_not_found(
- "Could not find file: ${SUITESPARSE_VERSION_FILE} containing version "
+ "Could not find file: ${SuiteSparse_VERSION_FILE} containing version "
"information for >= v4 SuiteSparse installs, but SuiteSparse_config was "
"found (only present in >= v4 installs).")
- else (NOT EXISTS ${SUITESPARSE_VERSION_FILE})
- file(READ ${SUITESPARSE_VERSION_FILE} SUITESPARSE_CONFIG_CONTENTS)
+ else (NOT EXISTS ${SuiteSparse_VERSION_FILE})
+ file(READ ${SuiteSparse_VERSION_FILE} Config_CONTENTS)
- string(REGEX MATCH "#define SUITESPARSE_MAIN_VERSION [0-9]+"
- SUITESPARSE_MAIN_VERSION "${SUITESPARSE_CONFIG_CONTENTS}")
- string(REGEX REPLACE "#define SUITESPARSE_MAIN_VERSION ([0-9]+)" "\\1"
- SUITESPARSE_MAIN_VERSION "${SUITESPARSE_MAIN_VERSION}")
+ string(REGEX MATCH "#define SUITESPARSE_MAIN_VERSION[ \t]+([0-9]+)"
+ SuiteSparse_VERSION_LINE "${Config_CONTENTS}")
+ set (SuiteSparse_VERSION_MAJOR ${CMAKE_MATCH_1})
- string(REGEX MATCH "#define SUITESPARSE_SUB_VERSION [0-9]+"
- SUITESPARSE_SUB_VERSION "${SUITESPARSE_CONFIG_CONTENTS}")
- string(REGEX REPLACE "#define SUITESPARSE_SUB_VERSION ([0-9]+)" "\\1"
- SUITESPARSE_SUB_VERSION "${SUITESPARSE_SUB_VERSION}")
+ string(REGEX MATCH "#define SUITESPARSE_SUB_VERSION[ \t]+([0-9]+)"
+ SuiteSparse_VERSION_LINE "${Config_CONTENTS}")
+ set (SuiteSparse_VERSION_MINOR ${CMAKE_MATCH_1})
- string(REGEX MATCH "#define SUITESPARSE_SUBSUB_VERSION [0-9]+"
- SUITESPARSE_SUBSUB_VERSION "${SUITESPARSE_CONFIG_CONTENTS}")
- string(REGEX REPLACE "#define SUITESPARSE_SUBSUB_VERSION ([0-9]+)" "\\1"
- SUITESPARSE_SUBSUB_VERSION "${SUITESPARSE_SUBSUB_VERSION}")
+ string(REGEX MATCH "#define SUITESPARSE_SUBSUB_VERSION[ \t]+([0-9]+)"
+ SuiteSparse_VERSION_LINE "${Config_CONTENTS}")
+ set (SuiteSparse_VERSION_PATCH ${CMAKE_MATCH_1})
+
+ unset (SuiteSparse_VERSION_LINE)
# This is on a single line s/t CMake does not interpret it as a list of
# elements and insert ';' separators which would result in 4.;2.;1 nonsense.
- set(SUITESPARSE_VERSION
- "${SUITESPARSE_MAIN_VERSION}.${SUITESPARSE_SUB_VERSION}.${SUITESPARSE_SUBSUB_VERSION}")
- endif (NOT EXISTS ${SUITESPARSE_VERSION_FILE})
-endif (SUITESPARSE_CONFIG_FOUND)
+ set(SuiteSparse_VERSION
+ "${SuiteSparse_VERSION_MAJOR}.${SuiteSparse_VERSION_MINOR}.${SuiteSparse_VERSION_PATCH}")
-# METIS (Optional dependency).
-suitesparse_find_component(METIS LIBRARIES metis)
+ if (SuiteSparse_VERSION MATCHES "[0-9]+\\.[0-9]+\\.[0-9]+")
+ set(SuiteSparse_VERSION_COMPONENTS 3)
+ else (SuiteSparse_VERSION MATCHES "[0-9]+\\.[0-9]+\\.[0-9]+")
+ message (WARNING "Could not parse SuiteSparse_config.h: SuiteSparse "
+ "version will not be available")
-# Only mark SuiteSparse as found if all required components and dependencies
-# have been found.
-set(SUITESPARSE_FOUND TRUE)
-foreach(REQUIRED_VAR ${SUITESPARSE_FOUND_REQUIRED_VARS})
- if (NOT ${REQUIRED_VAR})
- set(SUITESPARSE_FOUND FALSE)
- endif (NOT ${REQUIRED_VAR})
-endforeach(REQUIRED_VAR ${SUITESPARSE_FOUND_REQUIRED_VARS})
+ unset (SuiteSparse_VERSION)
+ unset (SuiteSparse_VERSION_MAJOR)
+ unset (SuiteSparse_VERSION_MINOR)
+ unset (SuiteSparse_VERSION_PATCH)
+ endif (SuiteSparse_VERSION MATCHES "[0-9]+\\.[0-9]+\\.[0-9]+")
+ endif (NOT EXISTS ${SuiteSparse_VERSION_FILE})
+endif (TARGET SuiteSparse::Config)
-if (SUITESPARSE_FOUND)
- list(APPEND SUITESPARSE_INCLUDE_DIRS
- ${AMD_INCLUDE_DIR}
- ${CAMD_INCLUDE_DIR}
- ${COLAMD_INCLUDE_DIR}
- ${CCOLAMD_INCLUDE_DIR}
- ${CHOLMOD_INCLUDE_DIR}
- ${SUITESPARSEQR_INCLUDE_DIR})
- # Handle config separately, as otherwise at least one of them will be set
- # to NOTFOUND which would cause any check on SUITESPARSE_INCLUDE_DIRS to fail.
- if (SUITESPARSE_CONFIG_FOUND)
- list(APPEND SUITESPARSE_INCLUDE_DIRS
- ${SUITESPARSE_CONFIG_INCLUDE_DIR})
- endif (SUITESPARSE_CONFIG_FOUND)
- if (UFCONFIG_FOUND)
- list(APPEND SUITESPARSE_INCLUDE_DIRS
- ${UFCONFIG_INCLUDE_DIR})
- endif (UFCONFIG_FOUND)
- # As SuiteSparse includes are often all in the same directory, remove any
- # repetitions.
- list(REMOVE_DUPLICATES SUITESPARSE_INCLUDE_DIRS)
+# CHOLMOD requires AMD, CAMD, CCOLAMD, and COLAMD.
+if (TARGET SuiteSparse::CHOLMOD)
+ foreach (component IN ITEMS AMD CAMD CCOLAMD COLAMD)
+ if (TARGET SuiteSparse::${component})
+ set_property (TARGET SuiteSparse::CHOLMOD APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES SuiteSparse::${component})
+ else (TARGET SuiteSparse::${component})
+      # Consider CHOLMOD not found if any of these components cannot be found
+ set (SuiteSparse_CHOLMOD_FOUND FALSE)
+ endif (TARGET SuiteSparse::${component})
+ endforeach (component IN ITEMS AMD CAMD CCOLAMD COLAMD)
+endif (TARGET SuiteSparse::CHOLMOD)
- # Important: The ordering of these libraries is *NOT* arbitrary, as these
- # could potentially be static libraries their link ordering is important.
- list(APPEND SUITESPARSE_LIBRARIES
- ${SUITESPARSEQR_LIBRARY}
- ${CHOLMOD_LIBRARY}
- ${CCOLAMD_LIBRARY}
- ${CAMD_LIBRARY}
- ${COLAMD_LIBRARY}
- ${AMD_LIBRARY}
- ${LAPACK_LIBRARIES}
- ${BLAS_LIBRARIES})
- if (SUITESPARSE_CONFIG_FOUND)
- list(APPEND SUITESPARSE_LIBRARIES
- ${SUITESPARSE_CONFIG_LIBRARY})
- endif (SUITESPARSE_CONFIG_FOUND)
- if (METIS_FOUND)
- list(APPEND SUITESPARSE_LIBRARIES
- ${METIS_LIBRARY})
- endif (METIS_FOUND)
-endif()
+# SPQR requires CHOLMOD
+if (TARGET SuiteSparse::SPQR)
+ if (TARGET SuiteSparse::CHOLMOD)
+ set_property (TARGET SuiteSparse::SPQR APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES SuiteSparse::CHOLMOD)
+ else (TARGET SuiteSparse::CHOLMOD)
+ # Consider SPQR not found if CHOLMOD cannot be found
+    set (SuiteSparse_SPQR_FOUND FALSE)
+ endif (TARGET SuiteSparse::CHOLMOD)
+endif (TARGET SuiteSparse::SPQR)
-# Determine if we are running on Ubuntu with the package install of SuiteSparse
-# which is broken and does not support linking a shared library.
-set(SUITESPARSE_IS_BROKEN_SHARED_LINKING_UBUNTU_SYSTEM_VERSION FALSE)
-if (CMAKE_SYSTEM_NAME MATCHES "Linux" AND
- SUITESPARSE_VERSION VERSION_EQUAL 3.4.0)
- find_program(LSB_RELEASE_EXECUTABLE lsb_release)
- if (LSB_RELEASE_EXECUTABLE)
- # Any even moderately recent Ubuntu release (likely to be affected by
- # this bug) should have lsb_release, if it isn't present we are likely
- # on a different Linux distribution (should be fine).
- execute_process(COMMAND ${LSB_RELEASE_EXECUTABLE} -si
- OUTPUT_VARIABLE LSB_DISTRIBUTOR_ID
- OUTPUT_STRIP_TRAILING_WHITESPACE)
+# Add SuiteSparse::Config as dependency to all components
+if (TARGET SuiteSparse::Config)
+ foreach (component IN LISTS SuiteSparse_FIND_COMPONENTS)
+ if (component STREQUAL Config)
+ continue ()
+ endif (component STREQUAL Config)
- if (LSB_DISTRIBUTOR_ID MATCHES "Ubuntu" AND
- SUITESPARSE_LIBRARIES MATCHES "/usr/lib/libamd")
- # We are on Ubuntu, and the SuiteSparse version matches the broken
- # system install version and is a system install.
- set(SUITESPARSE_IS_BROKEN_SHARED_LINKING_UBUNTU_SYSTEM_VERSION TRUE)
- message(STATUS "Found system install of SuiteSparse "
- "${SUITESPARSE_VERSION} running on Ubuntu, which has a known bug "
- "preventing linking of shared libraries (static linking unaffected).")
- endif (LSB_DISTRIBUTOR_ID MATCHES "Ubuntu" AND
- SUITESPARSE_LIBRARIES MATCHES "/usr/lib/libamd")
- endif (LSB_RELEASE_EXECUTABLE)
-endif (CMAKE_SYSTEM_NAME MATCHES "Linux" AND
- SUITESPARSE_VERSION VERSION_EQUAL 3.4.0)
+ if (TARGET SuiteSparse::${component})
+ set_property (TARGET SuiteSparse::${component} APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES SuiteSparse::Config)
+ endif (TARGET SuiteSparse::${component})
+ endforeach (component IN LISTS SuiteSparse_FIND_COMPONENTS)
+endif (TARGET SuiteSparse::Config)
+
+# Check whether CHOLMOD was compiled with METIS support. The check can be
+# performed only after the main components have been set up.
+if (TARGET SuiteSparse::CHOLMOD)
+  # NOTE: If SuiteSparse was compiled as a static library, METIS must already
+  # be linked during the check. Otherwise, the check can fail due to undefined
+  # references even though SuiteSparse was compiled with METIS.
+ find_package (METIS)
+
+ if (TARGET METIS::METIS)
+ cmake_push_check_state (RESET)
+ set (CMAKE_REQUIRED_LIBRARIES SuiteSparse::CHOLMOD METIS::METIS)
+ check_symbol_exists (cholmod_metis cholmod.h SuiteSparse_CHOLMOD_USES_METIS)
+ cmake_pop_check_state ()
+
+ if (SuiteSparse_CHOLMOD_USES_METIS)
+ set_property (TARGET SuiteSparse::CHOLMOD APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES $<LINK_ONLY:METIS::METIS>)
+
+ # Provide the SuiteSparse::Partition component whose availability indicates
+ # that CHOLMOD was compiled with the Partition module.
+ if (NOT TARGET SuiteSparse::Partition)
+ add_library (SuiteSparse::Partition IMPORTED INTERFACE)
+ endif (NOT TARGET SuiteSparse::Partition)
+
+ set_property (TARGET SuiteSparse::Partition APPEND PROPERTY
+ INTERFACE_LINK_LIBRARIES SuiteSparse::CHOLMOD)
+ endif (SuiteSparse_CHOLMOD_USES_METIS)
+ endif (TARGET METIS::METIS)
+endif (TARGET SuiteSparse::CHOLMOD)
+
+# We do not use suitesparse_find_component to find Partition and therefore must
+# handle the availability in an extra step.
+if (TARGET SuiteSparse::Partition)
+ set (SuiteSparse_Partition_FOUND TRUE)
+else (TARGET SuiteSparse::Partition)
+ set (SuiteSparse_Partition_FOUND FALSE)
+endif (TARGET SuiteSparse::Partition)
suitesparse_reset_find_library_prefix()
# Handle REQUIRED and QUIET arguments to FIND_PACKAGE
include(FindPackageHandleStandardArgs)
-if (SUITESPARSE_FOUND)
+if (SuiteSparse_FOUND)
find_package_handle_standard_args(SuiteSparse
- REQUIRED_VARS ${SUITESPARSE_FOUND_REQUIRED_VARS}
- VERSION_VAR SUITESPARSE_VERSION
- FAIL_MESSAGE "Failed to find some/all required components of SuiteSparse.")
-else (SUITESPARSE_FOUND)
+ REQUIRED_VARS ${SuiteSparse_REQUIRED_VARS}
+ VERSION_VAR SuiteSparse_VERSION
+ FAIL_MESSAGE "Failed to find some/all required components of SuiteSparse."
+ HANDLE_COMPONENTS)
+else (SuiteSparse_FOUND)
# Do not pass VERSION_VAR to FindPackageHandleStandardArgs() if we failed to
# find SuiteSparse to avoid a confusing autogenerated failure message
# that states 'not found (missing: FOO) (found version: x.y.z)'.
find_package_handle_standard_args(SuiteSparse
- REQUIRED_VARS ${SUITESPARSE_FOUND_REQUIRED_VARS}
- FAIL_MESSAGE "Failed to find some/all required components of SuiteSparse.")
-endif (SUITESPARSE_FOUND)
+ REQUIRED_VARS ${SuiteSparse_REQUIRED_VARS}
+ FAIL_MESSAGE "Failed to find some/all required components of SuiteSparse."
+ HANDLE_COMPONENTS)
+endif (SuiteSparse_FOUND)
+
+# Pop CMP0057.
+cmake_policy (POP)
diff --git a/cmake/FindTBB.cmake b/cmake/FindTBB.cmake
deleted file mode 100644
index 5ae7b61..0000000
--- a/cmake/FindTBB.cmake
+++ /dev/null
@@ -1,455 +0,0 @@
-# - Find ThreadingBuildingBlocks include dirs and libraries
-# Use this module by invoking find_package with the form:
-# find_package(TBB
-# [REQUIRED] # Fail with error if TBB is not found
-# ) #
-# Once done, this will define
-#
-# TBB_FOUND - system has TBB
-# TBB_INCLUDE_DIRS - the TBB include directories
-# TBB_LIBRARIES - TBB libraries to be lined, doesn't include malloc or
-# malloc proxy
-# TBB::tbb - imported target for the TBB library
-#
-# TBB_VERSION_MAJOR - Major Product Version Number
-# TBB_VERSION_MINOR - Minor Product Version Number
-# TBB_INTERFACE_VERSION - Engineering Focused Version Number
-# TBB_COMPATIBLE_INTERFACE_VERSION - The oldest major interface version
-# still supported. This uses the engineering
-# focused interface version numbers.
-#
-# TBB_MALLOC_FOUND - system has TBB malloc library
-# TBB_MALLOC_INCLUDE_DIRS - the TBB malloc include directories
-# TBB_MALLOC_LIBRARIES - The TBB malloc libraries to be lined
-# TBB::malloc - imported target for the TBB malloc library
-#
-# TBB_MALLOC_PROXY_FOUND - system has TBB malloc proxy library
-# TBB_MALLOC_PROXY_INCLUDE_DIRS = the TBB malloc proxy include directories
-# TBB_MALLOC_PROXY_LIBRARIES - The TBB malloc proxy libraries to be lined
-# TBB::malloc_proxy - imported target for the TBB malloc proxy library
-#
-#
-# This module reads hints about search locations from variables:
-# ENV TBB_ARCH_PLATFORM - for eg. set it to "mic" for Xeon Phi builds
-# ENV TBB_ROOT or just TBB_ROOT - root directory of tbb installation
-# ENV TBB_BUILD_PREFIX - specifies the build prefix for user built tbb
-# libraries. Should be specified with ENV TBB_ROOT
-# and optionally...
-# ENV TBB_BUILD_DIR - if build directory is different than ${TBB_ROOT}/build
-#
-#
-# Modified by Robert Maynard from the original OGRE source
-#
-#-------------------------------------------------------------------
-# This file is part of the CMake build system for OGRE
-# (Object-oriented Graphics Rendering Engine)
-# For the latest info, see http://www.ogre3d.org/
-#
-# The contents of this file are placed in the public domain. Feel
-# free to make use of it in any way you like.
-#-------------------------------------------------------------------
-#
-# =========================================================================
-# Taken from Copyright.txt in the root of the VTK source tree as per
-# instructions to substitute the full license in place of the summary
-# reference when distributing outside of VTK
-# =========================================================================
-#
-# Program: Visualization Toolkit
-# Module: Copyright.txt
-#
-# Copyright (c) 1993-2015 Ken Martin, Will Schroeder, Bill Lorensen
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice,
-# this list of conditions and the following disclaimer.
-#
-# * Redistributions in binary form must reproduce the above copyright notice,
-# this list of conditions and the following disclaimer in the documentation
-# and/or other materials provided with the distribution.
-#
-# * Neither name of Ken Martin, Will Schroeder, or Bill Lorensen nor the names
-# of any contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS''
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE FOR
-# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# =========================================================================*/
-
-#=============================================================================
-# FindTBB helper functions and macros
-#
-
-#====================================================
-# Fix the library path in case it is a linker script
-#====================================================
-function(tbb_extract_real_library library real_library)
- if(NOT UNIX OR NOT EXISTS ${library})
- set(${real_library} "${library}" PARENT_SCOPE)
- return()
- endif()
-
- #Read in the first 4 bytes and see if they are the ELF magic number
- set(_elf_magic "7f454c46")
- file(READ ${library} _hex_data OFFSET 0 LIMIT 4 HEX)
- if(_hex_data STREQUAL _elf_magic)
- #we have opened a elf binary so this is what
- #we should link to
- set(${real_library} "${library}" PARENT_SCOPE)
- return()
- endif()
-
- file(READ ${library} _data OFFSET 0 LIMIT 1024)
- if("${_data}" MATCHES "INPUT \\(([^(]+)\\)")
- #extract out the .so name from REGEX MATCH command
- set(_proper_so_name "${CMAKE_MATCH_1}")
-
- #construct path to the real .so which is presumed to be in the same directory
- #as the input file
- get_filename_component(_so_dir "${library}" DIRECTORY)
- set(${real_library} "${_so_dir}/${_proper_so_name}" PARENT_SCOPE)
- else()
- #unable to determine what this library is so just hope everything works
- #and pass it unmodified.
- set(${real_library} "${library}" PARENT_SCOPE)
- endif()
-endfunction()
-
-#===============================================
-# Do the final processing for the package find.
-#===============================================
-macro(findpkg_finish PREFIX TARGET_NAME)
- if (${PREFIX}_INCLUDE_DIR AND ${PREFIX}_LIBRARY)
- set(${PREFIX}_FOUND TRUE)
- set (${PREFIX}_INCLUDE_DIRS ${${PREFIX}_INCLUDE_DIR})
- set (${PREFIX}_LIBRARIES ${${PREFIX}_LIBRARY})
- else ()
- if (${PREFIX}_FIND_REQUIRED AND NOT ${PREFIX}_FIND_QUIETLY)
- message(FATAL_ERROR "Required library ${PREFIX} not found.")
- endif ()
- endif ()
-
- if (NOT TARGET "TBB::${TARGET_NAME}")
- if (${PREFIX}_LIBRARY_RELEASE)
- tbb_extract_real_library(${${PREFIX}_LIBRARY_RELEASE} real_release)
- endif ()
- if (${PREFIX}_LIBRARY_DEBUG)
- tbb_extract_real_library(${${PREFIX}_LIBRARY_DEBUG} real_debug)
- endif ()
- add_library(TBB::${TARGET_NAME} UNKNOWN IMPORTED)
- set_target_properties(TBB::${TARGET_NAME} PROPERTIES
- INTERFACE_INCLUDE_DIRECTORIES "${${PREFIX}_INCLUDE_DIR}")
- if (${PREFIX}_LIBRARY_DEBUG AND ${PREFIX}_LIBRARY_RELEASE)
- set_target_properties(TBB::${TARGET_NAME} PROPERTIES
- IMPORTED_LOCATION "${real_release}"
- IMPORTED_LOCATION_DEBUG "${real_debug}"
- IMPORTED_LOCATION_RELEASE "${real_release}")
- elseif (${PREFIX}_LIBRARY_RELEASE)
- set_target_properties(TBB::${TARGET_NAME} PROPERTIES
- IMPORTED_LOCATION "${real_release}")
- elseif (${PREFIX}_LIBRARY_DEBUG)
- set_target_properties(TBB::${TARGET_NAME} PROPERTIES
- IMPORTED_LOCATION "${real_debug}")
- endif ()
- endif ()
-
- #mark the following variables as internal variables
- mark_as_advanced(${PREFIX}_INCLUDE_DIR
- ${PREFIX}_LIBRARY
- ${PREFIX}_LIBRARY_DEBUG
- ${PREFIX}_LIBRARY_RELEASE)
-endmacro()
-
-#===============================================
-# Generate debug names from given release names
-#===============================================
-macro(get_debug_names PREFIX)
- foreach(i ${${PREFIX}})
- set(${PREFIX}_DEBUG ${${PREFIX}_DEBUG} ${i}d ${i}D ${i}_d ${i}_D ${i}_debug ${i})
- endforeach()
-endmacro()
-
-#===============================================
-# See if we have env vars to help us find tbb
-#===============================================
-macro(getenv_path VAR)
- set(ENV_${VAR} $ENV{${VAR}})
- # replace won't work if var is blank
- if (ENV_${VAR})
- string( REGEX REPLACE "\\\\" "/" ENV_${VAR} ${ENV_${VAR}} )
- endif ()
-endmacro()
-
-#===============================================
-# Couple a set of release AND debug libraries
-#===============================================
-macro(make_library_set PREFIX)
- if (${PREFIX}_RELEASE AND ${PREFIX}_DEBUG)
- set(${PREFIX} optimized ${${PREFIX}_RELEASE} debug ${${PREFIX}_DEBUG})
- elseif (${PREFIX}_RELEASE)
- set(${PREFIX} ${${PREFIX}_RELEASE})
- elseif (${PREFIX}_DEBUG)
- set(${PREFIX} ${${PREFIX}_DEBUG})
- endif ()
-endmacro()
-
-#===============================================
-# Ensure that the release & debug libraries found are from the same installation.
-#===============================================
-macro(find_tbb_library_verifying_release_debug_locations PREFIX)
- find_library(${PREFIX}_RELEASE
- NAMES ${${PREFIX}_NAMES}
- HINTS ${TBB_LIB_SEARCH_PATH})
- if (${PREFIX}_RELEASE)
- # To avoid finding a mismatched set of release & debug libraries from
- # different installations if the first found does not have debug libraries
- # by forcing the search for debug to only occur within the detected release
- # library directory (if found). Although this would break detection if the
- # release & debug libraries were shipped in different directories, this is
- # not the case in the official TBB releases for any platform.
- get_filename_component(
- FOUND_RELEASE_LIB_DIR "${${PREFIX}_RELEASE}" DIRECTORY)
- find_library(${PREFIX}_DEBUG
- NAMES ${${PREFIX}_NAMES_DEBUG}
- HINTS ${FOUND_RELEASE_LIB_DIR}
- NO_DEFAULT_PATH)
- else()
- find_library(${PREFIX}_DEBUG
- NAMES ${${PREFIX}_NAMES_DEBUG}
- HINTS ${TBB_LIB_SEARCH_PATH})
- endif()
-endmacro()
-
-#=============================================================================
-# Now to actually find TBB
-#
-
-# Get path, convert backslashes as ${ENV_${var}}
-getenv_path(TBB_ROOT)
-
-# initialize search paths
-set(TBB_PREFIX_PATH ${TBB_ROOT} ${ENV_TBB_ROOT})
-set(TBB_INC_SEARCH_PATH "")
-set(TBB_LIB_SEARCH_PATH "")
-
-
-# If user built from sources
-set(TBB_BUILD_PREFIX $ENV{TBB_BUILD_PREFIX})
-if (TBB_BUILD_PREFIX AND ENV_TBB_ROOT)
- getenv_path(TBB_BUILD_DIR)
- if (NOT ENV_TBB_BUILD_DIR)
- set(ENV_TBB_BUILD_DIR ${ENV_TBB_ROOT}/build)
- endif ()
-
- # include directory under ${ENV_TBB_ROOT}/include
- list(APPEND TBB_LIB_SEARCH_PATH
- ${ENV_TBB_BUILD_DIR}/${TBB_BUILD_PREFIX}_release
- ${ENV_TBB_BUILD_DIR}/${TBB_BUILD_PREFIX}_debug)
-endif ()
-
-
-# For Windows, let's assume that the user might be using the precompiled
-# TBB packages from the main website. These use a rather awkward directory
-# structure (at least for automatically finding the right files) depending
-# on platform and compiler, but we'll do our best to accommodate it.
-# Not adding the same effort for the precompiled linux builds, though. Those
-# have different versions for CC compiler versions and linux kernels which
-# will never adequately match the user's setup, so there is no feasible way
-# to detect the "best" version to use. The user will have to manually
-# select the right files. (Chances are the distributions are shipping their
-# custom version of tbb, anyway, so the problem is probably nonexistent.)
-if (WIN32 AND MSVC)
- set(COMPILER_PREFIX "vc7.1")
- if (MSVC_VERSION EQUAL 1400)
- set(COMPILER_PREFIX "vc8")
- elseif(MSVC_VERSION EQUAL 1500)
- set(COMPILER_PREFIX "vc9")
- elseif(MSVC_VERSION EQUAL 1600)
- set(COMPILER_PREFIX "vc10")
- elseif(MSVC_VERSION EQUAL 1700)
- set(COMPILER_PREFIX "vc11")
- elseif(MSVC_VERSION EQUAL 1800)
- set(COMPILER_PREFIX "vc12")
- elseif(MSVC_VERSION GREATER_EQUAL 1900)
- set(COMPILER_PREFIX "vc14")
- endif ()
-
- # for each prefix path, add ia32/64\${COMPILER_PREFIX}\lib to the lib search path
- foreach (dir IN LISTS TBB_PREFIX_PATH)
- if (CMAKE_CL_64)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/ia64/${COMPILER_PREFIX}/lib)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/lib/ia64/${COMPILER_PREFIX})
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/intel64/${COMPILER_PREFIX}/lib)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/lib/intel64/${COMPILER_PREFIX})
- else ()
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/ia32/${COMPILER_PREFIX}/lib)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/lib/ia32/${COMPILER_PREFIX})
- endif ()
- endforeach ()
-endif ()
-
-# For OS X binary distribution, choose libc++ based libraries for Mavericks (10.9)
-# and above and AppleClang
-if (CMAKE_SYSTEM_NAME STREQUAL "Darwin" AND
- NOT CMAKE_SYSTEM_VERSION VERSION_LESS 13.0)
- set (USE_LIBCXX OFF)
- cmake_policy(GET CMP0025 POLICY_VAR)
-
- if (POLICY_VAR STREQUAL "NEW")
- if (CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
- set (USE_LIBCXX ON)
- endif ()
- else ()
- if (CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
- set (USE_LIBCXX ON)
- endif ()
- endif ()
-
- if (USE_LIBCXX)
- foreach (dir IN LISTS TBB_PREFIX_PATH)
- list (APPEND TBB_LIB_SEARCH_PATH ${dir}/lib/libc++ ${dir}/libc++/lib)
- endforeach ()
- endif ()
-endif ()
-
-# check compiler ABI
-if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
- set(COMPILER_PREFIX)
- if (NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.8)
- list(APPEND COMPILER_PREFIX "gcc4.8")
- endif()
- if (NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.7)
- list(APPEND COMPILER_PREFIX "gcc4.7")
- endif()
- if (NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.4)
- list(APPEND COMPILER_PREFIX "gcc4.4")
- endif()
- list(APPEND COMPILER_PREFIX "gcc4.1")
-elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
- set(COMPILER_PREFIX)
- if (NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.0) # Complete guess
- list(APPEND COMPILER_PREFIX "gcc4.8")
- endif()
- if (NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS 3.6)
- list(APPEND COMPILER_PREFIX "gcc4.7")
- endif()
- list(APPEND COMPILER_PREFIX "gcc4.4")
-else() # Assume compatibility with 4.4 for other compilers
- list(APPEND COMPILER_PREFIX "gcc4.4")
-endif ()
-
-# if platform architecture is explicitly specified
-set(TBB_ARCH_PLATFORM $ENV{TBB_ARCH_PLATFORM})
-if (TBB_ARCH_PLATFORM)
- foreach (dir IN LISTS TBB_PREFIX_PATH)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/${TBB_ARCH_PLATFORM}/lib)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/lib/${TBB_ARCH_PLATFORM})
- endforeach ()
-endif ()
-
-foreach (dir IN LISTS TBB_PREFIX_PATH)
- foreach (prefix IN LISTS COMPILER_PREFIX)
- if (CMAKE_SIZEOF_VOID_P EQUAL 8)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/lib/intel64)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/lib/intel64/${prefix})
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/intel64/lib)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/intel64/${prefix}/lib)
- else ()
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/lib/ia32)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/lib/ia32/${prefix})
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/ia32/lib)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/ia32/${prefix}/lib)
- endif ()
- endforeach()
-endforeach ()
-
-# add general search paths
-foreach (dir IN LISTS TBB_PREFIX_PATH)
- list(APPEND TBB_LIB_SEARCH_PATH ${dir}/lib ${dir}/Lib ${dir}/lib/tbb
- ${dir}/Libs)
- list(APPEND TBB_INC_SEARCH_PATH ${dir}/include ${dir}/Include
- ${dir}/include/tbb)
-endforeach ()
-
-set(TBB_LIBRARY_NAMES tbb)
-get_debug_names(TBB_LIBRARY_NAMES)
-
-find_path(TBB_INCLUDE_DIR
- NAMES tbb/tbb.h
- HINTS ${TBB_INC_SEARCH_PATH})
-find_tbb_library_verifying_release_debug_locations(TBB_LIBRARY)
-make_library_set(TBB_LIBRARY)
-
-findpkg_finish(TBB tbb)
-
-#if we haven't found TBB no point on going any further
-if (NOT TBB_FOUND)
- return()
-endif ()
-
-#=============================================================================
-# Look for TBB's malloc package
-set(TBB_MALLOC_LIBRARY_NAMES tbbmalloc)
-get_debug_names(TBB_MALLOC_LIBRARY_NAMES)
-
-find_path(TBB_MALLOC_INCLUDE_DIR
- NAMES tbb/tbb.h
- HINTS ${TBB_INC_SEARCH_PATH})
-find_tbb_library_verifying_release_debug_locations(TBB_MALLOC_LIBRARY)
-make_library_set(TBB_MALLOC_LIBRARY)
-
-findpkg_finish(TBB_MALLOC tbbmalloc)
-
-#=============================================================================
-# Look for TBB's malloc proxy package
-set(TBB_MALLOC_PROXY_LIBRARY_NAMES tbbmalloc_proxy)
-get_debug_names(TBB_MALLOC_PROXY_LIBRARY_NAMES)
-
-find_path(TBB_MALLOC_PROXY_INCLUDE_DIR
- NAMES tbb/tbbmalloc_proxy.h
- HINTS ${TBB_INC_SEARCH_PATH})
-find_tbb_library_verifying_release_debug_locations(TBB_MALLOC_PROXY_LIBRARY)
-make_library_set(TBB_MALLOC_PROXY_LIBRARY)
-
-findpkg_finish(TBB_MALLOC_PROXY tbbmalloc_proxy)
-
-
-#=============================================================================
-#parse all the version numbers from tbb
-if(NOT TBB_VERSION)
-
- #only read the start of the file
- file(STRINGS
- "${TBB_INCLUDE_DIR}/tbb/tbb_stddef.h"
- TBB_VERSION_CONTENTS
- REGEX "VERSION")
-
- string(REGEX REPLACE
- ".*#define TBB_VERSION_MAJOR ([0-9]+).*" "\\1"
- TBB_VERSION_MAJOR "${TBB_VERSION_CONTENTS}")
-
- string(REGEX REPLACE
- ".*#define TBB_VERSION_MINOR ([0-9]+).*" "\\1"
- TBB_VERSION_MINOR "${TBB_VERSION_CONTENTS}")
-
- string(REGEX REPLACE
- ".*#define TBB_INTERFACE_VERSION ([0-9]+).*" "\\1"
- TBB_INTERFACE_VERSION "${TBB_VERSION_CONTENTS}")
-
- string(REGEX REPLACE
- ".*#define TBB_COMPATIBLE_INTERFACE_VERSION ([0-9]+).*" "\\1"
- TBB_COMPATIBLE_INTERFACE_VERSION "${TBB_VERSION_CONTENTS}")
-
-endif()
diff --git a/cmake/PrettyPrintCMakeList.cmake b/cmake/PrettyPrintCMakeList.cmake
index 067883c..30151fe 100644
--- a/cmake/PrettyPrintCMakeList.cmake
+++ b/cmake/PrettyPrintCMakeList.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2018 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/cmake/ReadCeresVersionFromSource.cmake b/cmake/ReadCeresVersionFromSource.cmake
index 2859744..53e29c4 100644
--- a/cmake/ReadCeresVersionFromSource.cmake
+++ b/cmake/ReadCeresVersionFromSource.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/cmake/UpdateCacheVariable.cmake b/cmake/UpdateCacheVariable.cmake
index 82ae571..bf3b594 100644
--- a/cmake/UpdateCacheVariable.cmake
+++ b/cmake/UpdateCacheVariable.cmake
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/cmake/config.h.in b/cmake/config.h.in
index 4a516f6..1566795 100644
--- a/cmake/config.h.in
+++ b/cmake/config.h.in
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2022 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -50,15 +50,14 @@
// If defined, Ceres was compiled without SuiteSparse.
@CERES_NO_SUITESPARSE@
-// If defined, Ceres was compiled without CXSparse.
-@CERES_NO_CXSPARSE@
+// If defined, Ceres was compiled without CUDA.
+@CERES_NO_CUDA@
// If defined, Ceres was compiled without Apple's Accelerate framework solvers.
@CERES_NO_ACCELERATE_SPARSE@
#if defined(CERES_NO_SUITESPARSE) && \
defined(CERES_NO_ACCELERATE_SPARSE) && \
- defined(CERES_NO_CXSPARSE) && \
!defined(CERES_USE_EIGEN_SPARSE) // NOLINT
// If defined Ceres was compiled without any sparse linear algebra support.
#define CERES_NO_SPARSE
@@ -71,19 +70,26 @@
// routines.
@CERES_NO_CUSTOM_BLAS@
-// If defined, Ceres was compiled without multithreading support.
-@CERES_NO_THREADS@
-// If defined Ceres was compiled with OpenMP multithreading.
-@CERES_USE_OPENMP@
-// If defined Ceres was compiled with modern C++ multithreading.
-@CERES_USE_CXX_THREADS@
+// If defined, Ceres was compiled with a version of SuiteSparse/CHOLMOD without
+// the Partition module (requires METIS).
+@CERES_NO_CHOLMOD_PARTITION@
+// If defined Ceres was compiled without support for METIS via Eigen.
+@CERES_NO_EIGEN_METIS@
-// If defined, Ceres was built as a shared library.
-@CERES_USING_SHARED_LIBRARY@
-// If defined, Ceres was compiled with a version MSVC >= 2005 which
-// deprecated the standard POSIX names for bessel functions, replacing them
-// with underscore prefixed versions (e.g. j0() -> _j0()).
-@CERES_MSVC_USE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS@
+// CERES_NO_SPARSE should be automatically defined by config.h if Ceres was
+// compiled without any sparse back-end. Verify that it has not subsequently
+// been inconsistently redefined.
+#if defined(CERES_NO_SPARSE)
+#if !defined(CERES_NO_SUITESPARSE)
+#error CERES_NO_SPARSE requires CERES_NO_SUITESPARSE.
+#endif
+#if !defined(CERES_NO_ACCELERATE_SPARSE)
+#error CERES_NO_SPARSE requires CERES_NO_ACCELERATE_SPARSE
+#endif
+#if defined(CERES_USE_EIGEN_SPARSE)
+#error CERES_NO_SPARSE requires !CERES_USE_EIGEN_SPARSE
+#endif
+#endif
#endif // CERES_PUBLIC_INTERNAL_CONFIG_H_
diff --git a/cmake/iOS.cmake b/cmake/iOS.cmake
index 4029f96..0773d13 100644
--- a/cmake/iOS.cmake
+++ b/cmake/iOS.cmake
@@ -257,7 +257,7 @@
set(CMAKE_C_FLAGS
"${XCODE_IOS_PLATFORM_VERSION_FLAGS} -fobjc-abi-version=2 -fobjc-arc ${CMAKE_C_FLAGS}")
-# Hidden visibilty is required for C++ on iOS.
+# Hidden visibility is required for C++ on iOS.
set(CMAKE_CXX_FLAGS
"${XCODE_IOS_PLATFORM_VERSION_FLAGS} -fvisibility=hidden -fvisibility-inlines-hidden -fobjc-abi-version=2 -fobjc-arc ${CMAKE_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "-DNDEBUG -O3 -fomit-frame-pointer -ffast-math ${CMAKE_CXX_FLAGS_RELEASE}")
@@ -309,7 +309,7 @@
${CMAKE_OSX_SYSROOT}/Developer/Library/Frameworks)
# Only search the specified iOS SDK, not the remainder of the host filesystem.
-set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM ONLY)
+set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
@@ -322,7 +322,6 @@
# This macro lets you find executable programs on the host system.
macro(find_host_package)
- set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE NEVER)
set(IOS FALSE)
@@ -330,7 +329,6 @@
find_package(${ARGN})
set(IOS TRUE)
- set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
endmacro(find_host_package)
diff --git a/config/ceres/internal/config.h b/config/ceres/internal/config.h
index 1cf034d..969e43b 100644
--- a/config/ceres/internal/config.h
+++ b/config/ceres/internal/config.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2022 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,16 +30,14 @@
// Default (empty) configuration options for Ceres.
//
-// IMPORTANT: Most users of Ceres will not use this file, when
-// compiling Ceres with CMake, CMake will configure a new
-// config.h with the currently selected Ceres compile
-// options in <BUILD_DIR>/config, which will be added to
-// the include path for compilation, and installed with the
-// public Ceres headers. However, for some users of Ceres
-// who compile without CMake (Android), this file ensures
-// that Ceres will compile, with the user either specifying
-// manually the Ceres compile options, or passing them
-// directly through the compiler.
+// IMPORTANT: Most users of Ceres will not use this file, when compiling Ceres
+// with CMake, CMake will configure a new config.h with the currently
+// selected Ceres compile options in <BUILD_DIR>/config, which will
+// be added to the include path for compilation, and installed with
+// the public Ceres headers. However, for some users of Ceres who
+// compile without CMake (Bazel), this file ensures that Ceres will
+// compile, with the user either specifying manually the Ceres
+// compile options, or passing them directly through the compiler.
#ifndef CERES_PUBLIC_INTERNAL_CONFIG_H_
#define CERES_PUBLIC_INTERNAL_CONFIG_H_
diff --git a/internal/ceres/float_cxsparse.h b/config/ceres/internal/export.h
similarity index 63%
rename from internal/ceres/float_cxsparse.h
rename to config/ceres/internal/export.h
index 9a274c2..0d4495d 100644
--- a/internal/ceres/float_cxsparse.h
+++ b/config/ceres/internal/export.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2022 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -26,33 +26,21 @@
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
+// Author: alexs.mac@gmail.com (Alex Stewart)
-#ifndef CERES_INTERNAL_FLOAT_CXSPARSE_H_
-#define CERES_INTERNAL_FLOAT_CXSPARSE_H_
+// Default (empty) configuration options for Ceres.
+//
+// IMPORTANT: Most users of Ceres will not use this file, when compiling Ceres
+// with CMake, CMake will configure a new config.h with the currently
+// selected Ceres compile options in <BUILD_DIR>/export, which will
+// be added to the include path for compilation, and installed with
+// the public Ceres headers. However, for some users of Ceres who
+// compile without CMake (Bazel), this file ensures that Ceres will
+// compile, with the user either specifying manually the Ceres
+// compile options, or passing them directly through the compiler.
-// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#ifndef CERES_PUBLIC_INTERNAL_EXPORT_H_
+#define CERES_PUBLIC_INTERNAL_EXPORT_H_
-#if !defined(CERES_NO_CXSPARSE)
-#include <memory>
-
-#include "ceres/sparse_cholesky.h"
-
-namespace ceres {
-namespace internal {
-
-// Fake implementation of a single precision Sparse Cholesky using
-// CXSparse.
-class FloatCXSparseCholesky : public SparseCholesky {
- public:
- static std::unique_ptr<SparseCholesky> Create(OrderingType ordering_type);
-};
-
-} // namespace internal
-} // namespace ceres
-
-#endif // !defined(CERES_NO_CXSPARSE)
-
-#endif // CERES_INTERNAL_FLOAT_CXSPARSE_H_
+#endif // CERES_PUBLIC_INTERNAL_EXPORT_H_
diff --git a/docs/CMakeLists.txt b/docs/CMakeLists.txt
index cfdd910..b5588ef 100644
--- a/docs/CMakeLists.txt
+++ b/docs/CMakeLists.txt
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/docs/source/CMakeLists.txt b/docs/source/CMakeLists.txt
index 70bf998..1f8fed8 100644
--- a/docs/source/CMakeLists.txt
+++ b/docs/source/CMakeLists.txt
@@ -1,19 +1,21 @@
-find_package(Sphinx REQUIRED)
-
# HTML output directory
set(SPHINX_HTML_DIR "${Ceres_BINARY_DIR}/docs/html")
# Install documentation
install(DIRECTORY ${SPHINX_HTML_DIR}
- DESTINATION "${CERES_DOCS_INSTALL_DIR}"
+ DESTINATION ${CMAKE_INSTALL_DOCDIR}
COMPONENT Doc
PATTERN "${SPHINX_HTML_DIR}/*")
+# Find python
+find_package(Python REQUIRED COMPONENTS Interpreter)
+
# Building using 'make_docs.py' python script
add_custom_target(ceres_docs ALL
- python
+ $<TARGET_FILE:Python::Interpreter>
"${Ceres_SOURCE_DIR}/scripts/make_docs.py"
"${Ceres_SOURCE_DIR}"
"${Ceres_BINARY_DIR}/docs"
- "${SPHINX_EXECUTABLE}"
+ "${Sphinx_BUILD_EXECUTABLE}"
+ USES_TERMINAL
COMMENT "Building HTML documentation with Sphinx")
diff --git a/docs/source/automatic_derivatives.rst b/docs/source/automatic_derivatives.rst
index e15e911..3fe0727 100644
--- a/docs/source/automatic_derivatives.rst
+++ b/docs/source/automatic_derivatives.rst
@@ -37,9 +37,7 @@
};
- CostFunction* cost_function =
- new AutoDiffCostFunction<Rat43CostFunctor, 1, 4>(
- new Rat43CostFunctor(x, y));
+ auto* cost_function = new AutoDiffCostFunction<Rat43CostFunctor, 1, 4>(x, y);
Notice that compared to numeric differentiation, the only difference
when defining the functor for use with automatic differentiation is
@@ -298,7 +296,7 @@
There is no single solution to this problem. In some cases one needs
to reason explicitly about the points where indeterminacy may occur
-and use alternate expressions using `L'Hopital's rule
+and use alternate expressions using `L'Hôpital's rule
<https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule>`_ (see for
example some of the conversion routines in `rotation.h
<https://github.com/ceres-solver/ceres-solver/blob/master/include/ceres/rotation.h>`_. In
diff --git a/docs/source/bibliography.rst b/docs/source/bibliography.rst
index c13c676..ba3bc87 100644
--- a/docs/source/bibliography.rst
+++ b/docs/source/bibliography.rst
@@ -4,6 +4,20 @@
Bibliography
============
+Background Reading
+==================
+
+For a short but informative introduction to the subject we recommend
+the booklet by [Madsen]_ . For a general introduction to non-linear
+optimization we recommend [NocedalWright]_. [Bjorck]_ remains the
+seminal reference on least squares problems. [TrefethenBau]_ is our
+favorite text on introductory numerical linear algebra. [Triggs]_
+provides a thorough coverage of the bundle adjustment problem.
+
+
+References
+==========
+
.. [Agarwal] S. Agarwal, N. Snavely, S. M. Seitz and R. Szeliski,
**Bundle Adjustment in the Large**, *Proceedings of the European
Conference on Computer Vision*, pp. 29--42, 2010.
@@ -31,6 +45,9 @@
.. [Conn] A.R. Conn, N.I.M. Gould, and P.L. Toint, **Trust region
methods**, *Society for Industrial Mathematics*, 2000.
+.. [Davis] Timothy A. Davis, **Direct methods for Sparse Linear
+ Systems**, *SIAM*, 2006.
+
.. [Dellaert] F. Dellaert, J. Carlson, V. Ila, K. Ni and C. E. Thorpe,
**Subgraph-preconditioned conjugate gradients for large scale SLAM**,
*International Conference on Intelligent Robots and Systems*, 2010.
@@ -44,7 +61,7 @@
Preconditioners for Sparse Linear Least-Squares Problems**,
*ACM Trans. Math. Softw.*, 43(4), 2017.
-.. [HartleyZisserman] R.I. Hartley & A. Zisserman, **Multiview
+.. [HartleyZisserman] R.I. Hartley and A. Zisserman, **Multiview
Geometry in Computer Vision**, Cambridge University Press, 2004.
.. [Hertzberg] C. Hertzberg, R. Wagner, U. Frese and L. Schroder,
@@ -79,6 +96,11 @@
preconditioner for large sparse least squares problems**, *SIAM
Journal on Matrix Analysis and Applications*, 28(2):524-550, 2007.
+.. [LourakisArgyros] M. L. A. Lourakis, A. A. Argyros, **Is
+ Levenberg-Marquardt the most efficient algorithm for implementing
+ bundle adjustment?**, *International Conference on Computer
+ Vision*, 2005.
+
.. [Madsen] K. Madsen, H.B. Nielsen, and O. Tingleff, **Methods for
nonlinear least squares problems**, 2004.
@@ -100,7 +122,7 @@
.. [Nocedal] J. Nocedal, **Updating Quasi-Newton Matrices with Limited
Storage**, *Mathematics of Computation*, 35(151): 773--782, 1980.
-.. [NocedalWright] J. Nocedal & S. Wright, **Numerical Optimization**,
+.. [NocedalWright] J. Nocedal and S. Wright, **Numerical Optimization**,
Springer, 2004.
.. [Oren] S. S. Oren, **Self-scaling Variable Metric (SSVM) Algorithms
@@ -108,7 +130,7 @@
20(5), 863-874, 1974.
.. [Press] W. H. Press, S. A. Teukolsky, W. T. Vetterling
- & B. P. Flannery, **Numerical Recipes**, Cambridge University
+ and B. P. Flannery, **Numerical Recipes**, Cambridge University
Press, 2007.
.. [Ridders] C. J. F. Ridders, **Accurate computation of F'(x) and
@@ -122,27 +144,37 @@
systems**, SIAM, 2003.
.. [Simon] I. Simon, N. Snavely and S. M. Seitz, **Scene Summarization
- for Online Image Collections**, *International Conference on Computer Vision*, 2007.
+ for Online Image Collections**, *International Conference on
+ Computer Vision*, 2007.
.. [Stigler] S. M. Stigler, **Gauss and the invention of least
squares**, *The Annals of Statistics*, 9(3):465-474, 1981.
-.. [TenenbaumDirector] J. Tenenbaum & B. Director, **How Gauss
+.. [TenenbaumDirector] J. Tenenbaum and B. Director, **How Gauss
Determined the Orbit of Ceres**.
.. [TrefethenBau] L.N. Trefethen and D. Bau, **Numerical Linear
Algebra**, SIAM, 1997.
-.. [Triggs] B. Triggs, P. F. Mclauchlan, R. I. Hartley &
+.. [Triggs] B. Triggs, P. F. Mclauchlan, R. I. Hartley and
A. W. Fitzgibbon, **Bundle Adjustment: A Modern Synthesis**,
Proceedings of the International Workshop on Vision Algorithms:
Theory and Practice, pp. 298-372, 1999.
+.. [Weber] S. Weber, N. Demmel, T. C. Chan, D. Cremers, **Power Bundle
+ Adjustment for Large-Scale 3D Reconstruction**, *IEEE Conference on
+ Computer Vision and Pattern Recognition*, 2023.
+
.. [Wiberg] T. Wiberg, **Computation of principal components when data
are missing**, In Proc. *Second Symp. Computational Statistics*,
pages 229-236, 1976.
-.. [WrightHolt] S. J. Wright and J. N. Holt, **An Inexact
- Levenberg Marquardt Method for Large Sparse Nonlinear Least
- Squares**, *Journal of the Australian Mathematical Society Series
- B*, 26(4):387-403, 1985.
+.. [WrightHolt] S. J. Wright and J. N. Holt, **An Inexact Levenberg
+ Marquardt Method for Large Sparse Nonlinear Least Squares**,
+ *Journal of the Australian Mathematical Society Series B*,
+ 26(4):387-403, 1985.
+
+.. [Zheng] Q. Zheng, Y. Xi and Y. Saad, **A power Schur Complement
+ low-rank correction preconditioner for general sparse linear
+ systems**, *SIAM Journal on Matrix Analysis and
+ Applications*, 2021.
diff --git a/docs/source/conf.py b/docs/source/conf.py
index c83468f..faa2403 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -25,7 +25,7 @@
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = ['sphinx.ext.todo', 'sphinx.ext.mathjax', 'sphinx.ext.ifconfig']
+extensions = ['sphinx.ext.todo', 'sphinx.ext.mathjax', 'sphinx.ext.ifconfig', 'sphinxcontrib.jquery']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -41,16 +41,16 @@
# General information about the project.
project = u'Ceres Solver'
-copyright = u'2020 Google Inc'
+copyright = u'2023 Google Inc'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
-version = '2.0'
+version = '2.2'
# The full version, including alpha/beta/rc tags.
-release = '2.0.0'
+release = '2.2.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -246,7 +246,7 @@
# By default MathJax does not use TeX fonts, which is a tragedy. Also
# scaling the fonts down a bit makes them fit better with font sizing
# in the "Read The Docs" theme.
-mathjax_config = {
+mathjax3_config = {
'HTML-CSS' : {
'availableFonts' : ["TeX"],
'scale' : 90
diff --git a/docs/source/contributing.rst b/docs/source/contributing.rst
index a128e30..cba274c 100644
--- a/docs/source/contributing.rst
+++ b/docs/source/contributing.rst
@@ -118,8 +118,8 @@
When the push succeeds, the console will display a URL showing the
address of the review. Go to the URL and add at least one of the
- maintainers (Sameer Agarwal, Keir Mierle, Alex Stewart or William
- Rucklidge) as reviewers.
+ maintainers (Sameer Agarwal, Keir Mierle, Alex Stewart, William
+ Rucklidge or Sergiu Deitsch) as reviewers.
3. Wait for a review.
diff --git a/docs/source/derivatives.rst b/docs/source/derivatives.rst
index bff6a29..d9a52b0 100644
--- a/docs/source/derivatives.rst
+++ b/docs/source/derivatives.rst
@@ -58,3 +58,4 @@
numerical_derivatives
automatic_derivatives
interfacing_with_autodiff
+ inverse_and_implicit_function_theorems
diff --git a/docs/source/faqs.rst b/docs/source/faqs.rst
index 5a28f41..65c64e6 100644
--- a/docs/source/faqs.rst
+++ b/docs/source/faqs.rst
@@ -16,14 +16,3 @@
modeling_faqs
solving_faqs
-
-
-Further Reading
-===============
-
-For a short but informative introduction to the subject we recommend
-the booklet by [Madsen]_ . For a general introduction to non-linear
-optimization we recommend [NocedalWright]_. [Bjorck]_ remains the
-seminal reference on least squares problems. [TrefethenBau]_ book is
-our favorite text on introductory numerical linear algebra. [Triggs]_
-provides a thorough coverage of the bundle adjustment problem.
diff --git a/docs/source/features.rst b/docs/source/features.rst
index 724d6dc..609f41c 100644
--- a/docs/source/features.rst
+++ b/docs/source/features.rst
@@ -1,11 +1,15 @@
+.. default-domain:: cpp
+
+.. cpp:namespace:: ceres
+
====
Why?
====
.. _chapter-features:
* **Code Quality** - Ceres Solver has been used in production at
- Google for more than four years now. It is clean, extensively tested
- and well documented code that is actively developed and supported.
+ Google since 2011. It is clean, extensively tested and well
+ documented code that is actively developed and supported.
* **Modeling API** - It is rarely the case that one starts with the
exact and complete formulation of the problem that one is trying to
@@ -27,10 +31,10 @@
allows the user to *shape* their residuals using a
:class:`LossFunction` to reduce the influence of outliers.
- - **Local Parameterization** In many cases, some parameters lie on a
- manifold other than Euclidean space, e.g., rotation matrices. In
- such cases, the user can specify the geometry of the local tangent
- space by specifying a :class:`LocalParameterization` object.
+ - **Manifolds** In many cases, some parameters lie on a manifold
+ other than Euclidean space, e.g., rotation matrices. In such
+ cases, the user can specify the geometry of the local tangent
+ space by specifying a :class:`Manifold` object.
* **Solver Choice** Depending on the size, sparsity structure, time &
memory budgets, and solution quality requirements, different
@@ -42,10 +46,11 @@
computational cost in all of these methods is the solution of a
linear system. To this end Ceres ships with a variety of linear
solvers - dense QR and dense Cholesky factorization (using
- `Eigen`_ or `LAPACK`_) for dense problems, sparse Cholesky
- factorization (`SuiteSparse`_, `CXSparse`_ or `Eigen`_) for large
- sparse problems, custom Schur complement based dense, sparse, and
- iterative linear solvers for `bundle adjustment`_ problems.
+ `Eigen`_, `LAPACK`_ or `CUDA`_) for dense problems, sparse
+ Cholesky factorization (`SuiteSparse`_, `Apple's Accelerate`_,
+ `Eigen`_) for large sparse problems, custom Schur complement based
+ dense, sparse, and iterative linear solvers for `bundle
+ adjustment`_ problems.
- **Line Search Solvers** - When the problem size is so large that
storing and factoring the Jacobian is not feasible or a low
@@ -54,18 +59,21 @@
of Non-linear Conjugate Gradients, BFGS and LBFGS.
* **Speed** - Ceres Solver has been extensively optimized, with C++
- templating, hand written linear algebra routines and OpenMP or
- modern C++ threads based multithreading of the Jacobian evaluation
- and the linear solvers.
+ templating, hand written linear algebra routines and modern C++
+ threads based multithreading of the Jacobian evaluation and the
+ linear solvers.
-* **Solution Quality** Ceres is the `best performing`_ solver on the NIST
- problem set used by Mondragon and Borchers for benchmarking
+* **GPU Acceleration** If your system supports `CUDA`_ then Ceres
+ Solver can use the Nvidia GPU on your system to speed up the solver.
+
+* **Solution Quality** Ceres is the `best performing`_ solver on the
+ NIST problem set used by Mondragon and Borchers for benchmarking
non-linear least squares solvers.
* **Covariance estimation** - Evaluate the sensitivity/uncertainty of
the solution by evaluating all or part of the covariance
- matrix. Ceres is one of the few solvers that allows you to do
- this analysis at scale.
+ matrix. Ceres is one of the few solvers that allows you to do this
+ analysis at scale.
* **Community** Since its release as an open source software, Ceres
has developed an active developer community that contributes new
@@ -82,6 +90,7 @@
.. _SuiteSparse: http://www.cise.ufl.edu/research/sparse/SuiteSparse/
.. _Eigen: http://eigen.tuxfamily.org/
.. _LAPACK: http://www.netlib.org/lapack/
-.. _CXSparse: https://www.cise.ufl.edu/research/sparse/CXSparse/
.. _automatic: http://en.wikipedia.org/wiki/Automatic_differentiation
.. _numeric: http://en.wikipedia.org/wiki/Numerical_differentiation
+.. _CUDA : https://developer.nvidia.com/cuda-toolkit
+.. _Apple's Accelerate: https://developer.apple.com/documentation/accelerate/sparse_solvers
diff --git a/docs/source/gradient_solver.rst b/docs/source/gradient_solver.rst
index dde9d7e..4e3fc71 100644
--- a/docs/source/gradient_solver.rst
+++ b/docs/source/gradient_solver.rst
@@ -2,6 +2,8 @@
.. default-domain:: cpp
+.. cpp:namespace:: ceres
+
.. _chapter-gradient_problem_solver:
==================================
@@ -54,9 +56,9 @@
public:
explicit GradientProblem(FirstOrderFunction* function);
GradientProblem(FirstOrderFunction* function,
- LocalParameterization* parameterization);
+ Manifold* manifold);
int NumParameters() const;
- int NumLocalParameters() const;
+ int NumTangentParameters() const;
bool Evaluate(const double* parameters, double* cost, double* gradient) const;
bool Plus(const double* x, const double* delta, double* x_plus_delta) const;
};
@@ -69,20 +71,18 @@
form of the objective function.
Structurally :class:`GradientProblem` is a composition of a
-:class:`FirstOrderFunction` and optionally a
-:class:`LocalParameterization`.
+:class:`FirstOrderFunction` and optionally a :class:`Manifold`.
The :class:`FirstOrderFunction` is responsible for evaluating the cost
and gradient of the objective function.
-The :class:`LocalParameterization` is responsible for going back and
-forth between the ambient space and the local tangent space. When a
-:class:`LocalParameterization` is not provided, then the tangent space
-is assumed to coincide with the ambient Euclidean space that the
-gradient vector lives in.
+The :class:`Manifold` is responsible for going back and forth between the
+ambient space and the local tangent space. When a :class:`Manifold` is not
+provided, then the tangent space is assumed to coincide with the ambient
+Euclidean space that the gradient vector lives in.
The constructor takes ownership of the :class:`FirstOrderFunction` and
-:class:`LocalParamterization` objects passed to it.
+:class:`Manifold` objects passed to it.
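+
+As a minimal sketch, assuming a hypothetical ``MyFunction`` that
+implements :class:`FirstOrderFunction` and a parameter vector
+constrained to the unit sphere in three dimensions:
+
+.. code-block:: c++
+
+   // The problem takes ownership of both objects.
+   ceres::GradientProblem problem(new MyFunction,
+                                  new ceres::SphereManifold<3>());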
.. function:: void Solve(const GradientProblemSolver::Options& options, const GradientProblem& problem, double* parameters, GradientProblemSolver::Summary* summary)
@@ -103,7 +103,7 @@
behavior of the solver. We list the various settings and their
default values below.
-.. function:: bool GradientProblemSolver::Options::IsValid(string* error) const
+.. function:: bool GradientProblemSolver::Options::IsValid(std::string* error) const
Validate the values in the options struct and returns true on
success. If there is a problem, the method returns false with
@@ -123,7 +123,7 @@
Choices are ``ARMIJO`` and ``WOLFE`` (strong Wolfe conditions).
Note that in order for the assumptions underlying the ``BFGS`` and
``LBFGS`` line search direction algorithms to be guaranteed to be
- satisifed, the ``WOLFE`` line search should be used.
+ satisfied, the ``WOLFE`` line search should be used.
.. member:: NonlinearConjugateGradientType GradientProblemSolver::Options::nonlinear_conjugate_gradient_type
@@ -192,7 +192,7 @@
low-sensitivity parameters. It can also reduce the robustness of the
solution to errors in the Jacobians.
-.. member:: LineSearchIterpolationType GradientProblemSolver::Options::line_search_interpolation_type
+.. member:: LineSearchInterpolationType GradientProblemSolver::Options::line_search_interpolation_type
Default: ``CUBIC``
@@ -342,8 +342,8 @@
where :math:`\|\cdot\|_\infty` refers to the max norm, :math:`\Pi`
is projection onto the bounds constraints and :math:`\boxplus` is
- Plus operation for the overall local parameterization associated
- with the parameter vector.
+ Plus operation for the manifold associated with the parameter
+ vector.
.. member:: double GradientProblemSolver::Options::parameter_tolerance
@@ -388,14 +388,14 @@
#. ``it`` is the time take by the current iteration.
#. ``tt`` is the total time taken by the minimizer.
-.. member:: vector<IterationCallback> GradientProblemSolver::Options::callbacks
+.. member:: std::vector<IterationCallback> GradientProblemSolver::Options::callbacks
Callbacks that are executed at the end of each iteration of the
:class:`Minimizer`. They are executed in the order that they are
specified in this vector. By default, parameter blocks are updated
only at the end of the optimization, i.e., when the
:class:`Minimizer` terminates. This behavior is controlled by
- :member:`GradientProblemSolver::Options::update_state_every_variable`. If
+ :member:`GradientProblemSolver::Options::update_state_every_iteration`. If
the user wishes to have access to the update parameter blocks when
his/her callbacks are executed, then set
:member:`GradientProblemSolver::Options::update_state_every_iteration`
@@ -404,7 +404,7 @@
The solver does NOT take ownership of these pointers.
-.. member:: bool Solver::Options::update_state_every_iteration
+.. member:: bool GradientProblemSolver::Options::update_state_every_iteration
Default: ``false``
@@ -420,12 +420,12 @@
Summary of the various stages of the solver after termination.
-.. function:: string GradientProblemSolver::Summary::BriefReport() const
+.. function:: std::string GradientProblemSolver::Summary::BriefReport() const
A brief one line description of the state of the solver after
termination.
-.. function:: string GradientProblemSolver::Summary::FullReport() const
+.. function:: std::string GradientProblemSolver::Summary::FullReport() const
A full multiline description of the state of the solver after
termination.
@@ -444,7 +444,7 @@
The cause of the minimizer terminating.
-.. member:: string GradientProblemSolver::Summary::message
+.. member:: std::string GradientProblemSolver::Summary::message
Reason why the solver terminated.
@@ -458,7 +458,7 @@
Cost of the problem (value of the objective function) after the
optimization.
-.. member:: vector<IterationSummary> GradientProblemSolver::Summary::iterations
+.. member:: std::vector<IterationSummary> GradientProblemSolver::Summary::iterations
:class:`IterationSummary` for each minimizer iteration in order.
@@ -486,11 +486,11 @@
Number of parameters in the problem.
-.. member:: int GradientProblemSolver::Summary::num_local_parameters
+.. member:: int GradientProblemSolver::Summary::num_tangent_parameters
Dimension of the tangent space of the problem. This is different
from :member:`GradientProblemSolver::Summary::num_parameters` if a
- :class:`LocalParameterization` object is used.
+ :class:`Manifold` object is used.
.. member:: LineSearchDirectionType GradientProblemSolver::Summary::line_search_direction_type
diff --git a/docs/source/gradient_tutorial.rst b/docs/source/gradient_tutorial.rst
index 3fef6b6..2af44e1 100644
--- a/docs/source/gradient_tutorial.rst
+++ b/docs/source/gradient_tutorial.rst
@@ -2,52 +2,53 @@
.. default-domain:: cpp
+.. cpp:namespace:: ceres
+
.. _chapter-gradient_tutorial:
==================================
General Unconstrained Minimization
==================================
-While much of Ceres Solver is devoted to solving non-linear least
-squares problems, internally it contains a solver that can solve
-general unconstrained optimization problems using just their objective
-function value and gradients. The ``GradientProblem`` and
-``GradientProblemSolver`` objects give the user access to this solver.
-
-So without much further ado, let us look at how one goes about using
-them.
+Besides solving non-linear least squares problems, Ceres Solver can
+also solve general unconstrained problems using just their objective
+function value and gradients. In this chapter we will see how to do
+this.
Rosenbrock's Function
=====================
-We consider the minimization of the famous `Rosenbrock's function
+Consider minimizing the famous `Rosenbrock's function
<http://en.wikipedia.org/wiki/Rosenbrock_function>`_ [#f1]_.
-We begin by defining an instance of the ``FirstOrderFunction``
-interface. This is the object that is responsible for computing the
-objective function value and the gradient (if required). This is the
-analog of the :class:`CostFunction` when defining non-linear least
-squares problems in Ceres.
+The simplest way to minimize this function is to define a templated
+functor that evaluates its objective value and then use Ceres Solver's
+automatic differentiation to compute the derivatives.
+
+We begin by defining a templated functor and then using
+``AutoDiffFirstOrderFunction`` to construct an instance of the
+``FirstOrderFunction`` interface. This is the object that is
+responsible for computing the objective function value and the
+gradient (if required). This is the analog of the
+:class:`CostFunction` when defining non-linear least squares problems
+in Ceres.
.. code::
- class Rosenbrock : public ceres::FirstOrderFunction {
- public:
- virtual bool Evaluate(const double* parameters,
- double* cost,
- double* gradient) const {
- const double x = parameters[0];
- const double y = parameters[1];
-
+ // f(x,y) = (1-x)^2 + 100(y - x^2)^2;
+ struct Rosenbrock {
+ template <typename T>
+ bool operator()(const T* parameters, T* cost) const {
+ const T x = parameters[0];
+ const T y = parameters[1];
cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
- if (gradient != nullptr) {
- gradient[0] = -2.0 * (1.0 - x) - 200.0 * (y - x * x) * 2.0 * x;
- gradient[1] = 200.0 * (y - x * x);
- }
return true;
}
- virtual int NumParameters() const { return 2; }
+ static ceres::FirstOrderFunction* Create() {
+ constexpr int kNumParameters = 2;
+ return new ceres::AutoDiffFirstOrderFunction<Rosenbrock, kNumParameters>();
+ }
};
@@ -58,7 +59,7 @@
double parameters[2] = {-1.2, 1.0};
- ceres::GradientProblem problem(new Rosenbrock());
+ ceres::GradientProblem problem(Rosenbrock::Create());
ceres::GradientProblemSolver::Options options;
options.minimizer_progress_to_stdout = true;
@@ -74,65 +75,130 @@
.. code-block:: bash
- 0: f: 2.420000e+01 d: 0.00e+00 g: 2.16e+02 h: 0.00e+00 s: 0.00e+00 e: 0 it: 2.00e-05 tt: 2.00e-05
- 1: f: 4.280493e+00 d: 1.99e+01 g: 1.52e+01 h: 2.01e-01 s: 8.62e-04 e: 2 it: 7.32e-05 tt: 2.19e-04
- 2: f: 3.571154e+00 d: 7.09e-01 g: 1.35e+01 h: 3.78e-01 s: 1.34e-01 e: 3 it: 2.50e-05 tt: 2.68e-04
- 3: f: 3.440869e+00 d: 1.30e-01 g: 1.73e+01 h: 1.36e-01 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 2.92e-04
- 4: f: 3.213597e+00 d: 2.27e-01 g: 1.55e+01 h: 1.06e-01 s: 4.59e-01 e: 1 it: 2.86e-06 tt: 3.14e-04
- 5: f: 2.839723e+00 d: 3.74e-01 g: 1.05e+01 h: 1.34e-01 s: 5.24e-01 e: 1 it: 2.86e-06 tt: 3.36e-04
- 6: f: 2.448490e+00 d: 3.91e-01 g: 1.29e+01 h: 3.04e-01 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 3.58e-04
- 7: f: 1.943019e+00 d: 5.05e-01 g: 4.00e+00 h: 8.81e-02 s: 7.43e-01 e: 1 it: 4.05e-06 tt: 3.79e-04
- 8: f: 1.731469e+00 d: 2.12e-01 g: 7.36e+00 h: 1.71e-01 s: 4.60e-01 e: 2 it: 9.06e-06 tt: 4.06e-04
- 9: f: 1.503267e+00 d: 2.28e-01 g: 6.47e+00 h: 8.66e-02 s: 1.00e+00 e: 1 it: 3.81e-06 tt: 4.33e-04
- 10: f: 1.228331e+00 d: 2.75e-01 g: 2.00e+00 h: 7.70e-02 s: 7.90e-01 e: 1 it: 3.81e-06 tt: 4.54e-04
- 11: f: 1.016523e+00 d: 2.12e-01 g: 5.15e+00 h: 1.39e-01 s: 3.76e-01 e: 2 it: 1.00e-05 tt: 4.82e-04
- 12: f: 9.145773e-01 d: 1.02e-01 g: 6.74e+00 h: 7.98e-02 s: 1.00e+00 e: 1 it: 3.10e-06 tt: 5.03e-04
- 13: f: 7.508302e-01 d: 1.64e-01 g: 3.88e+00 h: 5.76e-02 s: 4.93e-01 e: 1 it: 2.86e-06 tt: 5.25e-04
- 14: f: 5.832378e-01 d: 1.68e-01 g: 5.56e+00 h: 1.42e-01 s: 1.00e+00 e: 1 it: 3.81e-06 tt: 5.47e-04
- 15: f: 3.969581e-01 d: 1.86e-01 g: 1.64e+00 h: 1.17e-01 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 5.68e-04
- 16: f: 3.171557e-01 d: 7.98e-02 g: 3.84e+00 h: 1.18e-01 s: 3.97e-01 e: 2 it: 9.06e-06 tt: 5.94e-04
- 17: f: 2.641257e-01 d: 5.30e-02 g: 3.27e+00 h: 6.14e-02 s: 1.00e+00 e: 1 it: 3.10e-06 tt: 6.16e-04
- 18: f: 1.909730e-01 d: 7.32e-02 g: 5.29e-01 h: 8.55e-02 s: 6.82e-01 e: 1 it: 4.05e-06 tt: 6.42e-04
- 19: f: 1.472012e-01 d: 4.38e-02 g: 3.11e+00 h: 1.20e-01 s: 3.47e-01 e: 2 it: 1.00e-05 tt: 6.69e-04
- 20: f: 1.093558e-01 d: 3.78e-02 g: 2.97e+00 h: 8.43e-02 s: 1.00e+00 e: 1 it: 3.81e-06 tt: 6.91e-04
- 21: f: 6.710346e-02 d: 4.23e-02 g: 1.42e+00 h: 9.64e-02 s: 8.85e-01 e: 1 it: 3.81e-06 tt: 7.12e-04
- 22: f: 3.993377e-02 d: 2.72e-02 g: 2.30e+00 h: 1.29e-01 s: 4.63e-01 e: 2 it: 9.06e-06 tt: 7.39e-04
- 23: f: 2.911794e-02 d: 1.08e-02 g: 2.55e+00 h: 6.55e-02 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 7.62e-04
- 24: f: 1.457683e-02 d: 1.45e-02 g: 2.77e-01 h: 6.37e-02 s: 6.14e-01 e: 1 it: 3.81e-06 tt: 7.84e-04
- 25: f: 8.577515e-03 d: 6.00e-03 g: 2.86e+00 h: 1.40e-01 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 8.05e-04
- 26: f: 3.486574e-03 d: 5.09e-03 g: 1.76e-01 h: 1.23e-02 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 8.27e-04
- 27: f: 1.257570e-03 d: 2.23e-03 g: 1.39e-01 h: 5.08e-02 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 8.48e-04
- 28: f: 2.783568e-04 d: 9.79e-04 g: 6.20e-01 h: 6.47e-02 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 8.69e-04
- 29: f: 2.533399e-05 d: 2.53e-04 g: 1.68e-02 h: 1.98e-03 s: 1.00e+00 e: 1 it: 3.81e-06 tt: 8.91e-04
- 30: f: 7.591572e-07 d: 2.46e-05 g: 5.40e-03 h: 9.27e-03 s: 1.00e+00 e: 1 it: 3.81e-06 tt: 9.12e-04
- 31: f: 1.902460e-09 d: 7.57e-07 g: 1.62e-03 h: 1.89e-03 s: 1.00e+00 e: 1 it: 2.86e-06 tt: 9.33e-04
- 32: f: 1.003030e-12 d: 1.90e-09 g: 3.50e-05 h: 3.52e-05 s: 1.00e+00 e: 1 it: 3.10e-06 tt: 9.54e-04
- 33: f: 4.835994e-17 d: 1.00e-12 g: 1.05e-07 h: 1.13e-06 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 9.81e-04
- 34: f: 1.885250e-22 d: 4.84e-17 g: 2.69e-10 h: 1.45e-08 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 1.00e-03
+ 0: f: 2.420000e+01 d: 0.00e+00 g: 2.16e+02 h: 0.00e+00 s: 0.00e+00 e: 0 it: 1.19e-05 tt: 1.19e-05
+ 1: f: 4.280493e+00 d: 1.99e+01 g: 1.52e+01 h: 2.01e-01 s: 8.62e-04 e: 2 it: 7.30e-05 tt: 1.72e-04
+ 2: f: 3.571154e+00 d: 7.09e-01 g: 1.35e+01 h: 3.78e-01 s: 1.34e-01 e: 3 it: 1.60e-05 tt: 1.93e-04
+ 3: f: 3.440869e+00 d: 1.30e-01 g: 1.73e+01 h: 1.36e-01 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 1.97e-04
+ 4: f: 3.213597e+00 d: 2.27e-01 g: 1.55e+01 h: 1.06e-01 s: 4.59e-01 e: 1 it: 1.19e-06 tt: 2.00e-04
+ 5: f: 2.839723e+00 d: 3.74e-01 g: 1.05e+01 h: 1.34e-01 s: 5.24e-01 e: 1 it: 9.54e-07 tt: 2.03e-04
+ 6: f: 2.448490e+00 d: 3.91e-01 g: 1.29e+01 h: 3.04e-01 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 2.05e-04
+ 7: f: 1.943019e+00 d: 5.05e-01 g: 4.00e+00 h: 8.81e-02 s: 7.43e-01 e: 1 it: 9.54e-07 tt: 2.08e-04
+ 8: f: 1.731469e+00 d: 2.12e-01 g: 7.36e+00 h: 1.71e-01 s: 4.60e-01 e: 2 it: 2.15e-06 tt: 2.11e-04
+ 9: f: 1.503267e+00 d: 2.28e-01 g: 6.47e+00 h: 8.66e-02 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 2.14e-04
+ 10: f: 1.228331e+00 d: 2.75e-01 g: 2.00e+00 h: 7.70e-02 s: 7.90e-01 e: 1 it: 0.00e+00 tt: 2.16e-04
+ 11: f: 1.016523e+00 d: 2.12e-01 g: 5.15e+00 h: 1.39e-01 s: 3.76e-01 e: 2 it: 1.91e-06 tt: 2.25e-04
+ 12: f: 9.145773e-01 d: 1.02e-01 g: 6.74e+00 h: 7.98e-02 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 2.28e-04
+ 13: f: 7.508302e-01 d: 1.64e-01 g: 3.88e+00 h: 5.76e-02 s: 4.93e-01 e: 1 it: 9.54e-07 tt: 2.30e-04
+ 14: f: 5.832378e-01 d: 1.68e-01 g: 5.56e+00 h: 1.42e-01 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 2.33e-04
+ 15: f: 3.969581e-01 d: 1.86e-01 g: 1.64e+00 h: 1.17e-01 s: 1.00e+00 e: 1 it: 1.19e-06 tt: 2.36e-04
+ 16: f: 3.171557e-01 d: 7.98e-02 g: 3.84e+00 h: 1.18e-01 s: 3.97e-01 e: 2 it: 1.91e-06 tt: 2.39e-04
+ 17: f: 2.641257e-01 d: 5.30e-02 g: 3.27e+00 h: 6.14e-02 s: 1.00e+00 e: 1 it: 1.19e-06 tt: 2.42e-04
+ 18: f: 1.909730e-01 d: 7.32e-02 g: 5.29e-01 h: 8.55e-02 s: 6.82e-01 e: 1 it: 9.54e-07 tt: 2.45e-04
+ 19: f: 1.472012e-01 d: 4.38e-02 g: 3.11e+00 h: 1.20e-01 s: 3.47e-01 e: 2 it: 1.91e-06 tt: 2.49e-04
+ 20: f: 1.093558e-01 d: 3.78e-02 g: 2.97e+00 h: 8.43e-02 s: 1.00e+00 e: 1 it: 2.15e-06 tt: 2.52e-04
+ 21: f: 6.710346e-02 d: 4.23e-02 g: 1.42e+00 h: 9.64e-02 s: 8.85e-01 e: 1 it: 8.82e-06 tt: 2.81e-04
+ 22: f: 3.993377e-02 d: 2.72e-02 g: 2.30e+00 h: 1.29e-01 s: 4.63e-01 e: 2 it: 7.87e-06 tt: 2.96e-04
+ 23: f: 2.911794e-02 d: 1.08e-02 g: 2.55e+00 h: 6.55e-02 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 3.00e-04
+ 24: f: 1.457683e-02 d: 1.45e-02 g: 2.77e-01 h: 6.37e-02 s: 6.14e-01 e: 1 it: 1.19e-06 tt: 3.03e-04
+ 25: f: 8.577515e-03 d: 6.00e-03 g: 2.86e+00 h: 1.40e-01 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 3.06e-04
+ 26: f: 3.486574e-03 d: 5.09e-03 g: 1.76e-01 h: 1.23e-02 s: 1.00e+00 e: 1 it: 1.19e-06 tt: 3.09e-04
+ 27: f: 1.257570e-03 d: 2.23e-03 g: 1.39e-01 h: 5.08e-02 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 3.12e-04
+ 28: f: 2.783568e-04 d: 9.79e-04 g: 6.20e-01 h: 6.47e-02 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 3.15e-04
+ 29: f: 2.533399e-05 d: 2.53e-04 g: 1.68e-02 h: 1.98e-03 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 3.17e-04
+ 30: f: 7.591572e-07 d: 2.46e-05 g: 5.40e-03 h: 9.27e-03 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 3.20e-04
+ 31: f: 1.902460e-09 d: 7.57e-07 g: 1.62e-03 h: 1.89e-03 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 3.23e-04
+ 32: f: 1.003030e-12 d: 1.90e-09 g: 3.50e-05 h: 3.52e-05 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 3.26e-04
+ 33: f: 4.835994e-17 d: 1.00e-12 g: 1.05e-07 h: 1.13e-06 s: 1.00e+00 e: 1 it: 1.19e-06 tt: 3.34e-04
+ 34: f: 1.885250e-22 d: 4.84e-17 g: 2.69e-10 h: 1.45e-08 s: 1.00e+00 e: 1 it: 9.54e-07 tt: 3.37e-04
- Solver Summary (v 1.12.0-lapack-suitesparse-cxsparse-no_openmp)
+ Solver Summary (v 2.2.0-eigen-(3.4.0)-lapack-suitesparse-(7.1.0)-metis-(5.1.0)-acceleratesparse-eigensparse)
- Parameters 2
- Line search direction LBFGS (20)
- Line search type CUBIC WOLFE
+ Parameters 2
+ Line search direction LBFGS (20)
+ Line search type CUBIC WOLFE
- Cost:
- Initial 2.420000e+01
- Final 1.885250e-22
- Change 2.420000e+01
+ Cost:
+ Initial 2.420000e+01
+ Final 1.955192e-27
+ Change 2.420000e+01
- Minimizer iterations 35
+ Minimizer iterations 36
- Time (in seconds):
+ Time (in seconds):
- Cost evaluation 0.000
- Gradient evaluation 0.000
- Total 0.003
+ Cost evaluation 0.000000 (0)
+ Gradient & cost evaluation 0.000000 (44)
+ Polynomial minimization 0.000061
+ Total 0.000438
- Termination: CONVERGENCE (Gradient tolerance reached. Gradient max norm: 9.032775e-13 <= 1.000000e-10)
+ Termination: CONVERGENCE (Parameter tolerance reached. Relative step_norm: 1.890726e-11 <= 1.000000e-08.)
+
+ Initial x: -1.2 y: 1
+ Final x: 1 y: 1
+
+
+
+
+If you are unable to use automatic differentiation for some reason
+(say because you need to call an external library), then you can
+use numeric differentiation. In that case the functor is defined as
+follows [#f2]_.
+
+.. code::
+
+ // f(x,y) = (1-x)^2 + 100(y - x^2)^2;
+ struct Rosenbrock {
+ bool operator()(const double* parameters, double* cost) const {
+ const double x = parameters[0];
+ const double y = parameters[1];
+ cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
+ return true;
+ }
+
+ static ceres::FirstOrderFunction* Create() {
+ constexpr int kNumParameters = 2;
+ return new ceres::NumericDiffFirstOrderFunction<Rosenbrock,
+ ceres::CENTRAL,
+ kNumParameters>();
+ }
+ };
+
+And finally, if you would rather compute the derivatives by hand (say
+because the size of the parameter vector is too large to be
+automatically differentiated), then you should define an instance of
+``FirstOrderFunction``, which is the analog of :class:`CostFunction` for
+non-linear least squares problems [#f3]_.
+
+.. code::
+
+ // f(x,y) = (1-x)^2 + 100(y - x^2)^2;
+ class Rosenbrock final : public ceres::FirstOrderFunction {
+ public:
+ bool Evaluate(const double* parameters,
+ double* cost,
+ double* gradient) const override {
+ const double x = parameters[0];
+ const double y = parameters[1];
+
+ cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
+ if (gradient) {
+ gradient[0] = -2.0 * (1.0 - x) - 200.0 * (y - x * x) * 2.0 * x;
+ gradient[1] = 200.0 * (y - x * x);
+ }
+ return true;
+ }
+
+ int NumParameters() const override { return 2; }
+ };
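+
+Whichever of the three variants you pick, it is used in exactly the
+same way; for the hand written version above the problem is, for
+example, constructed as
+
+.. code::
+
+   ceres::GradientProblem problem(new Rosenbrock());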
.. rubric:: Footnotes
.. [#f1] `examples/rosenbrock.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/rosenbrock.cc>`_
+
+.. [#f2] `examples/rosenbrock_numeric_diff.cc
+ <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/rosenbrock_numeric_diff.cc>`_
+
+.. [#f3] `examples/rosenbrock_analytic_diff.cc
+ <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/rosenbrock_analytic_diff.cc>`_
diff --git a/docs/source/index.rst b/docs/source/index.rst
index d72368f..497b12c 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -44,12 +44,15 @@
If you use Ceres Solver for a publication, please cite it as::
- @misc{ceres-solver,
- author = "Sameer Agarwal and Keir Mierle and Others",
- title = "Ceres Solver",
- howpublished = "\url{http://ceres-solver.org}",
- }
-
+ @software{Agarwal_Ceres_Solver_2022,
+ author = {Agarwal, Sameer and Mierle, Keir and The Ceres Solver Team},
+ title = {{Ceres Solver}},
+ license = {Apache-2.0},
+ url = {https://github.com/ceres-solver/ceres-solver},
+ version = {2.2},
+ year = {2023},
+ month = {10}
+ }
.. rubric:: Footnotes
diff --git a/docs/source/installation.rst b/docs/source/installation.rst
index 7f49783..4feb1a4 100644
--- a/docs/source/installation.rst
+++ b/docs/source/installation.rst
@@ -9,7 +9,7 @@
.. _section-source:
You can start with the `latest stable release
-<http://ceres-solver.org/ceres-solver-2.0.0.tar.gz>`_ . Or if you want
+<http://ceres-solver.org/ceres-solver-2.2.0.tar.gz>`_ . Or if you want
the latest version, you can clone the git repository
.. code-block:: bash
@@ -21,15 +21,16 @@
Dependencies
============
- .. NOTE ::
+ .. note ::
- Starting with v2.0 Ceres requires a **fully C++14-compliant**
- compiler. In versions <= 1.14, C++11 was an optional requirement.
+ Ceres Solver 2.2 requires a **fully C++17-compliant** compiler.
Ceres relies on a number of open source libraries, some of which are
optional. For details on customizing the build process, see
:ref:`section-customizing` .
+- `CMake <http://www.cmake.org>`_ 3.16 or later **required**.
+
- `Eigen <http://eigen.tuxfamily.org/index.php?title=Main_Page>`_
3.3 or later **required**.
@@ -39,9 +40,7 @@
library. Please see the documentation for ``EIGENSPARSE`` for
more details.
-- `CMake <http://www.cmake.org>`_ 3.5 or later **required**.
-
-- `glog <https://github.com/google/glog>`_ 0.3.1 or
+- `glog <https://github.com/google/glog>`_ 0.3.5 or
later. **Recommended**
``glog`` is used extensively throughout Ceres for logging detailed
@@ -65,22 +64,13 @@
recommend against it. ``miniglog`` has worse performance than
``glog`` and is much harder to control and use.
- .. NOTE ::
-
- If you are compiling ``glog`` from source, please note that
- currently, the unit tests for ``glog`` (which are enabled by
- default) do not compile against a default build of ``gflags`` 2.1
- as the gflags namespace changed from ``google::`` to
- ``gflags::``. A patch to fix this is available from `here
- <https://code.google.com/p/google-glog/issues/detail?id=194>`_.
-
- `gflags <https://github.com/gflags/gflags>`_. Needed to build
examples and tests and usually a dependency for glog.
-- `SuiteSparse
- <http://faculty.cse.tamu.edu/davis/suitesparse.html>`_. Needed for
- solving large sparse linear systems. **Optional; strongly recomended
- for large scale bundle adjustment**
+- `SuiteSparse <http://faculty.cse.tamu.edu/davis/suitesparse.html>`_
+ 4.5.6 or later. Needed for solving large sparse linear
+ systems. **Optional; strongly recommended for large scale bundle
+ adjustment**
.. NOTE ::
@@ -90,10 +80,10 @@
found TBB version. You can customize the searched TBB location
with the ``TBB_ROOT`` variable.
-- `CXSparse <http://faculty.cse.tamu.edu/davis/suitesparse.html>`_.
- Similar to ``SuiteSparse`` but simpler and slower. CXSparse has
- no dependencies on ``LAPACK`` and ``BLAS``. This makes for a simpler
- build process and a smaller binary. **Optional**
+ A CMake native version of SuiteSparse that can be compiled on a variety of
+ platforms (e.g., using Visual Studio, Xcode, MinGW, etc.) is maintained by the
+ `CMake support for SuiteSparse <https://github.com/sergiud/SuiteSparse>`_
+ project.
- `Apple's Accelerate sparse solvers <https://developer.apple.com/documentation/accelerate/sparse_solvers>`_.
As of Xcode 9.0, Apple's Accelerate framework includes support for
@@ -104,9 +94,13 @@
``SuiteSparse``, and optionally used by Ceres directly for some
operations.
- On ``UNIX`` OSes other than macOS we recommend `ATLAS
+ For best performance on ``x86`` based Linux systems we recommend
+ using `Intel MKL
+ <https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-mkl-for-dpcpp/top.html>`_.
+
+ Two other good options are `ATLAS
<http://math-atlas.sourceforge.net/>`_, which includes ``BLAS`` and
- ``LAPACK`` routines. It is also possible to use `OpenBLAS
+ ``LAPACK`` routines and `OpenBLAS
<https://github.com/xianyi/OpenBLAS>`_ . However, one needs to be
careful to `turn off the threading
<https://github.com/xianyi/OpenBLAS/wiki/faq#wiki-multi-threaded>`_
@@ -122,6 +116,15 @@
**Optional but required for** ``SuiteSparse``.
+- `CUDA <https://developer.nvidia.com/cuda-toolkit>`_ If you have an
+  NVIDIA GPU then Ceres Solver can use it to accelerate the solution of
+ the Gauss-Newton linear systems using the CMake flag ``USE_CUDA``.
+ Currently this support is limited to using the dense linear solvers that ship
+ with ``CUDA``. As a result GPU acceleration can be used to speed up
+ ``DENSE_QR``, ``DENSE_NORMAL_CHOLESKY`` and
+ ``DENSE_SCHUR``. This also enables ``CUDA`` mixed precision solves
+ for ``DENSE_NORMAL_CHOLESKY`` and ``DENSE_SCHUR``. **Optional**.
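+
+  If Ceres was built with ``USE_CUDA`` enabled, a minimal sketch of
+  selecting the CUDA-backed dense solvers at runtime is:
+
+  .. code-block:: c++
+
+     ceres::Solver::Options options;
+     // Solve the Gauss-Newton systems with a dense Schur solver and run
+     // the underlying dense factorizations on the GPU via CUDA.
+     options.linear_solver_type = ceres::DENSE_SCHUR;
+     options.dense_linear_algebra_library_type = ceres::CUDA;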
+
.. _section-linux:
Linux
@@ -130,11 +133,12 @@
We will use `Ubuntu <http://www.ubuntu.com>`_ as our example linux
distribution.
- .. NOTE ::
+.. NOTE::
- These instructions are for Ubuntu 18.04 and newer. On Ubuntu 16.04
- you need to manually get a more recent version of Eigen, such as
- 3.3.7.
+ Ceres Solver always supports the previous and current Ubuntu LTS
+ releases, currently 18.04 and 20.04, using the default Ubuntu
+ repositories and compiler toolchain. Support for earlier versions
+ is not guaranteed or maintained.
Start by installing all the dependencies.
@@ -144,21 +148,21 @@
sudo apt-get install cmake
# google-glog + gflags
sudo apt-get install libgoogle-glog-dev libgflags-dev
- # BLAS & LAPACK
+ # Use ATLAS for BLAS & LAPACK
sudo apt-get install libatlas-base-dev
# Eigen3
sudo apt-get install libeigen3-dev
- # SuiteSparse and CXSparse (optional)
+ # SuiteSparse (optional)
sudo apt-get install libsuitesparse-dev
We are now ready to build, test, and install Ceres.
.. code-block:: bash
- tar zxf ceres-solver-2.0.0.tar.gz
+ tar zxf ceres-solver-2.2.0.tar.gz
mkdir ceres-bin
cd ceres-bin
- cmake ../ceres-solver-2.0.0
+ cmake ../ceres-solver-2.2.0
make -j3
make test
# Optionally install Ceres, it can also be exported using CMake which
@@ -172,7 +176,7 @@
.. code-block:: bash
- bin/simple_bundle_adjuster ../ceres-solver-2.0.0/data/problem-16-22106-pre.txt
+ bin/simple_bundle_adjuster ../ceres-solver-2.2.0/data/problem-16-22106-pre.txt
This runs Ceres for a maximum of 10 iterations using the
``DENSE_SCHUR`` linear solver. The output should look something like
@@ -181,63 +185,63 @@
.. code-block:: bash
iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
- 0 4.185660e+06 0.00e+00 1.09e+08 0.00e+00 0.00e+00 1.00e+04 0 7.59e-02 3.37e-01
- 1 1.062590e+05 4.08e+06 8.99e+06 5.36e+02 9.82e-01 3.00e+04 1 1.65e-01 5.03e-01
- 2 4.992817e+04 5.63e+04 8.32e+06 3.19e+02 6.52e-01 3.09e+04 1 1.45e-01 6.48e-01
- 3 1.899774e+04 3.09e+04 1.60e+06 1.24e+02 9.77e-01 9.26e+04 1 1.43e-01 7.92e-01
- 4 1.808729e+04 9.10e+02 3.97e+05 6.39e+01 9.51e-01 2.78e+05 1 1.45e-01 9.36e-01
- 5 1.803399e+04 5.33e+01 1.48e+04 1.23e+01 9.99e-01 8.33e+05 1 1.45e-01 1.08e+00
- 6 1.803390e+04 9.02e-02 6.35e+01 8.00e-01 1.00e+00 2.50e+06 1 1.50e-01 1.23e+00
+ 0 4.185660e+06 0.00e+00 1.09e+08 0.00e+00 0.00e+00 1.00e+04 0 2.18e-02 6.57e-02
+ 1 1.062590e+05 4.08e+06 8.99e+06 0.00e+00 9.82e-01 3.00e+04 1 5.07e-02 1.16e-01
+ 2 4.992817e+04 5.63e+04 8.32e+06 3.19e+02 6.52e-01 3.09e+04 1 4.75e-02 1.64e-01
+ 3 1.899774e+04 3.09e+04 1.60e+06 1.24e+02 9.77e-01 9.26e+04 1 4.74e-02 2.11e-01
+ 4 1.808729e+04 9.10e+02 3.97e+05 6.39e+01 9.51e-01 2.78e+05 1 4.75e-02 2.59e-01
+ 5 1.803399e+04 5.33e+01 1.48e+04 1.23e+01 9.99e-01 8.33e+05 1 4.74e-02 3.06e-01
+ 6 1.803390e+04 9.02e-02 6.35e+01 8.00e-01 1.00e+00 2.50e+06 1 4.76e-02 3.54e-01
- Ceres Solver v2.0.0 Solve Report
- ----------------------------------
+ Solver Summary (v 2.2.0-eigen-(3.4.0)-lapack-suitesparse-(7.1.0)-metis-(5.1.0)-acceleratesparse-eigensparse)
+
Original Reduced
Parameter blocks 22122 22122
Parameters 66462 66462
Residual blocks 83718 83718
- Residual 167436 167436
+ Residuals 167436 167436
Minimizer TRUST_REGION
Dense linear algebra library EIGEN
Trust region strategy LEVENBERG_MARQUARDT
-
Given Used
Linear solver DENSE_SCHUR DENSE_SCHUR
Threads 1 1
- Linear solver threads 1 1
- Linear solver ordering AUTOMATIC 22106, 16
+ Linear solver ordering AUTOMATIC 22106,16
+ Schur structure 2,3,9 2,3,9
Cost:
Initial 4.185660e+06
Final 1.803390e+04
Change 4.167626e+06
- Minimizer iterations 6
- Successful steps 6
+ Minimizer iterations 7
+ Successful steps 7
Unsuccessful steps 0
Time (in seconds):
- Preprocessor 0.261
+ Preprocessor 0.043895
- Residual evaluation 0.082
- Jacobian evaluation 0.412
- Linear solver 0.442
- Minimizer 1.051
+ Residual only evaluation 0.029855 (7)
+ Jacobian & residual evaluation 0.120581 (7)
+ Linear solver 0.153665 (7)
+ Minimizer 0.339275
- Postprocessor 0.002
- Total 1.357
+ Postprocessor 0.000540
+ Total 0.383710
- Termination: CONVERGENCE (Function tolerance reached. |cost_change|/cost: 1.769766e-09 <= 1.000000e-06)
+ Termination: CONVERGENCE (Function tolerance reached. |cost_change|/cost: 1.769759e-09 <= 1.000000e-06)
+
.. section-macos:
macOS
=====
-On macOS, you can either use `Homebrew
-<https://brew.sh/>`_ (recommended) or `MacPorts
-<https://www.macports.org/>`_ to install Ceres Solver.
+On macOS, you can either use `Homebrew <https://brew.sh/>`_
+(recommended) or `MacPorts <https://www.macports.org/>`_ to install
+Ceres Solver.
If using `Homebrew <https://brew.sh/>`_, then
@@ -277,17 +281,17 @@
brew install glog gflags
# Eigen3
brew install eigen
- # SuiteSparse and CXSparse
+ # SuiteSparse
brew install suite-sparse
We are now ready to build, test, and install Ceres.
.. code-block:: bash
- tar zxf ceres-solver-2.0.0.tar.gz
+ tar zxf ceres-solver-2.2.0.tar.gz
mkdir ceres-bin
cd ceres-bin
- cmake ../ceres-solver-2.0.0
+ cmake ../ceres-solver-2.2.0
make -j3
make test
# Optionally install Ceres, it can also be exported using CMake which
@@ -295,53 +299,59 @@
# documentation for the EXPORT_BUILD_DIR option for more information.
make install
-Building with OpenMP on macOS
------------------------------
-
-Up to at least Xcode 12, OpenMP support was disabled in Apple's version of
-Clang. However, you can install the latest version of the LLVM toolchain
-from Homebrew which does support OpenMP, and thus build Ceres with OpenMP
-support on macOS. To do this, you must install llvm via Homebrew:
-
-.. code-block:: bash
-
- # Install latest version of LLVM toolchain.
- brew install llvm
-
-As the LLVM formula in Homebrew is keg-only, it will not be installed to
-``/usr/local`` to avoid conflicts with the standard Apple LLVM toolchain.
-To build Ceres with the Homebrew LLVM toolchain you should do the
-following:
-
-.. code-block:: bash
-
- tar zxf ceres-solver-2.0.0.tar.gz
- mkdir ceres-bin
- cd ceres-bin
- # Configure the local shell only (not persistent) to use the Homebrew LLVM
- # toolchain in favour of the default Apple version. This is taken
- # verbatim from the instructions output by Homebrew when installing the
- # llvm formula.
- export LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
- export CPPFLAGS="-I/usr/local/opt/llvm/include"
- export PATH="/usr/local/opt/llvm/bin:$PATH"
- # Force CMake to use the Homebrew version of Clang and enable OpenMP.
- cmake -DCMAKE_C_COMPILER=/usr/local/opt/llvm/bin/clang -DCMAKE_CXX_COMPILER=/usr/local/opt/llvm/bin/clang++ -DCERES_THREADING_MODEL=OPENMP ../ceres-solver-2.0.0
- make -j3
- make test
- # Optionally install Ceres. It can also be exported using CMake which
- # allows Ceres to be used without requiring installation. See the
- # documentation for the EXPORT_BUILD_DIR option for more information.
- make install
-
-Like the Linux build, you should now be able to run
-``bin/simple_bundle_adjuster``.
-
.. _section-windows:
Windows
=======
+Using a Library Manager
+-----------------------
+
+`vcpkg <https://github.com/microsoft/vcpkg>`_ is a library manager for Microsoft
+Windows that can be used to install Ceres Solver and all its dependencies.
+
+#. Install the library manager into a top-level directory ``vcpkg/`` on Windows
+ following the `guide
+ <https://github.com/microsoft/vcpkg#quick-start-windows>`_, e.g., using
+ Visual Studio 2022 community edition, or simply run
+
+ .. code:: bat
+
+ git clone https://github.com/Microsoft/vcpkg.git
+ cd vcpkg
+ .\bootstrap-vcpkg.bat
+ .\vcpkg integrate install
+
+#. Use vcpkg to install and build Ceres and all its dependencies, e.g., for 64
+ bit Windows
+
+ .. code:: bat
+
+ vcpkg\vcpkg.exe install ceres:x64-windows
+
+ Or with optional components, e.g., SuiteSparse, using
+
+ .. code:: bat
+
+ vcpkg\vcpkg.exe install ceres[suitesparse]:x64-windows
+
+#. Integrate vcpkg packages with Visual Studio to allow it to automatically
+ find all the libraries installed by vcpkg.
+
+ .. code:: bat
+
+ vcpkg\vcpkg.exe integrate install
+
+#. To use Ceres in a CMake project, follow our :ref:`instructions
+ <section-using-ceres>`.
+
+
+Building from Source
+--------------------
+
+Ceres Solver can also be built from source. For this purpose, we support Visual
+Studio 2019 and newer.
+
.. NOTE::
If you find the following CMake difficult to set up, then you may
@@ -349,36 +359,11 @@
<https://github.com/tbennun/ceres-windows>`_ for Ceres Solver by Tal
Ben-Nun.
-On Windows, we support building with Visual Studio 2015.2 of newer. Note
-that the Windows port is less featureful and less tested than the
-Linux or macOS versions due to the lack of an officially supported
-way of building SuiteSparse and CXSparse. There are however a number
-of unofficial ways of building these libraries. Building on Windows
-also a bit more involved since there is no automated way to install
-dependencies.
+#. Create a top-level directory for dependencies, build, and sources somewhere,
+ e.g., ``ceres/``
-.. NOTE:: Using ``google-glog`` & ``miniglog`` with windows.h.
-
- The windows.h header if used with GDI (Graphics Device Interface)
- defines ``ERROR``, which conflicts with the definition of ``ERROR``
- as a LogSeverity level in ``google-glog`` and ``miniglog``. There
- are at least two possible fixes to this problem:
-
- #. Use ``google-glog`` and define ``GLOG_NO_ABBREVIATED_SEVERITIES``
- when building Ceres and your own project, as documented `here
- <http://google-glog.googlecode.com/svn/trunk/doc/glog.html>`__.
- Note that this fix will not work for ``miniglog``, but use of
- ``miniglog`` is strongly discouraged on any platform for which
- ``google-glog`` is available (which includes Windows).
- #. If you do not require GDI, then define ``NOGDI`` **before**
- including windows.h. This solution should work for both
- ``google-glog`` and ``miniglog`` and is documented for
- ``google-glog`` `here
- <https://code.google.com/p/google-glog/issues/detail?id=33>`__.
-
-#. Make a toplevel directory for deps & build & src somewhere: ``ceres/``
#. Get dependencies; unpack them as subdirectories in ``ceres/``
- (``ceres/eigen``, ``ceres/glog``, etc)
+ (``ceres/eigen``, ``ceres/glog``, etc.)
#. ``Eigen`` 3.3. Configure and optionally install Eigen. It should be
exported into the CMake package registry by default as part of the
@@ -394,45 +379,55 @@
project. If you wish to use ``SuiteSparse``, follow their
instructions for obtaining and building it.
- #. (Experimental) ``CXSparse`` Previously CXSparse was not
- available on Windows, there are now several ports that enable it
- to be, including: `[1] <https://github.com/PetterS/CXSparse>`_
- and `[2] <https://github.com/TheFrenchLeaf/CXSparse>`_. If you
- wish to use ``CXSparse``, follow their instructions for
- obtaining and building it.
+ Alternatively, Ceres Solver supports ``SuiteSparse`` binary
+ packages available for Visual Studio 2019 and 2022 provided by
+ the `CMake support for SuiteSparse
+ <https://github.com/sergiud/SuiteSparse>`_ project that also
+ include `reference LAPACK <http://www.netlib.org/blas>`_ (and
+ BLAS). The binary packages are used by Ceres Solver for
+ continuous testing on Github.
#. Unpack the Ceres tarball into ``ceres``. For the tarball, you
should get a directory inside ``ceres`` similar to
- ``ceres-solver-2.0.0``. Alternately, checkout Ceres via ``git`` to
+ ``ceres-solver-2.2.0``. Alternately, checkout Ceres via ``git`` to
get ``ceres-solver.git`` inside ``ceres``.
#. Install ``CMake``,
-#. Make a dir ``ceres/ceres-bin`` (for an out-of-tree build)
+#. Create a directory ``ceres/ceres-bin`` (for an out-of-tree build)
+
+ #. If you use the above binary ``SuiteSparse`` package, make sure CMake can
+ find it, e.g., by assigning the path of the directory that contains the
+ unzipped contents to the ``CMAKE_PREFIX_PATH`` environment variable. In a
+ Windows command prompt this can be achieved as follows:
+
+ .. code:: bat
+
+ set CMAKE_PREFIX_PATH=C:/Downloads/SuiteSparse-5.11.0-cmake.1-vc16-Win64-Release-shared-gpl
#. Run ``CMake``; select the ``ceres-solver-X.Y.Z`` or
``ceres-solver.git`` directory for the CMake file. Then select the
- ``ceres-bin`` for the build dir.
+ ``ceres-bin`` for the build directory.
-#. Try running ``Configure``. It won't work. It'll show a bunch of options.
- You'll need to set:
+#. Try running ``Configure`` which can fail at first because some dependencies
+ cannot be automatically located. In this case, you must set the following
+ CMake variables to the appropriate directories where you unpacked/built them:
#. ``Eigen3_DIR`` (Set to directory containing ``Eigen3Config.cmake``)
#. ``GLOG_INCLUDE_DIR_HINTS``
#. ``GLOG_LIBRARY_DIR_HINTS``
#. (Optional) ``gflags_DIR`` (Set to directory containing ``gflags-config.cmake``)
- #. (Optional) ``SUITESPARSE_INCLUDE_DIR_HINTS``
- #. (Optional) ``SUITESPARSE_LIBRARY_DIR_HINTS``
- #. (Optional) ``CXSPARSE_INCLUDE_DIR_HINTS``
- #. (Optional) ``CXSPARSE_LIBRARY_DIR_HINTS``
+ #. (SuiteSparse binary package) ``BLAS_blas_LIBRARY`` and
+ ``LAPACK_lapack_LIBRARY`` CMake variables must be `explicitly set` to
+ ``<path>/lib/blas.lib`` and ``<path>/lib/lapack.lib``, respectively, both
+ located in the unzipped package directory ``<path>``.
- to the appropriate directories where you unpacked/built them. If
- any of the variables are not visible in the ``CMake`` GUI, create a
- new entry for them. We recommend using the
- ``<NAME>_(INCLUDE/LIBRARY)_DIR_HINTS`` variables rather than
- setting the ``<NAME>_INCLUDE_DIR`` & ``<NAME>_LIBRARY`` variables
- directly to keep all of the validity checking, and to avoid having
- to specify the library files manually.
+ If any of the variables are not visible in the ``CMake`` GUI, create a new
+ entry for them. We recommend using the
+ ``<NAME>_(INCLUDE/LIBRARY)_DIR_HINTS`` variables rather than setting the
+ ``<NAME>_INCLUDE_DIR`` & ``<NAME>_LIBRARY`` variables directly to keep all of
+ the validity checking, and to avoid having to specify the library files
+ manually.
#. You may have to tweak some more settings to generate a MSVC
project. After each adjustment, try pressing Configure & Generate
@@ -447,17 +442,15 @@
Like the Linux build, you should now be able to run
``bin/simple_bundle_adjuster``.
-Notes:
+.. note::
-#. The default build is Debug; consider switching it to release mode.
-#. Currently ``system_test`` is not working properly.
-#. CMake puts the resulting test binaries in ``ceres-bin/examples/Debug``
- by default.
-#. The solvers supported on Windows are ``DENSE_QR``, ``DENSE_SCHUR``,
- ``CGNR``, and ``ITERATIVE_SCHUR``.
-#. We're looking for someone to work with upstream ``SuiteSparse`` to
- port their build system to something sane like ``CMake``, and get a
- fully supported Windows port.
+ #. The default build is ``Debug``; consider switching it to ``Release`` for
+ optimal performance.
+ #. CMake puts the resulting test binaries in ``ceres-bin/examples/Debug`` by
+ default.
+ #. Without a sparse linear algebra library, only a subset of
+ solvers is usable, namely: ``DENSE_QR``, ``DENSE_SCHUR``,
+ ``CGNR``, and ``ITERATIVE_SCHUR``.
.. _section-android:
@@ -556,9 +549,7 @@
The default CMake configuration builds a bare bones version of Ceres
Solver that only depends on Eigen (``MINIGLOG`` is compiled into Ceres
if it is used), this should be sufficient for solving small to
-moderate sized problems (No ``SPARSE_SCHUR``,
-``SPARSE_NORMAL_CHOLESKY`` linear solvers and no ``CLUSTER_JACOBI``
-and ``CLUSTER_TRIDIAGONAL`` preconditioners).
+moderate sized problems.
If you decide to use ``LAPACK`` and ``BLAS``, then you also need to
add ``Accelerate.framework`` to your Xcode project's linking
@@ -648,22 +639,14 @@
terms. Ceres requires some components that are only licensed under
GPL/Commercial terms.
-#. ``CXSPARSE [Default: ON]``: By default, Ceres will link to
- ``CXSparse`` if all its dependencies are present. Turn this ``OFF``
- to build Ceres without ``CXSparse``.
-
- .. NOTE::
-
- CXSparse is licensed under the LGPL.
-
#. ``ACCELERATESPARSE [Default: ON]``: By default, Ceres will link to
Apple's Accelerate framework directly if a version of it is detected
which supports solving sparse linear systems. Note that on Apple OSs
Accelerate usually also provides the BLAS/LAPACK implementations and
so would be linked against irrespective of the value of ``ACCELERATESPARSE``.
-#. ``EIGENSPARSE [Default: ON]``: By default, Ceres will not use
- Eigen's sparse Cholesky factorization.
+#. ``EIGENSPARSE [Default: ON]``: By default, Ceres will use Eigen's
+ sparse Cholesky factorization.
#. ``GFLAGS [Default: ON]``: Turn this ``OFF`` to build Ceres without
``gflags``. This will also prevent some of the example code from
@@ -680,11 +663,6 @@
gains in the ``SPARSE_SCHUR`` solver, you can disable some of the
template specializations by turning this ``OFF``.
-#. ``CERES_THREADING_MODEL [Default: CXX_THREADS > OPENMP > NO_THREADS]``:
- Multi-threading backend Ceres should be compiled with. This will
- automatically be set to only accept the available subset of threading
- options in the CMake GUI.
-
#. ``BUILD_SHARED_LIBS [Default: OFF]``: By default Ceres is built as
a static library, turn this ``ON`` to instead build Ceres as a
shared library.
@@ -701,8 +679,8 @@
#. ``BUILD_DOCUMENTATION [Default: OFF]``: Use this to enable building
the documentation, requires `Sphinx <http://sphinx-doc.org/>`_ and
- the `sphinx-better-theme
- <https://pypi.python.org/pypi/sphinx-better-theme>`_ package
+ the `sphinx-rtd-theme
+ <https://pypi.org/project/sphinx-rtd-theme/>`_ package
available from the Python package index. In addition, ``make
ceres_docs`` can be used to build only the documentation.
@@ -865,25 +843,18 @@
#. ``SuiteSparse``: Ceres built with SuiteSparse (``SUITESPARSE=ON``).
-#. ``CXSparse``: Ceres built with CXSparse (``CXSPARSE=ON``).
-
#. ``AccelerateSparse``: Ceres built with Apple's Accelerate sparse solvers (``ACCELERATESPARSE=ON``).
#. ``EigenSparse``: Ceres built with Eigen's sparse Cholesky factorization
(``EIGENSPARSE=ON``).
-#. ``SparseLinearAlgebraLibrary``: Ceres built with *at least one* sparse linear
- algebra library. This is equivalent to ``SuiteSparse`` **OR** ``CXSparse``
- **OR** ``AccelerateSparse`` **OR** ``EigenSparse``.
+#. ``SparseLinearAlgebraLibrary``: Ceres built with *at least one*
+ sparse linear algebra library. This is equivalent to
+ ``SuiteSparse`` **OR** ``AccelerateSparse`` **OR** ``EigenSparse``.
#. ``SchurSpecializations``: Ceres built with Schur specializations
(``SCHUR_SPECIALIZATIONS=ON``).
-#. ``OpenMP``: Ceres built with OpenMP (``CERES_THREADING_MODEL=OPENMP``).
-
-#. ``Multithreading``: Ceres built with *a* multithreading library.
- This is equivalent to (``CERES_THREAD != NO_THREADS``).
-
To specify one/multiple Ceres components use the ``COMPONENTS`` argument to
`find_package()
<http://www.cmake.org/cmake/help/v3.5/command/find_package.html>`_ like so:
diff --git a/docs/source/interfacing_with_autodiff.rst b/docs/source/interfacing_with_autodiff.rst
index 02f58b2..fa05835 100644
--- a/docs/source/interfacing_with_autodiff.rst
+++ b/docs/source/interfacing_with_autodiff.rst
@@ -37,7 +37,7 @@
template <typename T> T TemplatedComputeDistortion(const T r2) {
const double k1 = 0.0082;
const double k2 = 0.000023;
- return 1.0 + k1 * y2 + k2 * r2 * r2;
+ return 1.0 + k1 * r2 + k2 * r2 * r2;
}
struct Affine2DWithDistortion {
@@ -118,12 +118,13 @@
y[0] = y_in[0];
y[1] = y_in[1];
- compute_distortion.reset(new ceres::CostFunctionToFunctor<1, 1>(
- new ceres::NumericDiffCostFunction<ComputeDistortionValueFunctor,
- ceres::CENTRAL,
- 1,
- 1>(
- new ComputeDistortionValueFunctor)));
+ compute_distortion = std::make_unique<ceres::CostFunctionToFunctor<1, 1>>(
+     std::make_unique<ceres::NumericDiffCostFunction<
+         ComputeDistortionValueFunctor, ceres::CENTRAL, 1, 1>>());
}
template <typename T>
@@ -140,7 +141,7 @@
double x[2];
double y[2];
- std::unique_ptr<ceres::CostFunctionToFunctor<1, 1> > compute_distortion;
+ std::unique_ptr<ceres::CostFunctionToFunctor<1, 1>> compute_distortion;
};
@@ -148,7 +149,7 @@
------------------------------------------------
Now suppose we are given a function :code:`ComputeDistortionValue`
-thatis able to compute its value and optionally its Jacobian on demand
+that is able to compute its value and optionally its Jacobian on demand
and has the following signature:
.. code-block:: c++
diff --git a/docs/source/inverse_and_implicit_function_theorems.rst b/docs/source/inverse_and_implicit_function_theorems.rst
new file mode 100644
index 0000000..7d8f7fa
--- /dev/null
+++ b/docs/source/inverse_and_implicit_function_theorems.rst
@@ -0,0 +1,214 @@
+.. default-domain:: cpp
+
+.. cpp:namespace:: ceres
+
+.. _chapter-inverse_function_theorem:
+
+==========================================
+Using Inverse & Implicit Function Theorems
+==========================================
+
+Until now we have considered methods for computing derivatives that
+work directly on the function being differentiated. However, this is
+not always possible, for example when the function can only be
+computed via an iterative algorithm, or when there is no explicit
+definition of the function available. In this section we will see how
+we can use two basic results from calculus to get around these
+difficulties.
+
+
+Inverse Function Theorem
+========================
+
+Suppose we wish to evaluate the derivative of a function :math:`f(x)`,
+but evaluating :math:`f(x)` is not easy. Say it involves running an
+iterative algorithm. You could try automatically differentiating the
+iterative algorithm, but even if that is possible, it can become quite
+expensive.
+
+In some cases we get lucky, and computing the inverse of :math:`f(x)`
+is an easy operation. In these cases, we can use the `Inverse Function
+Theorem <http://en.wikipedia.org/wiki/Inverse_function_theorem>`_ to
+compute the derivative exactly. Here is the key idea:
+
+Assuming that :math:`y=f(x)` is continuously differentiable in a
+neighborhood of a point :math:`x` and :math:`Df(x)` is the invertible
+Jacobian of :math:`f` at :math:`x`, then by applying the chain rule to
+the identity :math:`f^{-1}(f(x)) = x`, we have
+:math:`Df^{-1}(f(x))Df(x) = I`, or :math:`Df^{-1}(y) = (Df(x))^{-1}`,
+i.e., the Jacobian of :math:`f^{-1}` is the inverse of the Jacobian of
+:math:`f`, or :math:`Df(x) = (Df^{-1}(y))^{-1}`.
+
+For example, let :math:`f(x) = e^x`. Now of course we know that
+:math:`Df(x) = e^x`, but let's try and compute it via the Inverse
+Function Theorem. For :math:`x > 0`, we have :math:`f^{-1}(y) = \log
+y`, so :math:`Df^{-1}(y) = \frac{1}{y}`, so :math:`Df(x) =
+(Df^{-1}(y))^{-1} = y = e^x`.
+
+You may be wondering why the above is true. A smoothly differentiable
+function in a small neighborhood is well approximated by a linear
+function. Indeed, this is a good way to think about the Jacobian: it is
+the matrix that best approximates the function linearly. Once you do
+that, it is straightforward to see that *locally* :math:`f^{-1}(y)` is
+best approximated linearly by the inverse of the Jacobian of
+:math:`f(x)`.
+
+Let us now consider a more practical example.
+
+Geodetic Coordinate System Conversion
+-------------------------------------
+
+When working with data related to the Earth, one can use two different
+coordinate systems: the familiar Latitude-Longitude-Altitude
+(latitude, longitude, height) coordinate system, or the `ECEF
+<http://en.wikipedia.org/wiki/ECEF>`_ coordinate system. The former
+is familiar but not terribly convenient analytically; the latter is
+a Cartesian system but not particularly intuitive. So systems that
+process Earth related data have to go back and forth between these
+coordinate systems.
+
+The conversion between the LLA and the ECEF coordinate system requires
+a model of the Earth, the most commonly used one being `WGS84
+<https://en.wikipedia.org/wiki/World_Geodetic_System#1984_version>`_.
+
+Going from the LLA :math:`(\phi,\lambda,h)` coordinates to the ECEF
+:math:`(x,y,z)` coordinates is easy.
+
+.. math::
+
+ \chi &= \sqrt{1 - e^2 \sin^2 \phi}
+
+ X &= \left( \frac{a}{\chi} + h \right) \cos \phi \cos \lambda
+
+ Y &= \left( \frac{a}{\chi} + h \right) \cos \phi \sin \lambda
+
+ Z &= \left(\frac{a(1-e^2)}{\chi} +h \right) \sin \phi
+
+Here :math:`a` and :math:`e^2` are constants defined by `WGS84
+<https://en.wikipedia.org/wiki/World_Geodetic_System#1984_version>`_.
+
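+For concreteness, here is a minimal sketch of the forward map (not
+part of Ceres; the constants are the standard WGS84 values, and the
+function is templated so that ``ceres::Jet`` could be used to obtain
+``LLAToECEFJacobian`` below via automatic differentiation):
+
+.. code-block:: c++
+
+  #include <cmath>
+
+  template <typename T>
+  void LLAToECEF(const T* lla, T* ecef) {
+    using std::cos;
+    using std::sin;
+    using std::sqrt;
+    const double a = 6378137.0;          // WGS84 semi-major axis.
+    const double e2 = 6.69437999014e-3;  // WGS84 first eccentricity squared.
+    const T phi = lla[0];     // latitude
+    const T lambda = lla[1];  // longitude
+    const T h = lla[2];       // height
+    const T chi = sqrt(T(1.0) - e2 * sin(phi) * sin(phi));
+    ecef[0] = (a / chi + h) * cos(phi) * cos(lambda);
+    ecef[1] = (a / chi + h) * cos(phi) * sin(lambda);
+    ecef[2] = (a * (1.0 - e2) / chi + h) * sin(phi);
+  }
+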
+Going from ECEF to LLA coordinates requires an iterative algorithm. So
+to compute the derivative of this transformation we invoke the
+Inverse Function Theorem as follows:
+
+.. code-block:: c++
+
+ Eigen::Vector3d ecef; // Fill some values
+ // Iterative computation.
+ Eigen::Vector3d lla = ECEFToLLA(ecef);
+ // Analytic derivatives
+ Eigen::Matrix3d lla_to_ecef_jacobian = LLAToECEFJacobian(lla);
+ bool invertible;
+ Eigen::Matrix3d ecef_to_lla_jacobian;
+ lla_to_ecef_jacobian.computeInverseWithCheck(ecef_to_lla_jacobian, invertible);
+
+
+Implicit Function Theorem
+=========================
+
+Consider now the problem where we have two variables :math:`x \in
+\mathbb{R}^m` and :math:`y \in \mathbb{R}^n` and a function
+:math:`F:\mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}^n`
+such that :math:`F(x,y) = 0` and we wish to calculate the Jacobian of
+:math:`y` with respect to :math:`x`. How do we do this?
+
+If for a given value of :math:`(x,y)`, the partial Jacobian
+:math:`D_2F(x,y)` is full rank, then the `Implicit Function Theorem
+<https://en.wikipedia.org/wiki/Implicit_function_theorem>`_ tells us
+that there exists a neighborhood of :math:`x` and a function :math:`G`
+such that :math:`y = G(x)` in this neighborhood. Differentiating
+:math:`F(x,G(x)) = 0` gives us
+
+.. math::
+
+ D_1F(x,y) + D_2F(x,y)DG(x) &= 0
+
+ DG(x) &= -(D_2F(x,y))^{-1} D_1 F(x,y)
+
+ D y(x) &= -(D_2F(x,y))^{-1} D_1 F(x,y)
+
+This means that we can compute the derivative of :math:`y` with
+respect to :math:`x` by multiplying the Jacobian of :math:`F` w.r.t
+:math:`x` by the inverse of the Jacobian of :math:`F` w.r.t :math:`y`.
+
+Let's consider two examples.
+
+Roots of a Polynomial
+---------------------
+
+The first example we consider is a classic. Let :math:`p(x) = a_0 +
+a_1 x + \dots + a_n x^n` be a degree :math:`n` polynomial, and we wish
+to compute the derivative of its roots with respect to its
+coefficients. There is no closed form formula for computing the roots
+of a general degree :math:`n` polynomial. `Galois
+<https://en.wikipedia.org/wiki/%C3%89variste_Galois>`_ and `Abel
+<https://en.wikipedia.org/wiki/Niels_Henrik_Abel>`_ proved that. There
+are numerical algorithms like computing the eigenvalues of the
+`Companion Matrix
+<https://nhigham.com/2021/03/23/what-is-a-companion-matrix/>`_, but
+differentiating an eigenvalue solver does not seem like fun. But the
+Implicit Function Theorem offers us a simple path.
+
+If :math:`x` is a root of :math:`p(x)`, then :math:`F(\mathbf{a}, x) =
+a_0 + a_1 x + \dots + a_n x^n = 0`. So,
+
+.. math::
+
+ D_1 F(\mathbf{a}, x) &= [1, x, x^2, \dots, x^n]
+
+ D_2 F(\mathbf{a}, x) &= \sum_{k=1}^n k a_k x^{k-1} = Dp(x)
+
+ Dx(a) &= \frac{-1}{Dp(x)} [1, x, x^2, \dots, x^n]
+
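+As a small illustration, once a root :math:`x` has been obtained from
+any root finder, the gradient of the root with respect to the
+coefficients follows directly from the last expression. A minimal
+sketch (illustrative only, not part of Ceres):
+
+.. code-block:: c++
+
+  #include <cmath>
+  #include <vector>
+
+  // Given a root x of p(x) = a[0] + a[1] x + ... + a[n] x^n, return
+  // dx/da_k for k = 0..n using Dx(a) = -[1, x, ..., x^n] / Dp(x).
+  std::vector<double> RootJacobian(const std::vector<double>& a, double x) {
+    const int n = static_cast<int>(a.size()) - 1;
+    double dp = 0.0;  // Dp(x) = sum_k k a_k x^{k-1}
+    for (int k = 1; k <= n; ++k) {
+      dp += k * a[k] * std::pow(x, k - 1);
+    }
+    std::vector<double> dx_da(n + 1);
+    for (int k = 0; k <= n; ++k) {
+      dx_da[k] = -std::pow(x, k) / dp;
+    }
+    return dx_da;
+  }
+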
+Differentiating the Solution to an Optimization Problem
+-------------------------------------------------------
+
+Sometimes we are required to solve optimization problems inside
+optimization problems, and this requires computing the derivative of
+the optimal solution (or a fixed point) of an optimization problem
+w.r.t its parameters.
+
+Let :math:`\theta \in \mathbb{R}^m` be a vector, :math:`A(\theta) \in
+\mathbb{R}^{k\times n}` be a matrix whose entries are a function of
+:math:`\theta` with :math:`k \ge n` and let :math:`b \in \mathbb{R}^k`
+be a constant vector, then consider the linear least squares problem:
+
+.. math::
+
+ x^* = \arg \min_x \|A(\theta) x - b\|_2^2
+
+How do we compute :math:`D_\theta x^*(\theta)`?
+
+One approach would be to observe that :math:`x^*(\theta) =
+(A^\top(\theta)A(\theta))^{-1}A^\top(\theta)b` and then differentiate
+this w.r.t :math:`\theta`. But this would require differentiating
+through the inverse of the matrix
+:math:`(A^\top(\theta)A(\theta))^{-1}`. Not exactly easy. Let's use
+the Implicit Function Theorem instead.
+
+The first step is to observe that :math:`x^*` satisfies the so called
+*normal equations*.
+
+.. math::
+
+ A^\top(\theta)A(\theta)x^* - A^\top(\theta)b = 0
+
+We will compute :math:`D_\theta x^*` column-wise, treating
+:math:`A(\theta)` as a function of one coordinate (:math:`\theta_i`)
+of :math:`\theta` at a time. So using the normal equations, let's
+define :math:`F(\theta_i, x^*) = A^\top(\theta_i)A(\theta_i)x^* -
+A^\top(\theta_i)b = 0`. Using this, we can now compute:
+
+.. math::
+
+ D_1F(\theta_i, x^*) &= D_{\theta_i}A^\top A x^* + A^\top
+ D_{\theta_i}A x^* - D_{\theta_i} A^\top b = g_i
+
+ D_2F(\theta_i, x^*) &= A^\top A
+
+ Dx^*(\theta_i) & = -(A^\top A)^{-1} g_i
+
+ Dx^*(\theta) & = -(A^\top A )^{-1} \left[g_1, \dots, g_m\right]
+
+Observe that to compute :math:`D x^*(\theta)` we only need the inverse
+of :math:`A^\top A`, which we needed anyway to compute :math:`x^*`.
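+
+A minimal sketch of this computation using Eigen (illustrative only,
+not part of Ceres; ``A``, ``b`` and the partial derivatives ``dA[i]``
+of :math:`A` with respect to :math:`\theta_i` are assumed to be
+provided by the user's model):
+
+.. code-block:: c++
+
+  #include <vector>
+  #include "Eigen/Dense"
+
+  // Computes x* = argmin |A x - b|^2 and D x*(theta) column by column
+  // by applying the Implicit Function Theorem to the normal equations.
+  Eigen::MatrixXd SolutionJacobian(const Eigen::MatrixXd& A,
+                                   const Eigen::VectorXd& b,
+                                   const std::vector<Eigen::MatrixXd>& dA,
+                                   Eigen::VectorXd* x_star) {
+    const Eigen::MatrixXd AtA = A.transpose() * A;
+    // A single factorization of A^T A is reused for x* and for every
+    // column of Dx*.
+    const Eigen::LLT<Eigen::MatrixXd> llt(AtA);
+    *x_star = llt.solve(A.transpose() * b);
+    Eigen::MatrixXd dx(A.cols(), dA.size());
+    for (int i = 0; i < static_cast<int>(dA.size()); ++i) {
+      // g_i = D_i A^T A x* + A^T D_i A x* - D_i A^T b
+      const Eigen::VectorXd g = dA[i].transpose() * A * (*x_star) +
+                                A.transpose() * dA[i] * (*x_star) -
+                                dA[i].transpose() * b;
+      dx.col(i) = -llt.solve(g);
+    }
+    return dx;
+  }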
diff --git a/docs/source/license.rst b/docs/source/license.rst
index a3c55c9..ed85f6a 100644
--- a/docs/source/license.rst
+++ b/docs/source/license.rst
@@ -12,7 +12,7 @@
Ceres Solver is licensed under the New BSD license, whose terms are as follows.
-Copyright 2016 Google Inc. All rights reserved.
+Copyright 2023 Google Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
diff --git a/docs/source/loss.png b/docs/source/loss.png
index 9f98d00..5c9ac07 100644
--- a/docs/source/loss.png
+++ b/docs/source/loss.png
Binary files differ
diff --git a/docs/source/modeling_faqs.rst b/docs/source/modeling_faqs.rst
index a0c8f2f..0d23de4 100644
--- a/docs/source/modeling_faqs.rst
+++ b/docs/source/modeling_faqs.rst
@@ -37,7 +37,7 @@
automatic and numeric differentiation. See
:class:`CostFunctionToFunctor`.
-#. When using Quaternions, consider using :class:`QuaternionParameterization`.
+#. When using Quaternions, consider using :class:`QuaternionManifold`.
`Quaternions <https://en.wikipedia.org/wiki/Quaternion>`_ are a
four dimensional parameterization of the space of three dimensional
@@ -47,14 +47,14 @@
associate a local parameterization with parameter blocks
representing a Quaternion. Assuming that the order of entries in
your parameter block is :math:`w,x,y,z`, you can use
- :class:`QuaternionParameterization`.
+ :class:`QuaternionManifold`.
.. NOTE::
If you are using `Eigen's Quaternion
<http://eigen.tuxfamily.org/dox/classEigen_1_1Quaternion.html>`_
object, whose layout is :math:`x,y,z,w`, then you should use
- :class:`EigenQuaternionParameterization`.
+ :class:`EigenQuaternionManifold`.
#. How do I solve problems with general linear & non-linear
@@ -85,50 +85,4 @@
#. How do I set one or more components of a parameter block constant?
- Using :class:`SubsetParameterization`.
-
-#. Putting `Inverse Function Theorem
- <http://en.wikipedia.org/wiki/Inverse_function_theorem>`_ to use.
-
- Every now and then we have to deal with functions which cannot be
- evaluated analytically. Computing the Jacobian in such cases is
- tricky. A particularly interesting case is where the inverse of the
- function is easy to compute analytically. An example of such a
- function is the Coordinate transformation between the `ECEF
- <http://en.wikipedia.org/wiki/ECEF>`_ and the `WGS84
- <http://en.wikipedia.org/wiki/World_Geodetic_System>`_ where the
- conversion from WGS84 to ECEF is analytic, but the conversion
- back to WGS84 uses an iterative algorithm. So how do you compute the
- derivative of the ECEF to WGS84 transformation?
-
- One obvious approach would be to numerically
- differentiate the conversion function. This is not a good idea. For
- one, it will be slow, but it will also be numerically quite
- bad.
-
- Turns out you can use the `Inverse Function Theorem
- <http://en.wikipedia.org/wiki/Inverse_function_theorem>`_ in this
- case to compute the derivatives more or less analytically.
-
- The key result here is. If :math:`x = f^{-1}(y)`, and :math:`Df(x)`
- is the invertible Jacobian of :math:`f` at :math:`x`. Then the
- Jacobian :math:`Df^{-1}(y) = [Df(x)]^{-1}`, i.e., the Jacobian of
- the :math:`f^{-1}` is the inverse of the Jacobian of :math:`f`.
-
- Algorithmically this means that given :math:`y`, compute :math:`x =
- f^{-1}(y)` by whatever means you can. Evaluate the Jacobian of
- :math:`f` at :math:`x`. If the Jacobian matrix is invertible, then
- its inverse is the Jacobian of :math:`f^{-1}(y)` at :math:`y`.
-
- One can put this into practice with the following code fragment.
-
- .. code-block:: c++
-
- Eigen::Vector3d ecef; // Fill some values
- // Iterative computation.
- Eigen::Vector3d lla = ECEFToLLA(ecef);
- // Analytic derivatives
- Eigen::Matrix3d lla_to_ecef_jacobian = LLAToECEFJacobian(lla);
- bool invertible;
- Eigen::Matrix3d ecef_to_lla_jacobian;
- lla_to_ecef_jacobian.computeInverseWithCheck(ecef_to_lla_jacobian, invertible);
+ Using :class:`SubsetManifold`.
diff --git a/docs/source/nnls_covariance.rst b/docs/source/nnls_covariance.rst
index 66afd44..f95d246 100644
--- a/docs/source/nnls_covariance.rst
+++ b/docs/source/nnls_covariance.rst
@@ -1,3 +1,4 @@
+.. highlight:: c++
.. default-domain:: cpp
@@ -115,7 +116,7 @@
four dimensional quaternion used to parameterize :math:`SO(3)`,
which is a three dimensional manifold. In cases like this, the
user should use an appropriate
- :class:`LocalParameterization`. Not only will this lead to better
+ :class:`Manifold`. Not only will this lead to better
numerical behaviour of the Solver, it will also expose the rank
deficiency to the :class:`Covariance` object so that it can
handle it correctly.
@@ -166,7 +167,7 @@
moderately fast algorithm suitable for small to medium sized
matrices. For best performance we recommend using
``SuiteSparseQR`` which is enabled by setting
- :member:`Covaraince::Options::sparse_linear_algebra_library_type`
+ :member:`Covariance::Options::sparse_linear_algebra_library_type`
to ``SUITE_SPARSE``.
``SPARSE_QR`` cannot compute the covariance if the
@@ -187,6 +188,23 @@
well as rank deficient Jacobians.
+.. member:: double Covariance::Options::column_pivot_threshold
+
+ Default: :math:`-1`
+
+ During QR factorization, if a column with Euclidean norm less than
+ ``column_pivot_threshold`` is encountered it is treated as zero.
+
+ If ``column_pivot_threshold < 0``, then an automatic default value
+ of `20*(m+n)*eps*sqrt(max(diag(J'*J)))` is used. Here `m` and `n`
+ are the number of rows and columns of the Jacobian (`J`)
+ respectively.
+
+ This is an advanced option meant for users who know enough about
+ their Jacobian matrices that they can determine a value better
+ than the default.
+
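+ For example (the threshold value below is purely illustrative):
+
+ .. code-block:: c++
+
+    Covariance::Options options;
+    // Treat columns with norm below 1e-12 as zero during the QR
+    // factorization instead of relying on the automatic default.
+    options.column_pivot_threshold = 1e-12;
+    Covariance covariance(options);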
+
.. member:: int Covariance::Options::min_reciprocal_condition_number
Default: :math:`10^{-14}`
@@ -221,7 +239,7 @@
.. math:: \frac{\sigma_{\text{min}}}{\sigma_{\text{max}}} < \sqrt{\text{min_reciprocal_condition_number}}
where :math:`\sigma_{\text{min}}` and
- :math:`\sigma_{\text{max}}` are the minimum and maxiumum
+ :math:`\sigma_{\text{max}}` are the minimum and maximum
singular values of :math:`J` respectively.
2. ``SPARSE_QR``
@@ -285,7 +303,7 @@
entire documentation for :class:`Covariance::Options` before using
:class:`Covariance`.
-.. function:: bool Covariance::Compute(const vector<pair<const double*, const double*> >& covariance_blocks, Problem* problem)
+.. function:: bool Covariance::Compute(const std::vector<std::pair<const double*, const double*> >& covariance_blocks, Problem* problem)
Compute a part of the covariance matrix.
@@ -361,7 +379,7 @@
Covariance::Options options;
Covariance covariance(options);
- vector<pair<const double*, const double*> > covariance_blocks;
+ std::vector<std::pair<const double*, const double*> > covariance_blocks;
covariance_blocks.push_back(make_pair(x, x));
covariance_blocks.push_back(make_pair(y, y));
covariance_blocks.push_back(make_pair(x, y));
diff --git a/docs/source/nnls_modeling.rst b/docs/source/nnls_modeling.rst
index c0c3227..be87149 100644
--- a/docs/source/nnls_modeling.rst
+++ b/docs/source/nnls_modeling.rst
@@ -1,3 +1,5 @@
+.. highlight:: c++
+
.. default-domain:: cpp
.. cpp:namespace:: ceres
@@ -50,7 +52,7 @@
As a special case, when :math:`\rho_i(x) = x`, i.e., the identity
function, and :math:`l_j = -\infty` and :math:`u_j = \infty` we get
-the more familiar unconstrained `non-linear least squares problem
+the usual unconstrained `non-linear least squares problem
<http://en.wikipedia.org/wiki/Non-linear_least_squares>`_.
.. math:: :label: ceresproblemunconstrained
@@ -80,12 +82,12 @@
public:
virtual bool Evaluate(double const* const* parameters,
double* residuals,
- double** jacobians) = 0;
- const vector<int32>& parameter_block_sizes();
+ double** jacobians) const = 0;
+ const std::vector<int32>& parameter_block_sizes();
int num_residuals() const;
protected:
- vector<int32>* mutable_parameter_block_sizes();
+ std::vector<int32>* mutable_parameter_block_sizes();
void set_num_residuals(int num_residuals);
};
@@ -98,7 +100,7 @@
the corresponding accessors. This information will be verified by the
:class:`Problem` when added with :func:`Problem::AddResidualBlock`.
-.. function:: bool CostFunction::Evaluate(double const* const* parameters, double* residuals, double** jacobians)
+.. function:: bool CostFunction::Evaluate(double const* const* parameters, double* residuals, double** jacobians) const
Compute the residual vector and the Jacobian matrices.
@@ -179,12 +181,19 @@
class AutoDiffCostFunction : public
SizedCostFunction<kNumResiduals, Ns> {
public:
- AutoDiffCostFunction(CostFunctor* functor, ownership = TAKE_OWNERSHIP);
+ // Instantiate CostFunctor using the supplied arguments.
+ template<class ...Args>
+ explicit AutoDiffCostFunction(Args&& ...args);
+ explicit AutoDiffCostFunction(std::unique_ptr<CostFunctor> functor);
+ explicit AutoDiffCostFunction(CostFunctor* functor, ownership = TAKE_OWNERSHIP);
+
// Ignore the template parameter kNumResiduals and use
// num_residuals instead.
AutoDiffCostFunction(CostFunctor* functor,
int num_residuals,
ownership = TAKE_OWNERSHIP);
+ AutoDiffCostFunction(std::unique_ptr<CostFunctor> functor,
+ int num_residuals);
};
To get an auto differentiated cost function, you must define a
@@ -242,9 +251,9 @@
.. code-block:: c++
- CostFunction* cost_function
- = new AutoDiffCostFunction<MyScalarCostFunctor, 1, 2, 2>(
- new MyScalarCostFunctor(1.0)); ^ ^ ^
+ auto* cost_function
+ = new AutoDiffCostFunction<MyScalarCostFunctor, 1, 2, 2>(1.0);
+ ^ ^ ^
| | |
Dimension of residual ------+ | |
Dimension of x ----------------+ |
@@ -270,7 +279,7 @@
.. code-block:: c++
MyScalarCostFunctor functor(1.0)
- CostFunction* cost_function
+ auto* cost_function
= new AutoDiffCostFunction<MyScalarCostFunctor, 1, 2, 2>(
&functor, DO_NOT_TAKE_OWNERSHIP);
@@ -279,9 +288,11 @@
.. code-block:: c++
- CostFunction* cost_function
- = new AutoDiffCostFunction<MyScalarCostFunctor, DYNAMIC, 2, 2>(
- new CostFunctorWithDynamicNumResiduals(1.0), ^ ^ ^
+ auto functor = std::make_unique<CostFunctorWithDynamicNumResiduals>(1.0);
+ auto* cost_function
+ = new AutoDiffCostFunction<CostFunctorWithDynamicNumResiduals,
+ DYNAMIC, 2, 2>(
+ std::move(functor), ^ ^ ^
runtime_number_of_residuals); <----+ | | |
| | | |
| | | |
@@ -290,13 +301,13 @@
Dimension of x ------------------------------------+ |
Dimension of y ---------------------------------------+
- **WARNING 1** A common beginner's error when first using
- :class:`AutoDiffCostFunction` is to get the sizing wrong. In particular,
- there is a tendency to set the template parameters to (dimension of
- residual, number of parameters) instead of passing a dimension
- parameter for *every parameter block*. In the example above, that
- would be ``<MyScalarCostFunction, 1, 2>``, which is missing the 2
- as the last template argument.
+ .. warning::
+ A common beginner's error when first using :class:`AutoDiffCostFunction`
+ is to get the sizing wrong. In particular, there is a tendency to set the
+ template parameters to (dimension of residual, number of parameters)
+ instead of passing a dimension parameter for *every parameter block*. In
+ the example above, that would be ``<MyScalarCostFunction, 1, 2>``, which
+ is missing the 2 as the last template argument.
:class:`DynamicAutoDiffCostFunction`
@@ -334,9 +345,7 @@
.. code-block:: c++
- DynamicAutoDiffCostFunction<MyCostFunctor, 4>* cost_function =
- new DynamicAutoDiffCostFunction<MyCostFunctor, 4>(
- new MyCostFunctor());
+ auto* cost_function = new DynamicAutoDiffCostFunction<MyCostFunctor, 4>();
cost_function->AddParameterBlock(5);
cost_function->AddParameterBlock(10);
cost_function->SetNumResiduals(21);
@@ -441,9 +450,9 @@
.. code-block:: c++
- CostFunction* cost_function
- = new NumericDiffCostFunction<MyScalarCostFunctor, CENTRAL, 1, 2, 2>(
- new MyScalarCostFunctor(1.0)); ^ ^ ^ ^
+ auto* cost_function
+ = new NumericDiffCostFunction<MyScalarCostFunctor, CENTRAL, 1, 2, 2>(1.0);
+ ^ ^ ^ ^
| | | |
Finite Differencing Scheme -+ | | |
Dimension of residual ------------+ | |
@@ -463,17 +472,18 @@
.. code-block:: c++
- CostFunction* cost_function
- = new NumericDiffCostFunction<MyScalarCostFunctor, CENTRAL, DYNAMIC, 2, 2>(
- new CostFunctorWithDynamicNumResiduals(1.0), ^ ^ ^
- TAKE_OWNERSHIP, | | |
- runtime_number_of_residuals); <----+ | | |
- | | | |
- | | | |
- Actual number of residuals ------+ | | |
- Indicate dynamic number of residuals --------------------+ | |
- Dimension of x ------------------------------------------------+ |
- Dimension of y ---------------------------------------------------+
+ auto functor = std::make_unique<CostFunctorWithDynamicNumResiduals>(1.0);
+ auto* cost_function
+ = new NumericDiffCostFunction<CostFunctorWithDynamicNumResiduals,
+ CENTRAL, DYNAMIC, 2, 2>(
+ std::move(functor), ^ ^ ^
+ runtime_number_of_residuals); <----+ | | |
+ | | | |
+ | | | |
+ Actual number of residuals ------+ | | |
+ Indicate dynamic number of residuals --------+ | |
+ Dimension of x ------------------------------------+ |
+ Dimension of y ---------------------------------------+
There are three available numeric differentiation schemes in ceres-solver:
@@ -502,18 +512,18 @@
results, either try forward difference to improve performance or
Ridders' method to improve accuracy.
- **WARNING** A common beginner's error when first using
- :class:`NumericDiffCostFunction` is to get the sizing wrong. In
- particular, there is a tendency to set the template parameters to
- (dimension of residual, number of parameters) instead of passing a
- dimension parameter for *every parameter*. In the example above,
- that would be ``<MyScalarCostFunctor, 1, 2>``, which is missing the
- last ``2`` argument. Please be careful when setting the size
- parameters.
+ .. warning::
+ A common beginner's error when first using
+ :class:`NumericDiffCostFunction` is to get the sizing wrong. In
+ particular, there is a tendency to set the template parameters to
+ (dimension of residual, number of parameters) instead of passing a
+ dimension parameter for *every parameter*. In the example above, that
+ would be ``<MyScalarCostFunctor, 1, 2>``, which is missing the last ``2``
+ argument. Please be careful when setting the size parameters.
-Numeric Differentiation & LocalParameterization
------------------------------------------------
+Numeric Differentiation & Manifolds
+-----------------------------------
If your cost function depends on a parameter block that must lie on
a manifold and the functor cannot be evaluated for values of that
@@ -522,11 +532,10 @@
This is because numeric differentiation in Ceres is performed by
perturbing the individual coordinates of the parameter blocks that
- a cost functor depends on. In doing so, we assume that the
- parameter blocks live in an Euclidean space and ignore the
- structure of manifold that they live As a result some of the
- perturbations may not lie on the manifold corresponding to the
- parameter block.
+ a cost functor depends on. This perturbation assumes that the
+ parameter block lives on a Euclidean Manifold rather than the
+ actual manifold associated with the parameter block. As a result
+ some of the perturbed points may not lie on the manifold anymore.
For example consider a four dimensional parameter block that is
interpreted as a unit Quaternion. Perturbing the coordinates of
@@ -534,7 +543,7 @@
parameter block.
Fixing this problem requires that :class:`NumericDiffCostFunction`
- be aware of the :class:`LocalParameterization` associated with each
+ be aware of the :class:`Manifold` associated with each
parameter block and only generate perturbations in the local
tangent space of each parameter block.
@@ -568,9 +577,8 @@
.. code-block:: c++
- CostFunction* cost_function
- = new NumericDiffCostFunction<MyCostFunction, CENTRAL, 1, 4, 8>(
- new MyCostFunction(...), TAKE_OWNERSHIP);
+ auto* cost_function
+ = new NumericDiffCostFunction<MyCostFunction, CENTRAL, 1, 4, 8>(...);
where ``MyCostFunction`` has 1 residual and 2 parameter blocks with
sizes 4 and 8 respectively. Look at the tests for a more detailed
@@ -611,8 +619,7 @@
.. code-block:: c++
- DynamicNumericDiffCostFunction<MyCostFunctor>* cost_function =
- new DynamicNumericDiffCostFunction<MyCostFunctor>(new MyCostFunctor);
+ auto cost_function = std::make_unique<DynamicNumericDiffCostFunction<MyCostFunctor>>();
cost_function->AddParameterBlock(5);
cost_function->AddParameterBlock(10);
cost_function->SetNumResiduals(21);
@@ -620,9 +627,9 @@
As a rule of thumb, try using :class:`NumericDiffCostFunction` before
you use :class:`DynamicNumericDiffCostFunction`.
- **WARNING** The same caution about mixing local parameterizations
- with numeric differentiation applies as is the case with
- :class:`NumericDiffCostFunction`.
+ .. warning::
+ The same caution about mixing manifolds with numeric differentiation
+ applies as is the case with :class:`NumericDiffCostFunction`.
:class:`CostFunctionToFunctor`
==============================
@@ -671,8 +678,8 @@
.. code-block:: c++
struct CameraProjection {
- CameraProjection(double* observation)
- : intrinsic_projection_(new IntrinsicProjection(observation)) {
+ explicit CameraProjection(double* observation)
+ : intrinsic_projection_(std::make_unique<IntrinsicProjection>(observation)) {
}
template <typename T>
@@ -690,7 +697,7 @@
}
private:
- CostFunctionToFunctor<2,5,3> intrinsic_projection_;
+ CostFunctionToFunctor<2, 5, 3> intrinsic_projection_;
};
Note that :class:`CostFunctionToFunctor` takes ownership of the
@@ -732,10 +739,9 @@
.. code-block:: c++
struct CameraProjection {
- CameraProjection(double* observation)
+ explicit CameraProjection(double* observation)
: intrinsic_projection_(
- new NumericDiffCostFunction<IntrinsicProjection, CENTRAL, 2, 5, 3>(
- new IntrinsicProjection(observation))) {}
+ std::make_unique<NumericDiffCostFunction<IntrinsicProjection, CENTRAL, 2, 5, 3>>(observation)) {}
template <typename T>
bool operator()(const T* rotation,
@@ -793,8 +799,8 @@
.. code-block:: c++
struct CameraProjection {
- CameraProjection(double* observation)
- : intrinsic_projection_(new IntrinsicProjection(observation)) {
+ explicit CameraProjection(double* observation)
+ : intrinsic_projection_(std::make_unique<IntrinsicProjection>(observation)) {
}
template <typename T>
@@ -841,7 +847,7 @@
// my_cost_function produces N residuals
CostFunction* my_cost_function = ...
CHECK_EQ(N, my_cost_function->num_residuals());
- vector<CostFunction*> conditioners;
+ std::vector<CostFunction*> conditioners;
// Make N 1x1 cost functions (1 parameter, 1 residual)
CostFunction* f_1 = ...
@@ -864,38 +870,39 @@
:class:`GradientChecker`
-================================
+========================
.. class:: GradientChecker
- This class compares the Jacobians returned by a cost function against
- derivatives estimated using finite differencing. It is meant as a tool for
- unit testing, giving you more fine-grained control than the check_gradients
- option in the solver options.
+ This class compares the Jacobians returned by a cost function
+ against derivatives estimated using finite differencing. It is
+ meant as a tool for unit testing, giving you more fine-grained
+ control than the check_gradients option in the solver options.
The condition enforced is that
.. math:: \forall{i,j}: \frac{J_{ij} - J'_{ij}}{max_{ij}(J_{ij} - J'_{ij})} < r
- where :math:`J_{ij}` is the jacobian as computed by the supplied cost
- function (by the user) multiplied by the local parameterization Jacobian,
+ where :math:`J_{ij}` is the jacobian as computed by the supplied
+ cost function multiplied by the `Manifold::PlusJacobian`,
:math:`J'_{ij}` is the jacobian as computed by finite differences,
- multiplied by the local parameterization Jacobian as well, and :math:`r`
+ multiplied by the `Manifold::PlusJacobian` as well, and :math:`r`
is the relative precision.
Usage:
.. code-block:: c++
- // my_cost_function takes two parameter blocks. The first has a local
- // parameterization associated with it.
+ // my_cost_function takes two parameter blocks. The first has a
+ // manifold associated with it.
+
CostFunction* my_cost_function = ...
- LocalParameterization* my_parameterization = ...
+ Manifold* my_manifold = ...
NumericDiffOptions numeric_diff_options;
- std::vector<LocalParameterization*> local_parameterizations;
- local_parameterizations.push_back(my_parameterization);
- local_parameterizations.push_back(nullptr);
+ std::vector<Manifold*> manifolds;
+ manifolds.push_back(my_manifold);
+ manifolds.push_back(nullptr);
std::vector<double> parameter1;
std::vector<double> parameter2;
@@ -906,7 +913,8 @@
parameter_blocks.push_back(parameter2.data());
GradientChecker gradient_checker(my_cost_function,
- local_parameterizations, numeric_diff_options);
+ manifolds,
+ numeric_diff_options);
GradientCheckResults results;
if (!gradient_checker.Probe(parameter_blocks.data(), 1e-9, &results)) {
LOG(ERROR) << "An error has occurred:\n" << results.error_log;
@@ -1065,6 +1073,10 @@
.. math:: \rho(s,a,b) = b \log(1 + e^{(s - a) / b}) - b \log(1 + e^{-a / b})
+.. class:: TukeyLoss
+
+ .. math:: \rho(s) = \begin{cases} \frac{1}{3} (1 - (1 - s)^3) & s \le 1\\ \frac{1}{3} & s > 1 \end{cases}
+
.. class:: ComposedLoss
Given two loss functions ``f`` and ``g``, implements the loss
@@ -1115,9 +1127,8 @@
// Add parameter blocks
- CostFunction* cost_function =
- new AutoDiffCostFunction < UW_Camera_Mapper, 2, 9, 3>(
- new UW_Camera_Mapper(feature_x, feature_y));
+ auto* cost_function =
+ new AutoDiffCostFunction<UW_Camera_Mapper, 2, 9, 3>(feature_x, feature_y);
LossFunctionWrapper* loss_function = new LossFunctionWrapper(new HuberLoss(1.0), TAKE_OWNERSHIP);
problem.AddResidualBlock(cost_function, loss_function, parameters);
@@ -1154,7 +1165,6 @@
matrix such that the robustified Gauss-Newton step corresponds to an
ordinary linear least squares problem.
-
Let :math:`\alpha` be a root of
.. math:: \frac{1}{2}\alpha^2 - \alpha - \frac{\rho''}{\rho'}\|f(x)\|^2 = 0.
@@ -1178,363 +1188,644 @@
problems.
-:class:`LocalParameterization`
-==============================
-
-.. class:: LocalParameterization
-
- In many optimization problems, especially sensor fusion problems,
- one has to model quantities that live in spaces known as `Manifolds
- <https://en.wikipedia.org/wiki/Manifold>`_ , for example the
- rotation/orientation of a sensor that is represented by a
- `Quaternion
- <https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation>`_.
-
- Manifolds are spaces, which locally look like Euclidean spaces. More
- precisely, at each point on the manifold there is a linear space
- that is tangent to the manifold. It has dimension equal to the
- intrinsic dimension of the manifold itself, which is less than or
- equal to the ambient space in which the manifold is embedded.
-
- For example, the tangent space to a point on a sphere in three
- dimensions is the two dimensional plane that is tangent to the
- sphere at that point. There are two reasons tangent spaces are
- interesting:
-
- 1. They are Euclidean spaces, so the usual vector space operations
- apply there, which makes numerical operations easy.
-
- 2. Movement in the tangent space translate into movements along the
- manifold. Movements perpendicular to the tangent space do not
- translate into movements on the manifold.
-
- Returning to our sphere example, moving in the 2 dimensional
- plane tangent to the sphere and projecting back onto the sphere
- will move you away from the point you started from but moving
- along the normal at the same point and the projecting back onto
- the sphere brings you back to the point.
-
- Besides the mathematical niceness, modeling manifold valued
- quantities correctly and paying attention to their geometry has
- practical benefits too:
-
- 1. It naturally constrains the quantity to the manifold through out
- the optimization. Freeing the user from hacks like *quaternion
- normalization*.
-
- 2. It reduces the dimension of the optimization problem to its
- *natural* size. For example, a quantity restricted to a line, is a
- one dimensional object regardless of the dimension of the ambient
- space in which this line lives.
-
- Working in the tangent space reduces not just the computational
- complexity of the optimization algorithm, but also improves the
- numerical behaviour of the algorithm.
-
- A basic operation one can perform on a manifold is the
- :math:`\boxplus` operation that computes the result of moving along
- delta in the tangent space at x, and then projecting back onto the
- manifold that x belongs to. Also known as a *Retraction*,
- :math:`\boxplus` is a generalization of vector addition in Euclidean
- spaces. Formally, :math:`\boxplus` is a smooth map from a
- manifold :math:`\mathcal{M}` and its tangent space
- :math:`T_\mathcal{M}` to the manifold :math:`\mathcal{M}` that
- obeys the identity
-
- .. math:: \boxplus(x, 0) = x,\quad \forall x.
-
- That is, it ensures that the tangent space is *centered* at :math:`x`
- and the zero vector is the identity element. For more see
- [Hertzberg]_ and section A.6.9 of [HartleyZisserman]_.
-
- Let us consider two examples:
-
- The Euclidean space :math:`R^n` is the simplest example of a
- manifold. It has dimension :math:`n` (and so does its tangent space)
- and :math:`\boxplus` is the familiar vector sum operation.
-
- .. math:: \boxplus(x, \Delta) = x + \Delta
-
- A more interesting case is :math:`SO(3)`, the special orthogonal
- group in three dimensions - the space of 3x3 rotation
- matrices. :math:`SO(3)` is a three dimensional manifold embedded in
- :math:`R^9` or :math:`R^{3\times 3}`.
-
- :math:`\boxplus` on :math:`SO(3)` is defined using the *Exponential*
- map, from the tangent space (:math:`R^3`) to the manifold. The
- Exponential map :math:`\operatorname{Exp}` is defined as:
-
- .. math::
-
- \operatorname{Exp}([p,q,r]) = \left [ \begin{matrix}
- \cos \theta + cp^2 & -sr + cpq & sq + cpr \\
- sr + cpq & \cos \theta + cq^2& -sp + cqr \\
- -sq + cpr & sp + cqr & \cos \theta + cr^2
- \end{matrix} \right ]
-
- where,
-
- .. math::
- \theta = \sqrt{p^2 + q^2 + r^2}, s = \frac{\sin \theta}{\theta},
- c = \frac{1 - \cos \theta}{\theta^2}.
-
- Then,
-
- .. math::
-
- \boxplus(x, \Delta) = x \operatorname{Exp}(\Delta)
-
- The ``LocalParameterization`` interface allows the user to define
- and associate with parameter blocks the manifold that they belong
- to. It does so by defining the ``Plus`` (:math:`\boxplus`) operation
- and its derivative with respect to :math:`\Delta` at :math:`\Delta =
- 0`.
-
- .. code-block:: c++
-
- class LocalParameterization {
- public:
- virtual ~LocalParameterization() {}
- virtual bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const = 0;
- virtual bool ComputeJacobian(const double* x, double* jacobian) const = 0;
- virtual bool MultiplyByJacobian(const double* x,
- const int num_rows,
- const double* global_matrix,
- double* local_matrix) const;
- virtual int GlobalSize() const = 0;
- virtual int LocalSize() const = 0;
- };
+While the theory described above is elegant, in practice we observe
+that using the Triggs correction when :math:`\rho'' > 0` leads to poor
+performance, so we upper bound it by zero. For more details see
+`corrector.cc <https://github.com/ceres-solver/ceres-solver/blob/master/internal/ceres/corrector.cc#L51>`_
-.. function:: int LocalParameterization::GlobalSize()
+:class:`Manifold`
+==================
- The dimension of the ambient space in which the parameter block
- :math:`x` lives.
+.. class:: Manifold
-.. function:: int LocalParameterization::LocalSize()
+In sensor fusion problems, we often have to model quantities that live
+in spaces known as `Manifolds
+<https://en.wikipedia.org/wiki/Manifold>`_, for example the
+rotation/orientation of a sensor that is represented by a `Quaternion
+<https://en.wikipedia.org/wiki/Quaternion>`_.
- The size of the tangent space that :math:`\Delta` lives in.
+Manifolds are spaces which locally look like Euclidean spaces. More
+precisely, at each point on the manifold there is a linear space that
+is tangent to the manifold. It has dimension equal to the intrinsic
+dimension of the manifold itself, which is less than or equal to the
+ambient space in which the manifold is embedded.
-.. function:: bool LocalParameterization::Plus(const double* x, const double* delta, double* x_plus_delta) const
+For example, the tangent space to a point on a sphere in three
+dimensions is the two dimensional plane that is tangent to the sphere
+at that point. There are two reasons tangent spaces are interesting:
- :func:`LocalParameterization::Plus` implements :math:`\boxplus(x,\Delta)`.
+1. They are Eucliean spaces so the usual vector space operations apply
+ there, which makes numerical operations easy.
-.. function:: bool LocalParameterization::ComputeJacobian(const double* x, double* jacobian) const
+2. Movements in the tangent space translate into movements along the
+ manifold. Movements perpendicular to the tangent space do not
+ translate into movements on the manifold.
- Computes the Jacobian matrix
+However, moving along the 2 dimensional plane tangent to the sphere
+and projecting back onto the sphere will move you away from the point
+you started from, but moving along the normal at the same point and
+then projecting back onto the sphere brings you back to the point.
- .. math:: J = D_2 \boxplus(x, 0)
+Besides the mathematical niceness, modeling manifold valued
+quantities correctly and paying attention to their geometry has
+practical benefits too:
- in row major form.
+1. It naturally constrains the quantity to the manifold throughout the
+ optimization, freeing the user from hacks like *quaternion
+ normalization*.
-.. function:: bool MultiplyByJacobian(const double* x, const int num_rows, const double* global_matrix, double* local_matrix) const
+2. It reduces the dimension of the optimization problem to its
+ *natural* size. For example, a quantity restricted to a line is a
+ one dimensional object regardless of the dimension of the ambient
+ space in which this line lives.
- ``local_matrix = global_matrix * jacobian``
+ Working in the tangent space reduces not just the computational
+ complexity of the optimization algorithm, but also improves the
+ numerical behaviour of the algorithm.
- ``global_matrix`` is a ``num_rows x GlobalSize`` row major matrix.
- ``local_matrix`` is a ``num_rows x LocalSize`` row major matrix.
- ``jacobian`` is the matrix returned by :func:`LocalParameterization::ComputeJacobian` at :math:`x`.
+A basic operation one can perform on a manifold is the
+:math:`\boxplus` operation that computes the result of moving along
+:math:`\delta` in the tangent space at :math:`x`, and then projecting
+back onto the manifold that :math:`x` belongs to. Also known as a
+*Retraction*, :math:`\boxplus` is a generalization of vector addition
+in Euclidean spaces.
- This is only used by :class:`GradientProblem`. For most normal
- uses, it is okay to use the default implementation.
+The inverse of :math:`\boxplus` is :math:`\boxminus`, which given two
+points :math:`y` and :math:`x` on the manifold computes the tangent
+vector :math:`\Delta` at :math:`x` s.t. :math:`\boxplus(x, \Delta) =
+y`.
+
+Let us now consider two examples.
+
+The `Euclidean space <https://en.wikipedia.org/wiki/Euclidean_space>`_
+:math:`\mathbb{R}^n` is the simplest example of a manifold. It has
+dimension :math:`n` (and so does its tangent space) and
+:math:`\boxplus` and :math:`\boxminus` are the familiar vector sum and
+difference operations.
+
+.. math::
+ \begin{align*}
+ \boxplus(x, \Delta) &= x + \Delta = y\\
+ \boxminus(y, x) &= y - x = \Delta.
+ \end{align*}
+
+A more interesting case is the case :math:`SO(3)`, the `special
+orthogonal group <https://en.wikipedia.org/wiki/3D_rotation_group>`_
+in three dimensions - the space of :math:`3\times3` rotation
+matrices. :math:`SO(3)` is a three dimensional manifold embedded in
+:math:`\mathbb{R}^9` or :math:`\mathbb{R}^{3\times 3}`. So points on :math:`SO(3)` are
+represented using 9 dimensional vectors or :math:`3\times 3` matrices,
+and points in its tangent spaces are represented by 3 dimensional
+vectors.
+
+For :math:`SO(3)`, :math:`\boxplus` and :math:`\boxminus` are defined
+in terms of the matrix :math:`\exp` and :math:`\log` operations as
+follows:
+
+Given a 3-vector :math:`\Delta = [\begin{matrix}p,&q,&r\end{matrix}]`, we have
+
+.. math::
+
+ \exp(\Delta) & = \left [ \begin{matrix}
+ \cos \theta + cp^2 & -sr + cpq & sq + cpr \\
+ sr + cpq & \cos \theta + cq^2& -sp + cqr \\
+ -sq + cpr & sp + cqr & \cos \theta + cr^2
+ \end{matrix} \right ]
+
+where,
+
+.. math::
+ \begin{align}
+ \theta &= \sqrt{p^2 + q^2 + r^2},\\
+ s &= \frac{\sin \theta}{\theta},\\
+ c &= \frac{1 - \cos \theta}{\theta^2}.
+ \end{align}
+
+Given :math:`x \in SO(3)`, we have
+
+.. math::
+
+ \log(x) = \frac{\theta}{2 \sin \theta}\left[\begin{matrix} x_{32} - x_{23},& x_{13} - x_{31},& x_{21} - x_{12}\end{matrix} \right]
+
+
+where,
+
+.. math:: \theta = \cos^{-1}((\operatorname{Trace}(x) - 1)/2)
+
+Then,
+
+.. math::
+ \begin{align*}
+ \boxplus(x, \Delta) &= x \exp(\Delta)
+ \\
+ \boxminus(y, x) &= \log(x^T y)
+ \end{align*}
+
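+As an illustration, the :math:`SO(3)` :math:`\boxplus` above can be
+evaluated numerically using the rotation helpers described later in
+this chapter. This is only a sketch (it assumes Eigen and is not part
+of the :class:`Manifold` API):
+
+.. code-block:: c++
+
+   #include <Eigen/Core>
+   #include "ceres/rotation.h"
+
+   // boxplus(x, delta) = x * exp(delta) for column-major 3x3 rotation
+   // matrices; AngleAxisToRotationMatrix evaluates exp(delta).
+   Eigen::Matrix3d BoxPlusSO3(const Eigen::Matrix3d& x,
+                              const Eigen::Vector3d& delta) {
+     Eigen::Matrix3d exp_delta;
+     ceres::AngleAxisToRotationMatrix(delta.data(), exp_delta.data());
+     return x * exp_delta;
+   }
+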
+For :math:`\boxplus` and :math:`\boxminus` to be mathematically
+consistent, the following identities must be satisfied at all points
+:math:`x` on the manifold:
+
+1. :math:`\boxplus(x, 0) = x`. This ensures that the tangent space is
+ *centered* at :math:`x`, and the zero vector is the identity
+ element.
+2. For all :math:`y` on the manifold, :math:`\boxplus(x,
+ \boxminus(y,x)) = y`. This ensures that any :math:`y` can be
+ reached from :math:`x`.
+3. For all :math:`\Delta`, :math:`\boxminus(\boxplus(x, \Delta), x) =
+ \Delta`. This ensures that :math:`\boxplus` is an injective
+ (one-to-one) map.
+4. For all :math:`\Delta_1, \Delta_2,\ |\boxminus(\boxplus(x, \Delta_1),
+   \boxplus(x, \Delta_2))| \leq |\Delta_1 - \Delta_2|`. This allows us to
+   define a metric on the manifold.
+
+Additionally we require that :math:`\boxplus` and :math:`\boxminus` be
+sufficiently smooth. In particular they need to be differentiable
+everywhere on the manifold.
+
+For more details, please see [Hertzberg]_
+
+The :class:`Manifold` interface allows the user to define a manifold
+for the purposes of optimization by implementing ``Plus`` and ``Minus``
+operations and their derivatives (corresponding naturally to
+:math:`\boxplus` and :math:`\boxminus`).
+
+.. code-block:: c++
+
+ class Manifold {
+ public:
+ virtual ~Manifold();
+ virtual int AmbientSize() const = 0;
+ virtual int TangentSize() const = 0;
+ virtual bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const = 0;
+ virtual bool PlusJacobian(const double* x, double* jacobian) const = 0;
+ virtual bool RightMultiplyByPlusJacobian(const double* x,
+ const int num_rows,
+ const double* ambient_matrix,
+ double* tangent_matrix) const;
+ virtual bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const = 0;
+ virtual bool MinusJacobian(const double* x, double* jacobian) const = 0;
+ };
+
+
+.. function:: int Manifold::AmbientSize() const;
+
+ Dimension of the ambient space in which the manifold is embedded.
+
+.. function:: int Manifold::TangentSize() const;
+
+ Dimension of the manifold/tangent space.
+
+.. function:: bool Plus(const double* x, const double* delta, double* x_plus_delta) const;
+
+ Implements the :math:`\boxplus(x,\Delta)` operation for the manifold.
+
+ A generalization of vector addition in Euclidean space, ``Plus``
+ computes the result of moving along ``delta`` in the tangent space
+ at ``x``, and then projecting back onto the manifold that ``x``
+ belongs to.
+
+ ``x`` and ``x_plus_delta`` are :func:`Manifold::AmbientSize` vectors.
+ ``delta`` is a :func:`Manifold::TangentSize` vector.
+
+ Return value indicates if the operation was successful or not.
+
+.. function:: bool PlusJacobian(const double* x, double* jacobian) const;
+
+ Compute the derivative of :math:`\boxplus(x, \Delta)` w.r.t
+ :math:`\Delta` at :math:`\Delta = 0`, i.e. :math:`(D_2
+ \boxplus)(x, 0)`.
+
+ ``jacobian`` is a row-major :func:`Manifold::AmbientSize`
+ :math:`\times` :func:`Manifold::TangentSize` matrix.
+
+ Return value indicates whether the operation was successful or not.
+
+.. function:: bool RightMultiplyByPlusJacobian(const double* x, const int num_rows, const double* ambient_matrix, double* tangent_matrix) const;
+
+ ``tangent_matrix`` = ``ambient_matrix`` :math:`\times` plus_jacobian.
+
+
+ ``ambient_matrix`` is a row-major ``num_rows`` :math:`\times`
+ :func:`Manifold::AmbientSize` matrix.
+
+ ``tangent_matrix`` is a row-major ``num_rows`` :math:`\times`
+ :func:`Manifold::TangentSize` matrix.
+
+ Return value indicates whether the operation was successful or not.
+
+ This function is only used by the :class:`GradientProblemSolver`,
+ where the dimension of the parameter block can be large and it may
+ be more efficient to compute this product directly rather than
+ first evaluating the Jacobian into a matrix and then doing a matrix
+ vector product.
+
+ Because this is not an often used function, we provide a default
+ implementation for convenience. If performance becomes an issue
+ then the user should consider implementing a specialization.
+
+.. function:: bool Minus(const double* y, const double* x, double* y_minus_x) const;
+
+ Implements :math:`\boxminus(y,x)` operation for the manifold.
+
+ A generalization of vector subtraction in Euclidean spaces, given
+ two points ``x`` and ``y`` on the manifold, ``Minus`` computes the
+ change to ``x`` in the tangent space at ``x``, that will take it to
+ ``y``.
+
+ ``x`` and ``y`` are :func:`Manifold::AmbientSize` vectors.
+ ``y_minus_x`` is a :func:`Manifold::TangentSize` vector.
+
+ Return value indicates if the operation was successful or not.
+
+.. function:: bool MinusJacobian(const double* x, double* jacobian) const;
+
+ Compute the derivative of :math:`\boxminus(y, x)` w.r.t :math:`y`
+ at :math:`y = x`, i.e :math:`(D_1 \boxminus) (x, x)`.
+
+ ``jacobian`` is a row-major :func:`Manifold::TangentSize`
+ :math:`\times` :func:`Manifold::AmbientSize` matrix.
+
+ Return value indicates whether the operation was successful or not.
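+
+As an illustration, a minimal hand written :class:`Manifold` for
+:math:`\mathbb{R}^2`, where :math:`\boxplus` and :math:`\boxminus` are
+plain vector addition and subtraction, could be implemented as follows.
+This is only a sketch for exposition; in practice one would use
+:class:`EuclideanManifold` instead.
+
+.. code-block:: c++
+
+   class MyEuclidean2Manifold : public ceres::Manifold {
+    public:
+     int AmbientSize() const override { return 2; }
+     int TangentSize() const override { return 2; }
+     bool Plus(const double* x,
+               const double* delta,
+               double* x_plus_delta) const override {
+       x_plus_delta[0] = x[0] + delta[0];
+       x_plus_delta[1] = x[1] + delta[1];
+       return true;
+     }
+     // The derivative of Plus w.r.t. delta at delta = 0 is the 2x2
+     // identity, stored in row-major order.
+     bool PlusJacobian(const double* x, double* jacobian) const override {
+       jacobian[0] = 1.0; jacobian[1] = 0.0;
+       jacobian[2] = 0.0; jacobian[3] = 1.0;
+       return true;
+     }
+     bool Minus(const double* y,
+               const double* x,
+               double* y_minus_x) const override {
+       y_minus_x[0] = y[0] - x[0];
+       y_minus_x[1] = y[1] - x[1];
+       return true;
+     }
+     // The derivative of Minus w.r.t. y at y = x is also the identity.
+     bool MinusJacobian(const double* x, double* jacobian) const override {
+       jacobian[0] = 1.0; jacobian[1] = 0.0;
+       jacobian[2] = 0.0; jacobian[3] = 1.0;
+       return true;
+     }
+   };
+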
Ceres Solver ships with a number of commonly used instances of
-:class:`LocalParameterization`. Another great place to find high
-quality implementations of :math:`\boxplus` operations on a variety of
-manifolds is the `Sophus <https://github.com/strasdat/Sophus>`_
-library developed by Hauke Strasdat and his collaborators.
+:class:`Manifold`.
-:class:`IdentityParameterization`
----------------------------------
+For `Lie Groups <https://en.wikipedia.org/wiki/Lie_group>`_, a great
+place to find high quality implementations is the `Sophus
+<https://github.com/strasdat/Sophus>`_ library developed by Hauke
+Strasdat and his collaborators.
-A trivial version of :math:`\boxplus` is when :math:`\Delta` is of the
-same size as :math:`x` and
+:class:`EuclideanManifold`
+--------------------------
-.. math:: \boxplus(x, \Delta) = x + \Delta
+.. class:: EuclideanManifold
-This is the same as :math:`x` living in a Euclidean manifold.
+:class:`EuclideanManifold` as the name implies represents a Euclidean
+space, where the :math:`\boxplus` and :math:`\boxminus` operations are
+the usual vector addition and subtraction.
-:class:`QuaternionParameterization`
------------------------------------
+.. math::
-Another example that occurs commonly in Structure from Motion problems
-is when camera rotations are parameterized using a quaternion. This is
-a 3-dimensional manifold that lives in 4-dimensional space.
+ \begin{align*}
+ \boxplus(x, \Delta) &= x + \Delta\\
+ \boxminus(y,x) &= y - x
+ \end{align*}
-.. math:: \boxplus(x, \Delta) = \left[ \cos(|\Delta|), \frac{\sin\left(|\Delta|\right)}{|\Delta|} \Delta \right] * x
+By default parameter blocks are assumed to be Euclidean, so there is
+no need to use this manifold on its own. It is provided for the
+purpose of testing and for use in combination with other manifolds
+using :class:`ProductManifold`.
-The multiplication :math:`*` between the two 4-vectors on the right
-hand side is the standard quaternion product.
+The class works with dynamic and static ambient space dimensions. If
+the ambient space dimension is known at compile time use
-:class:`EigenQuaternionParameterization`
-----------------------------------------
+.. code-block:: c++
-`Eigen <http://eigen.tuxfamily.org/index.php?title=Main_Page>`_ uses a
-different internal memory layout for the elements of the quaternion
-than what is commonly used. Specifically, Eigen stores the elements in
-memory as :math:`(x, y, z, w)`, i.e., the *real* part (:math:`w`) is
-stored as the last element. Note, when creating an Eigen quaternion
-through the constructor the elements are accepted in :math:`w, x, y,
-z` order.
+ EuclideanManifold<3> manifold;
-Since Ceres operates on parameter blocks which are raw ``double``
-pointers this difference is important and requires a different
-parameterization. :class:`EigenQuaternionParameterization` uses the
-same ``Plus`` operation as :class:`QuaternionParameterization` but
-takes into account Eigen's internal memory element ordering.
+If the ambient space dimension is not known at compile time the
+template parameter needs to be set to ``ceres::DYNAMIC`` and the actual
+dimension needs to be provided as a constructor argument:
-:class:`SubsetParameterization`
--------------------------------
+.. code-block:: c++
+
+ EuclideanManifold<ceres::DYNAMIC> manifold(ambient_dim);
+
+:class:`SubsetManifold`
+-----------------------
+
+.. class:: SubsetManifold
Suppose :math:`x` is a two dimensional vector, and the user wishes to
hold the first coordinate constant. Then, :math:`\Delta` is a scalar
and :math:`\boxplus` is defined as
-.. math:: \boxplus(x, \Delta) = x + \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \Delta
+.. math::
+ \boxplus(x, \Delta) = x + \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \Delta
-:class:`SubsetParameterization` generalizes this construction to hold
+and given two, two-dimensional vectors :math:`x` and :math:`y` with
+the same first coordinate, :math:`\boxminus` is defined as:
+
+.. math::
+ \boxminus(y, x) = y[1] - x[1]
+
+:class:`SubsetManifold` generalizes this construction to hold
any part of a parameter block constant by specifying the set of
coordinates that are held constant.
.. NOTE::
- It is legal to hold all coordinates of a parameter block to constant
- using a :class:`SubsetParameterization`. It is the same as calling
+
+ It is legal to hold *all* coordinates of a parameter block
+ constant using a :class:`SubsetManifold`. It is the same as calling
:func:`Problem::SetParameterBlockConstant` on that parameter block.
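+
+For example, holding the first coordinate of a two dimensional
+parameter block constant can be set up as follows (a sketch that
+assumes an existing :class:`Problem` named ``problem``):
+
+.. code-block:: c++
+
+   double x[2] = {1.0, 2.0};
+   // Ambient size 2, coordinate 0 is held constant.
+   problem.AddParameterBlock(x, 2, new ceres::SubsetManifold(2, {0}));
+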
-:class:`HomogeneousVectorParameterization`
-------------------------------------------
-In computer vision, homogeneous vectors are commonly used to represent
-objects in projective geometry such as points in projective space. One
-example where it is useful to use this over-parameterization is in
-representing points whose triangulation is ill-conditioned. Here it is
-advantageous to use homogeneous vectors, instead of an Euclidean
-vector, because it can represent points at and near infinity.
+:class:`ProductManifold`
+------------------------
-:class:`HomogeneousVectorParameterization` defines a
-:class:`LocalParameterization` for an :math:`n-1` dimensional
-manifold that embedded in :math:`n` dimensional space where the
-scale of the vector does not matter, i.e., elements of the
-projective space :math:`\mathbb{P}^{n-1}`. It assumes that the last
-coordinate of the :math:`n`-vector is the *scalar* component of the
-homogenous vector, i.e., *finite* points in this representation are
-those for which the *scalar* component is non-zero.
-
-Further, ``HomogeneousVectorParameterization::Plus`` preserves the
-scale of :math:`x`.
-
-:class:`LineParameterization`
------------------------------
-
-This class provides a parameterization for lines, where the line is
-defined using an origin point and a direction vector. So the
-parameter vector size needs to be two times the ambient space
-dimension, where the first half is interpreted as the origin point
-and the second half as the direction. This local parameterization is
-a special case of the `Affine Grassmannian manifold
-<https://en.wikipedia.org/wiki/Affine_Grassmannian_(manifold))>`_
-for the case :math:`\operatorname{Graff}_1(R^n)`.
-
-Note that this is a parameterization for a line, rather than a point
-constrained to lie on a line. It is useful when one wants to optimize
-over the space of lines. For example, :math:`n` distinct points in 3D
-(measurements) we want to find the line that minimizes the sum of
-squared distances to all the points.
-
-:class:`ProductParameterization`
---------------------------------
-
-Consider an optimization problem over the space of rigid
-transformations :math:`SE(3)`, which is the Cartesian product of
-:math:`SO(3)` and :math:`\mathbb{R}^3`. Suppose you are using
-Quaternions to represent the rotation, Ceres ships with a local
-parameterization for that and :math:`\mathbb{R}^3` requires no, or
-:class:`IdentityParameterization` parameterization. So how do we
-construct a local parameterization for a parameter block a rigid
-transformation?
+.. class:: ProductManifold
In cases, where a parameter block is the Cartesian product of a number
-of manifolds and you have the local parameterization of the individual
-manifolds available, :class:`ProductParameterization` can be used to
-construct a local parameterization of the cartesian product. For the
-case of the rigid transformation, where say you have a parameter block
-of size 7, where the first four entries represent the rotation as a
-quaternion, a local parameterization can be constructed as
+of manifolds and you have the manifold of the individual
+parameter blocks available, :class:`ProductManifold` can be used to
+construct a :class:`Manifold` of the Cartesian product.
+
+For the case of the rigid transformation, where say you have a
+parameter block of size 7, where the first four entries represent the
+rotation as a quaternion, and the next three the translation, a
+manifold can be constructed as:
.. code-block:: c++
- ProductParameterization se3_param(new QuaternionParameterization(),
- new IdentityParameterization(3));
+ ProductManifold<QuaternionManifold, EuclideanManifold<3>> se3;
+
+Manifolds can be copied and moved to :class:`ProductManifold`:
+
+.. code-block:: c++
+
+ SubsetManifold manifold1(5, {2});
+ SubsetManifold manifold2(3, {0, 1});
+ ProductManifold<SubsetManifold, SubsetManifold> manifold(manifold1,
+ manifold2);
+
+In advanced use cases, manifolds can be dynamically allocated and passed as (smart) pointers:
+
+.. code-block:: c++
+
+ ProductManifold<std::unique_ptr<QuaternionManifold>, EuclideanManifold<3>> se3
+ {std::make_unique<QuaternionManifold>(), EuclideanManifold<3>{}};
+
+The template parameters can also be left out as they are deduced automatically
+making the initialization much simpler:
+
+.. code-block:: c++
+
+ ProductManifold se3{QuaternionManifold{}, EuclideanManifold<3>{}};
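+
+The resulting manifold can then be associated with a parameter block in
+the usual way. A sketch, assuming an existing :class:`Problem` named
+``problem`` and a rigid transformation stored as a quaternion followed
+by a translation:
+
+.. code-block:: c++
+
+   double camera[7] = {1, 0, 0, 0, 0, 0, 0};
+   problem.AddParameterBlock(
+       camera, 7,
+       new ProductManifold<QuaternionManifold, EuclideanManifold<3>>);
+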
-:class:`AutoDiffLocalParameterization`
-======================================
+:class:`QuaternionManifold`
+---------------------------
-.. class:: AutoDiffLocalParameterization
+.. class:: QuaternionManifold
- :class:`AutoDiffLocalParameterization` does for
- :class:`LocalParameterization` what :class:`AutoDiffCostFunction`
- does for :class:`CostFunction`. It allows the user to define a
- templated functor that implements the
- :func:`LocalParameterization::Plus` operation and it uses automatic
- differentiation to implement the computation of the Jacobian.
+.. NOTE::
- To get an auto differentiated local parameterization, you must
- define a class with a templated operator() (a functor) that computes
+ If you are using ``Eigen`` quaternions, then you should use
+ :class:`EigenQuaternionManifold` instead because ``Eigen`` uses a
+ different memory layout for its Quaternions.
- .. math:: x' = \boxplus(x, \Delta x),
+Manifold for a Hamilton `Quaternion
+<https://en.wikipedia.org/wiki/Quaternion>`_. Quaternions are a three
+dimensional manifold represented as unit norm 4-vectors, i.e.
- For example, Quaternions have a three dimensional local
- parameterization. Its plus operation can be implemented as (taken
- from `internal/ceres/autodiff_local_parameterization_test.cc
- <https://ceres-solver.googlesource.com/ceres-solver/+/master/internal/ceres/autodiff_local_parameterization_test.cc>`_
- )
+.. math:: q = \left [\begin{matrix}q_0,& q_1,& q_2,& q_3\end{matrix}\right], \quad \|q\| = 1
- .. code-block:: c++
+is the ambient space representation. Here :math:`q_0` is the scalar
+part. :math:`q_1` is the coefficient of :math:`i`, :math:`q_2` is the
+coefficient of :math:`j`, and :math:`q_3` is the coefficient of
+:math:`k`, where:
- struct QuaternionPlus {
- template<typename T>
- bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
- const T squared_norm_delta =
- delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
+.. math::
- T q_delta[4];
- if (squared_norm_delta > 0.0) {
- T norm_delta = sqrt(squared_norm_delta);
- const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
- q_delta[0] = cos(norm_delta);
- q_delta[1] = sin_delta_by_delta * delta[0];
- q_delta[2] = sin_delta_by_delta * delta[1];
- q_delta[3] = sin_delta_by_delta * delta[2];
- } else {
- // We do not just use q_delta = [1,0,0,0] here because that is a
- // constant and when used for automatic differentiation will
- // lead to a zero derivative. Instead we take a first order
- // approximation and evaluate it at zero.
- q_delta[0] = T(1.0);
- q_delta[1] = delta[0];
- q_delta[2] = delta[1];
- q_delta[3] = delta[2];
- }
+ \begin{align*}
+ i\times j &= k,\\
+ j\times k &= i,\\
+ k\times i &= j,\\
+ i\times i &= -1,\\
+ j\times j &= -1,\\
+ k\times k &= -1.
+ \end{align*}
- Quaternionproduct(q_delta, x, x_plus_delta);
- return true;
- }
- };
+The tangent space is three dimensional and the :math:`\boxplus` and
+:math:`\boxminus` operators are defined in term of :math:`\exp` and
+:math:`\log` operations.
- Given this struct, the auto differentiated local
- parameterization can now be constructed as
+.. math::
+ \begin{align*}
+ \boxplus(x, \Delta) &= \exp\left(\Delta\right) \otimes x \\
+ \boxminus(y,x) &= \log\left(y \otimes x^{-1}\right)
+ \end{align*}
- .. code-block:: c++
+Where :math:`\otimes` is the `Quaternion product
+<https://en.wikipedia.org/wiki/Quaternion#Hamilton_product>`_ and
+since :math:`x` is a unit quaternion, :math:`x^{-1} = [\begin{matrix}
+q_0,& -q_1,& -q_2,& -q_3\end{matrix}]`. Given a vector :math:`\Delta
+\in \mathbb{R}^3`,
- LocalParameterization* local_parameterization =
- new AutoDiffLocalParameterization<QuaternionPlus, 4, 3>;
- | |
- Global Size ---------------+ |
- Local Size -------------------+
+.. math::
+   \exp(\Delta) = \left[ \begin{matrix}
+   \cos\left(\|\Delta\|\right)\\
+   \frac{\displaystyle \sin\left(\|\Delta\|\right)}{\displaystyle \|\Delta\|} \Delta
+   \end{matrix} \right]
+and given a unit quaternion :math:`q = \left [\begin{matrix}q_0,& q_1,& q_2,& q_3\end{matrix}\right]`
+
+.. math::
+
+ \log(q) = \frac{\operatorname{atan2}\left(\sqrt{1-q_0^2},q_0\right)}{\sqrt{1-q_0^2}} \left [\begin{matrix}q_1,& q_2,& q_3\end{matrix}\right]
+
+
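+A typical use is to associate a :class:`QuaternionManifold` with a four
+dimensional parameter block storing the quaternion in scalar first
+order, :math:`[q_0, q_1, q_2, q_3]`. A sketch, assuming an existing
+:class:`Problem` named ``problem``:
+
+.. code-block:: c++
+
+   double q[4] = {1.0, 0.0, 0.0, 0.0};  // Identity rotation.
+   problem.AddParameterBlock(q, 4, new ceres::QuaternionManifold);
+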
+:class:`EigenQuaternionManifold`
+--------------------------------
+
+.. class:: EigenQuaternionManifold
+
+Implements the quaternion manifold for `Eigen's
+<http://eigen.tuxfamily.org/index.php?title=Main_Page>`_
+representation of the Hamilton quaternion. Geometrically it is exactly
+the same as the :class:`QuaternionManifold` defined above. However,
+Eigen uses a different internal memory layout for the elements of the
+quaternion than what is commonly used. It stores the quaternion in
+memory as :math:`[q_1, q_2, q_3, q_0]` or :math:`[x, y, z, w]` where
+the real (scalar) part is last.
+
+Since Ceres operates on parameter blocks which are raw double pointers
+this difference is important and requires a different manifold.
+
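+For example, the coefficients of an ``Eigen::Quaterniond`` can be used
+directly as a parameter block. A sketch, assuming an existing
+:class:`Problem` named ``problem``:
+
+.. code-block:: c++
+
+   Eigen::Quaterniond q = Eigen::Quaterniond::Identity();
+   // q.coeffs() stores [x, y, z, w], which is the layout
+   // EigenQuaternionManifold expects.
+   problem.AddParameterBlock(q.coeffs().data(), 4,
+                             new ceres::EigenQuaternionManifold);
+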
+:class:`SphereManifold`
+-----------------------
+
+.. class:: SphereManifold
+
+This provides a manifold on a sphere, meaning that the norm of the
+vector stays the same. Such cases often arise in Structure from Motion
+problems. One example where they are used is in representing points
+whose triangulation is ill-conditioned. Here it is advantageous to use
+an over-parameterization since homogeneous vectors can represent
+points at infinity.
+
+The ambient space dimension is required to be greater than 1.
+
+The class works with dynamic and static ambient space dimensions. If
+the ambient space dimension is known at compile time use
+
+.. code-block:: c++
+
+ SphereManifold<3> manifold;
+
+If the ambient space dimension is not known at compile time the
+template parameter needs to be set to ``ceres::DYNAMIC`` and the actual
+dimension needs to be provided as a constructor argument:
+
+.. code-block:: c++
+
+ SphereManifold<ceres::DYNAMIC> manifold(ambient_dim);
+
+For more details, please see Section B.2 (p.25) in [Hertzberg]_
+
+
+:class:`LineManifold`
+---------------------
+
+.. class:: LineManifold
+
+This class provides a manifold for lines, where the line is defined
+using an origin point and a direction vector. So the ambient size
+needs to be two times the dimension of the space in which the line
+lives. The first half of the parameter block is interpreted as the
+origin point and the second half as the direction. This manifold is a
+special case of the `Affine Grassmannian manifold
+<https://en.wikipedia.org/wiki/Affine_Grassmannian_(manifold)>`_ for
+the case :math:`\operatorname{Graff}_1(R^n)`.
+
+Note that this is a manifold for a line, rather than a point
+constrained to lie on a line. It is useful when one wants to optimize
+over the space of lines. For example, given :math:`n` distinct points
+in 3D (measurements) we want to find the line that minimizes the sum
+of squared distances to all the points.
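+
+For example, a line in 3D is a 6 dimensional parameter block: the
+origin followed by the direction. A sketch, assuming the templated form
+``LineManifold<AmbientSpaceDimension>`` (analogous to
+:class:`SphereManifold`) and an existing :class:`Problem` named
+``problem``:
+
+.. code-block:: c++
+
+   // Origin at (0, 0, 0), direction along the x axis.
+   double line[6] = {0, 0, 0, 1, 0, 0};
+   problem.AddParameterBlock(line, 6, new ceres::LineManifold<3>);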
+
+:class:`AutoDiffManifold`
+=========================
+
+.. class:: AutoDiffManifold
+
+Create a :class:`Manifold` with Jacobians computed via automatic
+differentiation.
+
+To get an auto differentiated manifold, you must define a Functor with
+templated ``Plus`` and ``Minus`` functions that compute:
+
+.. code-block:: c++
+
+ x_plus_delta = Plus(x, delta);
+ y_minus_x = Minus(y, x);
+
+Where, ``x``, ``y`` and ``x_plus_delta`` are vectors on the manifold in
+the ambient space (so they are ``kAmbientSize`` vectors) and
+``delta``, ``y_minus_x`` are vectors in the tangent space (so they are
+``kTangentSize`` vectors).
+
+The Functor should have the signature:
+
+.. code-block:: c++
+
+ struct Functor {
+ template <typename T>
+ bool Plus(const T* x, const T* delta, T* x_plus_delta) const;
+
+ template <typename T>
+ bool Minus(const T* y, const T* x, T* y_minus_x) const;
+ };
+
+
+Observe that the ``Plus`` and ``Minus`` operations are templated on
+the parameter ``T``. The autodiff framework substitutes appropriate
+``Jet`` objects for ``T`` in order to compute the derivative when
+necessary. This is the same mechanism that is used to compute
+derivatives when using :class:`AutoDiffCostFunction`.
+
+``Plus`` and ``Minus`` should return true if the computation is
+successful and false otherwise, in which case the result will not be
+used.
+
+Given this Functor, the corresponding :class:`Manifold` can be constructed as:
+
+.. code-block:: c++
+
+ AutoDiffManifold<Functor, kAmbientSize, kTangentSize> manifold;
+
+.. NOTE::
+
+ The following is only used for illustration purposes. Ceres Solver
+ ships with an optimized, production grade :class:`QuaternionManifold`
+ implementation.
+
+As a concrete example consider the case of `Quaternions
+<https://en.wikipedia.org/wiki/Quaternion>`_. Quaternions form a three
+dimensional manifold embedded in :math:`\mathbb{R}^4`, i.e. they have
+an ambient dimension of 4 and their tangent space has dimension 3. The
+following Functor defines the ``Plus`` and ``Minus`` operations on the
+Quaternion manifold. It assumes that the quaternions are laid out as
+``[w,x,y,z]`` in memory, i.e. the real or scalar part is the first
+coordinate.
+
+.. code-block:: c++
+
+ struct QuaternionFunctor {
+ template <typename T>
+ bool Plus(const T* x, const T* delta, T* x_plus_delta) const {
+ const T squared_norm_delta =
+ delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
+
+ T q_delta[4];
+ if (squared_norm_delta > T(0.0)) {
+ T norm_delta = sqrt(squared_norm_delta);
+ const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
+ q_delta[0] = cos(norm_delta);
+ q_delta[1] = sin_delta_by_delta * delta[0];
+ q_delta[2] = sin_delta_by_delta * delta[1];
+ q_delta[3] = sin_delta_by_delta * delta[2];
+ } else {
+ // We do not just use q_delta = [1,0,0,0] here because that is a
+ // constant and when used for automatic differentiation will
+ // lead to a zero derivative. Instead we take a first order
+ // approximation and evaluate it at zero.
+ q_delta[0] = T(1.0);
+ q_delta[1] = delta[0];
+ q_delta[2] = delta[1];
+ q_delta[3] = delta[2];
+ }
+
+ QuaternionProduct(q_delta, x, x_plus_delta);
+ return true;
+ }
+
+ template <typename T>
+ bool Minus(const T* y, const T* x, T* y_minus_x) const {
+ T minus_x[4] = {x[0], -x[1], -x[2], -x[3]};
+ T ambient_y_minus_x[4];
+ QuaternionProduct(y, minus_x, ambient_y_minus_x);
+ T u_norm = sqrt(ambient_y_minus_x[1] * ambient_y_minus_x[1] +
+ ambient_y_minus_x[2] * ambient_y_minus_x[2] +
+ ambient_y_minus_x[3] * ambient_y_minus_x[3]);
+ if (u_norm > 0.0) {
+ T theta = atan2(u_norm, ambient_y_minus_x[0]);
+ y_minus_x[0] = theta * ambient_y_minus_x[1] / u_norm;
+ y_minus_x[1] = theta * ambient_y_minus_x[2] / u_norm;
+ y_minus_x[2] = theta * ambient_y_minus_x[3] / u_norm;
+ } else {
+ // We do not use [0,0,0] here because even though the value part is
+ // a constant, the derivative part is not.
+ y_minus_x[0] = ambient_y_minus_x[1];
+ y_minus_x[1] = ambient_y_minus_x[2];
+ y_minus_x[2] = ambient_y_minus_x[3];
+ }
+ return true;
+ }
+ };
+
+
+Then given this struct, the auto differentiated Quaternion Manifold can now
+be constructed as
+
+.. code-block:: c++
+
+ Manifold* manifold = new AutoDiffManifold<QuaternionFunctor, 4, 3>;
:class:`Problem`
================
@@ -1581,10 +1872,10 @@
:func:`Problem::AddParameterBlock` explicitly adds a parameter
block to the :class:`Problem`. Optionally it allows the user to
- associate a :class:`LocalParameterization` object with the
- parameter block too. Repeated calls with the same arguments are
- ignored. Repeated calls with the same double pointer but a
- different size results in undefined behavior.
+ associate a :class:`Manifold` object with the parameter block
+ too. Repeated calls with the same arguments are ignored. Repeated
+ calls with the same double pointer but a different size result in
+ undefined behavior.
You can set any parameter block to be constant using
:func:`Problem::SetParameterBlockConstant` and undo this using
@@ -1604,17 +1895,16 @@
**Ownership**
:class:`Problem` by default takes ownership of the
- ``cost_function``, ``loss_function`` and ``local_parameterization``
- pointers. These objects remain live for the life of the
- :class:`Problem`. If the user wishes to keep control over the
- destruction of these objects, then they can do this by setting the
- corresponding enums in the :class:`Problem::Options` struct.
+ ``cost_function``, ``loss_function`` and ``manifold`` pointers. These
+ objects remain live for the life of the :class:`Problem`. If the user wishes
+ to keep control over the destruction of these objects, then they can do this
+ by setting the corresponding enums in the :class:`Problem::Options` struct.
- Note that even though the Problem takes ownership of ``cost_function``
- and ``loss_function``, it does not preclude the user from re-using
- them in another residual block. The destructor takes care to call
- delete on each ``cost_function`` or ``loss_function`` pointer only
- once, regardless of how many residual blocks refer to them.
+ Note that even though the Problem takes ownership of objects,
+ ``cost_function`` and ``loss_function``, it does not preclude the
+ user from re-using them in another residual block. Similarly the
+ same ``manifold`` object can be used with multiple parameter blocks. The
+ destructor takes care to call delete on each owned object exactly once.
.. class:: Problem::Options
@@ -1627,7 +1917,7 @@
This option controls whether the Problem object owns the cost
functions.
- If set to TAKE_OWNERSHIP, then the problem object will delete the
+ If set to ``TAKE_OWNERSHIP``, then the problem object will delete the
cost functions on destruction. The destructor is careful to delete
the pointers only once, since sharing cost functions is allowed.
@@ -1638,21 +1928,19 @@
This option controls whether the Problem object owns the loss
functions.
- If set to TAKE_OWNERSHIP, then the problem object will delete the
+ If set to ``TAKE_OWNERSHIP``, then the problem object will delete the
loss functions on destruction. The destructor is careful to delete
the pointers only once, since sharing loss functions is allowed.
-.. member:: Ownership Problem::Options::local_parameterization_ownership
+.. member:: Ownership Problem::Options::manifold_ownership
Default: ``TAKE_OWNERSHIP``
- This option controls whether the Problem object owns the local
- parameterizations.
+ This option controls whether the Problem object owns the manifolds.
- If set to TAKE_OWNERSHIP, then the problem object will delete the
- local parameterizations on destruction. The destructor is careful
- to delete the pointers only once, since sharing local
- parameterizations is allowed.
+ If set to ``TAKE_OWNERSHIP``, then the problem object will delete the
+ manifolds on destruction. The destructor is careful to delete the
+ pointers only once, since sharing manifolds is allowed.
.. member:: bool Problem::Options::enable_fast_removal
@@ -1689,12 +1977,13 @@
overhead you want to avoid, then you can set
disable_all_safety_checks to true.
- **WARNING** Do not set this to true, unless you are absolutely
- sure of what you are doing.
+ .. warning::
+ Do not set this to true, unless you are absolutely sure of what you are
+ doing.
.. member:: Context* Problem::Options::context
- Default: `nullptr`
+ Default: ``nullptr``
A Ceres global context to use for solving this problem. This may
help to reduce computation time as Ceres can reuse expensive
@@ -1705,7 +1994,7 @@
.. member:: EvaluationCallback* Problem::Options::evaluation_callback
- Default: `nullptr`
+ Default: ``nullptr``
Using this callback interface, Ceres will notify you when it is
about to evaluate the residuals or Jacobians.
@@ -1725,11 +2014,11 @@
Evaluation callbacks are incompatible with inner iterations. So
calling Solve with
- :member:`Solver::Options::use_inner_iterations` set to `true`
+ :member:`Solver::Options::use_inner_iterations` set to ``true``
on a :class:`Problem` with a non-null evaluation callback is an
error.
-.. function:: ResidualBlockId Problem::AddResidualBlock(CostFunction* cost_function, LossFunction* loss_function, const vector<double*> parameter_blocks)
+.. function:: ResidualBlockId Problem::AddResidualBlock(CostFunction* cost_function, LossFunction* loss_function, const std::vector<double*> parameter_blocks)
.. function:: template <typename Ts...> ResidualBlockId Problem::AddResidualBlock(CostFunction* cost_function, LossFunction* loss_function, double* x0, Ts... xs)
@@ -1738,7 +2027,7 @@
parameter blocks it expects. The function checks that these match
the sizes of the parameter blocks listed in parameter_blocks. The
program aborts if a mismatch is detected. loss_function can be
- `nullptr`, in which case the cost of the term is just the squared
+ ``nullptr``, in which case the cost of the term is just the squared
norm of the residuals.
The parameter blocks may be passed together as a
@@ -1756,11 +2045,12 @@
keep control over the destruction of these objects, then they can
do this by setting the corresponding enums in the Options struct.
- Note: Even though the Problem takes ownership of cost_function
- and loss_function, it does not preclude the user from re-using
- them in another residual block. The destructor takes care to call
- delete on each cost_function or loss_function pointer only once,
- regardless of how many residual blocks refer to them.
+ .. note::
+ Even though the Problem takes ownership of ``cost_function``
+ and ``loss_function``, it does not preclude the user from re-using
+ them in another residual block. The destructor takes care to call
+ delete on each cost_function or loss_function pointer only once,
+ regardless of how many residual blocks refer to them.
Example usage:
@@ -1769,9 +2059,9 @@
double x1[] = {1.0, 2.0, 3.0};
double x2[] = {1.0, 2.0, 5.0, 6.0};
double x3[] = {3.0, 6.0, 2.0, 5.0, 1.0};
- vector<double*> v1;
+ std::vector<double*> v1;
v1.push_back(x1);
- vector<double*> v2;
+ std::vector<double*> v2;
v2.push_back(x2);
v2.push_back(x1);
@@ -1782,12 +2072,19 @@
problem.AddResidualBlock(new MyUnaryCostFunction(...), nullptr, v1);
problem.AddResidualBlock(new MyBinaryCostFunction(...), nullptr, v2);
-.. function:: void Problem::AddParameterBlock(double* values, int size, LocalParameterization* local_parameterization)
+.. function:: void Problem::AddParameterBlock(double* values, int size, Manifold* manifold)
- Add a parameter block with appropriate size to the problem.
+ Add a parameter block with appropriate size and Manifold to the
+ problem. It is okay for ``manifold`` to be ``nullptr``.
+
Repeated calls with the same arguments are ignored. Repeated calls
- with the same double pointer but a different size results in
- undefined behavior.
+ with the same double pointer but a different size result in a crash
+ (unless :member:`Solver::Options::disable_all_safety_checks` is set to ``true``).
+
+ Repeated calls with the same double pointer and size but a different
+ :class:`Manifold` are equivalent to calling ``SetManifold(manifold)``,
+ i.e., any previously associated :class:`Manifold` object will be replaced
+ by ``manifold``.
.. function:: void Problem::AddParameterBlock(double* values, int size)
@@ -1798,35 +2095,46 @@
.. function:: void Problem::RemoveResidualBlock(ResidualBlockId residual_block)
- Remove a residual block from the problem. Any parameters that the residual
- block depends on are not removed. The cost and loss functions for the
- residual block will not get deleted immediately; won't happen until the
- problem itself is deleted. If Problem::Options::enable_fast_removal is
- true, then the removal is fast (almost constant time). Otherwise, removing a
- residual block will incur a scan of the entire Problem object to verify that
- the residual_block represents a valid residual in the problem.
+ Remove a residual block from the problem.
- **WARNING:** Removing a residual or parameter block will destroy
- the implicit ordering, rendering the jacobian or residuals returned
- from the solver uninterpretable. If you depend on the evaluated
- jacobian, do not use remove! This may change in a future release.
- Hold the indicated parameter block constant during optimization.
+ Since residual blocks are allowed to share cost function and loss
+ function objects, Ceres Solver uses a reference counting
+ mechanism. So when a residual block is deleted, the reference counts
+ of the corresponding cost function and loss function objects are
+ decreased, and when a count reaches zero, the corresponding object is deleted.
+
+ If :member:`Problem::Options::enable_fast_removal` is ``true``, then the removal
+ is fast (almost constant time). Otherwise it is linear, requiring a
+ scan of the entire problem.
+
+ Removing a residual block has no effect on the parameter blocks
+ that the problem depends on.
+
+ .. warning::
+ Removing a residual or parameter block will destroy the implicit
+ ordering, rendering the jacobian or residuals returned from the solver
+ uninterpretable. If you depend on the evaluated jacobian, do not use
+ remove! This may change in a future release.
.. function:: void Problem::RemoveParameterBlock(const double* values)
- Remove a parameter block from the problem. The parameterization of
- the parameter block, if it exists, will persist until the deletion
- of the problem (similar to cost/loss functions in residual block
- removal). Any residual blocks that depend on the parameter are also
- removed, as described above in RemoveResidualBlock(). If
- Problem::Options::enable_fast_removal is true, then
- the removal is fast (almost constant time). Otherwise, removing a
- parameter block will incur a scan of the entire Problem object.
+ Remove a parameter block from the problem. Any residual blocks that
+ depend on the parameter are also removed, as described above in
+ :func:`RemoveResidualBlock()`.
- **WARNING:** Removing a residual or parameter block will destroy
- the implicit ordering, rendering the jacobian or residuals returned
- from the solver uninterpretable. If you depend on the evaluated
- jacobian, do not use remove! This may change in a future release.
+ The manifold of the parameter block, if it exists, will persist until the
+ deletion of the problem.
+
+ If :member:`Problem::Options::enable_fast_removal` is ``true``, then the removal
+ is fast (almost constant time). Otherwise, removing a parameter
+ block will scan the entire Problem.
+
+ .. warning::
+ Removing a residual or parameter block will destroy the implicit
+ ordering, rendering the jacobian or residuals returned from the solver
+ uninterpretable. If you depend on the evaluated jacobian, do not use
+ remove! This may change in a future release.
.. function:: void Problem::SetParameterBlockConstant(const double* values)
@@ -1841,23 +2149,34 @@
Returns ``true`` if a parameter block is set constant, and false
otherwise. A parameter block may be set constant in two ways:
either by calling ``SetParameterBlockConstant`` or by associating a
- ``LocalParameterization`` with a zero dimensional tangent space
- with it.
+ :class:`Manifold` with a zero dimensional tangent space with it.
-.. function:: void Problem::SetParameterization(double* values, LocalParameterization* local_parameterization)
+.. function:: void SetManifold(double* values, Manifold* manifold);
- Set the local parameterization for one of the parameter blocks.
- The local_parameterization is owned by the Problem by default. It
- is acceptable to set the same parameterization for multiple
- parameters; the destructor is careful to delete local
- parameterizations only once. Calling `SetParameterization` with
- `nullptr` will clear any previously set parameterization.
+ Set the :class:`Manifold` for the parameter block. Calling
+ :func:`Problem::SetManifold` with ``nullptr`` will clear any
+ previously set :class:`Manifold` for the parameter block.
-.. function:: LocalParameterization* Problem::GetParameterization(const double* values) const
+ Repeated calls will result in any previously associated
+ :class:`Manifold` object being replaced by ``manifold``.
- Get the local parameterization object associated with this
- parameter block. If there is no parameterization object associated
- then `nullptr` is returned
+ ``manifold`` is owned by :class:`Problem` by default (See
+ :class:`Problem::Options` to override this behaviour).
+
+ It is acceptable to set the same :class:`Manifold` for multiple
+ parameter blocks.
+
+.. function:: const Manifold* GetManifold(const double* values) const;
+
+ Get the :class:`Manifold` object associated with this parameter block.
+
+ If there is no :class:`Manifold` object associated with the parameter block,
+ then ``nullptr`` is returned.
+
+.. function:: bool HasManifold(const double* values) const;
+
+ Returns ``true`` if a :class:`Manifold` is associated with this parameter
+ block, ``false`` otherwise.
.. function:: void Problem::SetParameterLowerBound(double* values, int index, double lower_bound)
@@ -1909,43 +2228,42 @@
The size of the parameter block.
-.. function:: int Problem::ParameterBlockLocalSize(const double* values) const
+.. function:: int Problem::ParameterBlockTangentSize(const double* values) const
- The size of local parameterization for the parameter block. If
- there is no local parameterization associated with this parameter
- block, then ``ParameterBlockLocalSize`` = ``ParameterBlockSize``.
+ The dimension of the tangent space of the :class:`Manifold` for the
+ parameter block. If there is no :class:`Manifold` associated with this
+ parameter block, then ``ParameterBlockTangentSize = ParameterBlockSize``.
.. function:: bool Problem::HasParameterBlock(const double* values) const
Is the given parameter block present in the problem or not?
-.. function:: void Problem::GetParameterBlocks(vector<double*>* parameter_blocks) const
+.. function:: void Problem::GetParameterBlocks(std::vector<double*>* parameter_blocks) const
Fills the passed ``parameter_blocks`` vector with pointers to the
parameter blocks currently in the problem. After this call,
``parameter_block.size() == NumParameterBlocks``.
-.. function:: void Problem::GetResidualBlocks(vector<ResidualBlockId>* residual_blocks) const
+.. function:: void Problem::GetResidualBlocks(std::vector<ResidualBlockId>* residual_blocks) const
Fills the passed `residual_blocks` vector with pointers to the
residual blocks currently in the problem. After this call,
`residual_blocks.size() == NumResidualBlocks`.
-.. function:: void Problem::GetParameterBlocksForResidualBlock(const ResidualBlockId residual_block, vector<double*>* parameter_blocks) const
+.. function:: void Problem::GetParameterBlocksForResidualBlock(const ResidualBlockId residual_block, std::vector<double*>* parameter_blocks) const
Get all the parameter blocks that depend on the given residual
block.
-.. function:: void Problem::GetResidualBlocksForParameterBlock(const double* values, vector<ResidualBlockId>* residual_blocks) const
+.. function:: void Problem::GetResidualBlocksForParameterBlock(const double* values, std::vector<ResidualBlockId>* residual_blocks) const
Get all the residual blocks that depend on the given parameter
block.
- If `Problem::Options::enable_fast_removal` is
- `true`, then getting the residual blocks is fast and depends only
+ If :member:`Problem::Options::enable_fast_removal` is
+ ``true``, then getting the residual blocks is fast and depends only
on the number of residual blocks. Otherwise, getting the residual
- blocks for a parameter block will incur a scan of the entire
- :class:`Problem` object.
+ blocks for a parameter block will scan the entire problem.
.. function:: const CostFunction* Problem::GetCostFunctionForResidualBlock(const ResidualBlockId residual_block) const
@@ -1974,11 +2292,10 @@
function returns false, the caller should expect the output
memory locations to have been modified.
- The returned cost and jacobians have had robustification and local
- parameterizations applied already; for example, the jacobian for a
- 4-dimensional quaternion parameter using the
- :class:`QuaternionParameterization` is ``num_residuals x 3``
- instead of ``num_residuals x 4``.
+ The returned cost and jacobians have had robustification and
+ :class:`Manifold` applied already; for example, the jacobian for a
+ 4-dimensional quaternion parameter using the :class:`QuaternionManifold` is
+ ``num_residuals x 3`` instead of ``num_residuals x 4``.
``apply_loss_function`` as the name implies allows the user to
switch the application of the loss function on and off.
@@ -2014,10 +2331,10 @@
:func:`Problem::EvaluateResidualBlock`).
-.. function:: bool Problem::Evaluate(const Problem::EvaluateOptions& options, double* cost, vector<double>* residuals, vector<double>* gradient, CRSMatrix* jacobian)
+.. function:: bool Problem::Evaluate(const Problem::EvaluateOptions& options, double* cost, std::vector<double>* residuals, std::vector<double>* gradient, CRSMatrix* jacobian)
Evaluate a :class:`Problem`. Any of the output pointers can be
- `nullptr`. Which residual blocks and parameter blocks are used is
+ ``nullptr``. Which residual blocks and parameter blocks are used is
controlled by the :class:`Problem::EvaluateOptions` struct below.
.. NOTE::
@@ -2048,10 +2365,10 @@
.. NOTE::
- If no local parameterizations are used, then the size of
- the gradient vector is the sum of the sizes of all the parameter
- blocks. If a parameter block has a local parameterization, then
- it contributes "LocalSize" entries to the gradient vector.
+ If no :class:`Manifold` objects are used, then the size of the gradient
+ vector is the sum of the sizes of all the parameter blocks. If a parameter
+ block has a :class:`Manifold`, then it contributes ``TangentSize`` entries
+ to the gradient vector.
.. NOTE::
@@ -2070,7 +2387,7 @@
Options struct that is used to control :func:`Problem::Evaluate`.
-.. member:: vector<double*> Problem::EvaluateOptions::parameter_blocks
+.. member:: std::vector<double*> Problem::EvaluateOptions::parameter_blocks
The set of parameter blocks for which evaluation should be
performed. This vector determines the order in which parameter
@@ -2086,7 +2403,7 @@
should NOT point to new memory locations. Bad things will happen if
you do.
-.. member:: vector<ResidualBlockId> Problem::EvaluateOptions::residual_blocks
+.. member:: std::vector<ResidualBlockId> Problem::EvaluateOptions::residual_blocks
The set of residual blocks for which evaluation should be
performed. This vector determines the order in which the residuals
@@ -2105,7 +2422,7 @@
.. member:: int Problem::EvaluateOptions::num_threads
- Number of threads to use. (Requires OpenMP).
+ Number of threads to use.
:class:`EvaluationCallback`
@@ -2120,9 +2437,9 @@
class EvaluationCallback {
public:
- virtual ~EvaluationCallback() {}
- virtual void PrepareForEvaluation()(bool evaluate_jacobians
- bool new_evaluation_point) = 0;
+ virtual ~EvaluationCallback();
+ virtual void PrepareForEvaluation(bool evaluate_jacobians,
+ bool new_evaluation_point) = 0;
};
.. function:: void EvaluationCallback::PrepareForEvaluation(bool evaluate_jacobians, bool new_evaluation_point)
@@ -2131,14 +2448,14 @@
every time, and once before it computes the residuals and/or the
Jacobians.
- User parameters (the double* values provided by the us) are fixed
+ User parameters (the double* values provided by the user) are fixed
until the next call to
:func:`EvaluationCallback::PrepareForEvaluation`. If
``new_evaluation_point == true``, then this is a new point that is
different from the last evaluated point. Otherwise, it is the same
point that was evaluated previously (either Jacobian or residual)
and the user can use cached results from previous evaluations. If
- ``evaluate_jacobians`` is true, then Ceres will request Jacobians
+ ``evaluate_jacobians`` is ``true``, then Ceres will request Jacobians
in the upcoming cost evaluation.
Using this callback interface, Ceres can notify you when it is
@@ -2165,7 +2482,10 @@
to use a global shared variable (discouraged; bug-prone). As far
as Ceres is concerned, it is evaluating cost functions like any
other; it just so happens that behind the scenes the cost functions
- reuse pre-computed data to execute faster.
+ reuse pre-computed data to execute faster. See
+ `examples/evaluation_callback_example.cc
+ <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/evaluation_callback_example.cc>`_
+ for an example.
See ``evaluation_callback_test.cc`` for code that explicitly
verifies the preconditions between
@@ -2207,14 +2527,14 @@
.. function:: template <typename T> void RotationMatrixToAngleAxis(T const * R, T * angle_axis)
.. function:: template <typename T> void AngleAxisToRotationMatrix(T const * angle_axis, T * R)
- Conversions between 3x3 rotation matrix with given column and row strides and
+ Conversions between :math:`3\times3` rotation matrix with given column and row strides and
axis-angle rotation representations. The functions that take a pointer to T instead
of a MatrixAdapter assume a column major representation with unit row stride and a column stride of 3.
.. function:: template <typename T, int row_stride, int col_stride> void EulerAnglesToRotationMatrix(const T* euler, const MatrixAdapter<T, row_stride, col_stride>& R)
.. function:: template <typename T> void EulerAnglesToRotationMatrix(const T* euler, int row_stride, T* R)
- Conversions between 3x3 rotation matrix with given column and row strides and
+ Conversions between :math:`3\times3` rotation matrix with given column and row strides and
Euler angle (in degrees) rotation representations.
The {pitch,roll,yaw} Euler angles are rotations around the {x,y,z}
@@ -2228,7 +2548,7 @@
.. function:: template <typename T, int row_stride, int col_stride> void QuaternionToScaledRotation(const T q[4], const MatrixAdapter<T, row_stride, col_stride>& R)
.. function:: template <typename T> void QuaternionToScaledRotation(const T q[4], T R[3 * 3])
- Convert a 4-vector to a 3x3 scaled rotation matrix.
+ Convert a 4-vector to a :math:`3\times3` scaled rotation matrix.
The choice of rotation is such that the quaternion
:math:`\begin{bmatrix} 1 &0 &0 &0\end{bmatrix}` goes to an identity
@@ -2242,8 +2562,8 @@
which corresponds to a Rodrigues approximation, the last matrix
being the cross-product matrix of :math:`\begin{bmatrix} a& b&
- c\end{bmatrix}`. Together with the property that :math:`R(q1 * q2)
- = R(q1) * R(q2)` this uniquely defines the mapping from :math:`q` to
+ c\end{bmatrix}`. Together with the property that :math:`R(q_1 \otimes q_2)
+ = R(q_1) R(q_2)` this uniquely defines the mapping from :math:`q` to
:math:`R`.
In the function that accepts a pointer to T instead of a MatrixAdapter,
@@ -2252,7 +2572,7 @@
No normalization of the quaternion is performed, i.e.
:math:`R = \|q\|^2 Q`, where :math:`Q` is an orthonormal matrix
- such that :math:`\det(Q) = 1` and :math:`Q*Q' = I`.
+ such that :math:`\det(Q) = 1` and :math:`QQ' = I`.
.. function:: template <typename T> void QuaternionToRotation(const T q[4], const MatrixAdapter<T, row_stride, col_stride>& R)
@@ -2279,9 +2599,9 @@
.. function:: template <typename T> void QuaternionProduct(const T z[4], const T w[4], T zw[4])
- .. math:: zw = z * w
+ .. math:: zw = z \otimes w
- where :math:`*` is the Quaternion product between 4-vectors.
+ where :math:`\otimes` is the Quaternion product between 4-vectors.
.. function:: template <typename T> void CrossProduct(const T x[3], const T y[3], T x_cross_y[3])
diff --git a/docs/source/nnls_solving.rst b/docs/source/nnls_solving.rst
index 285df3a..184b94b 100644
--- a/docs/source/nnls_solving.rst
+++ b/docs/source/nnls_solving.rst
@@ -1,3 +1,4 @@
+.. highlight:: c++
.. default-domain:: cpp
@@ -12,10 +13,10 @@
Introduction
============
-Effective use of Ceres requires some familiarity with the basic
+Effective use of Ceres Solver requires some familiarity with the basic
components of a non-linear least squares solver, so before we describe
how to configure and use the solver, we will take a brief look at how
-some of the core optimization algorithms in Ceres work.
+some of the core optimization algorithms in Ceres Solver work.
Let :math:`x \in \mathbb{R}^n` be an :math:`n`-dimensional vector of
variables, and
@@ -27,15 +28,15 @@
L \le x \le U
:label: nonlinsq
-Where, :math:`L` and :math:`U` are lower and upper bounds on the
-parameter vector :math:`x`.
+Where, :math:`L` and :math:`U` are vector lower and upper bounds on
+the parameter vector :math:`x`. The inequality holds component-wise.
Since the efficient global minimization of :eq:`nonlinsq` for
general :math:`F(x)` is an intractable problem, we will have to settle
for finding a local minimum.
In the following, the Jacobian :math:`J(x)` of :math:`F(x)` is an
-:math:`m\times n` matrix, where :math:`J_{ij}(x) = \partial_j f_i(x)`
+:math:`m\times n` matrix, where :math:`J_{ij}(x) = D_j f_i(x)`
and the gradient vector is :math:`g(x) = \nabla \frac{1}{2}\|F(x)\|^2
= J(x)^\top F(x)`.
@@ -75,7 +76,7 @@
Trust region methods are in some sense dual to line search methods:
trust region methods first choose a step size (the size of the trust
region) and then a step direction while line search methods first
-choose a step direction and then a step size. Ceres implements
+choose a step direction and then a step size. Ceres Solver implements
multiple algorithms in both categories.
.. _section-trust-region-methods:
@@ -152,8 +153,9 @@
solving an unconstrained optimization of the form
.. math:: \arg\min_{\Delta x} \frac{1}{2}\|J(x)\Delta x + F(x)\|^2 +\lambda \|D(x)\Delta x\|^2
+ :label: lsqr-naive
-Where, :math:`\lambda` is a Lagrange multiplier that is inverse
+Where, :math:`\lambda` is a Lagrange multiplier that is inversely
related to :math:`\mu`. In Ceres, we solve for
.. math:: \arg\min_{\Delta x} \frac{1}{2}\|J(x)\Delta x + F(x)\|^2 + \frac{1}{\mu} \|D(x)\Delta x\|^2
@@ -162,39 +164,47 @@
The matrix :math:`D(x)` is a non-negative diagonal matrix, typically
the square root of the diagonal of the matrix :math:`J(x)^\top J(x)`.
-Before going further, let us make some notational simplifications. We
-will assume that the matrix :math:`\frac{1}{\sqrt{\mu}} D` has been concatenated
-at the bottom of the matrix :math:`J` and similarly a vector of zeros
-has been added to the bottom of the vector :math:`f` and the rest of
-our discussion will be in terms of :math:`J` and :math:`F`, i.e, the
-linear least squares problem.
+Before going further, let us make some notational simplifications.
+
+We will assume that the matrix :math:`\frac{1}{\sqrt{\mu}} D` has been
+concatenated at the bottom of the matrix :math:`J(x)` and a
+corresponding vector of zeroes has been added to the bottom of
+:math:`F(x)`, i.e.:
+
+.. math:: J(x) = \begin{bmatrix} J(x) \\ \frac{1}{\sqrt{\mu}} D
+ \end{bmatrix},\quad F(x) = \begin{bmatrix} F(x) \\ 0
+ \end{bmatrix}.
+
+This allows us to re-write :eq:`lsqr` as
.. math:: \min_{\Delta x} \frac{1}{2} \|J(x)\Delta x + F(x)\|^2 .
:label: simple
-For all but the smallest problems the solution of :eq:`simple` in
-each iteration of the Levenberg-Marquardt algorithm is the dominant
-computational cost in Ceres. Ceres provides a number of different
-options for solving :eq:`simple`. There are two major classes of
-methods - factorization and iterative.
+and only talk about :math:`J(x)` and :math:`F(x)` going forward.
+
+For all but the smallest problems the solution of :eq:`simple` in each
+iteration of the Levenberg-Marquardt algorithm is the dominant
+computational cost. Ceres provides a number of different options for
+solving :eq:`simple`. There are two major classes of methods -
+factorization and iterative.
The factorization methods are based on computing an exact solution of
-:eq:`lsqr` using a Cholesky or a QR factorization and lead to an exact
-step Levenberg-Marquardt algorithm. But it is not clear if an exact
-solution of :eq:`lsqr` is necessary at each step of the LM algorithm
-to solve :eq:`nonlinsq`. In fact, we have already seen evidence
-that this may not be the case, as :eq:`lsqr` is itself a regularized
-version of :eq:`linearapprox`. Indeed, it is possible to
-construct non-linear optimization algorithms in which the linearized
-problem is solved approximately. These algorithms are known as inexact
-Newton or truncated Newton methods [NocedalWright]_.
+:eq:`lsqr` using a Cholesky or a QR factorization and lead to the so
+called exact step Levenberg-Marquardt algorithm. But it is not clear
+if an exact solution of :eq:`lsqr` is necessary at each step of the
+Levenberg-Mardquardt algorithm. We have already seen evidence that
+this may not be the case, as :eq:`lsqr` is itself a regularized
+version of :eq:`linearapprox`. Indeed, it is possible to construct
+non-linear optimization algorithms in which the linearized problem is
+solved approximately. These algorithms are known as inexact Newton or
+truncated Newton methods [NocedalWright]_.
An inexact Newton method requires two ingredients. First, a cheap
method for approximately solving systems of linear
equations. Typically an iterative linear solver like the Conjugate
-Gradients method is used for this
-purpose [NocedalWright]_. Second, a termination rule for
-the iterative solver. A typical termination rule is of the form
+Gradients method is used for this purpose [NocedalWright]_. Second, a
+termination rule for the iterative solver. A typical termination rule
+is of the form
.. math:: \|H(x) \Delta x + g(x)\| \leq \eta_k \|g(x)\|.
:label: inexact
@@ -212,14 +222,18 @@
iterative linear solver, the inexact step Levenberg-Marquardt
algorithm is used.
+We will talk more about the various linear solvers that you can use in
+:ref:`section-linear-solver`.
+
.. _section-dogleg:
Dogleg
------
Another strategy for solving the trust region problem :eq:`trp` was
-introduced by M. J. D. Powell. The key idea there is to compute two
-vectors
+introduced by
+`M. J. D. Powell <https://en.wikipedia.org/wiki/Michael_J._D._Powell>`_. The
+key idea there is to compute two vectors
.. math::
@@ -253,10 +267,14 @@
Levenberg-Marquardt solves the linear approximation from scratch with
a smaller value of :math:`\mu`. Dogleg on the other hand, only needs
to compute the interpolation between the Gauss-Newton and the Cauchy
-vectors, as neither of them depend on the value of :math:`\mu`.
+vectors, as neither of them depend on the value of :math:`\mu`. As a
+result the Dogleg method only solves one linear system per successful
+step, while Levenberg-Marquardt may need to solve an arbitrary number
+of linear systems before it can make progress [LourakisArgyros]_.
-The Dogleg method can only be used with the exact factorization based
-linear solvers.
+A disadvantage of the Dogleg implementation in Ceres Solver is that it
+can only be used with exact factorization based linear solvers.
.. _section-inner-iterations:
@@ -349,10 +367,10 @@
enables the non-monotonic trust region algorithm as described by Conn,
Gould & Toint in [Conn]_.
-Even though the value of the objective function may be larger
-than the minimum value encountered over the course of the
-optimization, the final parameters returned to the user are the
-ones corresponding to the minimum cost over all iterations.
+Even though the value of the objective function may be larger than the
+minimum value encountered over the course of the optimization, the
+final parameters returned to the user are the ones corresponding to
+the minimum cost over all iterations.
The option to take non-monotonic steps is available for all trust
region strategies.
@@ -363,11 +381,13 @@
Line Search Methods
===================
-The line search method in Ceres Solver cannot handle bounds
-constraints right now, so it can only be used for solving
-unconstrained problems.
+.. NOTE::
-Line search algorithms
+ The line search method in Ceres Solver cannot handle bounds
+ constraints right now, so it can only be used for solving
+ unconstrained problems.
+
+The basic line search algorithm looks something like this:
1. Given an initial point :math:`x`
2. :math:`\Delta x = -H^{-1}(x) g(x)`
@@ -387,7 +407,7 @@
direction :math:`\Delta x` and the method used for one dimensional
optimization along :math:`\Delta x`. The choice of :math:`H(x)` is the
primary source of computational complexity in these
-methods. Currently, Ceres Solver supports three choices of search
+methods. Currently, Ceres Solver supports four choices of search
directions, all aimed at large scale problems.
1. ``STEEPEST_DESCENT`` This corresponds to choosing :math:`H(x)` to
@@ -405,8 +425,8 @@
3. ``BFGS`` A generalization of the Secant method to multiple
dimensions in which a full, dense approximation to the inverse
Hessian is maintained and used to compute a quasi-Newton step
- [NocedalWright]_. BFGS is currently the best known general
- quasi-Newton algorithm.
+ [NocedalWright]_. ``BFGS`` and its limited memory variant ``LBFGS``
+ are currently the best known general quasi-Newton algorithms.
4. ``LBFGS`` A limited memory approximation to the full ``BFGS``
method in which the last `M` iterations are used to approximate the
@@ -414,26 +434,31 @@
[ByrdNocedal]_.
Currently Ceres Solver supports both a backtracking and interpolation
-based Armijo line search algorithm, and a sectioning / zoom
-interpolation (strong) Wolfe condition line search algorithm.
-However, note that in order for the assumptions underlying the
-``BFGS`` and ``LBFGS`` methods to be guaranteed to be satisfied the
-Wolfe line search algorithm should be used.
+based `Armijo line search algorithm
+<https://en.wikipedia.org/wiki/Backtracking_line_search>`_
+(``ARMIJO``), and a sectioning / zoom interpolation (strong) `Wolfe
+condition line search algorithm
+<https://en.wikipedia.org/wiki/Wolfe_conditions>`_ (``WOLFE``).
+
+.. NOTE::
+
+ In order for the assumptions underlying the ``BFGS`` and ``LBFGS``
+ methods to be satisfied the ``WOLFE`` algorithm must be used.
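+
+For example, one possible way to configure an ``LBFGS`` based line
+search solve (a sketch; the rest of the problem setup is assumed):
+
+.. code-block:: c++
+
+   ceres::Solver::Options options;
+   options.minimizer_type = ceres::LINE_SEARCH;
+   options.line_search_direction_type = ceres::LBFGS;
+   // WOLFE is the default and must be used with BFGS/LBFGS.
+   options.line_search_type = ceres::WOLFE;
+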
.. _section-linear-solver:
-LinearSolver
-============
+Linear Solvers
+==============
-Recall that in both of the trust-region methods described above, the
+Observe that for both of the trust-region methods described above, the
key computational cost is the solution of a linear least squares
problem of the form
-.. math:: \min_{\Delta x} \frac{1}{2} \|J(x)\Delta x + f(x)\|^2 .
+.. math:: \min_{\Delta x} \frac{1}{2} \|J(x)\Delta x + F(x)\|^2 .
:label: simple2
Let :math:`H(x)= J(x)^\top J(x)` and :math:`g(x) = -J(x)^\top
-f(x)`. For notational convenience let us also drop the dependence on
+F(x)`. For notational convenience let us also drop the dependence on
:math:`x`. Then it is easy to see that solving :eq:`simple2` is
equivalent to solving the *normal equations*.
@@ -444,31 +469,65 @@
.. _section-qr:
-``DENSE_QR``
-------------
+DENSE_QR
+--------
For small problems (a couple of hundred parameters and a few thousand
-residuals) with relatively dense Jacobians, ``DENSE_QR`` is the method
-of choice [Bjorck]_. Let :math:`J = QR` be the QR-decomposition of
-:math:`J`, where :math:`Q` is an orthonormal matrix and :math:`R` is
-an upper triangular matrix [TrefethenBau]_. Then it can be shown that
-the solution to :eq:`normal` is given by
+residuals) with relatively dense Jacobians, QR-decomposition is the
+method of choice [Bjorck]_. Let :math:`J = QR` be the QR-decomposition
+of :math:`J`, where :math:`Q` is an orthonormal matrix and :math:`R`
+is an upper triangular matrix [TrefethenBau]_. Then it can be shown
+that the solution to :eq:`normal` is given by
.. math:: \Delta x^* = -R^{-1}Q^\top f
+You can use QR-decomposition by setting
+:member:`Solver::Options::linear_solver_type` to ``DENSE_QR``.
-Ceres uses ``Eigen`` 's dense QR factorization routines.
+By default (``Solver::Options::dense_linear_algebra_library_type =
+EIGEN``) Ceres Solver will use `Eigen Householder QR factorization
+<https://eigen.tuxfamily.org/dox-devel/classEigen_1_1HouseholderQR.html>`_.
-.. _section-cholesky:
+If Ceres Solver has been built with an optimized LAPACK
+implementation, then the user can also choose to use LAPACK's
+`DGEQRF`_ routine by setting
+:member:`Solver::Options::dense_linear_algebra_library_type` to
+``LAPACK``. Depending on the `LAPACK` and the underlying `BLAS`
+implementation this may perform better than using Eigen's Householder
+QR factorization.
-``DENSE_NORMAL_CHOLESKY`` & ``SPARSE_NORMAL_CHOLESKY``
-------------------------------------------------------
+.. _DGEQRF: https://netlib.org/lapack/explore-html/df/dc5/group__variants_g_ecomputational_ga3766ea903391b5cf9008132f7440ec7b.html
-Large non-linear least square problems are usually sparse. In such
-cases, using a dense QR factorization is inefficient. Let :math:`H =
-R^\top R` be the Cholesky factorization of the normal equations, where
-:math:`R` is an upper triangular matrix, then the solution to
-:eq:`normal` is given by
+
+If an NVIDIA GPU is available and Ceres Solver has been built with
+CUDA support enabled, then the user can also choose to perform the
+QR-decomposition on the GPU by setting
+:member:`Solver::Options::dense_linear_algebra_library_type` to
+``CUDA``. Depending on the GPU this can lead to a substantial
+speedup. Using CUDA only makes sense for moderate to large sized
+problems. This is because to perform the decomposition on the GPU the
+matrix :math:`J` needs to be transferred from the CPU to the GPU and
+this incurs a cost. So unless the speedup from doing the decomposition
+on the GPU is large enough to also account for the time taken to
+transfer the Jacobian to the GPU, using CUDA will not be better than
+just doing the decomposition on the CPU.
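+
+Putting this together, a sketch of selecting dense QR and a dense
+linear algebra backend (which backends are available depends on how
+Ceres Solver was built):
+
+.. code-block:: c++
+
+   ceres::Solver::Options options;
+   options.linear_solver_type = ceres::DENSE_QR;
+   // EIGEN is always available; LAPACK and CUDA require Ceres Solver
+   // to have been built with the corresponding support.
+   options.dense_linear_algebra_library_type = ceres::EIGEN;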
+
+.. _section-dense-normal-cholesky:
+
+DENSE_NORMAL_CHOLESKY
+---------------------
+
+It is often the case that the number of rows in the Jacobian :math:`J`
+is much larger than the number of columns. The complexity of QR
+factorization scales linearly with the number of rows, so beyond a
+certain size it is more efficient to solve :eq:`normal` using a dense
+`Cholesky factorization
+<https://en.wikipedia.org/wiki/Cholesky_decomposition>`_.
+
+Let :math:`H = R^\top R` be the Cholesky factorization of the normal
+equations, where :math:`R` is an upper triangular matrix, then the
+solution to :eq:`normal` is given by
.. math::
@@ -479,69 +538,155 @@
factorization of :math:`H` is the same upper triangular matrix
:math:`R` in the QR factorization of :math:`J`. Since :math:`Q` is an
orthonormal matrix, :math:`J=QR` implies that :math:`J^\top J = R^\top
-Q^\top Q R = R^\top R`. There are two variants of Cholesky
-factorization -- sparse and dense.
+Q^\top Q R = R^\top R`.
-``DENSE_NORMAL_CHOLESKY`` as the name implies performs a dense
-Cholesky factorization of the normal equations. Ceres uses
-``Eigen`` 's dense LDLT factorization routines.
+Unfortunately, forming the matrix :math:`H = J^\top J` squares the
+condition number. As a result, while the cost of forming :math:`H` and
+computing its Cholesky factorization is lower than computing the
+QR-factorization of :math:`J`, we pay the price in terms of increased
+numerical instability and potential failure of the Cholesky
+factorization for ill-conditioned Jacobians.
-``SPARSE_NORMAL_CHOLESKY``, as the name implies performs a sparse
-Cholesky factorization of the normal equations. This leads to
-substantial savings in time and memory for large sparse
-problems. Ceres uses the sparse Cholesky factorization routines in
-Professor Tim Davis' ``SuiteSparse`` or ``CXSparse`` packages [Chen]_
-or the sparse Cholesky factorization algorithm in ``Eigen`` (which
-incidently is a port of the algorithm implemented inside ``CXSparse``)
+You can use dense Cholesky factorization by setting
+:member:`Solver::Options::linear_solver_type` to
+``DENSE_NORMAL_CHOLESKY``.
+
+By default (``Solver::Options::dense_linear_algebra_library_type =
+EIGEN``) Ceres Solver will use `Eigen's LLT factorization`_ routine.
+
+.. _Eigen's LLT Factorization: https://eigen.tuxfamily.org/dox/classEigen_1_1LLT.html
+
+If Ceres Solver has been built with an optimized LAPACK
+implementation, then the user can also choose to use LAPACK's
+`DPOTRF`_ routine by setting
+:member:`Solver::Options::dense_linear_algebra_library_type` to
+``LAPACK``. Depending on the `LAPACK` and the underlying `BLAS`
+implementation this may perform better than using Eigen's Cholesky
+factorization.
+
+.. _DPOTRF: https://www.netlib.org/lapack/explore-html/d1/d7a/group__double_p_ocomputational_ga2f55f604a6003d03b5cd4a0adcfb74d6.html
+
+If an NVIDIA GPU is available and Ceres Solver has been built with
+CUDA support enabled, then the user can also choose to perform the
+Cholesky factorization on the GPU by setting
+:member:`Solver::Options::dense_linear_algebra_library_type` to
+``CUDA``. Depending on the GPU this can lead to a substantial speedup.
+Using CUDA only makes sense for moderate to large sized problems. This
+is because to perform the decomposition on the GPU the matrix
+:math:`H` needs to be transferred from the CPU to the GPU and this
+incurs a cost. So unless the speedup from doing the decomposition on
+the GPU is large enough to also account for the time taken to transfer
+:math:`H` to the GPU, using CUDA will not be better than just doing
+the decomposition on the CPU.
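+
+For example, a sketch of using dense Cholesky on the GPU (assumes a
+CUDA enabled build of Ceres Solver):
+
+.. code-block:: c++
+
+   ceres::Solver::Options options;
+   options.linear_solver_type = ceres::DENSE_NORMAL_CHOLESKY;
+   options.dense_linear_algebra_library_type = ceres::CUDA;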
+
+
+.. _section-sparse-normal-cholesky:
+
+SPARSE_NORMAL_CHOLESKY
+----------------------
+
+Large non-linear least square problems are usually sparse. In such
+cases, using a dense QR or Cholesky factorization is inefficient. For
+such problems, Cholesky factorization routines which treat :math:`H`
+as a sparse matrix and compute a sparse factor :math:`R` are better
+suited [Davis]_. This can lead to substantial savings in memory and
+CPU time for large sparse problems.
+
+You can use sparse Cholesky factorization by setting
+:member:`Solver::Options::linear_solver_type` to
+``SPARSE_NORMAL_CHOLESKY``.
+
+The use of this linear solver requires that Ceres is compiled with
+support for at least one of:
+
+ 1. `SuiteSparse <https://people.engr.tamu.edu/davis/suitesparse.html>`_ (``SUITE_SPARSE``).
+ 2. `Apple's Accelerate framework
+ <https://developer.apple.com/documentation/accelerate/sparse_solvers?language=objc>`_
+ (``ACCELERATE_SPARSE``).
+ 3. `Eigen's sparse linear solvers
+ <https://eigen.tuxfamily.org/dox/group__SparseCholesky__Module.html>`_
+ (``EIGEN_SPARSE``).
+
+SuiteSparse and Accelerate offer high performance sparse Cholesky
+factorization routines as they use level-3 BLAS routines
+internally. Eigen's sparse Cholesky routines are *simplicial* and do
+not use dense linear algebra routines, and as a result cannot compete
+with SuiteSparse and Accelerate, especially on large problems. To get
+the best performance out of SuiteSparse it should be linked to high
+quality BLAS and LAPACK implementations, e.g. `ATLAS
+<https://math-atlas.sourceforge.net/>`_, `OpenBLAS
+<https://www.openblas.net/>`_ or `Intel MKL
+<https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl.html>`_.
+
+A critical part of a sparse Cholesky factorization routine is the use
+of a fill-reducing ordering. By default Ceres Solver uses the Approximate
+Minimum Degree (``AMD``) ordering, which usually performs well, but
+there are other options that may perform better depending on the
+actual sparsity structure of the Jacobian. See :ref:`section-ordering`
+for more details.
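+
+For example, a sketch of selecting sparse Cholesky with SuiteSparse
+(assumes Ceres Solver was built with SuiteSparse support):
+
+.. code-block:: c++
+
+   ceres::Solver::Options options;
+   options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
+   options.sparse_linear_algebra_library_type = ceres::SUITE_SPARSE;
+   // AMD is the default fill-reducing ordering; NESDIS may work
+   // better depending on the sparsity structure of the problem.
+   options.linear_solver_ordering_type = ceres::AMD;
+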
.. _section-cgnr:
-``CGNR``
---------
+CGNR
+----
-For general sparse problems, if the problem is too large for
-``CHOLMOD`` or a sparse linear algebra library is not linked into
-Ceres, another option is the ``CGNR`` solver. This solver uses the
-Conjugate Gradients solver on the *normal equations*, but without
-forming the normal equations explicitly. It exploits the relation
+For general sparse problems, if the problem is too large for sparse
+Cholesky factorization or a sparse linear algebra library is not
+linked into Ceres, another option is the ``CGNR`` solver. This solver
+uses the `Conjugate Gradients
+<https://en.wikipedia.org/wiki/Conjugate_gradient_method>`_ method on
+the *normal equations*, but without forming the normal equations
+explicitly. It exploits the relation
.. math::
H x = J^\top J x = J^\top(J x)
-The convergence of Conjugate Gradients depends on the conditioner
-number :math:`\kappa(H)`. Usually :math:`H` is poorly conditioned and
-a :ref:`section-preconditioner` must be used to get reasonable
-performance. Currently only the ``JACOBI`` preconditioner is available
-for use with ``CGNR``. It uses the block diagonal of :math:`H` to
-precondition the normal equations.
+Because ``CGNR`` never solves the linear system exactly, when the user
+chooses ``CGNR`` as the linear solver, Ceres automatically switches
+from the exact step algorithm to an inexact step algorithm. This also
+means that ``CGNR`` can only be used with the ``LEVENBERG_MARQUARDT``
+and not the ``DOGLEG`` trust region strategy.
-When the user chooses ``CGNR`` as the linear solver, Ceres
-automatically switches from the exact step algorithm to an inexact
-step algorithm.
+``CGNR`` by default runs on the CPU. However, if an NVIDIA GPU is
+available and Ceres Solver has been built with CUDA support enabled,
+then the user can also choose to run ``CGNR`` on the GPU by setting
+:member:`Solver::Options::sparse_linear_algebra_library_type` to
+``CUDA_SPARSE``. The key complexity of ``CGNR`` comes from evaluating
+the two sparse-matrix vector products (SpMV) :math:`Jx` and
+:math:`J^\top y`. GPUs are particularly well suited for doing sparse
+matrix-vector products. As a result, for large problems using a GPU
+can lead to a substantial speedup.
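+
+For example, a sketch of running ``CGNR`` on the GPU (assumes a CUDA
+enabled build of Ceres Solver):
+
+.. code-block:: c++
+
+   ceres::Solver::Options options;
+   options.linear_solver_type = ceres::CGNR;
+   options.sparse_linear_algebra_library_type = ceres::CUDA_SPARSE;
+   // CGNR can only be used with the LEVENBERG_MARQUARDT strategy.
+   options.trust_region_strategy_type = ceres::LEVENBERG_MARQUARDT;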
+
+The convergence of Conjugate Gradients depends on the condition
+number :math:`\kappa(H)`. Usually :math:`H` is quite poorly
+conditioned and a `Preconditioner
+<https://en.wikipedia.org/wiki/Preconditioner>`_ must be used to get
+reasonable performance. See section on :ref:`section-preconditioner`
+for more details.
.. _section-schur:
-``DENSE_SCHUR`` & ``SPARSE_SCHUR``
-----------------------------------
+DENSE_SCHUR & SPARSE_SCHUR
+--------------------------
While it is possible to use ``SPARSE_NORMAL_CHOLESKY`` to solve bundle
-adjustment problems, bundle adjustment problem have a special
-structure, and a more efficient scheme for solving :eq:`normal`
-can be constructed.
+adjustment problems, they have a special sparsity structure that can
+be exploited to solve the normal equations more efficiently.
-Suppose that the SfM problem consists of :math:`p` cameras and
-:math:`q` points and the variable vector :math:`x` has the block
-structure :math:`x = [y_{1}, ... ,y_{p},z_{1}, ... ,z_{q}]`. Where,
-:math:`y` and :math:`z` correspond to camera and point parameters,
-respectively. Further, let the camera blocks be of size :math:`c` and
-the point blocks be of size :math:`s` (for most problems :math:`c` =
-:math:`6`--`9` and :math:`s = 3`). Ceres does not impose any constancy
-requirement on these block sizes, but choosing them to be constant
-simplifies the exposition.
+Suppose that the bundle adjustment problem consists of :math:`p`
+cameras and :math:`q` points and the variable vector :math:`x` has the
+block structure :math:`x = [y_{1}, ... ,y_{p},z_{1},
+... ,z_{q}]`. Where, :math:`y` and :math:`z` correspond to camera and
+point parameters respectively. Further, let the camera blocks be of
+size :math:`c` and the point blocks be of size :math:`s` (for most
+problems :math:`c` = :math:`6`--:math:`9` and :math:`s = 3`). Ceres does not
+impose any constancy requirement on these block sizes, but choosing
+them to be constant simplifies the exposition.
-A key characteristic of the bundle adjustment problem is that there is
-no term :math:`f_{i}` that includes two or more point blocks. This in
-turn implies that the matrix :math:`H` is of the form
+The key property of bundle adjustment problems which we will exploit
+is the fact that no term :math:`f_{i}` in :eq:`nonlinsq` includes two
+or more point blocks at the same time. This in turn implies that the
+matrix :math:`H` is of the form
.. math:: H = \left[ \begin{matrix} B & E\\ E^\top & C \end{matrix} \right]\ ,
:label: hblock
@@ -585,123 +730,208 @@
observe at least one common point.
-Now, :eq:`linear2` can be solved by first forming :math:`S`, solving for
-:math:`\Delta y`, and then back-substituting :math:`\Delta y` to
+Now :eq:`linear2` can be solved by first forming :math:`S`, solving
+for :math:`\Delta y`, and then back-substituting :math:`\Delta y` to
obtain the value of :math:`\Delta z`. Thus, the solution of what was
an :math:`n\times n`, :math:`n=pc+qs` linear system is reduced to the
inversion of the block diagonal matrix :math:`C`, a few matrix-matrix
and matrix-vector multiplies, and the solution of block sparse
:math:`pc\times pc` linear system :eq:`schur`. For almost all
problems, the number of cameras is much smaller than the number of
-points, :math:`p \ll q`, thus solving :eq:`schur` is
-significantly cheaper than solving :eq:`linear2`. This is the
-*Schur complement trick* [Brown]_.
+points, :math:`p \ll q`, thus solving :eq:`schur` is significantly
+cheaper than solving :eq:`linear2`. This is the *Schur complement
+trick* [Brown]_.
-This still leaves open the question of solving :eq:`schur`. The
-method of choice for solving symmetric positive definite systems
-exactly is via the Cholesky factorization [TrefethenBau]_ and
-depending upon the structure of the matrix, there are, in general, two
-options. The first is direct factorization, where we store and factor
-:math:`S` as a dense matrix [TrefethenBau]_. This method has
+This still leaves open the question of solving :eq:`schur`. As we
+discussed when considering the exact solution of the normal equations
+using Cholesky factorization, we have two options.
+
+1. ``DENSE_SCHUR`` - The first is **dense Cholesky factorization**,
+where we store and factor :math:`S` as a dense matrix. This method has
:math:`O(p^2)` space complexity and :math:`O(p^3)` time complexity and
-is only practical for problems with up to a few hundred cameras. Ceres
-implements this strategy as the ``DENSE_SCHUR`` solver.
+is only practical for problems with up to a few hundred cameras.
-
-But, :math:`S` is typically a fairly sparse matrix, as most images
-only see a small fraction of the scene. This leads us to the second
-option: Sparse Direct Methods. These methods store :math:`S` as a
+2. ``SPARSE_SCHUR`` - For large bundle adjustment problems :math:`S`
+is typically a fairly sparse matrix, as most images only see a small
+fraction of the scene. This leads us to the second option: **sparse
+Cholesky factorization** [Davis]_. Here we store :math:`S` as a
sparse matrix, use row and column re-ordering algorithms to maximize
the sparsity of the Cholesky decomposition, and focus their compute
-effort on the non-zero part of the factorization [Chen]_. Sparse
-direct methods, depending on the exact sparsity structure of the Schur
-complement, allow bundle adjustment algorithms to significantly scale
-up over those based on dense factorization. Ceres implements this
-strategy as the ``SPARSE_SCHUR`` solver.
+effort on the non-zero part of the factorization [Davis]_
+[Chen]_. Sparse direct methods, depending on the exact sparsity
+structure of the Schur complement, allow bundle adjustment algorithms
+to scale to scenes with thousands of cameras.
+
.. _section-iterative_schur:
-``ITERATIVE_SCHUR``
--------------------
+ITERATIVE_SCHUR
+---------------
-Another option for bundle adjustment problems is to apply
-Preconditioned Conjugate Gradients to the reduced camera matrix
-:math:`S` instead of :math:`H`. One reason to do this is that
-:math:`S` is a much smaller matrix than :math:`H`, but more
-importantly, it can be shown that :math:`\kappa(S)\leq \kappa(H)`.
+Another option for bundle adjustment problems is to apply Conjugate
+Gradients to the reduced camera matrix :math:`S` instead of
+:math:`H`. One reason to do this is that :math:`S` is a much smaller
+matrix than :math:`H`, but more importantly, it can be shown that
+:math:`\kappa(S)\leq \kappa(H)` [Agarwal]_.
+
Ceres implements Conjugate Gradients on :math:`S` as the
``ITERATIVE_SCHUR`` solver. When the user chooses ``ITERATIVE_SCHUR``
as the linear solver, Ceres automatically switches from the exact step
algorithm to an inexact step algorithm.
+
The key computational operation when using Conjugate Gradients is the
evaluation of the matrix vector product :math:`Sx` for an arbitrary
-vector :math:`x`. There are two ways in which this product can be
-evaluated, and this can be controlled using
-``Solver::Options::use_explicit_schur_complement``. Depending on the
-problem at hand, the performance difference between these two methods
-can be quite substantial.
+vector :math:`x`. Because PCG only needs access to :math:`S` via its
+product with a vector, one way to evaluate :math:`Sx` is to observe
+that
- 1. **Implicit** This is default. Implicit evaluation is suitable for
- large problems where the cost of computing and storing the Schur
- Complement :math:`S` is prohibitive. Because PCG only needs
- access to :math:`S` via its product with a vector, one way to
- evaluate :math:`Sx` is to observe that
+.. math:: x_1 &= E^\top x\\
+ x_2 &= C^{-1} x_1\\
+ x_3 &= Ex_2\\
+ x_4 &= Bx\\
+ Sx &= x_4 - x_3
+ :label: schurtrick1
- .. math:: x_1 &= E^\top x\\
- x_2 &= C^{-1} x_1\\
- x_3 &= Ex_2\\
- x_4 &= Bx\\
- Sx &= x_4 - x_3
- :label: schurtrick1
+Thus, we can run Conjugate Gradients on :math:`S` with the same
+computational effort per iteration as Conjugate Gradients on
+:math:`H`, while reaping the benefits of a more powerful
+preconditioner. In fact, we do not even need to compute :math:`H`,
+:eq:`schurtrick1` can be implemented using just the columns of
+:math:`J`.
- Thus, we can run PCG on :math:`S` with the same computational
- effort per iteration as PCG on :math:`H`, while reaping the
- benefits of a more powerful preconditioner. In fact, we do not
- even need to compute :math:`H`, :eq:`schurtrick1` can be
- implemented using just the columns of :math:`J`.
+Equation :eq:`schurtrick1` is closely related to *Domain Decomposition
+methods* for solving large linear systems that arise in structural
+engineering and partial differential equations. In the language of
+Domain Decomposition, each point in a bundle adjustment problem is a
+domain, and the cameras form the interface between these domains. The
+iterative solution of the Schur complement then falls within the
+sub-category of techniques known as Iterative Sub-structuring [Saad]_
+[Mathew]_.
- Equation :eq:`schurtrick1` is closely related to *Domain
- Decomposition methods* for solving large linear systems that
- arise in structural engineering and partial differential
- equations. In the language of Domain Decomposition, each point in
- a bundle adjustment problem is a domain, and the cameras form the
- interface between these domains. The iterative solution of the
- Schur complement then falls within the sub-category of techniques
- known as Iterative Sub-structuring [Saad]_ [Mathew]_.
+While in most cases the above method for evaluating :math:`Sx` is the
+way to go, for some problems it is better to compute the Schur
+complement :math:`S` explicitly and then run Conjugate Gradients on
+it. This can be done by setting
+``Solver::Options::use_explicit_schur_complement`` to ``true``. This
+option can only be used with the ``SCHUR_JACOBI`` preconditioner.
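+
+For example, a sketch of enabling the explicit Schur complement for a
+small to medium sized bundle adjustment problem:
+
+.. code-block:: c++
+
+   ceres::Solver::Options options;
+   options.linear_solver_type = ceres::ITERATIVE_SCHUR;
+   // The explicit Schur complement can only be used with the
+   // SCHUR_JACOBI preconditioner.
+   options.preconditioner_type = ceres::SCHUR_JACOBI;
+   options.use_explicit_schur_complement = true;
+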
- 2. **Explicit** The complexity of implicit matrix-vector product
- evaluation scales with the number of non-zeros in the
- Jacobian. For small to medium sized problems, the cost of
- constructing the Schur Complement is small enough that it is
- better to construct it explicitly in memory and use it to
- evaluate the product :math:`Sx`.
-When the user chooses ``ITERATIVE_SCHUR`` as the linear solver, Ceres
-automatically switches from the exact step algorithm to an inexact
-step algorithm.
+.. _section-schur_power_series_expansion:
- .. NOTE::
+SCHUR_POWER_SERIES_EXPANSION
+----------------------------
- In exact arithmetic, the choice of implicit versus explicit Schur
- complement would have no impact on solution quality. However, in
- practice if the Jacobian is poorly conditioned, one may observe
- (usually small) differences in solution quality. This is a
- natural consequence of performing computations in finite arithmetic.
+It can be shown that the inverse of the Schur complement can be
+written as an infinite power-series [Weber]_ [Zheng]_:
+.. math:: S &= B - EC^{-1}E^\top\\
+ &= B(I - B^{-1}EC^{-1}E^\top)\\
+ S^{-1} &= (I - B^{-1}EC^{-1}E^\top)^{-1} B^{-1}\\
+ & = \sum_{i=0}^\infty \left(B^{-1}EC^{-1}E^\top\right)^{i} B^{-1}
+
+As a result a truncated version of this power series expansion can be
+used to approximate the inverse and therefore the solution to
+:eq:`schur`. Ceres allows the user to use Schur power series expansion
+in three ways.
+
+1. As a linear solver. This is what [Weber]_ calls **Power Bundle
+ Adjustment** and corresponds to using the truncated power series to
+ approximate the inverse of the Schur complement. This is done by
+ setting the following options.
+
+   .. code-block:: c++
+
+      ceres::Solver::Options options;
+      options.linear_solver_type = ceres::ITERATIVE_SCHUR;
+      options.preconditioner_type = ceres::IDENTITY;
+      options.use_spse_initialization = true;
+      options.max_linear_solver_iterations = 0;
+
+      // The following two settings are worth tuning for your application.
+      options.max_num_spse_iterations = 5;
+      options.spse_tolerance = 0.1;
+
+
+2. As a preconditioner for ``ITERATIVE_SCHUR``. Any method for
+ approximating the inverse of a matrix can also be used as a
+ preconditioner. This is enabled by setting the following options.
+
+   .. code-block:: c++
+
+      ceres::Solver::Options options;
+      options.linear_solver_type = ceres::ITERATIVE_SCHUR;
+      options.preconditioner_type = ceres::SCHUR_POWER_SERIES_EXPANSION;
+      options.use_spse_initialization = false;
+
+      // This is worth tuning for your application.
+      options.max_num_spse_iterations = 5;
+
+
+3. As initialization for ``ITERATIVE_SCHUR`` with any
+   preconditioner. This is a combination of the above two, where the
+   Schur Power Series Expansion is used to compute an initial guess
+   for the solution, which is then refined by ``ITERATIVE_SCHUR``
+   using the preconditioner of the user's choice.
+
+   .. code-block:: c++
+
+      ceres::Solver::Options options;
+      options.linear_solver_type = ceres::ITERATIVE_SCHUR;
+      // Preconditioner of your choice.
+      options.preconditioner_type = ...;
+      options.use_spse_initialization = true;
+      options.max_linear_solver_iterations = 0;
+
+      // The following two settings are worth tuning for your application.
+      options.max_num_spse_iterations = 5;
+      // This only affects the initialization but not the preconditioner.
+      options.spse_tolerance = 0.1;
+
+
+.. _section-mixed-precision:
+
+Mixed Precision Solves
+======================
+
+Generally speaking Ceres Solver does all its arithmetic in double
+precision. Sometimes though, one can use single precision arithmetic
+to get substantial speedups. Currently, for linear solvers that
+perform Cholesky factorization (sparse or dense) the user has the
+option to cast the linear system to single precision and then use
+single precision Cholesky factorization routines to solve the
+resulting linear system. This can be enabled by setting
+:member:`Solver::Options::use_mixed_precision_solves` to ``true``.
+
+Depending on the conditioning of the problem, the use of single
+precision factorization may lead to some loss of accuracy. Some of
+this accuracy can be recovered by performing `Iterative Refinement
+<https://en.wikipedia.org/wiki/Iterative_refinement>`_. The number of
+iterations of iterative refinement is controlled by
+:member:`Solver::Options::max_num_refinement_iterations`. The default
+value of this parameter is zero, which means if
+:member:`Solver::Options::use_mixed_precision_solves` is ``true``,
+then no iterative refinement is performed. Usually 2-3 refinement
+iterations are enough.
+
+Mixed precision solves are available in the following linear solver
+configurations:
+
+1. ``DENSE_NORMAL_CHOLESKY`` + ``EIGEN`` / ``LAPACK`` / ``CUDA``.
+2. ``DENSE_SCHUR`` + ``EIGEN`` / ``LAPACK`` / ``CUDA``.
+3. ``SPARSE_NORMAL_CHOLESKY`` + ``EIGEN_SPARSE`` / ``ACCELERATE_SPARSE``.
+4. ``SPARSE_SCHUR`` + ``EIGEN_SPARSE`` / ``ACCELERATE_SPARSE``.
+
+Mixed precision solves are not available when using ``SUITE_SPARSE``
+as the sparse linear algebra backend because SuiteSparse/CHOLMOD does
+not support single precision solves.
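+
+For example, a sketch of a mixed precision sparse Cholesky solve
+(assumes Ceres Solver was built with Eigen's sparse support):
+
+.. code-block:: c++
+
+   ceres::Solver::Options options;
+   options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
+   options.sparse_linear_algebra_library_type = ceres::EIGEN_SPARSE;
+   options.use_mixed_precision_solves = true;
+   // Usually 2-3 iterations of iterative refinement are enough.
+   options.max_num_refinement_iterations = 3;
+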
.. _section-preconditioner:
-Preconditioner
-==============
+Preconditioners
+===============
-The convergence rate of Conjugate Gradients for
-solving :eq:`normal` depends on the distribution of eigenvalues
-of :math:`H` [Saad]_. A useful upper bound is
-:math:`\sqrt{\kappa(H)}`, where, :math:`\kappa(H)` is the condition
-number of the matrix :math:`H`. For most bundle adjustment problems,
-:math:`\kappa(H)` is high and a direct application of Conjugate
-Gradients to :eq:`normal` results in extremely poor performance.
+The convergence rate of Conjugate Gradients for solving :eq:`normal`
+depends on the distribution of eigenvalues of :math:`H` [Saad]_. A
+useful upper bound is :math:`\sqrt{\kappa(H)}`, where,
+:math:`\kappa(H)` is the condition number of the matrix :math:`H`. For
+most non-linear least squares problems, :math:`\kappa(H)` is high and
+a direct application of Conjugate Gradients to :eq:`normal` results in
+extremely poor performance.
The solution to this problem is to replace :eq:`normal` with a
*preconditioned* system. Given a linear system, :math:`Ax =b` and a
@@ -733,8 +963,15 @@
problems with general sparsity as well as the special sparsity
structure encountered in bundle adjustment problems.
-``JACOBI``
-----------
+IDENTITY
+--------
+
+This is equivalent to using an identity matrix as a preconditioner,
+i.e. no preconditioner at all.
+
+
+JACOBI
+------
The simplest of all preconditioners is the diagonal or Jacobi
preconditioner, i.e., :math:`M=\operatorname{diag}(A)`, which for
@@ -742,24 +979,27 @@
block Jacobi preconditioner. The ``JACOBI`` preconditioner in Ceres
when used with :ref:`section-cgnr` refers to the block diagonal of
:math:`H` and when used with :ref:`section-iterative_schur` refers to
-the block diagonal of :math:`B` [Mandel]_. For detailed performance
-data about the performance of ``JACOBI`` on bundle adjustment problems
-see [Agarwal]_.
+the block diagonal of :math:`B` [Mandel]_.
+
+For detailed performance data about the performance of ``JACOBI`` on
+bundle adjustment problems see [Agarwal]_.
-``SCHUR_JACOBI``
-----------------
+SCHUR_JACOBI
+------------
Another obvious choice for :ref:`section-iterative_schur` is the block
diagonal of the Schur complement matrix :math:`S`, i.e, the block
Jacobi preconditioner for :math:`S`. In Ceres we refer to it as the
-``SCHUR_JACOBI`` preconditioner. For detailed performance data about
-the performance of ``SCHUR_JACOBI`` on bundle adjustment problems see
-[Agarwal]_.
+``SCHUR_JACOBI`` preconditioner.
-``CLUSTER_JACOBI`` and ``CLUSTER_TRIDIAGONAL``
-----------------------------------------------
+For detailed performance data about the performance of
+``SCHUR_JACOBI`` on bundle adjustment problems see [Agarwal]_.
+
+
+CLUSTER_JACOBI and CLUSTER_TRIDIAGONAL
+--------------------------------------
For bundle adjustment problems arising in reconstruction from
community photo collections, more effective preconditioners can be
@@ -790,8 +1030,19 @@
algorithm. The choice of clustering algorithm is controlled by
:member:`Solver::Options::visibility_clustering_type`.
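+
+For example, a sketch of selecting a clustering based preconditioner
+for ``ITERATIVE_SCHUR`` on a bundle adjustment problem:
+
+.. code-block:: c++
+
+   ceres::Solver::Options options;
+   options.linear_solver_type = ceres::ITERATIVE_SCHUR;
+   options.preconditioner_type = ceres::CLUSTER_TRIDIAGONAL;
+   // Try CANONICAL_VIEWS first; if it is too expensive, try
+   // SINGLE_LINKAGE.
+   options.visibility_clustering_type = ceres::CANONICAL_VIEWS;
+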
-``SUBSET``
-----------
+SCHUR_POWER_SERIES_EXPANSION
+----------------------------
+
+As explained in :ref:`section-schur_power_series_expansion`, the Schur
+complement matrix admits a power series expansion and a truncated
+version of this power series can be used as a preconditioner for
+``ITERATIVE_SCHUR``. When used as a preconditioner
+:member:`Solver::Options::max_num_spse_iterations` controls the number
+of terms in the power series that are used.
+
+
+SUBSET
+------
This is a preconditioner for problems with general sparsity. Given a
subset of residual blocks of a problem, it uses the corresponding
@@ -811,6 +1062,8 @@
:math:`Q` approximates :math:`J^\top J`, or how well the chosen
residual blocks approximate the full problem.
+This preconditioner is NOT available when running ``CGNR`` using
+``CUDA``.
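+
+A sketch of how a ``SUBSET`` preconditioned solve might be set up
+(``cost_function`` and ``x`` are placeholders for the user's own
+problem):
+
+.. code-block:: c++
+
+   ceres::Problem problem;
+   // cost_function and x stand in for the user's residuals and
+   // parameter blocks.
+   ceres::ResidualBlockId id =
+       problem.AddResidualBlock(cost_function, nullptr, x);
+
+   ceres::Solver::Options options;
+   options.linear_solver_type = ceres::CGNR;
+   options.preconditioner_type = ceres::SUBSET;
+   // The Jacobians of these residual blocks define the preconditioner.
+   options.residual_blocks_for_subset_preconditioner.insert(id);
+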
.. _section-ordering:
@@ -821,33 +1074,36 @@
have a significant of impact on the efficiency and accuracy of the
method. For example when doing sparse Cholesky factorization, there
are matrices for which a good ordering will give a Cholesky factor
-with :math:`O(n)` storage, where as a bad ordering will result in an
+with :math:`O(n)` storage, whereas a bad ordering will result in a
completely dense factor.
Ceres allows the user to provide varying amounts of hints to the
solver about the variable elimination ordering to use. This can range
-from no hints, where the solver is free to decide the best ordering
-based on the user's choices like the linear solver being used, to an
-exact order in which the variables should be eliminated, and a variety
-of possibilities in between.
+from no hints, where the solver is free to decide the best possible
+ordering based on the user's choices like the linear solver being
+used, to an exact order in which the variables should be eliminated,
+and a variety of possibilities in between.
-Instances of the :class:`ParameterBlockOrdering` class are used to
-communicate this information to Ceres.
+The simplest thing to do is to just set
+:member:`Solver::Options::linear_solver_ordering_type` to ``AMD``
+(default) or ``NESDIS`` based on your understanding of the problem or
+empirical testing.
-Formally an ordering is an ordered partitioning of the parameter
-blocks. Each parameter block belongs to exactly one group, and each
-group has a unique integer associated with it, that determines its
-order in the set of groups. We call these groups *Elimination Groups*
-Given such an ordering, Ceres ensures that the parameter blocks in the
-lowest numbered elimination group are eliminated first, and then the
-parameter blocks in the next lowest numbered elimination group and so
-on. Within each elimination group, Ceres is free to order the
-parameter blocks as it chooses. For example, consider the linear system
+More information can be communicated by using an instance of the
+:class:`ParameterBlockOrdering` class.
+
+Formally an ordering is an ordered partitioning of the
+parameter blocks, i.e, each parameter block belongs to exactly
+one group, and each group has a unique non-negative integer
+associated with it, that determines its order in the set of
+groups.
+
+For example, consider the linear system
.. math::
- x + y &= 3\\
- 2x + 3y &= 7
+ x + y &= 3 \\
+ 2x + 3y &= 7
There are two ways in which it can be solved. First eliminating
:math:`x` from the two equations, solving for :math:`y` and then back
@@ -855,33 +1111,92 @@
for :math:`x` and back substituting for :math:`y`. The user can
construct three orderings here.
-1. :math:`\{0: x\}, \{1: y\}` : Eliminate :math:`x` first.
-2. :math:`\{0: y\}, \{1: x\}` : Eliminate :math:`y` first.
-3. :math:`\{0: x, y\}` : Solver gets to decide the elimination order.
+1. :math:`\{0: x\}, \{1: y\}` - eliminate :math:`x` first.
+2. :math:`\{0: y\}, \{1: x\}` - eliminate :math:`y` first.
+3. :math:`\{0: x, y\}` - Solver gets to decide the elimination order.
-Thus, to have Ceres determine the ordering automatically using
-heuristics, put all the variables in the same elimination group. The
-identity of the group does not matter. This is the same as not
-specifying an ordering at all. To control the ordering for every
-variable, create an elimination group per variable, ordering them in
-the desired order.
+Thus, to have Ceres determine the ordering automatically, put all the
+variables in group 0. To control the ordering for every variable,
+create groups :math:`0 \dots N-1`, one per variable, in the desired
+order.
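+
+For example, a sketch of constructing the second ordering above
+(eliminate :math:`y` first); ``x`` and ``y`` stand for the underlying
+``double`` parameter blocks:
+
+.. code-block:: c++
+
+   double x = 0.0, y = 0.0;
+   auto ordering = std::make_shared<ceres::ParameterBlockOrdering>();
+   ordering->AddElementToGroup(&y, 0);  // Eliminated first.
+   ordering->AddElementToGroup(&x, 1);
+
+   ceres::Solver::Options options;
+   options.linear_solver_ordering = ordering;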
+
+``linear_solver_ordering == nullptr`` and an ordering where all the
+parameter blocks are in one elimination group mean the same thing -
+the solver is free to choose what it thinks is the best elimination
+ordering using the ordering algorithm (specified using
+:member:`Solver::Options::linear_solver_ordering_type`). Therefore in
+the following we will only consider the case where
+``linear_solver_ordering != nullptr``.
+
+The exact interpretation of the ``linear_solver_ordering`` depends on
+the values of :member:`Solver::Options::linear_solver_ordering_type`,
+:member:`Solver::Options::linear_solver_type`,
+:member:`Solver::Options::preconditioner_type` and
+:member:`Solver::Options::sparse_linear_algebra_library_type` as we will
+explain below.
+
+Bundle Adjustment
+-----------------
If the user is using one of the Schur solvers (``DENSE_SCHUR``,
``SPARSE_SCHUR``, ``ITERATIVE_SCHUR``) and chooses to specify an
ordering, it must have one important property. The lowest numbered
elimination group must form an independent set in the graph
corresponding to the Hessian, or in other words, no two parameter
-blocks in in the first elimination group should co-occur in the same
+blocks in the first elimination group should co-occur in the same
residual block. For the best performance, this elimination group
should be as large as possible. For standard bundle adjustment
problems, this corresponds to the first elimination group containing
-all the 3d points, and the second containing the all the cameras
-parameter blocks.
+all the 3d points, and the second containing the parameter blocks for
+all the cameras.
If the user leaves the choice to Ceres, then the solver uses an
approximate maximum independent set algorithm to identify the first
elimination group [LiSaad]_.
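+
+For example, a sketch of the standard bundle adjustment ordering
+(``points``, ``cameras`` and ``options`` are placeholders for the
+user's own setup):
+
+.. code-block:: c++
+
+   auto ordering = std::make_shared<ceres::ParameterBlockOrdering>();
+   // All points in group 0, so they are eliminated first.
+   for (double* point : points) {
+     ordering->AddElementToGroup(point, 0);
+   }
+   // All cameras in group 1.
+   for (double* camera : cameras) {
+     ordering->AddElementToGroup(camera, 1);
+   }
+   options.linear_solver_ordering = ordering;
+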
+``sparse_linear_algebra_library_type = SUITE_SPARSE``
+-----------------------------------------------------
+
+**linear_solver_ordering_type = AMD**
+
+A constrained Approximate Minimum Degree (CAMD) ordering is used where
+the parameter blocks in the lowest numbered group are eliminated
+first, and then the parameter blocks in the next lowest numbered group
+and so on. Within each group, CAMD is free to order the parameter blocks
+as it chooses.
+
+**linear_solver_ordering_type = NESDIS**
+
+a. ``linear_solver_type = SPARSE_NORMAL_CHOLESKY`` or
+ ``linear_solver_type = CGNR`` and ``preconditioner_type = SUBSET``
+
+ The value of ``linear_solver_ordering`` is ignored and a Nested
+ Dissection algorithm is used to compute a fill reducing ordering.
+
+b. ``linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR``
+
+  Only the lowest numbered group is used to compute the Schur
+  complement, and Nested Dissection is used to compute a fill reducing
+  ordering for the Schur Complement (or its preconditioner).
+
+``sparse_linear_algebra_library_type = EIGEN_SPARSE/ACCELERATE_SPARSE``
+-----------------------------------------------------------------------
+
+a. ``linear_solver_type = SPARSE_NORMAL_CHOLESKY`` or
+ ``linear_solver_type = CGNR`` and ``preconditioner_type = SUBSET``
+
+ The value of ``linear_solver_ordering`` is ignored and ``AMD`` or
+ ``NESDIS`` is used to compute a fill reducing ordering as requested
+ by the user.
+
+b. ``linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR``
+
+  Only the lowest numbered group is used to compute the Schur
+  complement, and ``AMD`` or ``NESDIS`` is used to compute a fill
+  reducing ordering for the Schur Complement (or its preconditioner)
+  as requested by the user.
+
+
.. _section-solver-options:
:class:`Solver::Options`
@@ -892,7 +1207,7 @@
:class:`Solver::Options` controls the overall behavior of the
solver. We list the various settings and their default values below.
-.. function:: bool Solver::Options::IsValid(string* error) const
+.. function:: bool Solver::Options::IsValid(std::string* error) const
Validate the values in the options struct and returns true on
success. If there is a problem, the method returns false with
@@ -913,14 +1228,18 @@
Choices are ``STEEPEST_DESCENT``, ``NONLINEAR_CONJUGATE_GRADIENT``,
``BFGS`` and ``LBFGS``.
+ See :ref:`section-line-search-methods` for more details.
+
.. member:: LineSearchType Solver::Options::line_search_type
Default: ``WOLFE``
Choices are ``ARMIJO`` and ``WOLFE`` (strong Wolfe conditions).
Note that in order for the assumptions underlying the ``BFGS`` and
- ``LBFGS`` line search direction algorithms to be guaranteed to be
- satisifed, the ``WOLFE`` line search should be used.
+ ``LBFGS`` line search direction algorithms to be satisfied, the
+ ``WOLFE`` line search must be used.
+
+ See :ref:`section-line-search-methods` for more details.
.. member:: NonlinearConjugateGradientType Solver::Options::nonlinear_conjugate_gradient_type
@@ -931,26 +1250,28 @@
.. member:: int Solver::Options::max_lbfgs_rank
- Default: 20
+ Default: ``20``
- The L-BFGS hessian approximation is a low rank approximation to the
- inverse of the Hessian matrix. The rank of the approximation
- determines (linearly) the space and time complexity of using the
- approximation. Higher the rank, the better is the quality of the
- approximation. The increase in quality is however is bounded for a
- number of reasons.
+ The LBFGS Hessian approximation is a low rank approximation to
+ the inverse of the Hessian matrix. The rank of the
+ approximation determines (linearly) the space and time
+ complexity of using the approximation. The higher the rank, the
+ better the quality of the approximation. The increase in
+ quality is, however, bounded for a number of reasons.
- 1. The method only uses secant information and not actual
- derivatives.
+ 1. The method only uses secant information and not actual
+ derivatives.
+ 2. The Hessian approximation is constrained to be positive
+ definite.
- 2. The Hessian approximation is constrained to be positive
- definite.
+ So increasing this rank to a large number will cost time and
+ space complexity without the corresponding increase in solution
+ quality. There are no hard and fast rules for choosing the
+ maximum rank. The best choice usually requires some problem
+ specific experimentation.
- So increasing this rank to a large number will cost time and space
- complexity without the corresponding increase in solution
- quality. There are no hard and fast rules for choosing the maximum
- rank. The best choice usually requires some problem specific
- experimentation.
+ For more theoretical and implementation details of the LBFGS
+ method, please see [Nocedal]_.
.. member:: bool Solver::Options::use_approximate_eigenvalue_bfgs_scaling
@@ -999,6 +1320,8 @@
.. member:: double Solver::Options::min_line_search_step_size
+ Default: ``1e-9``
+
The line search terminates if:
.. math:: \|\Delta x_k\|_\infty < \text{min_line_search_step_size}
@@ -1153,7 +1476,8 @@
.. member:: double Solver::Options::max_solver_time_in_seconds
- Default: ``1e6``
+ Default: ``1e9``
+
Maximum amount of time for which the solver should run.
.. member:: int Solver::Options::num_threads
@@ -1239,8 +1563,8 @@
where :math:`\|\cdot\|_\infty` refers to the max norm, :math:`\Pi`
is projection onto the bounds constraints and :math:`\boxplus` is
- Plus operation for the overall local parameterization associated
- with the parameter vector.
+ Plus operation for the overall manifold associated with the
+ parameter vector.
.. member:: double Solver::Options::parameter_tolerance
@@ -1260,7 +1584,7 @@
Type of linear solver used to compute the solution to the linear
least squares problem in each iteration of the Levenberg-Marquardt
algorithm. If Ceres is built with support for ``SuiteSparse`` or
- ``CXSparse`` or ``Eigen``'s sparse Cholesky factorization, the
+ ``Accelerate`` or ``Eigen``'s sparse Cholesky factorization, the
default is ``SPARSE_NORMAL_CHOLESKY``, it is ``DENSE_QR``
otherwise.
@@ -1271,8 +1595,9 @@
The preconditioner used by the iterative linear solver. The default
is the block Jacobi preconditioner. Valid values are (in increasing
order of complexity) ``IDENTITY``, ``JACOBI``, ``SCHUR_JACOBI``,
- ``CLUSTER_JACOBI`` and ``CLUSTER_TRIDIAGONAL``. See
- :ref:`section-preconditioner` for more details.
+ ``CLUSTER_JACOBI``, ``CLUSTER_TRIDIAGONAL``, ``SUBSET`` and
+ ``SCHUR_POWER_SERIES_EXPANSION``. See :ref:`section-preconditioner`
+ for more details.
.. member:: VisibilityClusteringType Solver::Options::visibility_clustering_type
@@ -1296,7 +1621,7 @@
recommend that you try ``CANONICAL_VIEWS`` first and if it is too
expensive try ``SINGLE_LINKAGE``.
-.. member:: std::unordered_set<ResidualBlockId> residual_blocks_for_subset_preconditioner
+.. member:: std::unordered_set<ResidualBlockId> Solver::Options::residual_blocks_for_subset_preconditioner
``SUBSET`` preconditioner is a preconditioner for problems with
general sparsity. Given a subset of residual blocks of a problem,
@@ -1321,65 +1646,76 @@
.. member:: DenseLinearAlgebraLibrary Solver::Options::dense_linear_algebra_library_type
- Default:``EIGEN``
+ Default: ``EIGEN``
Ceres supports using multiple dense linear algebra libraries for
- dense matrix factorizations. Currently ``EIGEN`` and ``LAPACK`` are
- the valid choices. ``EIGEN`` is always available, ``LAPACK`` refers
- to the system ``BLAS + LAPACK`` library which may or may not be
- available.
+ dense matrix factorizations. Currently ``EIGEN``, ``LAPACK`` and
+ ``CUDA`` are the valid choices. ``EIGEN`` is always available,
+ ``LAPACK`` refers to the system ``BLAS + LAPACK`` library which may
+ or may not be available. ``CUDA`` refers to Nvidia's GPU based
+ dense linear algebra library which may or may not be available.
This setting affects the ``DENSE_QR``, ``DENSE_NORMAL_CHOLESKY``
- and ``DENSE_SCHUR`` solvers. For small to moderate sized probem
+ and ``DENSE_SCHUR`` solvers. For small to moderate sized problems
``EIGEN`` is a fine choice but for large problems, an optimized
- ``LAPACK + BLAS`` implementation can make a substantial difference
- in performance.
+ ``LAPACK + BLAS`` or ``CUDA`` implementation can make a substantial
+ difference in performance.
.. member:: SparseLinearAlgebraLibrary Solver::Options::sparse_linear_algebra_library_type
Default: The highest available according to: ``SUITE_SPARSE`` >
- ``CX_SPARSE`` > ``EIGEN_SPARSE`` > ``NO_SPARSE``
+ ``ACCELERATE_SPARSE`` > ``EIGEN_SPARSE`` > ``NO_SPARSE``
Ceres supports the use of three sparse linear algebra libraries,
``SuiteSparse``, which is enabled by setting this parameter to
- ``SUITE_SPARSE``, ``CXSparse``, which can be selected by setting
- this parameter to ``CX_SPARSE`` and ``Eigen`` which is enabled by
- setting this parameter to ``EIGEN_SPARSE``. Lastly, ``NO_SPARSE``
- means that no sparse linear solver should be used; note that this is
- irrespective of whether Ceres was compiled with support for one.
+ ``SUITE_SPARSE``, ``Accelerate``, which can be selected by setting
+ this parameter to ``ACCELERATE_SPARSE`` and ``Eigen`` which is
+ enabled by setting this parameter to ``EIGEN_SPARSE``. Lastly,
+ ``NO_SPARSE`` means that no sparse linear solver should be used;
+ note that this is irrespective of whether Ceres was compiled with
+ support for one.
- ``SuiteSparse`` is a sophisticated and complex sparse linear
- algebra library and should be used in general.
+ ``SuiteSparse`` is a sophisticated sparse linear algebra library
+ and should be used in general. On MacOS you may want to use the
+ ``Accelerate`` framework.
If your needs/platforms prevent you from using ``SuiteSparse``,
- consider using ``CXSparse``, which is a much smaller, easier to
- build library. As can be expected, its performance on large
- problems is not comparable to that of ``SuiteSparse``.
+ consider using the sparse linear algebra routines in ``Eigen``. The
+ sparse Cholesky algorithms currently included with ``Eigen`` are
+ not as sophisticated as the ones in ``SuiteSparse`` and
+ ``Accelerate`` and as a result their performance is considerably
+ worse.
- Last but not the least you can use the sparse linear algebra
- routines in ``Eigen``. Currently the performance of this library is
- the poorest of the three. But this should change in the near
- future.
+.. member:: LinearSolverOrderingType Solver::Options::linear_solver_ordering_type
- Another thing to consider here is that the sparse Cholesky
- factorization libraries in Eigen are licensed under ``LGPL`` and
- building Ceres with support for ``EIGEN_SPARSE`` will result in an
- LGPL licensed library (since the corresponding code from Eigen is
- compiled into the library).
+ Default: ``AMD``
- The upside is that you do not need to build and link to an external
- library to use ``EIGEN_SPARSE``.
+ The order in which variables are eliminated in a linear solver can
+ have a significant impact on the efficiency and accuracy of the
+ method. e.g., when doing sparse Cholesky factorization, there are
+ matrices for which a good ordering will give a Cholesky factor
+ with :math:`O(n)` storage, whereas a bad ordering will result in
+ a completely dense factor.
+ Sparse direct solvers like ``SPARSE_NORMAL_CHOLESKY`` and
+ ``SPARSE_SCHUR`` use a fill reducing ordering of the columns and
+ rows of the matrix being factorized before computing the numeric
+ factorization.
-.. member:: shared_ptr<ParameterBlockOrdering> Solver::Options::linear_solver_ordering
+ This enum controls the type of algorithm used to compute this fill
+ reducing ordering. There is no single algorithm that works on all
+ matrices, so determining which algorithm works better is a matter
+ of empirical experimentation.
- Default: ``NULL``
+.. member:: std::shared_ptr<ParameterBlockOrdering> Solver::Options::linear_solver_ordering
+
+ Default: ``nullptr``
An instance of the ordering object informs the solver about the
desired order in which parameter blocks should be eliminated by the
- linear solvers. See section~\ref{sec:ordering`` for more details.
+ linear solvers.
- If ``NULL``, the solver is free to choose an ordering that it
+ If ``nullptr``, the solver is free to choose an ordering that it
thinks is best.
See :ref:`section-ordering` for more details.
@@ -1404,41 +1740,15 @@
efficient to explicitly compute it and use it for evaluating the
matrix-vector products.
- Enabling this option tells ``ITERATIVE_SCHUR`` to use an explicitly
- computed Schur complement. This can improve the performance of the
- ``ITERATIVE_SCHUR`` solver significantly.
-
- .. NOTE:
+ .. NOTE::
This option can only be used with the ``SCHUR_JACOBI``
preconditioner.
-.. member:: bool Solver::Options::use_post_ordering
+.. member:: bool Solver::Options::dynamic_sparsity
Default: ``false``
- Sparse Cholesky factorization algorithms use a fill-reducing
- ordering to permute the columns of the Jacobian matrix. There are
- two ways of doing this.
-
- 1. Compute the Jacobian matrix in some order and then have the
- factorization algorithm permute the columns of the Jacobian.
-
- 2. Compute the Jacobian with its columns already permuted.
-
- The first option incurs a significant memory penalty. The
- factorization algorithm has to make a copy of the permuted Jacobian
- matrix, thus Ceres pre-permutes the columns of the Jacobian matrix
- and generally speaking, there is no performance penalty for doing
- so.
-
- In some rare cases, it is worth using a more complicated reordering
- algorithm which has slightly better runtime performance at the
- expense of an extra copy of the Jacobian matrix. Setting
- ``use_postordering`` to ``true`` enables this tradeoff.
-
-.. member:: bool Solver::Options::dynamic_sparsity
-
Some non-linear least squares problems are symbolically dense but
numerically sparse. i.e. at any given state only a small number of
Jacobian entries are non-zero, but the position and number of
@@ -1453,6 +1763,29 @@
This setting only affects the `SPARSE_NORMAL_CHOLESKY` solver.
+.. member:: bool Solver::Options::use_mixed_precision_solves
+
+ Default: ``false``
+
+ If true, the Gauss-Newton matrix is computed in *double* precision, but
+ its factorization is computed in **single** precision. This can result in
+ significant time and memory savings at the cost of some accuracy in the
+ Gauss-Newton step. Iterative refinement is used to recover some
+ of this accuracy back.
+
+ If ``use_mixed_precision_solves`` is true, we recommend setting
+ ``max_num_refinement_iterations`` to 2-3.
+
+ See :ref:`section-mixed-precision` for more details.
+
+.. member:: int Solver::Options::max_num_refinement_iterations
+
+ Default: ``0``
+
+ Number of steps of the iterative refinement process to run when
+ computing the Gauss-Newton step, see
+ :member:`Solver::Options::use_mixed_precision_solves`.
+
.. member:: int Solver::Options::min_linear_solver_iterations
Default: ``0``
@@ -1469,6 +1802,34 @@
makes sense when the linear solver is an iterative solver, e.g.,
``ITERATIVE_SCHUR`` or ``CGNR``.
+.. member:: int Solver::Options::max_num_spse_iterations
+
+ Default: ``5``
+
+ Maximum number of iterations performed by
+ ``SCHUR_POWER_SERIES_EXPANSION``. Each iteration corresponds to one
+ more term in the power series expansion of the inverse of the Schur
+ complement. This value controls the maximum number of iterations
+ whether it is used as a preconditioner or just to initialize the
+ solution for ``ITERATIVE_SCHUR``.
+
+.. member:: bool Solver::Options::use_spse_initialization
+
+ Default: ``false``
+
+ Use Schur power series expansion to initialize the solution for
+ ``ITERATIVE_SCHUR``. This option can be set ``true`` regardless of
+ what preconditioner is being used.
+
+.. member:: double Solver::Options::spse_tolerance
+
+ Default: ``0.1``
+
+ When ``use_spse_initialization`` is ``true``, this parameter along
+ with ``max_num_spse_iterations`` controls the number of
+ ``SCHUR_POWER_SERIES_EXPANSION`` iterations performed for
+ initialization. It is not used to control the preconditioner.
+
.. member:: double Solver::Options::eta
Default: ``1e-1``
@@ -1502,6 +1863,25 @@
objects that have an :class:`EvaluationCallback` associated with
them.
+.. member:: std::shared_ptr<ParameterBlockOrdering> Solver::Options::inner_iteration_ordering
+
+ Default: ``nullptr``
+
+ If :member:`Solver::Options::use_inner_iterations` true, then the
+ user has two choices.
+
+ 1. Let the solver heuristically decide which parameter blocks to
+ optimize in each inner iteration. To do this, set
+ :member:`Solver::Options::inner_iteration_ordering` to ``nullptr``.
+
+ 2. Specify a collection of ordered independent sets. The lower
+ numbered groups are optimized before the higher number groups
+ during the inner optimization phase. Each group must be an
+ independent set. Not all parameter blocks need to be included in
+ the ordering.
+
+ See :ref:`section-ordering` for more details.
+
.. member:: double Solver::Options::inner_iteration_tolerance
Default: ``1e-3``
@@ -1516,37 +1896,21 @@
inner iterations in subsequent trust region minimizer iterations is
disabled.
-.. member:: shared_ptr<ParameterBlockOrdering> Solver::Options::inner_iteration_ordering
-
- Default: ``NULL``
-
- If :member:`Solver::Options::use_inner_iterations` true, then the
- user has two choices.
-
- 1. Let the solver heuristically decide which parameter blocks to
- optimize in each inner iteration. To do this, set
- :member:`Solver::Options::inner_iteration_ordering` to ``NULL``.
-
- 2. Specify a collection of of ordered independent sets. The lower
- numbered groups are optimized before the higher number groups
- during the inner optimization phase. Each group must be an
- independent set. Not all parameter blocks need to be included in
- the ordering.
-
- See :ref:`section-ordering` for more details.
.. member:: LoggingType Solver::Options::logging_type
Default: ``PER_MINIMIZER_ITERATION``
+ Valid values are ``SILENT`` and ``PER_MINIMIZER_ITERATION``.
+
.. member:: bool Solver::Options::minimizer_progress_to_stdout
Default: ``false``
- By default the :class:`Minimizer` progress is logged to ``STDERR``
+ By default the Minimizer's progress is logged to ``STDERR``
depending on the ``vlog`` level. If this flag is set to true, and
- :member:`Solver::Options::logging_type` is not ``SILENT``, the logging
- output is sent to ``STDOUT``.
+ :member:`Solver::Options::logging_type` is not ``SILENT``, the
+ logging output is sent to ``STDOUT``.
For ``TRUST_REGION_MINIMIZER`` the progress display looks like
@@ -1595,7 +1959,7 @@
 #. ``it`` is the time taken by the current iteration.
#. ``tt`` is the total time taken by the minimizer.
-.. member:: vector<int> Solver::Options::trust_region_minimizer_iterations_to_dump
+.. member:: std::vector<int> Solver::Options::trust_region_minimizer_iterations_to_dump
Default: ``empty``
@@ -1603,7 +1967,7 @@
the trust region problem. Useful for testing and benchmarking. If
``empty``, no problems are dumped.
-.. member:: string Solver::Options::trust_region_problem_dump_directory
+.. member:: std::string Solver::Options::trust_region_problem_dump_directory
Default: ``/tmp``
@@ -1614,7 +1978,7 @@
:member:`Solver::Options::trust_region_problem_dump_format_type` is not
``CONSOLE``.
-.. member:: DumpFormatType Solver::Options::trust_region_problem_dump_format
+.. member:: DumpFormatType Solver::Options::trust_region_problem_dump_format_type
Default: ``TEXTFILE``
@@ -1708,23 +2072,32 @@
should not expect to look at the parameter blocks and interpret
their values.
-.. member:: vector<IterationCallback> Solver::Options::callbacks
+.. member:: std::vector<IterationCallback*> Solver::Options::callbacks
+
+ Default: ``empty``
Callbacks that are executed at the end of each iteration of the
- :class:`Minimizer`. They are executed in the order that they are
+ minimizer. They are executed in the order that they are
specified in this vector.
By default, parameter blocks are updated only at the end of the
- optimization, i.e., when the :class:`Minimizer` terminates. This
- means that by default, if an :class:`IterationCallback` inspects
- the parameter blocks, they will not see them changing in the course
- of the optimization.
+ optimization, i.e., when the minimizer terminates. This means that
+ by default, if an :class:`IterationCallback` inspects the parameter
+ blocks, they will not see them changing in the course of the
+ optimization.
To tell Ceres to update the parameter blocks at the end of each
iteration and before calling the user's callback, set
:member:`Solver::Options::update_state_every_iteration` to
``true``.
+ See `examples/iteration_callback_example.cc
+ <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/iteration_callback_example.cc>`_
+ for an example of an :class:`IterationCallback` that uses
+ :member:`Solver::Options::update_state_every_iteration` to log
+ changes to the parameter blocks over the course of the
+ optimization.
+
The solver does NOT take ownership of these pointers.
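+
+ For example, a minimal sketch of registering a callback
+ (``MyCallback`` is a hypothetical :class:`IterationCallback`
+ subclass):
+
+ .. code-block:: c++
+
+    MyCallback callback;
+    ceres::Solver::Options options;
+    // Let the callback observe up to date parameter values.
+    options.update_state_every_iteration = true;
+    // The solver does not take ownership; the callback must outlive
+    // the call to Solve.
+    options.callbacks.push_back(&callback);
+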
:class:`ParameterBlockOrdering`
@@ -1786,8 +2159,8 @@
Number of groups with one or more elements.
-:class:`IterationCallback`
-==========================
+:class:`IterationSummary`
+=========================
.. class:: IterationSummary
@@ -1863,7 +2236,7 @@
Size of the trust region at the end of the current iteration. For
the Levenberg-Marquardt algorithm, the regularization parameter is
- 1.0 / member::`IterationSummary::trust_region_radius`.
+ 1.0 / :member:`IterationSummary::trust_region_radius`.
.. member:: double IterationSummary::eta
@@ -1906,6 +2279,8 @@
Time (in seconds) since the user called Solve().
+:class:`IterationCallback`
+==========================
.. class:: IterationCallback
@@ -1939,6 +2314,10 @@
#. ``SOLVER_CONTINUE`` indicates that the solver should continue
optimizing.
+ The return values can be used to implement custom termination
+ criteria that supersede the iteration/time/tolerance based
+ termination implemented by Ceres.
+
For example, the following :class:`IterationCallback` is used
internally by Ceres to log the progress of the optimization.
@@ -1978,6 +2357,12 @@
};
+ See `examples/evaluation_callback_example.cc
+ <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/evaluation_callback_example.cc>`_
+ for another example that uses
+ :member:`Solver::Options::update_state_every_iteration` to log
+ changes to the parameter blocks over the course of the optimization.
+
:class:`CRSMatrix`
==================
@@ -1995,13 +2380,13 @@
Number of columns.
-.. member:: vector<int> CRSMatrix::rows
+.. member:: std::vector<int> CRSMatrix::rows
:member:`CRSMatrix::rows` is a :member:`CRSMatrix::num_rows` + 1
sized array that points into the :member:`CRSMatrix::cols` and
:member:`CRSMatrix::values` array.
-.. member:: vector<int> CRSMatrix::cols
+.. member:: std::vector<int> CRSMatrix::cols
  :member:`CRSMatrix::cols` contains as many entries as there are
non-zeros in the matrix.
@@ -2009,7 +2394,7 @@
For each row ``i``, ``cols[rows[i]]`` ... ``cols[rows[i + 1] - 1]``
are the indices of the non-zero columns of row ``i``.
-.. member:: vector<int> CRSMatrix::values
+.. member:: std::vector<double> CRSMatrix::values
  :member:`CRSMatrix::values` contains as many entries as there are
non-zeros in the matrix.
@@ -2043,12 +2428,12 @@
Summary of the various stages of the solver after termination.
-.. function:: string Solver::Summary::BriefReport() const
+.. function:: std::string Solver::Summary::BriefReport() const
A brief one line description of the state of the solver after
termination.
-.. function:: string Solver::Summary::FullReport() const
+.. function:: std::string Solver::Summary::FullReport() const
A full multiline description of the state of the solver after
termination.
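+
+ For example, a minimal sketch of printing both reports after a solve
+ (``options`` and ``problem`` are assumed to have been set up
+ already):
+
+ .. code-block:: c++
+
+    ceres::Solver::Summary summary;
+    ceres::Solve(options, &problem, &summary);
+    std::cout << summary.BriefReport() << "\n";
+    // For the detailed multiline version:
+    std::cout << summary.FullReport() << "\n";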
@@ -2071,7 +2456,7 @@
The cause of the minimizer terminating.
-.. member:: string Solver::Summary::message
+.. member:: std::string Solver::Summary::message
Reason why the solver terminated.
@@ -2091,14 +2476,14 @@
were held fixed by the preprocessor because all the parameter
blocks that they depend on were fixed.
-.. member:: vector<IterationSummary> Solver::Summary::iterations
+.. member:: std::vector<IterationSummary> Solver::Summary::iterations
:class:`IterationSummary` for each minimizer iteration in order.
.. member:: int Solver::Summary::num_successful_steps
Number of minimizer iterations in which the step was
- accepted. Unless :member:`Solver::Options::use_non_monotonic_steps`
+ accepted. Unless :member:`Solver::Options::use_nonmonotonic_steps`
is `true` this is also the number of steps in which the objective
function value/cost went down.
@@ -2112,13 +2497,13 @@
Number of times inner iterations were performed.
- .. member:: int Solver::Summary::num_line_search_steps
+.. member:: int Solver::Summary::num_line_search_steps
- Total number of iterations inside the line search algorithm across
- all invocations. We call these iterations "steps" to distinguish
- them from the outer iterations of the line search and trust region
- minimizer algorithms which call the line search algorithm as a
- subroutine.
+ Total number of iterations inside the line search algorithm across
+ all invocations. We call these iterations "steps" to distinguish
+ them from the outer iterations of the line search and trust region
+ minimizer algorithms which call the line search algorithm as a
+ subroutine.
.. member:: double Solver::Summary::preprocessor_time_in_seconds
@@ -2126,7 +2511,7 @@
.. member:: double Solver::Summary::minimizer_time_in_seconds
- Time (in seconds) spent in the Minimizer.
+ Time (in seconds) spent in the minimizer.
.. member:: double Solver::Summary::postprocessor_time_in_seconds
@@ -2180,7 +2565,7 @@
Dimension of the tangent space of the problem (or the number of
columns in the Jacobian for the problem). This is different from
:member:`Solver::Summary::num_parameters` if a parameter block is
- associated with a :class:`LocalParameterization`.
+ associated with a :class:`Manifold`.
.. member:: int Solver::Summary::num_residual_blocks
@@ -2206,7 +2591,7 @@
number of columns in the Jacobian for the reduced problem). This is
different from :member:`Solver::Summary::num_parameters_reduced` if
a parameter block in the reduced problem is associated with a
- :class:`LocalParameterization`.
+ :class:`Manifold`.
.. member:: int Solver::Summary::num_residual_blocks_reduced
@@ -2224,9 +2609,7 @@
.. member:: int Solver::Summary::num_threads_used
Number of threads actually used by the solver for Jacobian and
- residual evaluation. This number is not equal to
- :member:`Solver::Summary::num_threads_given` if none of `OpenMP`
- or `CXX_THREADS` is available.
+ residual evaluation.
.. member:: LinearSolverType Solver::Summary::linear_solver_type_given
@@ -2242,12 +2625,12 @@
`SPARSE_NORMAL_CHOLESKY` but no sparse linear algebra library was
available.
-.. member:: vector<int> Solver::Summary::linear_solver_ordering_given
+.. member:: std::vector<int> Solver::Summary::linear_solver_ordering_given
Size of the elimination groups given by the user as hints to the
linear solver.
-.. member:: vector<int> Solver::Summary::linear_solver_ordering_used
+.. member:: std::vector<int> Solver::Summary::linear_solver_ordering_used
Size of the parameter groups used by the solver when ordering the
  columns of the Jacobian. This may be different from
@@ -2285,12 +2668,12 @@
actually performed. For example, in a problem with just one parameter
block, inner iterations are not performed.
-.. member:: vector<int> inner_iteration_ordering_given
+.. member:: std::vector<int> Solver::Summary::inner_iteration_ordering_given
Size of the parameter groups given by the user for performing inner
iterations.
-.. member:: vector<int> inner_iteration_ordering_used
+.. member:: std::vector<int> Solver::Summary::inner_iteration_ordering_used
  Size of the parameter groups used by the solver for
  performing inner iterations. This may be different from
@@ -2315,7 +2698,7 @@
Type of clustering algorithm used for visibility based
preconditioning. Only meaningful when the
- :member:`Solver::Summary::preconditioner_type` is
+ :member:`Solver::Summary::preconditioner_type_used` is
``CLUSTER_JACOBI`` or ``CLUSTER_TRIDIAGONAL``.
.. member:: TrustRegionStrategyType Solver::Summary::trust_region_strategy_type
diff --git a/docs/source/nnls_tutorial.rst b/docs/source/nnls_tutorial.rst
index 6c89032..6de800e 100644
--- a/docs/source/nnls_tutorial.rst
+++ b/docs/source/nnls_tutorial.rst
@@ -2,6 +2,8 @@
.. default-domain:: cpp
+.. cpp:namespace:: ceres
+
.. _chapter-nnls_tutorial:
========================
@@ -110,7 +112,7 @@
// Set up the only cost function (also known as residual). This uses
// auto-differentiation to obtain the derivative (jacobian).
CostFunction* cost_function =
- new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
+ new AutoDiffCostFunction<CostFunctor, 1, 1>();
problem.AddResidualBlock(cost_function, nullptr, &x);
// Run the solver!
@@ -210,8 +212,7 @@
.. code-block:: c++
CostFunction* cost_function =
- new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1>(
- new NumericDiffCostFunctor);
+ new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1>();
problem.AddResidualBlock(cost_function, nullptr, &x);
Notice the parallel from when we were using automatic differentiation
@@ -219,7 +220,7 @@
.. code-block:: c++
CostFunction* cost_function =
- new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
+ new AutoDiffCostFunction<CostFunctor, 1, 1>();
problem.AddResidualBlock(cost_function, nullptr, &x);
The construction looks almost identical to the one used for automatic
@@ -355,16 +356,16 @@
Problem problem;
- // Add residual terms to the problem using the using the autodiff
+ // Add residual terms to the problem using the autodiff
// wrapper to get the derivatives automatically.
problem.AddResidualBlock(
- new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), nullptr, &x1, &x2);
+ new AutoDiffCostFunction<F1, 1, 1, 1>(), nullptr, &x1, &x2);
problem.AddResidualBlock(
- new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), nullptr, &x3, &x4);
+ new AutoDiffCostFunction<F2, 1, 1, 1>(), nullptr, &x3, &x4);
problem.AddResidualBlock(
- new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), nullptr, &x2, &x3)
+ new AutoDiffCostFunction<F3, 1, 1, 1>(), nullptr, &x2, &x3);
problem.AddResidualBlock(
- new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), nullptr, &x1, &x4);
+ new AutoDiffCostFunction<F4, 1, 1, 1>(), nullptr, &x1, &x4);
Note that each ``ResidualBlock`` only depends on the two parameters
@@ -377,62 +378,65 @@
Initial x1 = 3, x2 = -1, x3 = 0, x4 = 1
iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
- 0 1.075000e+02 0.00e+00 1.55e+02 0.00e+00 0.00e+00 1.00e+04 0 4.95e-04 2.30e-03
- 1 5.036190e+00 1.02e+02 2.00e+01 2.16e+00 9.53e-01 3.00e+04 1 4.39e-05 2.40e-03
- 2 3.148168e-01 4.72e+00 2.50e+00 6.23e-01 9.37e-01 9.00e+04 1 9.06e-06 2.43e-03
- 3 1.967760e-02 2.95e-01 3.13e-01 3.08e-01 9.37e-01 2.70e+05 1 8.11e-06 2.45e-03
- 4 1.229900e-03 1.84e-02 3.91e-02 1.54e-01 9.37e-01 8.10e+05 1 6.91e-06 2.48e-03
- 5 7.687123e-05 1.15e-03 4.89e-03 7.69e-02 9.37e-01 2.43e+06 1 7.87e-06 2.50e-03
- 6 4.804625e-06 7.21e-05 6.11e-04 3.85e-02 9.37e-01 7.29e+06 1 5.96e-06 2.52e-03
- 7 3.003028e-07 4.50e-06 7.64e-05 1.92e-02 9.37e-01 2.19e+07 1 5.96e-06 2.55e-03
- 8 1.877006e-08 2.82e-07 9.54e-06 9.62e-03 9.37e-01 6.56e+07 1 5.96e-06 2.57e-03
- 9 1.173223e-09 1.76e-08 1.19e-06 4.81e-03 9.37e-01 1.97e+08 1 7.87e-06 2.60e-03
- 10 7.333425e-11 1.10e-09 1.49e-07 2.40e-03 9.37e-01 5.90e+08 1 6.20e-06 2.63e-03
- 11 4.584044e-12 6.88e-11 1.86e-08 1.20e-03 9.37e-01 1.77e+09 1 6.91e-06 2.65e-03
- 12 2.865573e-13 4.30e-12 2.33e-09 6.02e-04 9.37e-01 5.31e+09 1 5.96e-06 2.67e-03
- 13 1.791438e-14 2.69e-13 2.91e-10 3.01e-04 9.37e-01 1.59e+10 1 7.15e-06 2.69e-03
+ 0 1.075000e+02 0.00e+00 1.55e+02 0.00e+00 0.00e+00 1.00e+04 0 2.91e-05 3.40e-04
+ 1 5.036190e+00 1.02e+02 2.00e+01 0.00e+00 9.53e-01 3.00e+04 1 4.98e-05 3.99e-04
+ 2 3.148168e-01 4.72e+00 2.50e+00 6.23e-01 9.37e-01 9.00e+04 1 2.15e-06 4.06e-04
+ 3 1.967760e-02 2.95e-01 3.13e-01 3.08e-01 9.37e-01 2.70e+05 1 9.54e-07 4.10e-04
+ 4 1.229900e-03 1.84e-02 3.91e-02 1.54e-01 9.37e-01 8.10e+05 1 1.91e-06 4.14e-04
+ 5 7.687123e-05 1.15e-03 4.89e-03 7.69e-02 9.37e-01 2.43e+06 1 1.91e-06 4.18e-04
+ 6 4.804625e-06 7.21e-05 6.11e-04 3.85e-02 9.37e-01 7.29e+06 1 1.19e-06 4.21e-04
+ 7 3.003028e-07 4.50e-06 7.64e-05 1.92e-02 9.37e-01 2.19e+07 1 1.91e-06 4.25e-04
+ 8 1.877006e-08 2.82e-07 9.54e-06 9.62e-03 9.37e-01 6.56e+07 1 9.54e-07 4.28e-04
+ 9 1.173223e-09 1.76e-08 1.19e-06 4.81e-03 9.37e-01 1.97e+08 1 9.54e-07 4.32e-04
+ 10 7.333425e-11 1.10e-09 1.49e-07 2.40e-03 9.37e-01 5.90e+08 1 9.54e-07 4.35e-04
+ 11 4.584044e-12 6.88e-11 1.86e-08 1.20e-03 9.37e-01 1.77e+09 1 9.54e-07 4.38e-04
+ 12 2.865573e-13 4.30e-12 2.33e-09 6.02e-04 9.37e-01 5.31e+09 1 2.15e-06 4.42e-04
+ 13 1.791438e-14 2.69e-13 2.91e-10 3.01e-04 9.37e-01 1.59e+10 1 1.91e-06 4.45e-04
+ 14 1.120029e-15 1.68e-14 3.64e-11 1.51e-04 9.37e-01 4.78e+10 1 2.15e-06 4.48e-04
- Ceres Solver v1.12.0 Solve Report
- ----------------------------------
+ Solver Summary (v 2.2.0-eigen-(3.4.0)-lapack-suitesparse-(7.1.0)-metis-(5.1.0)-acceleratesparse-eigensparse)
+
Original Reduced
Parameter blocks 4 4
Parameters 4 4
Residual blocks 4 4
- Residual 4 4
+ Residuals 4 4
Minimizer TRUST_REGION
Dense linear algebra library EIGEN
Trust region strategy LEVENBERG_MARQUARDT
-
Given Used
Linear solver DENSE_QR DENSE_QR
Threads 1 1
- Linear solver threads 1 1
+ Linear solver ordering AUTOMATIC 4
Cost:
Initial 1.075000e+02
- Final 1.791438e-14
+ Final 1.120029e-15
Change 1.075000e+02
- Minimizer iterations 14
- Successful steps 14
+ Minimizer iterations 15
+ Successful steps 15
Unsuccessful steps 0
Time (in seconds):
- Preprocessor 0.002
+ Preprocessor 0.000311
- Residual evaluation 0.000
- Jacobian evaluation 0.000
- Linear solver 0.000
- Minimizer 0.001
+ Residual only evaluation 0.000002 (14)
+ Jacobian & residual evaluation 0.000023 (15)
+ Linear solver 0.000043 (14)
+ Minimizer 0.000163
- Postprocessor 0.000
- Total 0.005
+ Postprocessor 0.000012
+ Total 0.000486
Termination: CONVERGENCE (Gradient tolerance reached. Gradient max norm: 3.642190e-11 <= 1.000000e-10)
- Final x1 = 0.000292189, x2 = -2.92189e-05, x3 = 4.79511e-05, x4 = 4.79511e-05
+ Final x1 = 0.000146222, x2 = -1.46222e-05, x3 = 2.40957e-05, x4 = 2.40957e-05
+
+
+
It is easy to see that the optimal solution to this problem is at
:math:`x_1=0, x_2=0, x_3=0, x_4=0` with an objective function value of
@@ -494,8 +498,8 @@
Problem problem;
for (int i = 0; i < kNumObservations; ++i) {
CostFunction* cost_function =
- new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
- new ExponentialResidual(data[2 * i], data[2 * i + 1]));
+ new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>
+ (data[2 * i], data[2 * i + 1]);
problem.AddResidualBlock(cost_function, nullptr, &m, &c);
}
@@ -670,8 +674,8 @@
// the client code.
static ceres::CostFunction* Create(const double observed_x,
const double observed_y) {
- return (new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
- new SnavelyReprojectionError(observed_x, observed_y)));
+ return new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>
+ (observed_x, observed_y);
}
double observed_x;
@@ -728,7 +732,7 @@
For a more sophisticated bundle adjustment example which demonstrates
the use of Ceres' more advanced features including its various linear
-solvers, robust loss functions and local parameterizations see
+solvers, robust loss functions and manifolds see
`examples/bundle_adjuster.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/bundle_adjuster.cc>`_
@@ -862,23 +866,23 @@
measurement and the predicted measurement is:
.. math:: r_{ab} =
- \left[
- \begin{array}{c}
- R_a^T\left(p_b - p_a\right) - \hat{p}_{ab} \\
- \mathrm{Normalize}\left(\psi_b - \psi_a - \hat{\psi}_{ab}\right)
- \end{array}
- \right]
+ \left[
+ \begin{array}{c}
+ R_a^T\left(p_b - p_a\right) - \hat{p}_{ab} \\
+ \mathrm{Normalize}\left(\psi_b - \psi_a - \hat{\psi}_{ab}\right)
+ \end{array}
+ \right]
where the function :math:`\mathrm{Normalize}()` normalizes the angle in the range
:math:`[-\pi,\pi)`, and :math:`R` is the rotation matrix given by
.. math:: R_a =
- \left[
- \begin{array}{cc}
- \cos \psi_a & -\sin \psi_a \\
- \sin \psi_a & \cos \psi_a \\
- \end{array}
- \right]
+ \left[
+ \begin{array}{cc}
+ \cos \psi_a & -\sin \psi_a \\
+ \sin \psi_a & \cos \psi_a \\
+ \end{array}
+ \right]
To finish the cost function, we need to weight the residual by the
uncertainty of the measurement. Hence, we pre-multiply the residual by the
@@ -886,10 +890,12 @@
i.e. :math:`\Sigma_{ab}^{-\frac{1}{2}} r_{ab}` where :math:`\Sigma_{ab}` is
the covariance.
- Lastly, we use a local parameterization to normalize the orientation in the
- range which is normalized between :math:`[-\pi,\pi)`. Specially, we define
- the :member:`AngleLocalParameterization::operator()` function to be:
- :math:`\mathrm{Normalize}(\psi + \delta \psi)`.
+ Lastly, we use a manifold to normalize the orientation in the range
+ :math:`[-\pi,\pi)`. Specifically, we define the
+ :member:`AngleManifold::Plus()` function to be:
+ :math:`\mathrm{Normalize}(\psi + \Delta)` and the
+ :member:`AngleManifold::Minus()` function to be
+ :math:`\mathrm{Normalize}(y) - \mathrm{Normalize}(x)`.
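+
+ A minimal sketch of such a manifold built on :class:`AutoDiffManifold`
+ is shown below; ``NormalizeAngle`` is the helper from
+ ``examples/slam/pose_graph_2d/normalize_angle.h`` and the full
+ version lives in ``examples/slam/pose_graph_2d/angle_manifold.h``.
+
+ .. code-block:: c++
+
+    struct AngleFunctor {
+      template <typename T>
+      bool Plus(const T* x, const T* delta, T* x_plus_delta) const {
+        *x_plus_delta = NormalizeAngle(*x + *delta);
+        return true;
+      }
+
+      template <typename T>
+      bool Minus(const T* y, const T* x, T* y_minus_x) const {
+        *y_minus_x = NormalizeAngle(*y) - NormalizeAngle(*x);
+        return true;
+      }
+    };
+
+    // A 1-dimensional ambient space with a 1-dimensional tangent space.
+    ceres::AutoDiffManifold<AngleFunctor, 1, 1> angle_manifold;
+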
This package includes an executable :member:`pose_graph_2d` that will read a
problem definition file. This executable can work with any 2D problem
@@ -980,17 +986,18 @@
i.e. :math:`\Sigma_{ab}^{-\frac{1}{2}} r_{ab}` where :math:`\Sigma_{ab}` is
the covariance.
- Given that we are using a quaternion to represent the orientation, we need to
- use a local parameterization (:class:`EigenQuaternionParameterization`) to
+ Given that we are using a quaternion to represent the orientation,
+ we need to use a manifold (:class:`EigenQuaternionManifold`) to
only apply updates orthogonal to the 4-vector defining the
- quaternion. Eigen's quaternion uses a different internal memory layout for
- the elements of the quaternion than what is commonly used. Specifically,
- Eigen stores the elements in memory as :math:`[x, y, z, w]` where the real
- part is last whereas it is typically stored first. Note, when creating an
- Eigen quaternion through the constructor the elements are accepted in
- :math:`w`, :math:`x`, :math:`y`, :math:`z` order. Since Ceres operates on
- parameter blocks which are raw double pointers this difference is important
- and requires a different parameterization.
+ quaternion. Eigen's quaternion uses a different internal memory
+ layout for the elements of the quaternion than what is commonly
+ used. Specifically, Eigen stores the elements in memory as
+ :math:`[x, y, z, w]` where the real part is last whereas it is
+ typically stored first. Note, when creating an Eigen quaternion
+ through the constructor the elements are accepted in :math:`w`,
+ :math:`x`, :math:`y`, :math:`z` order. Since Ceres operates on
+ parameter blocks which are raw double pointers this difference is
+ important and requires a different parameterization.
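+
+ A minimal sketch of attaching the manifold to a parameter block that
+ stores an ``Eigen::Quaterniond`` (``problem`` is assumed to already
+ exist; ``q.coeffs()`` exposes the ``[x, y, z, w]`` storage that Ceres
+ sees as the raw parameter block):
+
+ .. code-block:: c++
+
+    Eigen::Quaterniond q = Eigen::Quaterniond::Identity();
+    problem.AddParameterBlock(q.coeffs().data(), 4);
+    problem.SetManifold(q.coeffs().data(),
+                        new ceres::EigenQuaternionManifold);
+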
This package includes an executable :member:`pose_graph_3d` that will read a
problem definition file. This executable can work with any 3D problem
diff --git a/docs/source/numerical_derivatives.rst b/docs/source/numerical_derivatives.rst
index 57b46bf..8d7fb3a 100644
--- a/docs/source/numerical_derivatives.rst
+++ b/docs/source/numerical_derivatives.rst
@@ -61,8 +61,7 @@
}
CostFunction* cost_function =
- new NumericDiffCostFunction<Rat43CostFunctor, FORWARD, 1, 4>(
- new Rat43CostFunctor(x, y));
+ new NumericDiffCostFunction<Rat43CostFunctor, FORWARD, 1, 4>(x, y);
This is about the minimum amount of work one can expect to do to
define the cost function. The only thing that the user needs to do is
@@ -326,7 +325,7 @@
Compared to the *correct* value :math:`Df(1.0) = 140.73773557129658`,
:math:`A(5, 1)` has a relative error of :math:`10^{-13}`. For
comparison, the relative error for the central difference formula with
-the same stepsize (:math:`0.01/2^4 = 0.000625`) is :math:`10^{-5}`.
+the same step size (:math:`0.01/2^4 = 0.000625`) is :math:`10^{-5}`.
The above tableau is the basis of Ridders' method for numeric
differentiation. The full implementation is an adaptive scheme that
diff --git a/docs/source/solving_faqs.rst b/docs/source/solving_faqs.rst
index 64604c4..3842e4d 100644
--- a/docs/source/solving_faqs.rst
+++ b/docs/source/solving_faqs.rst
@@ -23,16 +23,13 @@
2. For general sparse problems (i.e., the Jacobian matrix has a
substantial number of zeros) use
- ``SPARSE_NORMAL_CHOLESKY``. This requires that you have
- ``SuiteSparse`` or ``CXSparse`` installed.
+ ``SPARSE_NORMAL_CHOLESKY``.
3. For bundle adjustment problems with up to a hundred or so
cameras, use ``DENSE_SCHUR``.
4. For larger bundle adjustment problems with sparse Schur
- Complement/Reduced camera matrices use ``SPARSE_SCHUR``. This
- requires that you build Ceres with support for ``SuiteSparse``,
- ``CXSparse`` or Eigen's sparse linear algebra libraries.
+ Complement/Reduced camera matrices use ``SPARSE_SCHUR``.
If you do not have access to these libraries for whatever
reason, ``ITERATIVE_SCHUR`` with ``SCHUR_JACOBI`` is an
diff --git a/docs/source/version_history.rst b/docs/source/version_history.rst
index 72ae832..50f8353 100644
--- a/docs/source/version_history.rst
+++ b/docs/source/version_history.rst
@@ -1,9 +1,320 @@
+.. default-domain:: cpp
+
+.. highlight:: c++
+
+.. cpp:namespace:: ceres
+
+
.. _chapter-version-history:
===============
Version History
===============
+2.2.0
+=====
+
+New Features
+------------
+
+#. Substantial improvement to threading performance across the board
+ (Dmitry Korchemkin)
+#. Mixed precision solves + iterative refinement when using ``CUDA`` or
+ CPU based dense linear solvers, or ``EIGEN_SPARSE`` as the sparse
+ linear algebra library. (Sameer Agarwal & Joydeep Biswas)
+#. Cuda based CGNR and preconditioner support (Joydeep Biswas & Sameer
+ Agarwal)
+#. Nested Dissection (``NESDIS``) is now supported as an ordering method
+ in addition to ``AMD``. (Sameer Agarwal, Alex Stewart & Sergiu
+ Deitsch)
+#. **Power Bundle Adjustment** is available as a linear solver and as
+ a preconditioner by the name of ``SCHUR_POWER_SERIES_EXPANSION``
+ (Mark Shachkov).
+#. Generalized Euler Angle conversions (hs293go@)
+
+
+Backward Incompatible API Changes
+---------------------------------
+
+#. :class:`LocalParameterization` has been removed, use
+ :class:`Manifold` instead.
+#. Ceres Solver now requires a C++17 compliant compiler.
+#. Ceres Solver now requires CMake version 3.16 or later.
+#. Ceres Solver now requires SuiteSparse version 4.5.6 or later.
+#. OpenMP and NO_THREADING backends have been removed. C++ threads is
+ how all threading is done.
+#. Support for ``CX_SPARSE`` as a sparse linear algebra backend has
+ been removed. Similar or better performance can be expected from
+ ``Eigen`` as the sparse linear algebra library.
+
+
+Bug Fixes & Minor Changes
+-------------------------
+#. Optimize the computation of the LM diagonal in TinySolver
+#. Improvements to multi-threaded performance for small problems that
+ had regressed due to changes to threading (Dmitriy Korchemkin)
+#. Fix handling of M_PI for MSVC (Sergiu Deitsch)
+#. Add a default value for Solver::Summary::linear_solver_ordering_type (Sameer Agarwal)
+#. Make sure that the code compiles well with CUDA 11 (Dmitriy
+ Korchemkin)
+#. Rework MSVC warning suppression (Sergiu Deitsch)
+#. Add an example for EvaluationCallback (Sameer Agarwal)
+#. Add an example for IterationCallback (Sameer Agarwal)
+#. Add end-to-end BA tests for SCHUR_POWER_SERIES_EXPANSION (Sameer Agarwal)
+#. Update documentation for linear solvers (Sameer Agarwal)
+#. Add an accessor for the CostFunctor in DynamicAutoDiffCostFunction (Sameer Agarwal)
+#. Runtime check for cudaMallocAsync support (Dmitriy Korchemkin)
+#. Remove cuda-memcheck based tests (Sameer Agarwal)
+#. Modernize ``Sphinx`` related CMake handling as well as the ``Sphinx``
+ build process in the terminal. (Sergiu Deitsch)
+#. Fix macos ``sprintf`` security related warnings (Sergiu Deitsch)
+#. Lots of Cuda related build system fixes (Sergiu Deitsch, Dmitriy
+ Korchemkin, Jason Mak)
+#. Improved windows build support (Sergiu Deitsch)
+#. Various documentation fixes (Maxim Smolskiy, Evan Levine)
+#. Improved handling of large Jacobians (Sameer Agarwal)
+#. Improved handling of infinite initial cost (Sameer Agarwal)
+#. Improved traits support for Jets (Sameer Agarwal)
+#. Improved tests for Euler angle conversion routines (@Hs293Go)
+#. Use a std::tuple to store ProductManifold for better efficiency
+ (Sergiu Deitsch)
+#. Allow default construction of ProductManifold when underlying
+ manifolds have default constructors (Sergiu Deitsch)
+#. Move LineManifold and SphereManifold into their own headers (Sameer
+ Agarwal)
+#. Fix a byte vs number of elements error when dealing with CUDA
+ workspace computations (Joydeep Biswas)
+#. Hide and prevent internal symbols from being exported (Sergiu
+ Deitsch)
+#. Switch to imported SuiteSparse, CXSparse & METIS targets.
+#. Improve compilation on Ubuntu 20.04 (Sergiu Deitsch)
+#. Update to using gtest 1.11.0 (Sameer Agarwal)
+#. Fix Euler angle conversion code to not rely on constexpr
+   constructors for Jets. (Sameer Agarwal)
+#. BlockRandomAccessSparseMatrix now uses a BlockSparseMatrix as
+ storage instead of TripletSparseMatrix. (Dmitriy Korchemkin)
+#. Deduction guide for DynamicAutoDiffCostFunction (Sergiu Deitsch)
+#. Explicit conversions from long to ints (Alexander Ivanov)
+#. Unused code deletion/commenting and code modernization (Alexander
+ Ivanov)
+#. Improve the bazel build & tests (Alexander Ivanov)
+#. Fix a bug in QuaternionRotatePoint introduced by the use of hypot
+ earlier in this release cycle (Jonathan Taylor & Sameer Agarwal)
+#. Lots of GitHub CI improvements (Sergiu Deitsch & Dmitry Korchemkin)
+#. Improve the robustness of the Cuda based dense linear algebra tests
+ (Joydeep Biswas)
+#. Refactor storage & threading support in BlockRandomAccessMatrix and
+ its subclasses (Sameer Agarwal)
+#. Fix a bug in CoordinateDescentMinimizer related to uninitialized
+ variables (Sameer Agarwal)
+#. Remove OpenMP and NO_THREADS backends. (Sameer Agarwal)
+#. Fix version string parsing starting with SuiteSparse 6.0 (Sergiu
+ Deitsch)
+#. Use FindCUDAToolkit for CMake >= 3.17 (Alex Stewart)
+#. Add a const accessor for the Problem::Options struct used by
+ Problem. (Alex Stewart)
+#. Fix a serious performance regression when using SuiteSparse
+ introduced in `d09f7e9d5e
+ <https://github.com/ceres-solver/ceres-solver/commit/d09f7e9d5e3bfab2d7ec7e81fd6a55786edca17a>`_. (Sameer
+ Agarwal)
+#. Fix the build on QNX (Alex Stewart)
+#. Improve testing macros and documentation for Manifolds (Alex
+ Stewart)
+#. Improved code formatting (Tyler Hovanec)
+#. Better use of std::unique_ptr in the code (Mike Vitus)
+#. Fix a memory leak in ContextImpl (Sameer Agarwal)
+#. Faster locking when num_thread = 1 (Sameer Agarwal)
+#. Fix how x_norm is computed in TrustRegionMinimizer (Sameer Agarwal)
+#. Faster JACOBI preconditioner for CGNR (Sameer Agarwal)
+#. Convert internal enums to class enums (Sameer Agarwal)
+#. Improve the code in small_blas to be more compiler friendly (Sameer
+ Agarwal)
+#. Add the ability to specify the pivot threshold in
+   :class:`Covariance::Options` (Sameer Agarwal)
+#. Modernize the internals to use C++17 (Sameer Agarwal)
+#. Choose SPMV algorithm based on the CUDA SDK Version (Joydeep
+ Biswas)
+#. Better defaults in ``bundle_adjuster.cc`` (Sameer Agarwal)
+#. Use ``foo.data()`` instead of ``&foo[0]`` (Sameer Agarwal)
+#. Fix GCC 12.1.1 LTO -Walloc-size-larger-than= warnings (Sergiu
+ Deitsch)
+#. Improved determinism in tests by re-using the same PRNG (Sergiu
+ Deitsch)
+#. Improved docs for ``vcpkg`` installation. (Sergiu Deitsch)
+#. Update FindGlog.cmake to create glog::glog target (KrisThielemans@)
+#. Improve consistency & correctness of Sphere & Line Manifolds
+ (Julio L. Paneque)
+#. Remove ``ceres/internal/random.h`` in favor of ``<random>``.
+#. Fix a crash in ``InnerProductComputer`` (Sameer Agarwal)
+#. Various fixes to improve compilation on windows using MinGW & MSVC
+ (Sergiu Deitsch)
+#. Fix fmin/fmax() to use Jet averaging on equality (Alex Stewart)
+#. Fix use of conditional preprocessor checks within a macro in tests
+ (Alex Stewart)
+#. Better support for ``CUDA memcheck`` (Joydeep Biswas)
+#. Improve the logic for linking to the platform specific threading
+ library (Sergiu Deitsch)
+#. Generate the version string at compile time (Sergiu Deitsch)
+#. :class:`NumericDiffFirstOrderFunction` can now take a dynamically
+ sized parameter vector. (Sameer Agarwal)
+#. Fix compilation with SuiteSparse 7.2.0 (Mark Shachkov)
+
+2.1.0
+=====
+
+New Features
+------------
+
+#. Support for CUDA based dense solvers - ``DENSE_QR``,
+ ``DENSE_NORMAL_CHOLESKY`` & ``DENSE_SCHUR`` (Joydeep Biswas, Sameer
+ Agarwal)
+
+#. :class:`Manifold` is the new
+ :class:`LocalParameterization`. Version 2.1 is the transition
+ release where users can use both :class:`LocalParameterization` as
+ well as :class:`Manifold` objects as they transition from the
+ former to the latter. :class:`LocalParameterization` will be
+ removed in version 2.2. There should be no numerical change to the
+ results as a result of this change. (Sameer Agarwal, Johannes Beck,
+ Sergiu Deitsch)
+
+#. A number of changes to :class:`Jet` s (Sergiu Deitsch)
+
+   * :class:`Jet` gained support for ``copysign``, ``fma`` (fused
+ multiply-add), ``midpoint`` (C++20 and above), ``lerp`` (C++20
+ and above), 3-argument ``hypot`` (C++17 and above), ``log10``,
+     ``log1p``, ``expm1``, ``norm`` (squared :math:`L^2` norm).
+
+ * Quiet floating-point comparison: ``isless``, ``isgreater``,
+ ``islessgreater``, ``islessequal``, ``isgreaterequal``,
+ ``isunordered``, ``signbit``, ``fdim``
+
+ * Categorization and comparison operations are applied exclusively
+ and consistently to the scalar part of a Jet now: ``isnan``,
+ ``isinf``, ``isnormal``, ``isfinite``, ``fpclassify`` (new),
+ ``fmin``, ``fmax``
+
+ * It is now possible to safely compare a :class:`Jet` against a scalar
+ (or literal) without constructing a :class:`Jet` first (even if it's
+ nested):
+
+ .. code-block:: c++
+
+ Jet<Jet<Jet<T, N>, M>, O> x;
+ if (x == 2) { } // equivalent to x.a.a.a == 2
+
+
+ This enables interaction with various arithmetic functions that
+ expect a scalar like instance, such as ``boost::math::pow<-N>``
+ for reciprocal computation.
+
+#. Add :class:`NumericDiffFirstOrderFunction` (Sameer Agarwal)
+
+
+Backward Incompatible API Changes
+---------------------------------
+
+#. :class:`LocalParameterization` is deprecated. It will be removed in
+ version 2.2. Use :class:`Manifold` instead.
+#. Classification functions like ``IsFinite`` are deprecated. Use the
+ ``C++11`` functions (``isfinite``, ``isnan`` etc) going
+ forward. However to maintain consistent behaviour with comparison
+ operators, these functions only inspect the scalar part of the
+ :class:`Jet`.
+
+Bug Fixes & Minor Changes
+-------------------------
+
+#. Worked around an MSVC ordering bug when using C++17/20 (Sergiu
+ Deitsch)
+#. Added a CITATION.cff file. (Sergiu Deitsch)
+#. Updated included gtest version to 1.11.0. This should fix some
+ ``C++20`` compilation problems. (Sameer Agarwal).
+#. Workaround ``MSVC`` ``STL`` deficiency in ``C++17`` mode (Sergiu
+ Deitsch)
+#. Fix ``Jet`` test failures on ``ARMv8`` with recent ``Xcode``
+ (Sergiu Deitsch)
+#. Fix unused arguments of ``Make1stOrderPerturbation`` (Dmitriy
+ Korchemkin)
+#. Fix ``SuiteSparse`` path and version reporting (Sergiu Deitsch)
+#. Enable `GitHub` workflows and deprecate ``TravisCI`` (Sergiu
+ Deitsch)
+#. Add missing includes (Sergiu Deitsch, Sameer Agarwal)
+#. Fix path for ``cuda-memcheck`` tests (Joydeep Biswas)
+#. ClangFormat cleanup (Sameer Agarwal)
+#. Set ``CMP0057`` policy for ``IN_LIST`` operator in
+ ``FindSuiteSparse.cmake`` (Brent Yi)
+#. Do not define unusable import targets (Sergiu Deitsch)
+#. Fix Ubuntu 18.04 shared library build (Sergiu Deitsch)
+#. Force ``C++`` linker when building the ``C`` API (Sergiu Deitsch)
+#. Modernize the code to be in line with ``C++14`` (Sergiu Deitsch,
+ Sameer Agarwal)
+#. Lots of fixes to make Ceres compile out of the box on Windows
+ (Sergiu Deitsch)
+#. Standardize path handling using ``GNUInstallDirs`` (Sergiu Deitsch)
+#. Add final specifier to classes to help the compiler with
+ devirtualization (Sameer Agarwal)
+#. LOTs of clean & modernization of the CMake build files (Sergiu
+ Deitsch & Alex Stewart)
+#. Simplification to the symbol export logic (Sergiu Deitsch)
+#. Add cmake option ``ENABLE_BITCODE`` for iOS builds (John Harrison)
+#. Add const accessor for functor wrapped by auto/numeric-diff objects
+ (Alex Stewart)
+#. Cleanup & refactor ``jet_test.cc``. (Sameer Agarwal)
+#. Fix docs of supported sparse backends for mixed precision solvers
+ (Alex Stewart)
+#. Fix C++20 compilation (Sergiu Deitsch)
+#. Add an example for ``BiCubicInterpolator`` (Dmitriy Korchemkin)
+#. Add a section to the documentation on implicit and inverse function
+ theorems (Sameer Agarwal)
+#. Add a note about Trigg's correction (Sameer Agarwal)
+#. Fix the docs for ``Problem::RemoveResidualBlock`` &
+ ``Problem::RemoveParameterBlock`` (Sameer Agarwal)
+#. Fix an incorrect check in ``reorder_program.cc`` (William Gandler)
+#. Add ``function_tolerance`` based convergence testing to ``TinySolver``
+ (Sameer Agarwal).
+#. Fix a number of typos in ``rotation.h`` (@yiping)
+#. Fix a typo in ``interfacing_with_autodiff.rst`` (@tangobravo)
+#. Fix a matrix sizing bug in covariance_impl.cc (William Gandler)
+#. Fix a bug in ``system_test.cc`` (William Gandler)
+#. Fix the Jacobian computation in ``trust_region_minimizer_test.cc``
+ (William Gandler)
+#. Fix a bug in ``local_parameterization_test.cc`` (William Gandler)
+#. Add accessors to ``GradientProblem`` (Sameer Agarwal)
+#. Refactor ``small_blas_gemm_benchmark`` (Ahmed Taei)
+#. Refactor ``small_blas_test`` (Ahmed Taei)
+#. Fix dependency check for building documentation (Sumit Dey)
+#. Fix an errant double link in the docs (Timon Knigge)
+#. Fix a typo in the version history (Noah Snavely)
+#. Fix typo in LossFunctionWrapper sample code (Dmitriy Korchemkin)
+#. Add fmax/fmin overloads for scalars (Alex Karatarakis)
+#. Introduce benchmarks for ``Jet`` operations (Alexander Karatarakis)
+#. Fix typos in documentation and fix the documentation for
+ ``IterationSummary`` (Alexander Karatarakis)
+#. Do not check MaxNumThreadsAvailable if the thread number is set
+ to 1. (Fuhao Shi)
+#. Add a macro ``CERES_GET_FLAG``. (Sameer Agarwal)
+#. Reduce log spam in ``covariance_impl.cc`` (Daniel Henell)
+#. Fix FindTBB version detection with TBB >= 2021.1.1 (Alex Stewart)
+#. Fix Eigen3_VERSION (Florian Berchtold)
+#. Allow Unity Build (Tobias Schluter)
+#. Make miniglog's InitGoogleLogging argument const (Tobias Schluter)
+#. Use portable expression for constant 2/sqrt(pi) (Tobias Schluter)
+#. Fix a number of compile errors related to the following (Austin Schuh):
+
+ * ``format not a string literal``
+ * ``-Wno-maybe-uninitialized error``
+ * ``nonnull arg compared to NULL``
+ * ``-Wno-format-nonliteral``
+ * ``-Wmissing-field-initializers``
+ * ``-Werror``
+
+#. Fix ``cc_binary`` includes so examples build as an external repo
+ (Austin Schuh)
+#. Fix an explicit double in TinySolver (Bogdan Burlacu)
+#. Fix unit quaternion rotation (Mykyta Kozlov)
+
+
2.0.0
=====
@@ -1532,8 +1843,8 @@
=======
Ceres Solver grew out of the need for general least squares solving at
-Google. In early 2010, Sameer Agarwal and Fredrik Schaffalitzky
-started the development of Ceres Solver. Fredrik left Google shortly
+Google. In early 2010, Sameer Agarwal and Frederik Schaffalitzky
+started the development of Ceres Solver. Frederik left Google shortly
thereafter and Keir Mierle stepped in to take his place. After two
years of on-and-off development, Ceres Solver was released as open
source in May of 2012.
diff --git a/examples/BUILD b/examples/BUILD
index 90daf21..4f68eda 100644
--- a/examples/BUILD
+++ b/examples/BUILD
@@ -31,6 +31,8 @@
EXAMPLE_COPTS = [
# Needed to silence GFlags complaints.
"-Wno-sign-compare",
+ # Needed to put fscanf in a function.
+ "-Wno-format-nonliteral",
]
EXAMPLE_DEPS = [
@@ -45,7 +47,6 @@
"bal_problem.cc",
"bal_problem.h",
"bundle_adjuster.cc",
- "random.h",
"snavely_reprojection_error.h",
],
copts = EXAMPLE_COPTS,
@@ -67,26 +68,24 @@
cc_binary(
name = "robot_pose_mle",
srcs = [
- "random.h",
"robot_pose_mle.cc",
],
copts = EXAMPLE_COPTS,
deps = EXAMPLE_DEPS,
)
-SLAM_COPTS = EXAMPLE_COPTS + ["-Iexamples/slam"]
-
cc_binary(
name = "pose_graph_2d",
srcs = [
"slam/common/read_g2o.h",
- "slam/pose_graph_2d/angle_local_parameterization.h",
+ "slam/pose_graph_2d/angle_manifold.h",
"slam/pose_graph_2d/normalize_angle.h",
"slam/pose_graph_2d/pose_graph_2d.cc",
"slam/pose_graph_2d/pose_graph_2d_error_term.h",
"slam/pose_graph_2d/types.h",
],
- copts = SLAM_COPTS,
+ copts = EXAMPLE_COPTS,
+ includes = ["slam"],
deps = EXAMPLE_DEPS,
)
@@ -98,7 +97,8 @@
"slam/pose_graph_3d/pose_graph_3d_error_term.h",
"slam/pose_graph_3d/types.h",
],
- copts = SLAM_COPTS,
+ copts = EXAMPLE_COPTS,
+ includes = ["slam"],
deps = EXAMPLE_DEPS,
)
diff --git a/examples/CMakeLists.txt b/examples/CMakeLists.txt
index 7f9b117..8af2077 100644
--- a/examples/CMakeLists.txt
+++ b/examples/CMakeLists.txt
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2022 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -28,86 +28,96 @@
#
# Author: keir@google.com (Keir Mierle)
-# Only Ceres itself should be compiled with CERES_BUILDING_SHARED_LIBRARY
-# defined, any users of Ceres will have CERES_USING_SHARED_LIBRARY defined
-# for them in Ceres' config.h if appropriate.
-if (BUILD_SHARED_LIBS)
- remove_definitions(-DCERES_BUILDING_SHARED_LIBRARY)
-endif()
-
add_executable(helloworld helloworld.cc)
-target_link_libraries(helloworld Ceres::ceres)
+target_link_libraries(helloworld PRIVATE Ceres::ceres)
add_executable(helloworld_numeric_diff helloworld_numeric_diff.cc)
-target_link_libraries(helloworld_numeric_diff Ceres::ceres)
+target_link_libraries(helloworld_numeric_diff PRIVATE Ceres::ceres)
add_executable(helloworld_analytic_diff helloworld_analytic_diff.cc)
-target_link_libraries(helloworld_analytic_diff Ceres::ceres)
+target_link_libraries(helloworld_analytic_diff PRIVATE Ceres::ceres)
add_executable(curve_fitting curve_fitting.cc)
-target_link_libraries(curve_fitting Ceres::ceres)
+target_link_libraries(curve_fitting PRIVATE Ceres::ceres)
add_executable(rosenbrock rosenbrock.cc)
-target_link_libraries(rosenbrock Ceres::ceres)
+target_link_libraries(rosenbrock PRIVATE Ceres::ceres)
+
+add_executable(rosenbrock_analytic_diff rosenbrock_analytic_diff.cc)
+target_link_libraries(rosenbrock_analytic_diff PRIVATE Ceres::ceres)
+
+add_executable(rosenbrock_numeric_diff rosenbrock_numeric_diff.cc)
+target_link_libraries(rosenbrock_numeric_diff PRIVATE Ceres::ceres)
add_executable(curve_fitting_c curve_fitting.c)
-target_link_libraries(curve_fitting_c Ceres::ceres)
-# Force CMake to link curve_fitting_c using the C linker.
-set_target_properties(curve_fitting_c PROPERTIES LINKER_LANGUAGE C)
+target_link_libraries(curve_fitting_c PRIVATE Ceres::ceres)
+# Force CMake to link curve_fitting_c using the C++ linker.
+set_target_properties(curve_fitting_c PROPERTIES LINKER_LANGUAGE CXX)
# As this is a C file #including <math.h> we have to explicitly add the math
# library (libm). Although some compilers (dependent upon options) will accept
# the indirect link to libm via Ceres, at least GCC 4.8 on pure Debian won't.
-if (NOT MSVC)
- target_link_libraries(curve_fitting_c m)
-endif (NOT MSVC)
+if (HAVE_LIBM)
+ target_link_libraries(curve_fitting_c PRIVATE m)
+endif (HAVE_LIBM)
add_executable(ellipse_approximation ellipse_approximation.cc)
-target_link_libraries(ellipse_approximation Ceres::ceres)
+target_link_libraries(ellipse_approximation PRIVATE Ceres::ceres)
add_executable(robust_curve_fitting robust_curve_fitting.cc)
-target_link_libraries(robust_curve_fitting Ceres::ceres)
+target_link_libraries(robust_curve_fitting PRIVATE Ceres::ceres)
add_executable(simple_bundle_adjuster simple_bundle_adjuster.cc)
-target_link_libraries(simple_bundle_adjuster Ceres::ceres)
+target_link_libraries(simple_bundle_adjuster PRIVATE Ceres::ceres)
+
+add_executable(bicubic_interpolation bicubic_interpolation.cc)
+target_link_libraries(bicubic_interpolation PRIVATE Ceres::ceres)
+
+add_executable(bicubic_interpolation_analytic bicubic_interpolation_analytic.cc)
+target_link_libraries(bicubic_interpolation_analytic PRIVATE Ceres::ceres)
+
+add_executable(iteration_callback_example iteration_callback_example.cc)
+target_link_libraries(iteration_callback_example PRIVATE Ceres::ceres)
+
+add_executable(evaluation_callback_example evaluation_callback_example.cc)
+target_link_libraries(evaluation_callback_example PRIVATE Ceres::ceres)
if (GFLAGS)
add_executable(powell powell.cc)
- target_link_libraries(powell Ceres::ceres gflags)
+ target_link_libraries(powell PRIVATE Ceres::ceres gflags)
add_executable(nist nist.cc)
- target_link_libraries(nist Ceres::ceres gflags)
- if (MSVC)
- target_compile_options(nist PRIVATE "/bigobj")
+ target_link_libraries(nist PRIVATE Ceres::ceres gflags)
+ if (HAVE_BIGOBJ)
+ target_compile_options(nist PRIVATE /bigobj)
endif()
add_executable(more_garbow_hillstrom more_garbow_hillstrom.cc)
- target_link_libraries(more_garbow_hillstrom Ceres::ceres gflags)
+ target_link_libraries(more_garbow_hillstrom PRIVATE Ceres::ceres gflags)
add_executable(circle_fit circle_fit.cc)
- target_link_libraries(circle_fit Ceres::ceres gflags)
+ target_link_libraries(circle_fit PRIVATE Ceres::ceres gflags)
add_executable(bundle_adjuster
bundle_adjuster.cc
bal_problem.cc)
- target_link_libraries(bundle_adjuster Ceres::ceres gflags)
+ target_link_libraries(bundle_adjuster PRIVATE Ceres::ceres gflags)
add_executable(libmv_bundle_adjuster
libmv_bundle_adjuster.cc)
- target_link_libraries(libmv_bundle_adjuster Ceres::ceres gflags)
+ target_link_libraries(libmv_bundle_adjuster PRIVATE Ceres::ceres gflags)
add_executable(libmv_homography
libmv_homography.cc)
- target_link_libraries(libmv_homography Ceres::ceres gflags)
+ target_link_libraries(libmv_homography PRIVATE Ceres::ceres gflags)
add_executable(denoising
denoising.cc
fields_of_experts.cc)
- target_link_libraries(denoising Ceres::ceres gflags)
+ target_link_libraries(denoising PRIVATE Ceres::ceres gflags)
add_executable(robot_pose_mle
robot_pose_mle.cc)
- target_link_libraries(robot_pose_mle Ceres::ceres gflags)
-
+ target_link_libraries(robot_pose_mle PRIVATE Ceres::ceres gflags)
endif (GFLAGS)
add_subdirectory(sampled_function)
diff --git a/examples/Makefile.example b/examples/Makefile.example
deleted file mode 100644
index f2b0dc0..0000000
--- a/examples/Makefile.example
+++ /dev/null
@@ -1,82 +0,0 @@
-# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
-# http://ceres-solver.org/
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice,
-# this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright notice,
-# this list of conditions and the following disclaimer in the documentation
-# and/or other materials provided with the distribution.
-# * Neither the name of Google Inc. nor the names of its contributors may be
-# used to endorse or promote products derived from this software without
-# specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-#
-# Author: keir@google.com (Keir Mierle)
-#
-# This is an example Makefile for using Ceres. In practice, the Ceres authors
-# suggest using CMake instead, but if Make is needed for some reason, this
-# example serves to make it easy to do so.
-
-# This should point to place where you unpacked or cloned Ceres.
-CERES_SRC_DIR := /home/keir/wrk/ceres-extra
-
-# This should point to the place where you built Ceres. If you got Ceres by
-# installing it, then this will likely be /usr/local/lib.
-CERES_BIN_DIR := /home/keir/wrk/ceres-extra-bin
-
-# The place you unpacked or cloned Eigen. If Eigen was installed from packages,
-# this will likely be /usr/local/include.
-EIGEN_SRC_DIR := /home/keir/src/eigen-3.0.5
-
-INCLUDES := -I$(CERES_SRC_DIR)/include \
- -I$(EIGEN_SRC_DIR)
-
-CERES_LIBRARY := -lceres
-CERES_LIBRARY_PATH := -L$(CERES_BIN_DIR)/lib
-CERES_LIBRARY_DEPENDENCIES = -lgflags -lglog
-
-# If Ceres was built with Suitesparse:
-CERES_LIBRARY_DEPENDENCIES += -llapack -lcamd -lamd -lccolamd -lcolamd -lcholmod
-
-# If Ceres was built with CXSparse:
-CERES_LIBRARY_DEPENDENCIES += -lcxsparse
-
-# If Ceres was built with OpenMP:
-CERES_LIBRARY_DEPENDENCIES += -fopenmp -lpthread -lgomp -lm
-
-# The set of object files for your application.
-APPLICATION_OBJS := simple_bundle_adjuster.o
-
-all: simple_bundle_adjuster
-
-simple_bundle_adjuster: $(APPLICATION_OBJS)
- g++ \
- $(APPLICATION_OBJS) \
- $(CERES_LIBRARY_PATH) \
- $(CERES_LIBRARY) \
- $(CERES_LIBRARY_DEPENDENCIES) \
- -o simple_bundle_adjuster
-
-# Disabling debug asserts via -DNDEBUG helps make Eigen faster, at the cost of
-# not getting handy assert failures when there are issues in your code.
-CFLAGS := -O2 -DNDEBUG
-
-# If you have files ending in .cpp instead of .cc, fix the next line
-# appropriately.
-%.o: %.cc $(DEPS)
- g++ -c -o $@ $< $(CFLAGS) $(INCLUDES)
diff --git a/examples/bal_problem.cc b/examples/bal_problem.cc
index ceac89a..ccf7449 100644
--- a/examples/bal_problem.cc
+++ b/examples/bal_problem.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,22 +30,22 @@
#include "bal_problem.h"
+#include <algorithm>
#include <cstdio>
-#include <cstdlib>
#include <fstream>
+#include <functional>
+#include <random>
#include <string>
#include <vector>
#include "Eigen/Core"
#include "ceres/rotation.h"
#include "glog/logging.h"
-#include "random.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
namespace {
-typedef Eigen::Map<Eigen::VectorXd> VectorRef;
-typedef Eigen::Map<const Eigen::VectorXd> ConstVectorRef;
+using VectorRef = Eigen::Map<Eigen::VectorXd>;
+using ConstVectorRef = Eigen::Map<const Eigen::VectorXd>;
template <typename T>
void FscanfOrDie(FILE* fptr, const char* format, T* value) {
@@ -55,15 +55,14 @@
}
}
-void PerturbPoint3(const double sigma, double* point) {
+void PerturbPoint3(std::function<double()> dist, double* point) {
for (int i = 0; i < 3; ++i) {
- point[i] += RandNormal() * sigma;
+ point[i] += dist();
}
}
double Median(std::vector<double>* data) {
- int n = data->size();
- std::vector<double>::iterator mid_point = data->begin() + n / 2;
+ auto mid_point = data->begin() + data->size() / 2;
std::nth_element(data->begin(), mid_point, data->end());
return *mid_point;
}
@@ -73,12 +72,12 @@
BALProblem::BALProblem(const std::string& filename, bool use_quaternions) {
FILE* fptr = fopen(filename.c_str(), "r");
- if (fptr == NULL) {
+ if (fptr == nullptr) {
LOG(FATAL) << "Error: unable to open file " << filename;
return;
};
- // This wil die horribly on invalid files. Them's the breaks.
+ // This will die horribly on invalid files. Them's the breaks.
FscanfOrDie(fptr, "%d", &num_cameras_);
FscanfOrDie(fptr, "%d", &num_points_);
FscanfOrDie(fptr, "%d", &num_observations_);
@@ -111,7 +110,7 @@
if (use_quaternions) {
// Switch the angle-axis rotations to quaternions.
num_parameters_ = 10 * num_cameras_ + 3 * num_points_;
- double* quaternion_parameters = new double[num_parameters_];
+ auto* quaternion_parameters = new double[num_parameters_];
double* original_cursor = parameters_;
double* quaternion_cursor = quaternion_parameters;
for (int i = 0; i < num_cameras_; ++i) {
@@ -137,7 +136,7 @@
void BALProblem::WriteToFile(const std::string& filename) const {
FILE* fptr = fopen(filename.c_str(), "w");
- if (fptr == NULL) {
+ if (fptr == nullptr) {
LOG(FATAL) << "Error: unable to open file " << filename;
return;
};
@@ -161,8 +160,8 @@
} else {
memcpy(angleaxis, parameters_ + 9 * i, 9 * sizeof(double));
}
- for (int j = 0; j < 9; ++j) {
- fprintf(fptr, "%.16g\n", angleaxis[j]);
+ for (double coeff : angleaxis) {
+ fprintf(fptr, "%.16g\n", coeff);
}
}
@@ -297,14 +296,20 @@
CHECK_GE(point_sigma, 0.0);
CHECK_GE(rotation_sigma, 0.0);
CHECK_GE(translation_sigma, 0.0);
-
+ std::mt19937 prng;
+ std::normal_distribution<double> point_noise_distribution(0.0, point_sigma);
double* points = mutable_points();
if (point_sigma > 0) {
for (int i = 0; i < num_points_; ++i) {
- PerturbPoint3(point_sigma, points + 3 * i);
+ PerturbPoint3(std::bind(point_noise_distribution, std::ref(prng)),
+ points + 3 * i);
}
}
+  // Use the rotation noise level for the rotation perturbation.
+  std::normal_distribution<double> rotation_noise_distribution(
+      0.0, rotation_sigma);
+ std::normal_distribution<double> translation_noise_distribution(
+ 0.0, translation_sigma);
for (int i = 0; i < num_cameras_; ++i) {
double* camera = mutable_cameras() + camera_block_size() * i;
@@ -314,12 +319,14 @@
// representation.
CameraToAngleAxisAndCenter(camera, angle_axis, center);
if (rotation_sigma > 0.0) {
- PerturbPoint3(rotation_sigma, angle_axis);
+ PerturbPoint3(std::bind(rotation_noise_distribution, std::ref(prng)),
+ angle_axis);
}
AngleAxisAndCenterToCamera(angle_axis, center, camera);
if (translation_sigma > 0.0) {
- PerturbPoint3(translation_sigma, camera + camera_block_size() - 6);
+ PerturbPoint3(std::bind(translation_noise_distribution, std::ref(prng)),
+ camera + camera_block_size() - 6);
}
}
}
@@ -331,5 +338,4 @@
delete[] parameters_;
}
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
diff --git a/examples/bal_problem.h b/examples/bal_problem.h
index e6d4ace..93effc3 100644
--- a/examples/bal_problem.h
+++ b/examples/bal_problem.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,8 +39,7 @@
#include <string>
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
class BALProblem {
public:
@@ -105,7 +104,6 @@
double* parameters_;
};
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
#endif // CERES_EXAMPLES_BAL_PROBLEM_H_
diff --git a/examples/bicubic_interpolation.cc b/examples/bicubic_interpolation.cc
new file mode 100644
index 0000000..21b3c7e
--- /dev/null
+++ b/examples/bicubic_interpolation.cc
@@ -0,0 +1,155 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Bicubic interpolation with automatic differentiation
+//
+// We will use estimation of 2d shift as a sample problem for bicubic
+// interpolation.
+//
+// Let us define f(x, y) = x * x - y * x + y * y
+// And optimize cost function sum_i [f(x_i + s_x, y_i + s_y) - v_i]^2
+//
+// Bicubic interpolation of f(x, y) will be exact, thus we can expect close to
+// perfect convergence
+
+#include <array>
+#include <iostream>
+#include <utility>
+
+#include "ceres/ceres.h"
+#include "ceres/cubic_interpolation.h"
+#include "glog/logging.h"
+
+using Grid = ceres::Grid2D<double>;
+using Interpolator = ceres::BiCubicInterpolator<Grid>;
+
+// Cost-function using autodiff interface of BiCubicInterpolator
+struct AutoDiffBiCubicCost {
+ EIGEN_MAKE_ALIGNED_OPERATOR_NEW;
+
+ template <typename T>
+ bool operator()(const T* s, T* residual) const {
+ using Vector2T = Eigen::Matrix<T, 2, 1>;
+ Eigen::Map<const Vector2T> shift(s);
+
+ const Vector2T point = point_ + shift;
+
+ T v;
+ interpolator_.Evaluate(point.y(), point.x(), &v);
+
+ *residual = v - value_;
+ return true;
+ }
+
+ AutoDiffBiCubicCost(const Interpolator& interpolator,
+ Eigen::Vector2d point,
+ double value)
+ : point_(std::move(point)), value_(value), interpolator_(interpolator) {}
+
+ static ceres::CostFunction* Create(const Interpolator& interpolator,
+ const Eigen::Vector2d& point,
+ double value) {
+ return new ceres::AutoDiffCostFunction<AutoDiffBiCubicCost, 1, 2>(
+ interpolator, point, value);
+ }
+
+ const Eigen::Vector2d point_;
+ const double value_;
+ const Interpolator& interpolator_;
+};
+
+// Function for input data generation
+static double f(const double& x, const double& y) {
+ return x * x - y * x + y * y;
+}
+
+int main(int argc, char** argv) {
+ google::InitGoogleLogging(argv[0]);
+ // Problem sizes
+ const int kGridRowsHalf = 9;
+ const int kGridColsHalf = 11;
+ const int kGridRows = 2 * kGridRowsHalf + 1;
+ const int kGridCols = 2 * kGridColsHalf + 1;
+ const int kPoints = 4;
+
+ const Eigen::Vector2d shift(1.234, 2.345);
+ const std::array<Eigen::Vector2d, kPoints> points = {
+ Eigen::Vector2d{-2., -3.},
+ Eigen::Vector2d{-2., 3.},
+ Eigen::Vector2d{2., 3.},
+ Eigen::Vector2d{2., -3.}};
+
+ // Data is a row-major array of kGridRows x kGridCols values of function
+ // f(x, y) on the grid, with x in {-kGridColsHalf, ..., +kGridColsHalf},
+ // and y in {-kGridRowsHalf, ..., +kGridRowsHalf}
+ double data[kGridRows * kGridCols];
+ for (int i = 0; i < kGridRows; ++i) {
+ for (int j = 0; j < kGridCols; ++j) {
+ // Using row-major order
+ int index = i * kGridCols + j;
+ double y = i - kGridRowsHalf;
+ double x = j - kGridColsHalf;
+
+ data[index] = f(x, y);
+ }
+ }
+ const Grid grid(data,
+ -kGridRowsHalf,
+ kGridRowsHalf + 1,
+ -kGridColsHalf,
+ kGridColsHalf + 1);
+ const Interpolator interpolator(grid);
+
+ Eigen::Vector2d shift_estimate(3.1415, 1.337);
+
+ ceres::Problem problem;
+ problem.AddParameterBlock(shift_estimate.data(), 2);
+
+ for (const auto& p : points) {
+ const Eigen::Vector2d shifted = p + shift;
+
+ const double v = f(shifted.x(), shifted.y());
+ problem.AddResidualBlock(AutoDiffBiCubicCost::Create(interpolator, p, v),
+ nullptr,
+ shift_estimate.data());
+ }
+
+ ceres::Solver::Options options;
+ options.minimizer_progress_to_stdout = true;
+
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
+ std::cout << summary.BriefReport() << '\n';
+
+ std::cout << "Bicubic interpolation with automatic derivatives:\n";
+ std::cout << "Estimated shift: " << shift_estimate.transpose()
+ << ", ground-truth: " << shift.transpose()
+ << " (error: " << (shift_estimate - shift).transpose() << ")"
+ << std::endl;
+
+ CHECK_LT((shift_estimate - shift).norm(), 1e-9);
+ return 0;
+}
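For reference, the objective minimized by the example above, restated from the code (with s = (s_x, s_y) the estimated shift and v_i = f(x_i + s_x*, y_i + s_y*) the values generated from the ground-truth shift s*):

  E(s_x, s_y) = \sum_{i=1}^{4} \left[ f(x_i + s_x, y_i + s_y) - v_i \right]^2,
  \qquad f(x, y) = x^2 - x y + y^2 .

Because f is a low-degree polynomial, its bicubic interpolant reproduces it exactly on the interior of the grid, which is why the final CHECK_LT can demand agreement with the ground-truth shift to within 1e-9.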
diff --git a/examples/bicubic_interpolation_analytic.cc b/examples/bicubic_interpolation_analytic.cc
new file mode 100644
index 0000000..4b79d56
--- /dev/null
+++ b/examples/bicubic_interpolation_analytic.cc
@@ -0,0 +1,166 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Bicubic interpolation with analytic differentiation
+//
+// We will use estimation of a 2D shift as a sample problem for bicubic
+// interpolation.
+//
+// Let us define f(x, y) = x * x - y * x + y * y
+// and optimize the cost function sum_i [f(x_i + s_x, y_i + s_y) - v_i]^2.
+//
+// Bicubic interpolation of f(x, y) will be exact, so we can expect close to
+// perfect convergence.
+
+#include <utility>
+
+#include "ceres/ceres.h"
+#include "ceres/cubic_interpolation.h"
+#include "glog/logging.h"
+
+using Grid = ceres::Grid2D<double>;
+using Interpolator = ceres::BiCubicInterpolator<Grid>;
+
+// Cost-function using analytic interface of BiCubicInterpolator
+struct AnalyticBiCubicCost : public ceres::CostFunction {
+ EIGEN_MAKE_ALIGNED_OPERATOR_NEW;
+
+ bool Evaluate(double const* const* parameters,
+ double* residuals,
+ double** jacobians) const override {
+ Eigen::Map<const Eigen::Vector2d> shift(parameters[0]);
+
+ const Eigen::Vector2d point = point_ + shift;
+
+ double* f = residuals;
+ double* dfdr = nullptr;
+ double* dfdc = nullptr;
+ if (jacobians && jacobians[0]) {
+ dfdc = jacobians[0];
+ dfdr = dfdc + 1;
+ }
+
+ interpolator_.Evaluate(point.y(), point.x(), f, dfdr, dfdc);
+
+ if (residuals) {
+ *f -= value_;
+ }
+ return true;
+ }
+
+ AnalyticBiCubicCost(const Interpolator& interpolator,
+ Eigen::Vector2d point,
+ double value)
+ : point_(std::move(point)), value_(value), interpolator_(interpolator) {
+ set_num_residuals(1);
+ *mutable_parameter_block_sizes() = {2};
+ }
+
+ static ceres::CostFunction* Create(const Interpolator& interpolator,
+ const Eigen::Vector2d& point,
+ double value) {
+ return new AnalyticBiCubicCost(interpolator, point, value);
+ }
+
+ const Eigen::Vector2d point_;
+ const double value_;
+ const Interpolator& interpolator_;
+};
+
+// Function for input data generation
+static double f(const double& x, const double& y) {
+ return x * x - y * x + y * y;
+}
+
+int main(int argc, char** argv) {
+ google::InitGoogleLogging(argv[0]);
+ // Problem sizes
+ const int kGridRowsHalf = 9;
+ const int kGridColsHalf = 11;
+ const int kGridRows = 2 * kGridRowsHalf + 1;
+ const int kGridCols = 2 * kGridColsHalf + 1;
+ const int kPoints = 4;
+
+ const Eigen::Vector2d shift(1.234, 2.345);
+ const std::array<Eigen::Vector2d, kPoints> points = {
+ Eigen::Vector2d{-2., -3.},
+ Eigen::Vector2d{-2., 3.},
+ Eigen::Vector2d{2., 3.},
+ Eigen::Vector2d{2., -3.}};
+
+ // Data is a row-major array of kGridRows x kGridCols values of function
+ // f(x, y) on the grid, with x in {-kGridColsHalf, ..., +kGridColsHalf},
+ // and y in {-kGridRowsHalf, ..., +kGridRowsHalf}
+ double data[kGridRows * kGridCols];
+ for (int i = 0; i < kGridRows; ++i) {
+ for (int j = 0; j < kGridCols; ++j) {
+ // Using row-major order
+ int index = i * kGridCols + j;
+ double y = i - kGridRowsHalf;
+ double x = j - kGridColsHalf;
+
+ data[index] = f(x, y);
+ }
+ }
+ const Grid grid(data,
+ -kGridRowsHalf,
+ kGridRowsHalf + 1,
+ -kGridColsHalf,
+ kGridColsHalf + 1);
+ const Interpolator interpolator(grid);
+
+ Eigen::Vector2d shift_estimate(3.1415, 1.337);
+
+ ceres::Problem problem;
+ problem.AddParameterBlock(shift_estimate.data(), 2);
+
+ for (const auto& p : points) {
+ const Eigen::Vector2d shifted = p + shift;
+
+ const double v = f(shifted.x(), shifted.y());
+ problem.AddResidualBlock(AnalyticBiCubicCost::Create(interpolator, p, v),
+ nullptr,
+ shift_estimate.data());
+ }
+
+ ceres::Solver::Options options;
+ options.minimizer_progress_to_stdout = true;
+
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
+ std::cout << summary.BriefReport() << '\n';
+
+ std::cout << "Bicubic interpolation with analytic derivatives:\n";
+ std::cout << "Estimated shift: " << shift_estimate.transpose()
+ << ", ground-truth: " << shift.transpose()
+ << " (error: " << (shift_estimate - shift).transpose() << ")"
+ << std::endl;
+
+ CHECK_LT((shift_estimate - shift).norm(), 1e-9);
+ return 0;
+}
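A short note on why filling jacobians[0] in the order dfdc, dfdr is correct, assuming the row = y, col = x convention the example uses when calling Evaluate(point.y(), point.x(), ...). The residual is

  r(s_x, s_y) = f(x + s_x, y + s_y) - v,

so by the chain rule

  \frac{\partial r}{\partial s_x} = \frac{\partial f}{\partial x}, \qquad
  \frac{\partial r}{\partial s_y} = \frac{\partial f}{\partial y},

both evaluated at (x + s_x, y + s_y). The interpolator returns the derivative with respect to the row coordinate (here y) in dfdr and with respect to the column coordinate (here x) in dfdc, so the Jacobian row stored for the 2-vector shift is [dfdc, dfdr] = [\partial f / \partial x, \partial f / \partial y].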
diff --git a/examples/bundle_adjuster.cc b/examples/bundle_adjuster.cc
index e7b154e..582ae2e 100644
--- a/examples/bundle_adjuster.cc
+++ b/examples/bundle_adjuster.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -55,7 +55,9 @@
#include <cmath>
#include <cstdio>
#include <cstdlib>
+#include <memory>
#include <string>
+#include <thread>
#include <vector>
#include "bal_problem.h"
@@ -80,34 +82,45 @@
"automatic, cameras, points, cameras,points, points,cameras");
DEFINE_string(linear_solver, "sparse_schur", "Options are: "
- "sparse_schur, dense_schur, iterative_schur, sparse_normal_cholesky, "
- "dense_qr, dense_normal_cholesky and cgnr.");
+ "sparse_schur, dense_schur, iterative_schur, "
+ "sparse_normal_cholesky, dense_qr, dense_normal_cholesky, "
+ "and cgnr.");
DEFINE_bool(explicit_schur_complement, false, "If using ITERATIVE_SCHUR "
"then explicitly compute the Schur complement.");
DEFINE_string(preconditioner, "jacobi", "Options are: "
- "identity, jacobi, schur_jacobi, cluster_jacobi, "
+ "identity, jacobi, schur_jacobi, schur_power_series_expansion, cluster_jacobi, "
"cluster_tridiagonal.");
DEFINE_string(visibility_clustering, "canonical_views",
"single_linkage, canonical_views");
+DEFINE_bool(use_spse_initialization, false,
+ "Use power series expansion to initialize the solution in the ITERATIVE_SCHUR linear solver.");
DEFINE_string(sparse_linear_algebra_library, "suite_sparse",
- "Options are: suite_sparse and cx_sparse.");
+ "Options are: suite_sparse, accelerate_sparse, eigen_sparse, and "
+ "cuda_sparse.");
DEFINE_string(dense_linear_algebra_library, "eigen",
- "Options are: eigen and lapack.");
-DEFINE_string(ordering, "automatic", "Options are: automatic, user.");
+ "Options are: eigen, lapack, and cuda");
+DEFINE_string(ordering_type, "amd", "Options are: amd, nesdis");
+DEFINE_string(linear_solver_ordering, "user",
+ "Options are: automatic and user");
DEFINE_bool(use_quaternions, false, "If true, uses quaternions to represent "
"rotations. If false, angle axis is used.");
-DEFINE_bool(use_local_parameterization, false, "For quaternions, use a local "
- "parameterization.");
+DEFINE_bool(use_manifolds, false, "For quaternions, use a manifold.");
DEFINE_bool(robustify, false, "Use a robust loss function.");
DEFINE_double(eta, 1e-2, "Default value for eta. Eta determines the "
"accuracy of each linear solve of the truncated newton step. "
"Changing this parameter can affect solve performance.");
-DEFINE_int32(num_threads, 1, "Number of threads.");
+DEFINE_int32(num_threads, -1, "Number of threads. -1 = std::thread::hardware_concurrency.");
DEFINE_int32(num_iterations, 5, "Number of iterations.");
+DEFINE_int32(max_linear_solver_iterations, 500, "Maximum number of iterations"
+ " for solution of linear system.");
+DEFINE_double(spse_tolerance, 0.1,
+ "Tolerance to reach during the iterations of power series expansion initialization or preconditioning.");
+DEFINE_int32(max_num_spse_iterations, 5,
+ "Maximum number of iterations for power series expansion initialization or preconditioning.");
DEFINE_double(max_solver_time, 1e32, "Maximum solve time in seconds.");
DEFINE_bool(nonmonotonic_steps, false, "Trust region algorithm can use"
" nonmonotic steps.");
@@ -120,7 +133,7 @@
"perturbation.");
DEFINE_int32(random_seed, 38401, "Random seed used to set the state "
"of the pseudo random number generator used to generate "
- "the pertubations.");
+ "the perturbations.");
DEFINE_bool(line_search, false, "Use a line search instead of trust region "
"algorithm.");
DEFINE_bool(mixed_precision_solves, false, "Use mixed precision solves.");
@@ -128,29 +141,41 @@
DEFINE_string(initial_ply, "", "Export the BAL file data as a PLY file.");
DEFINE_string(final_ply, "", "Export the refined BAL file data as a PLY "
"file.");
-
// clang-format on
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
namespace {
void SetLinearSolver(Solver::Options* options) {
- CHECK(StringToLinearSolverType(FLAGS_linear_solver,
+ CHECK(StringToLinearSolverType(CERES_GET_FLAG(FLAGS_linear_solver),
&options->linear_solver_type));
- CHECK(StringToPreconditionerType(FLAGS_preconditioner,
+ CHECK(StringToPreconditionerType(CERES_GET_FLAG(FLAGS_preconditioner),
&options->preconditioner_type));
- CHECK(StringToVisibilityClusteringType(FLAGS_visibility_clustering,
- &options->visibility_clustering_type));
+ CHECK(StringToVisibilityClusteringType(
+ CERES_GET_FLAG(FLAGS_visibility_clustering),
+ &options->visibility_clustering_type));
CHECK(StringToSparseLinearAlgebraLibraryType(
- FLAGS_sparse_linear_algebra_library,
+ CERES_GET_FLAG(FLAGS_sparse_linear_algebra_library),
&options->sparse_linear_algebra_library_type));
CHECK(StringToDenseLinearAlgebraLibraryType(
- FLAGS_dense_linear_algebra_library,
+ CERES_GET_FLAG(FLAGS_dense_linear_algebra_library),
&options->dense_linear_algebra_library_type));
- options->use_explicit_schur_complement = FLAGS_explicit_schur_complement;
- options->use_mixed_precision_solves = FLAGS_mixed_precision_solves;
- options->max_num_refinement_iterations = FLAGS_max_num_refinement_iterations;
+ CHECK(
+ StringToLinearSolverOrderingType(CERES_GET_FLAG(FLAGS_ordering_type),
+ &options->linear_solver_ordering_type));
+ options->use_explicit_schur_complement =
+ CERES_GET_FLAG(FLAGS_explicit_schur_complement);
+ options->use_mixed_precision_solves =
+ CERES_GET_FLAG(FLAGS_mixed_precision_solves);
+ options->max_num_refinement_iterations =
+ CERES_GET_FLAG(FLAGS_max_num_refinement_iterations);
+ options->max_linear_solver_iterations =
+ CERES_GET_FLAG(FLAGS_max_linear_solver_iterations);
+ options->use_spse_initialization =
+ CERES_GET_FLAG(FLAGS_use_spse_initialization);
+ options->spse_tolerance = CERES_GET_FLAG(FLAGS_spse_tolerance);
+ options->max_num_spse_iterations =
+ CERES_GET_FLAG(FLAGS_max_num_spse_iterations);
}
void SetOrdering(BALProblem* bal_problem, Solver::Options* options) {
@@ -163,23 +188,27 @@
double* cameras = bal_problem->mutable_cameras();
if (options->use_inner_iterations) {
- if (FLAGS_blocks_for_inner_iterations == "cameras") {
+ if (CERES_GET_FLAG(FLAGS_blocks_for_inner_iterations) == "cameras") {
LOG(INFO) << "Camera blocks for inner iterations";
- options->inner_iteration_ordering.reset(new ParameterBlockOrdering);
+ options->inner_iteration_ordering =
+ std::make_shared<ParameterBlockOrdering>();
for (int i = 0; i < num_cameras; ++i) {
options->inner_iteration_ordering->AddElementToGroup(
cameras + camera_block_size * i, 0);
}
- } else if (FLAGS_blocks_for_inner_iterations == "points") {
+ } else if (CERES_GET_FLAG(FLAGS_blocks_for_inner_iterations) == "points") {
LOG(INFO) << "Point blocks for inner iterations";
- options->inner_iteration_ordering.reset(new ParameterBlockOrdering);
+ options->inner_iteration_ordering =
+ std::make_shared<ParameterBlockOrdering>();
for (int i = 0; i < num_points; ++i) {
options->inner_iteration_ordering->AddElementToGroup(
points + point_block_size * i, 0);
}
- } else if (FLAGS_blocks_for_inner_iterations == "cameras,points") {
+ } else if (CERES_GET_FLAG(FLAGS_blocks_for_inner_iterations) ==
+ "cameras,points") {
LOG(INFO) << "Camera followed by point blocks for inner iterations";
- options->inner_iteration_ordering.reset(new ParameterBlockOrdering);
+ options->inner_iteration_ordering =
+ std::make_shared<ParameterBlockOrdering>();
for (int i = 0; i < num_cameras; ++i) {
options->inner_iteration_ordering->AddElementToGroup(
cameras + camera_block_size * i, 0);
@@ -188,9 +217,11 @@
options->inner_iteration_ordering->AddElementToGroup(
points + point_block_size * i, 1);
}
- } else if (FLAGS_blocks_for_inner_iterations == "points,cameras") {
+ } else if (CERES_GET_FLAG(FLAGS_blocks_for_inner_iterations) ==
+ "points,cameras") {
LOG(INFO) << "Point followed by camera blocks for inner iterations";
- options->inner_iteration_ordering.reset(new ParameterBlockOrdering);
+ options->inner_iteration_ordering =
+ std::make_shared<ParameterBlockOrdering>();
for (int i = 0; i < num_cameras; ++i) {
options->inner_iteration_ordering->AddElementToGroup(
cameras + camera_block_size * i, 1);
@@ -199,11 +230,12 @@
options->inner_iteration_ordering->AddElementToGroup(
points + point_block_size * i, 0);
}
- } else if (FLAGS_blocks_for_inner_iterations == "automatic") {
+ } else if (CERES_GET_FLAG(FLAGS_blocks_for_inner_iterations) ==
+ "automatic") {
LOG(INFO) << "Choosing automatic blocks for inner iterations";
} else {
LOG(FATAL) << "Unknown block type for inner iterations: "
- << FLAGS_blocks_for_inner_iterations;
+ << CERES_GET_FLAG(FLAGS_blocks_for_inner_iterations);
}
}
@@ -213,46 +245,54 @@
// ITERATIVE_SCHUR solvers make use of this specialized
// structure.
//
- // This can either be done by specifying Options::ordering_type =
- // ceres::SCHUR, in which case Ceres will automatically determine
- // the right ParameterBlock ordering, or by manually specifying a
- // suitable ordering vector and defining
- // Options::num_eliminate_blocks.
- if (FLAGS_ordering == "automatic") {
- return;
+ // This can either be done by specifying a
+ // Options::linear_solver_ordering or having Ceres figure it out
+ // automatically using a greedy maximum independent set algorithm.
+ if (CERES_GET_FLAG(FLAGS_linear_solver_ordering) == "user") {
+ auto* ordering = new ceres::ParameterBlockOrdering;
+
+ // The points come before the cameras.
+ for (int i = 0; i < num_points; ++i) {
+ ordering->AddElementToGroup(points + point_block_size * i, 0);
+ }
+
+ for (int i = 0; i < num_cameras; ++i) {
+ // When using axis-angle, there is a single parameter block for
+ // the entire camera.
+ ordering->AddElementToGroup(cameras + camera_block_size * i, 1);
+ }
+
+ options->linear_solver_ordering.reset(ordering);
}
-
- ceres::ParameterBlockOrdering* ordering = new ceres::ParameterBlockOrdering;
-
- // The points come before the cameras.
- for (int i = 0; i < num_points; ++i) {
- ordering->AddElementToGroup(points + point_block_size * i, 0);
- }
-
- for (int i = 0; i < num_cameras; ++i) {
- // When using axis-angle, there is a single parameter block for
- // the entire camera.
- ordering->AddElementToGroup(cameras + camera_block_size * i, 1);
- }
-
- options->linear_solver_ordering.reset(ordering);
}
void SetMinimizerOptions(Solver::Options* options) {
- options->max_num_iterations = FLAGS_num_iterations;
+ options->max_num_iterations = CERES_GET_FLAG(FLAGS_num_iterations);
options->minimizer_progress_to_stdout = true;
- options->num_threads = FLAGS_num_threads;
- options->eta = FLAGS_eta;
- options->max_solver_time_in_seconds = FLAGS_max_solver_time;
- options->use_nonmonotonic_steps = FLAGS_nonmonotonic_steps;
- if (FLAGS_line_search) {
+ if (CERES_GET_FLAG(FLAGS_num_threads) == -1) {
+ const int num_available_threads =
+ static_cast<int>(std::thread::hardware_concurrency());
+ if (num_available_threads > 0) {
+ options->num_threads = num_available_threads;
+ }
+ } else {
+ options->num_threads = CERES_GET_FLAG(FLAGS_num_threads);
+ }
+ CHECK_GE(options->num_threads, 1);
+
+ options->eta = CERES_GET_FLAG(FLAGS_eta);
+ options->max_solver_time_in_seconds = CERES_GET_FLAG(FLAGS_max_solver_time);
+ options->use_nonmonotonic_steps = CERES_GET_FLAG(FLAGS_nonmonotonic_steps);
+ if (CERES_GET_FLAG(FLAGS_line_search)) {
options->minimizer_type = ceres::LINE_SEARCH;
}
- CHECK(StringToTrustRegionStrategyType(FLAGS_trust_region_strategy,
- &options->trust_region_strategy_type));
- CHECK(StringToDoglegType(FLAGS_dogleg, &options->dogleg_type));
- options->use_inner_iterations = FLAGS_inner_iterations;
+ CHECK(StringToTrustRegionStrategyType(
+ CERES_GET_FLAG(FLAGS_trust_region_strategy),
+ &options->trust_region_strategy_type));
+ CHECK(
+ StringToDoglegType(CERES_GET_FLAG(FLAGS_dogleg), &options->dogleg_type));
+ options->use_inner_iterations = CERES_GET_FLAG(FLAGS_inner_iterations);
}
void SetSolverOptionsFromFlags(BALProblem* bal_problem,
@@ -276,16 +316,17 @@
CostFunction* cost_function;
// Each Residual block takes a point and a camera as input and
// outputs a 2 dimensional residual.
- cost_function = (FLAGS_use_quaternions)
+ cost_function = (CERES_GET_FLAG(FLAGS_use_quaternions))
? SnavelyReprojectionErrorWithQuaternions::Create(
observations[2 * i + 0], observations[2 * i + 1])
: SnavelyReprojectionError::Create(
observations[2 * i + 0], observations[2 * i + 1]);
// If enabled use Huber's loss function.
- LossFunction* loss_function = FLAGS_robustify ? new HuberLoss(1.0) : NULL;
+ LossFunction* loss_function =
+ CERES_GET_FLAG(FLAGS_robustify) ? new HuberLoss(1.0) : nullptr;
- // Each observation correponds to a pair of a camera and a point
+ // Each observation corresponds to a pair of a camera and a point
// which are identified by camera_index()[i] and point_index()[i]
// respectively.
double* camera =
@@ -294,60 +335,60 @@
problem->AddResidualBlock(cost_function, loss_function, camera, point);
}
- if (FLAGS_use_quaternions && FLAGS_use_local_parameterization) {
- LocalParameterization* camera_parameterization =
- new ProductParameterization(new QuaternionParameterization(),
- new IdentityParameterization(6));
+ if (CERES_GET_FLAG(FLAGS_use_quaternions) &&
+ CERES_GET_FLAG(FLAGS_use_manifolds)) {
+ Manifold* camera_manifold =
+ new ProductManifold<QuaternionManifold, EuclideanManifold<6>>{};
for (int i = 0; i < bal_problem->num_cameras(); ++i) {
- problem->SetParameterization(cameras + camera_block_size * i,
- camera_parameterization);
+ problem->SetManifold(cameras + camera_block_size * i, camera_manifold);
}
}
}
void SolveProblem(const char* filename) {
- BALProblem bal_problem(filename, FLAGS_use_quaternions);
+ BALProblem bal_problem(filename, CERES_GET_FLAG(FLAGS_use_quaternions));
- if (!FLAGS_initial_ply.empty()) {
- bal_problem.WriteToPLYFile(FLAGS_initial_ply);
+ if (!CERES_GET_FLAG(FLAGS_initial_ply).empty()) {
+ bal_problem.WriteToPLYFile(CERES_GET_FLAG(FLAGS_initial_ply));
}
Problem problem;
- srand(FLAGS_random_seed);
+ srand(CERES_GET_FLAG(FLAGS_random_seed));
bal_problem.Normalize();
- bal_problem.Perturb(
- FLAGS_rotation_sigma, FLAGS_translation_sigma, FLAGS_point_sigma);
+ bal_problem.Perturb(CERES_GET_FLAG(FLAGS_rotation_sigma),
+ CERES_GET_FLAG(FLAGS_translation_sigma),
+ CERES_GET_FLAG(FLAGS_point_sigma));
BuildProblem(&bal_problem, &problem);
Solver::Options options;
SetSolverOptionsFromFlags(&bal_problem, &options);
options.gradient_tolerance = 1e-16;
options.function_tolerance = 1e-16;
+ options.parameter_tolerance = 1e-16;
Solver::Summary summary;
Solve(options, &problem, &summary);
std::cout << summary.FullReport() << "\n";
- if (!FLAGS_final_ply.empty()) {
- bal_problem.WriteToPLYFile(FLAGS_final_ply);
+ if (!CERES_GET_FLAG(FLAGS_final_ply).empty()) {
+ bal_problem.WriteToPLYFile(CERES_GET_FLAG(FLAGS_final_ply));
}
}
} // namespace
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
int main(int argc, char** argv) {
GFLAGS_NAMESPACE::ParseCommandLineFlags(&argc, &argv, true);
google::InitGoogleLogging(argv[0]);
- if (FLAGS_input.empty()) {
+ if (CERES_GET_FLAG(FLAGS_input).empty()) {
LOG(ERROR) << "Usage: bundle_adjuster --input=bal_problem";
return 1;
}
- CHECK(FLAGS_use_quaternions || !FLAGS_use_local_parameterization)
- << "--use_local_parameterization can only be used with "
- << "--use_quaternions.";
- ceres::examples::SolveProblem(FLAGS_input.c_str());
+ CHECK(CERES_GET_FLAG(FLAGS_use_quaternions) ||
+ !CERES_GET_FLAG(FLAGS_use_manifolds))
+ << "--use_manifolds can only be used with --use_quaternions.";
+ ceres::examples::SolveProblem(CERES_GET_FLAG(FLAGS_input).c_str());
return 0;
}
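The new power series expansion flags above map onto Solver::Options fields that SetLinearSolver now forwards. A minimal sketch of the combination they are meant to exercise (option names are the ones used in this diff, values mirror the flag defaults; this is illustrative, not part of the patch):

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::ITERATIVE_SCHUR;
  options.preconditioner_type = ceres::SCHUR_POWER_SERIES_EXPANSION;
  // Optionally initialize each inner linear solve with a power series
  // expansion, stopping at the given tolerance or iteration cap.
  options.use_spse_initialization = true;   // --use_spse_initialization
  options.spse_tolerance = 0.1;             // --spse_tolerance
  options.max_num_spse_iterations = 5;      // --max_num_spse_iterations
  options.max_linear_solver_iterations = 500;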
diff --git a/examples/circle_fit.cc b/examples/circle_fit.cc
index c542475..fd848d9 100644
--- a/examples/circle_fit.cc
+++ b/examples/circle_fit.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -57,14 +57,6 @@
#include "gflags/gflags.h"
#include "glog/logging.h"
-using ceres::AutoDiffCostFunction;
-using ceres::CauchyLoss;
-using ceres::CostFunction;
-using ceres::LossFunction;
-using ceres::Problem;
-using ceres::Solve;
-using ceres::Solver;
-
DEFINE_double(robust_threshold,
0.0,
"Robust loss parameter. Set to 0 for normal squared error (no "
@@ -128,21 +120,21 @@
// Parameterize r as m^2 so that it can't be negative.
double m = sqrt(r);
- Problem problem;
+ ceres::Problem problem;
// Configure the loss function.
- LossFunction* loss = NULL;
- if (FLAGS_robust_threshold) {
- loss = new CauchyLoss(FLAGS_robust_threshold);
+ ceres::LossFunction* loss = nullptr;
+ if (CERES_GET_FLAG(FLAGS_robust_threshold)) {
+ loss = new ceres::CauchyLoss(CERES_GET_FLAG(FLAGS_robust_threshold));
}
// Add the residuals.
double xx, yy;
int num_points = 0;
while (scanf("%lf %lf\n", &xx, &yy) == 2) {
- CostFunction* cost =
- new AutoDiffCostFunction<DistanceFromCircleCost, 1, 1, 1, 1>(
- new DistanceFromCircleCost(xx, yy));
+ ceres::CostFunction* cost =
+ new ceres::AutoDiffCostFunction<DistanceFromCircleCost, 1, 1, 1, 1>(xx,
+ yy);
problem.AddResidualBlock(cost, loss, &x, &y, &m);
num_points++;
}
@@ -150,11 +142,11 @@
std::cout << "Got " << num_points << " points.\n";
// Build and solve the problem.
- Solver::Options options;
+ ceres::Solver::Options options;
options.max_num_iterations = 500;
options.linear_solver_type = ceres::DENSE_QR;
- Solver::Summary summary;
- Solve(options, &problem, &summary);
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
// Recover r from m.
r = m * m;
diff --git a/examples/curve_fitting.cc b/examples/curve_fitting.cc
index fc7ff94..105402e 100644
--- a/examples/curve_fitting.cc
+++ b/examples/curve_fitting.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -27,16 +27,13 @@
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
+//
+// This example fits the curve f(x;m,c) = e^(m * x + c) to data, minimizing the
+// sum of squared errors.
#include "ceres/ceres.h"
#include "glog/logging.h"
-using ceres::AutoDiffCostFunction;
-using ceres::CostFunction;
-using ceres::Problem;
-using ceres::Solve;
-using ceres::Solver;
-
// Data generated using the following octave code.
// randn('seed', 23497);
// m = 0.3;
@@ -137,28 +134,30 @@
int main(int argc, char** argv) {
google::InitGoogleLogging(argv[0]);
- double m = 0.0;
- double c = 0.0;
+ const double initial_m = 0.0;
+ const double initial_c = 0.0;
+ double m = initial_m;
+ double c = initial_c;
- Problem problem;
+ ceres::Problem problem;
for (int i = 0; i < kNumObservations; ++i) {
problem.AddResidualBlock(
- new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
- new ExponentialResidual(data[2 * i], data[2 * i + 1])),
- NULL,
+ new ceres::AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
+ data[2 * i], data[2 * i + 1]),
+ nullptr,
&m,
&c);
}
- Solver::Options options;
+ ceres::Solver::Options options;
options.max_num_iterations = 25;
options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = true;
- Solver::Summary summary;
- Solve(options, &problem, &summary);
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";
- std::cout << "Initial m: " << 0.0 << " c: " << 0.0 << "\n";
+ std::cout << "Initial m: " << initial_m << " c: " << initial_c << "\n";
std::cout << "Final m: " << m << " c: " << c << "\n";
return 0;
}
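Written out, the problem assembled by the loop above is the nonlinear least squares fit

  \min_{m, c} \; \frac{1}{2} \sum_{i=1}^{67} \left( y_i - e^{m x_i + c} \right)^2,

where (x_i, y_i) are the rows of the data table and each ExponentialResidual contributes one residual of the form r_i = y_i - e^{m x_i + c} (the 1/2 factor is the one Ceres applies to every objective).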
diff --git a/examples/denoising.cc b/examples/denoising.cc
index 61ea2c6..dc13d19 100644
--- a/examples/denoising.cc
+++ b/examples/denoising.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,7 +33,7 @@
// Note that for good denoising results the weighting between the data term
// and the Fields of Experts term needs to be adjusted. This is discussed
// in [1]. This program assumes Gaussian noise. The noise model can be changed
-// by substituing another function for QuadraticCostFunction.
+// by substituting another function for QuadraticCostFunction.
//
// [1] S. Roth and M.J. Black. "Fields of Experts." International Journal of
// Computer Vision, 82(2):205--229, 2009.
@@ -102,8 +102,7 @@
"The fraction of residual blocks to use for the"
" subset preconditioner.");
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
namespace {
// This cost function is used to build the data term.
@@ -113,12 +112,12 @@
class QuadraticCostFunction : public ceres::SizedCostFunction<1, 1> {
public:
QuadraticCostFunction(double a, double b) : sqrta_(std::sqrt(a)), b_(b) {}
- virtual bool Evaluate(double const* const* parameters,
- double* residuals,
- double** jacobians) const {
+ bool Evaluate(double const* const* parameters,
+ double* residuals,
+ double** jacobians) const override {
const double x = parameters[0][0];
residuals[0] = sqrta_ * (x - b_);
- if (jacobians != NULL && jacobians[0] != NULL) {
+ if (jacobians != nullptr && jacobians[0] != nullptr) {
jacobians[0][0] = sqrta_;
}
return true;
@@ -134,13 +133,14 @@
Problem* problem,
PGMImage<double>* solution) {
// Create the data term
- CHECK_GT(FLAGS_sigma, 0.0);
- const double coefficient = 1 / (2.0 * FLAGS_sigma * FLAGS_sigma);
+ CHECK_GT(CERES_GET_FLAG(FLAGS_sigma), 0.0);
+ const double coefficient =
+ 1 / (2.0 * CERES_GET_FLAG(FLAGS_sigma) * CERES_GET_FLAG(FLAGS_sigma));
for (int index = 0; index < image.NumPixels(); ++index) {
ceres::CostFunction* cost_function = new QuadraticCostFunction(
coefficient, image.PixelFromLinearIndex(index));
problem->AddResidualBlock(
- cost_function, NULL, solution->MutablePixelFromLinearIndex(index));
+ cost_function, nullptr, solution->MutablePixelFromLinearIndex(index));
}
// Create Ceres cost and loss functions for regularization. One is needed for
@@ -175,31 +175,35 @@
}
void SetLinearSolver(Solver::Options* options) {
- CHECK(StringToLinearSolverType(FLAGS_linear_solver,
+ CHECK(StringToLinearSolverType(CERES_GET_FLAG(FLAGS_linear_solver),
&options->linear_solver_type));
- CHECK(StringToPreconditionerType(FLAGS_preconditioner,
+ CHECK(StringToPreconditionerType(CERES_GET_FLAG(FLAGS_preconditioner),
&options->preconditioner_type));
CHECK(StringToSparseLinearAlgebraLibraryType(
- FLAGS_sparse_linear_algebra_library,
+ CERES_GET_FLAG(FLAGS_sparse_linear_algebra_library),
&options->sparse_linear_algebra_library_type));
- options->use_mixed_precision_solves = FLAGS_mixed_precision_solves;
- options->max_num_refinement_iterations = FLAGS_max_num_refinement_iterations;
+ options->use_mixed_precision_solves =
+ CERES_GET_FLAG(FLAGS_mixed_precision_solves);
+ options->max_num_refinement_iterations =
+ CERES_GET_FLAG(FLAGS_max_num_refinement_iterations);
}
void SetMinimizerOptions(Solver::Options* options) {
- options->max_num_iterations = FLAGS_num_iterations;
+ options->max_num_iterations = CERES_GET_FLAG(FLAGS_num_iterations);
options->minimizer_progress_to_stdout = true;
- options->num_threads = FLAGS_num_threads;
- options->eta = FLAGS_eta;
- options->use_nonmonotonic_steps = FLAGS_nonmonotonic_steps;
- if (FLAGS_line_search) {
+ options->num_threads = CERES_GET_FLAG(FLAGS_num_threads);
+ options->eta = CERES_GET_FLAG(FLAGS_eta);
+ options->use_nonmonotonic_steps = CERES_GET_FLAG(FLAGS_nonmonotonic_steps);
+ if (CERES_GET_FLAG(FLAGS_line_search)) {
options->minimizer_type = ceres::LINE_SEARCH;
}
- CHECK(StringToTrustRegionStrategyType(FLAGS_trust_region_strategy,
- &options->trust_region_strategy_type));
- CHECK(StringToDoglegType(FLAGS_dogleg, &options->dogleg_type));
- options->use_inner_iterations = FLAGS_inner_iterations;
+ CHECK(StringToTrustRegionStrategyType(
+ CERES_GET_FLAG(FLAGS_trust_region_strategy),
+ &options->trust_region_strategy_type));
+ CHECK(
+ StringToDoglegType(CERES_GET_FLAG(FLAGS_dogleg), &options->dogleg_type));
+ options->use_inner_iterations = CERES_GET_FLAG(FLAGS_inner_iterations);
}
// Solves the FoE problem using Ceres and post-processes it to make sure the
@@ -226,7 +230,7 @@
std::default_random_engine engine;
std::uniform_real_distribution<> distribution(0, 1); // range 0 - 1
for (auto residual_block : residual_blocks) {
- if (distribution(engine) <= FLAGS_subset_fraction) {
+ if (distribution(engine) <= CERES_GET_FLAG(FLAGS_subset_fraction)) {
options.residual_blocks_for_subset_preconditioner.insert(
residual_block);
}
@@ -247,20 +251,19 @@
}
} // namespace
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
int main(int argc, char** argv) {
using namespace ceres::examples;
GFLAGS_NAMESPACE::ParseCommandLineFlags(&argc, &argv, true);
google::InitGoogleLogging(argv[0]);
- if (FLAGS_input.empty()) {
+ if (CERES_GET_FLAG(FLAGS_input).empty()) {
std::cerr << "Please provide an image file name using -input.\n";
return 1;
}
- if (FLAGS_foe_file.empty()) {
+ if (CERES_GET_FLAG(FLAGS_foe_file).empty()) {
std::cerr << "Please provide a Fields of Experts file name using -foe_file."
"\n";
return 1;
@@ -268,15 +271,16 @@
// Load the Fields of Experts filters from file.
FieldsOfExperts foe;
- if (!foe.LoadFromFile(FLAGS_foe_file)) {
- std::cerr << "Loading \"" << FLAGS_foe_file << "\" failed.\n";
+ if (!foe.LoadFromFile(CERES_GET_FLAG(FLAGS_foe_file))) {
+ std::cerr << "Loading \"" << CERES_GET_FLAG(FLAGS_foe_file)
+ << "\" failed.\n";
return 2;
}
// Read the images
- PGMImage<double> image(FLAGS_input);
+ PGMImage<double> image(CERES_GET_FLAG(FLAGS_input));
if (image.width() == 0) {
- std::cerr << "Reading \"" << FLAGS_input << "\" failed.\n";
+ std::cerr << "Reading \"" << CERES_GET_FLAG(FLAGS_input) << "\" failed.\n";
return 3;
}
PGMImage<double> solution(image.width(), image.height());
@@ -287,9 +291,9 @@
SolveProblem(&problem, &solution);
- if (!FLAGS_output.empty()) {
- CHECK(solution.WriteToFile(FLAGS_output))
- << "Writing \"" << FLAGS_output << "\" failed.";
+ if (!CERES_GET_FLAG(FLAGS_output).empty()) {
+ CHECK(solution.WriteToFile(CERES_GET_FLAG(FLAGS_output)))
+ << "Writing \"" << CERES_GET_FLAG(FLAGS_output) << "\" failed.";
}
return 0;
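For the data term built above: QuadraticCostFunction(a, b) stores sqrt(a) and produces the residual r = sqrt(a) (x - b). With a = 1 / (2 \sigma^2) and b the observed pixel value, the squared residual is

  r^2 = \frac{(x - b)^2}{2 \sigma^2},

i.e. the familiar Gaussian data term pulling each solution pixel x toward its noisy observation, with the strength controlled by --sigma.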
diff --git a/examples/ellipse_approximation.cc b/examples/ellipse_approximation.cc
index 74782f4..6fa8f1c 100644
--- a/examples/ellipse_approximation.cc
+++ b/examples/ellipse_approximation.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,6 +36,7 @@
// dense but dynamically sparse.
#include <cmath>
+#include <utility>
#include <vector>
#include "ceres/ceres.h"
@@ -275,8 +276,8 @@
class PointToLineSegmentContourCostFunction : public ceres::CostFunction {
public:
PointToLineSegmentContourCostFunction(const int num_segments,
- const Eigen::Vector2d& y)
- : num_segments_(num_segments), y_(y) {
+ Eigen::Vector2d y)
+ : num_segments_(num_segments), y_(std::move(y)) {
// The first parameter is the preimage position.
mutable_parameter_block_sizes()->push_back(1);
// The next parameters are the control points for the line segment contour.
@@ -286,9 +287,9 @@
set_num_residuals(2);
}
- virtual bool Evaluate(const double* const* x,
- double* residuals,
- double** jacobians) const {
+ bool Evaluate(const double* const* x,
+ double* residuals,
+ double** jacobians) const override {
// Convert the preimage position `t` into a segment index `i0` and the
// line segment interpolation parameter `u`. `i1` is the index of the next
// control point.
@@ -302,16 +303,16 @@
residuals[0] = y_[0] - ((1.0 - u) * x[1 + i0][0] + u * x[1 + i1][0]);
residuals[1] = y_[1] - ((1.0 - u) * x[1 + i0][1] + u * x[1 + i1][1]);
- if (jacobians == NULL) {
+ if (jacobians == nullptr) {
return true;
}
- if (jacobians[0] != NULL) {
+ if (jacobians[0] != nullptr) {
jacobians[0][0] = x[1 + i0][0] - x[1 + i1][0];
jacobians[0][1] = x[1 + i0][1] - x[1 + i1][1];
}
for (int i = 0; i < num_segments_; ++i) {
- if (jacobians[i + 1] != NULL) {
+ if (jacobians[i + 1] != nullptr) {
ceres::MatrixRef(jacobians[i + 1], 2, 2).setZero();
if (i == i0) {
jacobians[i + 1][0] = -(1.0 - u);
@@ -353,7 +354,7 @@
static ceres::CostFunction* Create(const double sqrt_weight) {
return new ceres::AutoDiffCostFunction<EuclideanDistanceFunctor, 2, 2, 2>(
- new EuclideanDistanceFunctor(sqrt_weight));
+ sqrt_weight);
}
private:
@@ -385,8 +386,8 @@
// Eigen::MatrixXd is column major so we define our own MatrixXd which is
// row major. Eigen::VectorXd can be used directly.
- typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>
- MatrixXd;
+ using MatrixXd =
+ Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;
using Eigen::VectorXd;
// `X` is the matrix of control points which make up the contour of line
@@ -395,7 +396,7 @@
//
// Initialize `X` to points on the unit circle.
VectorXd w(num_segments + 1);
- w.setLinSpaced(num_segments + 1, 0.0, 2.0 * M_PI);
+ w.setLinSpaced(num_segments + 1, 0.0, 2.0 * ceres::constants::pi);
w.conservativeResize(num_segments);
MatrixXd X(num_segments, 2);
X.col(0) = w.array().cos();
@@ -404,9 +405,9 @@
// Each data point has an associated preimage position on the line segment
// contour. For each data point we initialize the preimage positions to
// the index of the closest control point.
- const int num_observations = kY.rows();
+ const int64_t num_observations = kY.rows();
VectorXd t(num_observations);
- for (int i = 0; i < num_observations; ++i) {
+ for (int64_t i = 0; i < num_observations; ++i) {
(X.rowwise() - kY.row(i)).rowwise().squaredNorm().minCoeff(&t[i]);
}
@@ -415,7 +416,7 @@
// For each data point add a residual which measures its distance to its
// corresponding position on the line segment contour.
std::vector<double*> parameter_blocks(1 + num_segments);
- parameter_blocks[0] = NULL;
+ parameter_blocks[0] = nullptr;
for (int i = 0; i < num_segments; ++i) {
parameter_blocks[i + 1] = X.data() + 2 * i;
}
@@ -423,7 +424,7 @@
parameter_blocks[0] = &t[i];
problem.AddResidualBlock(
PointToLineSegmentContourCostFunction::Create(num_segments, kY.row(i)),
- NULL,
+ nullptr,
parameter_blocks);
}
@@ -431,7 +432,7 @@
for (int i = 0; i < num_segments; ++i) {
problem.AddResidualBlock(
EuclideanDistanceFunctor::Create(sqrt(regularization_weight)),
- NULL,
+ nullptr,
X.data() + 2 * i,
X.data() + 2 * ((i + 1) % num_segments));
}
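The point-to-contour cost above, restated: writing the preimage position as t = i_0 + u with u \in [0, 1) and i_1 the index of the next control point, the residual is

  r(t, X) = y - \big[ (1 - u)\, X_{i_0} + u\, X_{i_1} \big] \in \mathbb{R}^2,

so \partial r / \partial t = X_{i_0} - X_{i_1} (since \partial u / \partial t = 1), while the only nonzero Jacobian blocks with respect to the control points are \partial r / \partial X_{i_0} = -(1 - u) I_2 and \partial r / \partial X_{i_1} = -u I_2. This is exactly what Evaluate fills into jacobians, leaving every other control point block zero.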
diff --git a/examples/evaluation_callback_example.cc b/examples/evaluation_callback_example.cc
new file mode 100644
index 0000000..6dbf932
--- /dev/null
+++ b/examples/evaluation_callback_example.cc
@@ -0,0 +1,257 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+//
+// This example illustrates the use of the EvaluationCallback, which can be used
+// to compute the residuals and Jacobians outside Ceres with high performance
+// (in this case using Eigen's vectorized code); the CostFunctions then simply
+// copy the precomputed residuals and Jacobians and pass them on to Ceres
+// Solver.
+//
+// The results of running this example should be identical to the results
+// obtained by running curve_fitting.cc. The only difference between the two
+// examples is how the residuals and Jacobians are computed.
+//
+// The observant reader will note that, both here and in curve_fitting.cc,
+// instead of creating one ResidualBlock for each observation one could use a
+// single ResidualBlock/CostFunction for the entire problem. We keep one
+// residual per observation because that is what is needed if and when we
+// introduce a loss function, which is what we do in robust_curve_fitting.cc.
+
+#include <iostream>
+
+#include "Eigen/Core"
+#include "ceres/ceres.h"
+#include "glog/logging.h"
+
+// Data generated using the following octave code.
+// randn('seed', 23497);
+// m = 0.3;
+// c = 0.1;
+// x=[0:0.075:5];
+// y = exp(m * x + c);
+// noise = randn(size(x)) * 0.2;
+// y_observed = y + noise;
+// data = [x', y_observed'];
+
+const int kNumObservations = 67;
+// clang-format off
+const double data[] = {
+ 0.000000e+00, 1.133898e+00,
+ 7.500000e-02, 1.334902e+00,
+ 1.500000e-01, 1.213546e+00,
+ 2.250000e-01, 1.252016e+00,
+ 3.000000e-01, 1.392265e+00,
+ 3.750000e-01, 1.314458e+00,
+ 4.500000e-01, 1.472541e+00,
+ 5.250000e-01, 1.536218e+00,
+ 6.000000e-01, 1.355679e+00,
+ 6.750000e-01, 1.463566e+00,
+ 7.500000e-01, 1.490201e+00,
+ 8.250000e-01, 1.658699e+00,
+ 9.000000e-01, 1.067574e+00,
+ 9.750000e-01, 1.464629e+00,
+ 1.050000e+00, 1.402653e+00,
+ 1.125000e+00, 1.713141e+00,
+ 1.200000e+00, 1.527021e+00,
+ 1.275000e+00, 1.702632e+00,
+ 1.350000e+00, 1.423899e+00,
+ 1.425000e+00, 1.543078e+00,
+ 1.500000e+00, 1.664015e+00,
+ 1.575000e+00, 1.732484e+00,
+ 1.650000e+00, 1.543296e+00,
+ 1.725000e+00, 1.959523e+00,
+ 1.800000e+00, 1.685132e+00,
+ 1.875000e+00, 1.951791e+00,
+ 1.950000e+00, 2.095346e+00,
+ 2.025000e+00, 2.361460e+00,
+ 2.100000e+00, 2.169119e+00,
+ 2.175000e+00, 2.061745e+00,
+ 2.250000e+00, 2.178641e+00,
+ 2.325000e+00, 2.104346e+00,
+ 2.400000e+00, 2.584470e+00,
+ 2.475000e+00, 1.914158e+00,
+ 2.550000e+00, 2.368375e+00,
+ 2.625000e+00, 2.686125e+00,
+ 2.700000e+00, 2.712395e+00,
+ 2.775000e+00, 2.499511e+00,
+ 2.850000e+00, 2.558897e+00,
+ 2.925000e+00, 2.309154e+00,
+ 3.000000e+00, 2.869503e+00,
+ 3.075000e+00, 3.116645e+00,
+ 3.150000e+00, 3.094907e+00,
+ 3.225000e+00, 2.471759e+00,
+ 3.300000e+00, 3.017131e+00,
+ 3.375000e+00, 3.232381e+00,
+ 3.450000e+00, 2.944596e+00,
+ 3.525000e+00, 3.385343e+00,
+ 3.600000e+00, 3.199826e+00,
+ 3.675000e+00, 3.423039e+00,
+ 3.750000e+00, 3.621552e+00,
+ 3.825000e+00, 3.559255e+00,
+ 3.900000e+00, 3.530713e+00,
+ 3.975000e+00, 3.561766e+00,
+ 4.050000e+00, 3.544574e+00,
+ 4.125000e+00, 3.867945e+00,
+ 4.200000e+00, 4.049776e+00,
+ 4.275000e+00, 3.885601e+00,
+ 4.350000e+00, 4.110505e+00,
+ 4.425000e+00, 4.345320e+00,
+ 4.500000e+00, 4.161241e+00,
+ 4.575000e+00, 4.363407e+00,
+ 4.650000e+00, 4.161576e+00,
+ 4.725000e+00, 4.619728e+00,
+ 4.800000e+00, 4.737410e+00,
+ 4.875000e+00, 4.727863e+00,
+ 4.950000e+00, 4.669206e+00,
+};
+// clang-format on
+
+// This implementation of the EvaluationCallback interface also stores the
+// residuals and Jacobians from which the CostFunctions copy their values.
+class MyEvaluationCallback : public ceres::EvaluationCallback {
+ public:
+ // m and c are passed by reference so that we have access to their values as
+ // they evolve over the course of the optimization.
+ MyEvaluationCallback(const double& m, const double& c) : m_(m), c_(c) {
+ x_ = Eigen::VectorXd::Zero(kNumObservations);
+ y_ = Eigen::VectorXd::Zero(kNumObservations);
+ residuals_ = Eigen::VectorXd::Zero(kNumObservations);
+ jacobians_ = Eigen::MatrixXd::Zero(kNumObservations, 2);
+ for (int i = 0; i < kNumObservations; ++i) {
+ x_[i] = data[2 * i];
+ y_[i] = data[2 * i + 1];
+ }
+ PrepareForEvaluation(true, true);
+ }
+
+ void PrepareForEvaluation(bool evaluate_jacobians,
+ bool new_evaluation_point) final {
+ if (new_evaluation_point) {
+ ComputeResidualAndJacobian(evaluate_jacobians);
+ jacobians_are_stale_ = !evaluate_jacobians;
+ } else {
+ if (evaluate_jacobians && jacobians_are_stale_) {
+ ComputeResidualAndJacobian(evaluate_jacobians);
+ jacobians_are_stale_ = false;
+ }
+ }
+ }
+
+ const Eigen::VectorXd& residuals() const { return residuals_; }
+ const Eigen::MatrixXd& jacobians() const { return jacobians_; }
+ bool jacobians_are_stale() const { return jacobians_are_stale_; }
+
+ private:
+ void ComputeResidualAndJacobian(bool evaluate_jacobians) {
+ residuals_ = -(m_ * x_.array() + c_).exp();
+ if (evaluate_jacobians) {
+ jacobians_.col(0) = residuals_.array() * x_.array();
+ jacobians_.col(1) = residuals_;
+ }
+ residuals_ += y_;
+ }
+
+ const double& m_;
+ const double& c_;
+ Eigen::VectorXd x_;
+ Eigen::VectorXd y_;
+ Eigen::VectorXd residuals_;
+ Eigen::MatrixXd jacobians_;
+
+ // jacobians_are_stale_ keeps track of whether the Jacobian matrix matches
+ // the residuals or not; we only compute it if we know that the Solver is
+ // going to need access to it.
+ bool jacobians_are_stale_ = true;
+};
+
+// As the name implies, this CostFunction does not do any computation; it just
+// copies the appropriate residual and Jacobian from the matrices stored in
+// MyEvaluationCallback.
+class CostAndJacobianCopyingCostFunction
+ : public ceres::SizedCostFunction<1, 1, 1> {
+ public:
+ CostAndJacobianCopyingCostFunction(
+ int index, const MyEvaluationCallback& evaluation_callback)
+ : index_(index), evaluation_callback_(evaluation_callback) {}
+ ~CostAndJacobianCopyingCostFunction() override = default;
+
+ bool Evaluate(double const* const* parameters,
+ double* residuals,
+ double** jacobians) const final {
+ residuals[0] = evaluation_callback_.residuals()(index_);
+ if (!jacobians) return true;
+
+ // Ensure that we are not using stale Jacobians.
+ CHECK(!evaluation_callback_.jacobians_are_stale());
+
+ if (jacobians[0] != nullptr)
+ jacobians[0][0] = evaluation_callback_.jacobians()(index_, 0);
+ if (jacobians[1] != nullptr)
+ jacobians[1][0] = evaluation_callback_.jacobians()(index_, 1);
+ return true;
+ }
+
+ private:
+ int index_ = -1;
+ const MyEvaluationCallback& evaluation_callback_;
+};
+
+int main(int argc, char** argv) {
+ google::InitGoogleLogging(argv[0]);
+
+ const double initial_m = 0.0;
+ const double initial_c = 0.0;
+ double m = initial_m;
+ double c = initial_c;
+
+ MyEvaluationCallback evaluation_callback(m, c);
+ ceres::Problem::Options problem_options;
+ problem_options.evaluation_callback = &evaluation_callback;
+ ceres::Problem problem(problem_options);
+ for (int i = 0; i < kNumObservations; ++i) {
+ problem.AddResidualBlock(
+ new CostAndJacobianCopyingCostFunction(i, evaluation_callback),
+ nullptr,
+ &m,
+ &c);
+ }
+
+ ceres::Solver::Options options;
+ options.max_num_iterations = 25;
+ options.linear_solver_type = ceres::DENSE_QR;
+ options.minimizer_progress_to_stdout = true;
+
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
+ std::cout << summary.BriefReport() << "\n";
+ std::cout << "Initial m: " << initial_m << " c: " << initial_c << "\n";
+ std::cout << "Final m: " << m << " c: " << c << "\n";
+ return 0;
+}
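The quantities ComputeResidualAndJacobian produces, written out (the same exponential model as curve_fitting.cc):

  r_i = y_i - e^{m x_i + c}, \qquad
  \frac{\partial r_i}{\partial m} = -x_i\, e^{m x_i + c}, \qquad
  \frac{\partial r_i}{\partial c} = -e^{m x_i + c}.

This explains the order of operations in that method: it first stores -e^{m x_i + c} in residuals_, reuses that vector to fill both Jacobian columns, and only then adds y_ to turn the stored values into the residuals r_i.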
diff --git a/examples/fields_of_experts.cc b/examples/fields_of_experts.cc
index 7b7983e..f59fe16 100644
--- a/examples/fields_of_experts.cc
+++ b/examples/fields_of_experts.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,7 +28,7 @@
//
// Author: strandmark@google.com (Petter Strandmark)
//
-// Class for loading the data required for descibing a Fields of Experts (FoE)
+// Class for loading the data required for describing a Fields of Experts (FoE)
// model.
#include "fields_of_experts.h"
@@ -38,13 +38,12 @@
#include "pgm_image.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
FieldsOfExpertsCost::FieldsOfExpertsCost(const std::vector<double>& filter)
: filter_(filter) {
set_num_residuals(1);
- for (int i = 0; i < filter_.size(); ++i) {
+ for (int64_t i = 0; i < filter_.size(); ++i) {
mutable_parameter_block_sizes()->push_back(1);
}
}
@@ -54,15 +53,15 @@
bool FieldsOfExpertsCost::Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const {
- int num_variables = filter_.size();
+ const int64_t num_variables = filter_.size();
residuals[0] = 0;
- for (int i = 0; i < num_variables; ++i) {
+ for (int64_t i = 0; i < num_variables; ++i) {
residuals[0] += filter_[i] * parameters[i][0];
}
- if (jacobians != NULL) {
- for (int i = 0; i < num_variables; ++i) {
- if (jacobians[i] != NULL) {
+ if (jacobians != nullptr) {
+ for (int64_t i = 0; i < num_variables; ++i) {
+ if (jacobians[i] != nullptr) {
jacobians[i][0] = filter_[i];
}
}
@@ -145,5 +144,4 @@
return new FieldsOfExpertsLoss(alpha_[alpha_index]);
}
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
diff --git a/examples/fields_of_experts.h b/examples/fields_of_experts.h
index 429881d..2ff8c94 100644
--- a/examples/fields_of_experts.h
+++ b/examples/fields_of_experts.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,7 +28,7 @@
//
// Author: strandmark@google.com (Petter Strandmark)
//
-// Class for loading the data required for descibing a Fields of Experts (FoE)
+// Class for loading the data required for describing a Fields of Experts (FoE)
// model. The Fields of Experts regularization consists of terms of the type
//
// alpha * log(1 + (1/2)*sum(F .* X)^2),
@@ -52,8 +52,7 @@
#include "ceres/sized_cost_function.h"
#include "pgm_image.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
// One sum in the FoE regularizer. This is a dot product between a filter and an
// image patch. It simply calculates the dot product between the filter
@@ -63,9 +62,9 @@
explicit FieldsOfExpertsCost(const std::vector<double>& filter);
// The number of scalar parameters passed to Evaluate must equal the number of
// filter coefficients passed to the constructor.
- virtual bool Evaluate(double const* const* parameters,
- double* residuals,
- double** jacobians) const;
+ bool Evaluate(double const* const* parameters,
+ double* residuals,
+ double** jacobians) const override;
private:
const std::vector<double>& filter_;
@@ -78,7 +77,7 @@
class FieldsOfExpertsLoss : public ceres::LossFunction {
public:
explicit FieldsOfExpertsLoss(double alpha) : alpha_(alpha) {}
- virtual void Evaluate(double, double*) const;
+ void Evaluate(double, double*) const override;
private:
const double alpha_;
@@ -128,7 +127,6 @@
std::vector<std::vector<double>> filters_;
};
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
#endif // CERES_EXAMPLES_FIELDS_OF_EXPERTS_H_
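How the two classes split the regularizer, under the usual Ceres convention that a LossFunction is applied to the squared residual s = r^2: FieldsOfExpertsCost returns the plain dot product between a filter F and an image patch X (its Jacobian entries are just the filter coefficients), and FieldsOfExpertsLoss supplies the log term, so together they reproduce the term quoted in the header comment:

  r = \sum_k F_k X_k, \qquad \rho(r^2) = \alpha \log\!\left( 1 + \tfrac{1}{2} r^2 \right).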
diff --git a/examples/helloworld.cc b/examples/helloworld.cc
index 9e32fad..40c2f2c 100644
--- a/examples/helloworld.cc
+++ b/examples/helloworld.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,12 +36,6 @@
#include "ceres/ceres.h"
#include "glog/logging.h"
-using ceres::AutoDiffCostFunction;
-using ceres::CostFunction;
-using ceres::Problem;
-using ceres::Solve;
-using ceres::Solver;
-
// A templated cost functor that implements the residual r = 10 -
// x. The method operator() is templated so that we can then use an
// automatic differentiation wrapper around it to generate its
@@ -63,19 +57,19 @@
const double initial_x = x;
// Build the problem.
- Problem problem;
+ ceres::Problem problem;
// Set up the only cost function (also known as residual). This uses
// auto-differentiation to obtain the derivative (jacobian).
- CostFunction* cost_function =
- new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
+ ceres::CostFunction* cost_function =
+ new ceres::AutoDiffCostFunction<CostFunctor, 1, 1>();
problem.AddResidualBlock(cost_function, nullptr, &x);
// Run the solver!
- Solver::Options options;
+ ceres::Solver::Options options;
options.minimizer_progress_to_stdout = true;
- Solver::Summary summary;
- Solve(options, &problem, &summary);
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";
std::cout << "x : " << initial_x << " -> " << x << "\n";
diff --git a/examples/helloworld_analytic_diff.cc b/examples/helloworld_analytic_diff.cc
index 6e120b5..b4826a2 100644
--- a/examples/helloworld_analytic_diff.cc
+++ b/examples/helloworld_analytic_diff.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,23 +37,15 @@
#include "ceres/ceres.h"
#include "glog/logging.h"
-using ceres::CostFunction;
-using ceres::Problem;
-using ceres::SizedCostFunction;
-using ceres::Solve;
-using ceres::Solver;
-
// A CostFunction implementing analytically derivatives for the
// function f(x) = 10 - x.
class QuadraticCostFunction
- : public SizedCostFunction<1 /* number of residuals */,
- 1 /* size of first parameter */> {
+ : public ceres::SizedCostFunction<1 /* number of residuals */,
+ 1 /* size of first parameter */> {
public:
- virtual ~QuadraticCostFunction() {}
-
- virtual bool Evaluate(double const* const* parameters,
- double* residuals,
- double** jacobians) const {
+ bool Evaluate(double const* const* parameters,
+ double* residuals,
+ double** jacobians) const override {
double x = parameters[0][0];
// f(x) = 10 - x.
@@ -64,14 +56,14 @@
// jacobians.
//
// Since the Evaluate function can be called with the jacobians
- // pointer equal to NULL, the Evaluate function must check to see
+ // pointer equal to nullptr, the Evaluate function must check to see
// if jacobians need to be computed.
//
// For this simple problem it is overkill to check if jacobians[0]
- // is NULL, but in general when writing more complex
+ // is nullptr, but in general when writing more complex
// CostFunctions, it is possible that Ceres may only demand the
// derivatives w.r.t. a subset of the parameter blocks.
- if (jacobians != NULL && jacobians[0] != NULL) {
+ if (jacobians != nullptr && jacobians[0] != nullptr) {
jacobians[0][0] = -1;
}
@@ -88,17 +80,17 @@
const double initial_x = x;
// Build the problem.
- Problem problem;
+ ceres::Problem problem;
// Set up the only cost function (also known as residual).
- CostFunction* cost_function = new QuadraticCostFunction;
- problem.AddResidualBlock(cost_function, NULL, &x);
+ ceres::CostFunction* cost_function = new QuadraticCostFunction;
+ problem.AddResidualBlock(cost_function, nullptr, &x);
// Run the solver!
- Solver::Options options;
+ ceres::Solver::Options options;
options.minimizer_progress_to_stdout = true;
- Solver::Summary summary;
- Solve(options, &problem, &summary);
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";
std::cout << "x : " << initial_x << " -> " << x << "\n";
diff --git a/examples/helloworld_numeric_diff.cc b/examples/helloworld_numeric_diff.cc
index 474adf3..4ed9ca6 100644
--- a/examples/helloworld_numeric_diff.cc
+++ b/examples/helloworld_numeric_diff.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,13 +34,6 @@
#include "ceres/ceres.h"
#include "glog/logging.h"
-using ceres::CENTRAL;
-using ceres::CostFunction;
-using ceres::NumericDiffCostFunction;
-using ceres::Problem;
-using ceres::Solve;
-using ceres::Solver;
-
// A cost functor that implements the residual r = 10 - x.
struct CostFunctor {
bool operator()(const double* const x, double* residual) const {
@@ -58,19 +51,20 @@
const double initial_x = x;
// Build the problem.
- Problem problem;
+ ceres::Problem problem;
// Set up the only cost function (also known as residual). This uses
// numeric differentiation to obtain the derivative (jacobian).
- CostFunction* cost_function =
- new NumericDiffCostFunction<CostFunctor, CENTRAL, 1, 1>(new CostFunctor);
- problem.AddResidualBlock(cost_function, NULL, &x);
+ ceres::CostFunction* cost_function =
+ new ceres::NumericDiffCostFunction<CostFunctor, ceres::CENTRAL, 1, 1>(
+ new CostFunctor);
+ problem.AddResidualBlock(cost_function, nullptr, &x);
// Run the solver!
- Solver::Options options;
+ ceres::Solver::Options options;
options.minimizer_progress_to_stdout = true;
- Solver::Summary summary;
- Solve(options, &problem, &summary);
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";
std::cout << "x : " << initial_x << " -> " << x << "\n";
diff --git a/examples/iteration_callback_example.cc b/examples/iteration_callback_example.cc
new file mode 100644
index 0000000..0be2f36
--- /dev/null
+++ b/examples/iteration_callback_example.cc
@@ -0,0 +1,199 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+//
+// This example is a variant of curve_fitting.cc where we use an
+// IterationCallback to implement custom logging which prints out the values of
+// the parameter blocks as they evolve over the course of the optimization. This
+// also requires the use of Solver::Options::update_state_every_iteration.
+
+#include <iostream>
+
+#include "ceres/ceres.h"
+#include "glog/logging.h"
+
+// Data generated using the following octave code.
+// randn('seed', 23497);
+// m = 0.3;
+// c = 0.1;
+// x=[0:0.075:5];
+// y = exp(m * x + c);
+// noise = randn(size(x)) * 0.2;
+// y_observed = y + noise;
+// data = [x', y_observed'];
+
+const int kNumObservations = 67;
+// clang-format off
+const double data[] = {
+ 0.000000e+00, 1.133898e+00,
+ 7.500000e-02, 1.334902e+00,
+ 1.500000e-01, 1.213546e+00,
+ 2.250000e-01, 1.252016e+00,
+ 3.000000e-01, 1.392265e+00,
+ 3.750000e-01, 1.314458e+00,
+ 4.500000e-01, 1.472541e+00,
+ 5.250000e-01, 1.536218e+00,
+ 6.000000e-01, 1.355679e+00,
+ 6.750000e-01, 1.463566e+00,
+ 7.500000e-01, 1.490201e+00,
+ 8.250000e-01, 1.658699e+00,
+ 9.000000e-01, 1.067574e+00,
+ 9.750000e-01, 1.464629e+00,
+ 1.050000e+00, 1.402653e+00,
+ 1.125000e+00, 1.713141e+00,
+ 1.200000e+00, 1.527021e+00,
+ 1.275000e+00, 1.702632e+00,
+ 1.350000e+00, 1.423899e+00,
+ 1.425000e+00, 1.543078e+00,
+ 1.500000e+00, 1.664015e+00,
+ 1.575000e+00, 1.732484e+00,
+ 1.650000e+00, 1.543296e+00,
+ 1.725000e+00, 1.959523e+00,
+ 1.800000e+00, 1.685132e+00,
+ 1.875000e+00, 1.951791e+00,
+ 1.950000e+00, 2.095346e+00,
+ 2.025000e+00, 2.361460e+00,
+ 2.100000e+00, 2.169119e+00,
+ 2.175000e+00, 2.061745e+00,
+ 2.250000e+00, 2.178641e+00,
+ 2.325000e+00, 2.104346e+00,
+ 2.400000e+00, 2.584470e+00,
+ 2.475000e+00, 1.914158e+00,
+ 2.550000e+00, 2.368375e+00,
+ 2.625000e+00, 2.686125e+00,
+ 2.700000e+00, 2.712395e+00,
+ 2.775000e+00, 2.499511e+00,
+ 2.850000e+00, 2.558897e+00,
+ 2.925000e+00, 2.309154e+00,
+ 3.000000e+00, 2.869503e+00,
+ 3.075000e+00, 3.116645e+00,
+ 3.150000e+00, 3.094907e+00,
+ 3.225000e+00, 2.471759e+00,
+ 3.300000e+00, 3.017131e+00,
+ 3.375000e+00, 3.232381e+00,
+ 3.450000e+00, 2.944596e+00,
+ 3.525000e+00, 3.385343e+00,
+ 3.600000e+00, 3.199826e+00,
+ 3.675000e+00, 3.423039e+00,
+ 3.750000e+00, 3.621552e+00,
+ 3.825000e+00, 3.559255e+00,
+ 3.900000e+00, 3.530713e+00,
+ 3.975000e+00, 3.561766e+00,
+ 4.050000e+00, 3.544574e+00,
+ 4.125000e+00, 3.867945e+00,
+ 4.200000e+00, 4.049776e+00,
+ 4.275000e+00, 3.885601e+00,
+ 4.350000e+00, 4.110505e+00,
+ 4.425000e+00, 4.345320e+00,
+ 4.500000e+00, 4.161241e+00,
+ 4.575000e+00, 4.363407e+00,
+ 4.650000e+00, 4.161576e+00,
+ 4.725000e+00, 4.619728e+00,
+ 4.800000e+00, 4.737410e+00,
+ 4.875000e+00, 4.727863e+00,
+ 4.950000e+00, 4.669206e+00,
+};
+// clang-format on
+
+struct ExponentialResidual {
+ ExponentialResidual(double x, double y) : x(x), y(y) {}
+
+ template <typename T>
+ bool operator()(const T* const m, const T* const c, T* residual) const {
+ residual[0] = y - exp(m[0] * x + c[0]);
+ return true;
+ }
+
+ private:
+ const double x;
+ const double y;
+};
+
+// MyIterationCallback prints the iteration number, the cost and the value of
+// the parameter blocks every iteration.
+class MyIterationCallback : public ceres::IterationCallback {
+ public:
+ MyIterationCallback(const double* m, const double* c) : m_(m), c_(c) {}
+
+ ~MyIterationCallback() override = default;
+
+ ceres::CallbackReturnType operator()(
+ const ceres::IterationSummary& summary) final {
+ std::cout << "Iteration: " << summary.iteration << " cost: " << summary.cost
+ << " m: " << *m_ << " c: " << *c_ << std::endl;
+ return ceres::SOLVER_CONTINUE;
+ }
+
+ private:
+ const double* m_ = nullptr;
+ const double* c_ = nullptr;
+};
+
+int main(int argc, char** argv) {
+ google::InitGoogleLogging(argv[0]);
+
+ const double initial_m = 0.0;
+ const double initial_c = 0.0;
+
+ double m = initial_m;
+ double c = initial_c;
+
+ ceres::Problem problem;
+ for (int i = 0; i < kNumObservations; ++i) {
+ problem.AddResidualBlock(
+ new ceres::AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
+ data[2 * i], data[2 * i + 1]),
+ nullptr,
+ &m,
+ &c);
+ }
+
+ ceres::Solver::Options options;
+ options.max_num_iterations = 25;
+ options.linear_solver_type = ceres::DENSE_QR;
+
+ // Turn off the default logging from Ceres so that it does not interfere with
+ // MyIterationCallback.
+ options.minimizer_progress_to_stdout = false;
+
+ MyIterationCallback callback(&m, &c);
+ options.callbacks.push_back(&callback);
+
+  // Tell Ceres to update the value of the parameter blocks on each
+ // iteration (successful or not) so that MyIterationCallback will be able to
+ // see them when called.
+ options.update_state_every_iteration = true;
+
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
+ std::cout << summary.BriefReport() << "\n";
+ std::cout << "Initial m: " << initial_m << " c: " << initial_c << "\n";
+ std::cout << "Final m: " << m << " c: " << c << "\n";
+ return 0;
+}
diff --git a/examples/libmv_bundle_adjuster.cc b/examples/libmv_bundle_adjuster.cc
index b1eb220..9315ed7 100644
--- a/examples/libmv_bundle_adjuster.cc
+++ b/examples/libmv_bundle_adjuster.cc
@@ -60,7 +60,7 @@
 // Image number shall be greater than or equal to zero. Order of cameras does not
// matter and gaps are possible.
//
-// Every 3D point is decribed by:
+// Every 3D point is described by:
//
// - Track number point belongs to (single 4 bytes integer value).
// - 3D position vector, 3-component vector of float values.
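
The per-point record described above is small enough to sketch directly; PointRecord and the Read calls below are illustrative assumptions, the actual parsing happens in ReadProblemFromFile later in this file:

  // Illustrative layout of one 3D point record in the problem file
  // (assumes <cstdint> for the fixed-width integer type).
  struct PointRecord {
    std::int32_t track;  // track the point belongs to, 4-byte integer
    float X[3];          // 3D position, three float components
  };
  //   PointRecord point;
  //   point.track = file_reader.Read<std::int32_t>();
  //   for (float& coordinate : point.X) coordinate = file_reader.Read<float>();
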
@@ -100,12 +100,16 @@
#define close _close
typedef unsigned __int32 uint32_t;
#else
-#include <stdint.h>
#include <unistd.h>
+#include <cstdint>
+
+// NOTE MinGW does define the macro.
+#ifndef O_BINARY
// O_BINARY is not defined on unix like platforms, as there is no
// difference between binary and text files.
#define O_BINARY 0
+#endif
#endif
@@ -114,12 +118,10 @@
#include "gflags/gflags.h"
#include "glog/logging.h"
-typedef Eigen::Matrix<double, 3, 3> Mat3;
-typedef Eigen::Matrix<double, 6, 1> Vec6;
-typedef Eigen::Vector3d Vec3;
-typedef Eigen::Vector4d Vec4;
-
-using std::vector;
+using Mat3 = Eigen::Matrix<double, 3, 3>;
+using Vec6 = Eigen::Matrix<double, 6, 1>;
+using Vec3 = Eigen::Vector3d;
+using Vec4 = Eigen::Vector4d;
DEFINE_string(input, "", "Input File name");
DEFINE_string(refine_intrinsics,
@@ -135,10 +137,10 @@
// R is a 3x3 matrix representing the rotation of the camera.
// t is a translation vector representing its positions.
struct EuclideanCamera {
- EuclideanCamera() : image(-1) {}
- EuclideanCamera(const EuclideanCamera& c) : image(c.image), R(c.R), t(c.t) {}
+ EuclideanCamera() = default;
+ EuclideanCamera(const EuclideanCamera& c) = default;
- int image;
+ int image{-1};
Mat3 R;
Vec3 t;
};
@@ -148,9 +150,9 @@
// track identifies which track this point corresponds to.
// X represents the 3D position of the track.
struct EuclideanPoint {
- EuclideanPoint() : track(-1) {}
- EuclideanPoint(const EuclideanPoint& p) : track(p.track), X(p.X) {}
- int track;
+ EuclideanPoint() = default;
+ EuclideanPoint(const EuclideanPoint& p) = default;
+ int track{-1};
Vec3 X;
};
@@ -203,32 +205,32 @@
};
 // Returns a pointer to the camera corresponding to an image.
-EuclideanCamera* CameraForImage(vector<EuclideanCamera>* all_cameras,
+EuclideanCamera* CameraForImage(std::vector<EuclideanCamera>* all_cameras,
const int image) {
if (image < 0 || image >= all_cameras->size()) {
- return NULL;
+ return nullptr;
}
EuclideanCamera* camera = &(*all_cameras)[image];
if (camera->image == -1) {
- return NULL;
+ return nullptr;
}
return camera;
}
const EuclideanCamera* CameraForImage(
- const vector<EuclideanCamera>& all_cameras, const int image) {
+ const std::vector<EuclideanCamera>& all_cameras, const int image) {
if (image < 0 || image >= all_cameras.size()) {
- return NULL;
+ return nullptr;
}
const EuclideanCamera* camera = &all_cameras[image];
if (camera->image == -1) {
- return NULL;
+ return nullptr;
}
return camera;
}
 // Returns the maximal image number at which a marker exists.
-int MaxImage(const vector<Marker>& all_markers) {
+int MaxImage(const std::vector<Marker>& all_markers) {
if (all_markers.size() == 0) {
return -1;
}
@@ -241,14 +243,14 @@
}
// Returns a pointer to the point corresponding to a track.
-EuclideanPoint* PointForTrack(vector<EuclideanPoint>* all_points,
+EuclideanPoint* PointForTrack(std::vector<EuclideanPoint>* all_points,
const int track) {
if (track < 0 || track >= all_points->size()) {
- return NULL;
+ return nullptr;
}
EuclideanPoint* point = &(*all_points)[track];
if (point->track == -1) {
- return NULL;
+ return nullptr;
}
return point;
}
@@ -262,7 +264,7 @@
// denotes file endianness in this way.
class EndianAwareFileReader {
public:
- EndianAwareFileReader(void) : file_descriptor_(-1) {
+ EndianAwareFileReader() {
// Get an endian type of the host machine.
union {
unsigned char bytes[4];
@@ -272,7 +274,7 @@
file_endian_type_ = host_endian_type_;
}
- ~EndianAwareFileReader(void) {
+ ~EndianAwareFileReader() {
if (file_descriptor_ > 0) {
close(file_descriptor_);
}
@@ -284,7 +286,7 @@
return false;
}
     // Get an endian type of data in the file.
- unsigned char file_endian_type_flag = Read<unsigned char>();
+ auto file_endian_type_flag = Read<unsigned char>();
if (file_endian_type_flag == 'V') {
file_endian_type_ = kBigEndian;
} else if (file_endian_type_flag == 'v') {
@@ -297,9 +299,11 @@
// Read value from the file, will switch endian if needed.
template <typename T>
- T Read(void) const {
+ T Read() const {
T value;
+ CERES_DISABLE_DEPRECATED_WARNING
CHECK_GT(read(file_descriptor_, &value, sizeof(value)), 0);
+ CERES_RESTORE_DEPRECATED_WARNING
     // Switch endian type if the file contains data in a different type
     // than the current machine.
if (file_endian_type_ != host_endian_type_) {
@@ -316,7 +320,7 @@
template <typename T>
T SwitchEndian(const T value) const {
if (sizeof(T) == 4) {
- unsigned int temp_value = static_cast<unsigned int>(value);
+ auto temp_value = static_cast<unsigned int>(value);
// clang-format off
return ((temp_value >> 24)) |
((temp_value << 8) & 0x00ff0000) |
@@ -333,7 +337,7 @@
int host_endian_type_;
int file_endian_type_;
- int file_descriptor_;
+ int file_descriptor_{-1};
};
// Read 3x3 column-major matrix from the file
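
The 4-byte branch of SwitchEndian is the classic byte swap; a standalone sketch with a worked value, for reference only:

  #include <cassert>
  #include <cstdint>

  // Reverses the byte order of a 32-bit value, mirroring the sizeof(T) == 4
  // branch of EndianAwareFileReader::SwitchEndian.
  inline std::uint32_t ByteSwap32(std::uint32_t v) {
    return (v >> 24) | ((v << 8) & 0x00ff0000u) | ((v >> 8) & 0x0000ff00u) |
           (v << 24);
  }

  int main() {
    // A little-endian 0x11223344 read with a big-endian interpretation (and
    // vice versa) comes back byte-reversed.
    assert(ByteSwap32(0x11223344u) == 0x44332211u);
    return 0;
  }
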
@@ -369,17 +373,17 @@
// reading.
bool ReadProblemFromFile(const std::string& file_name,
double camera_intrinsics[8],
- vector<EuclideanCamera>* all_cameras,
- vector<EuclideanPoint>* all_points,
+ std::vector<EuclideanCamera>* all_cameras,
+ std::vector<EuclideanPoint>* all_points,
bool* is_image_space,
- vector<Marker>* all_markers) {
+ std::vector<Marker>* all_markers) {
EndianAwareFileReader file_reader;
if (!file_reader.OpenFile(file_name)) {
return false;
}
// Read markers' space flag.
- unsigned char is_image_space_flag = file_reader.Read<unsigned char>();
+ auto is_image_space_flag = file_reader.Read<unsigned char>();
if (is_image_space_flag == 'P') {
*is_image_space = true;
} else if (is_image_space_flag == 'N') {
@@ -610,10 +614,10 @@
//
// Element with index i matches to a rotation+translation for
// camera at image i.
-vector<Vec6> PackCamerasRotationAndTranslation(
- const vector<Marker>& all_markers,
- const vector<EuclideanCamera>& all_cameras) {
- vector<Vec6> all_cameras_R_t;
+std::vector<Vec6> PackCamerasRotationAndTranslation(
+ const std::vector<Marker>& all_markers,
+ const std::vector<EuclideanCamera>& all_cameras) {
+ std::vector<Vec6> all_cameras_R_t;
int max_image = MaxImage(all_markers);
all_cameras_R_t.resize(max_image + 1);
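
The per-camera 6-vector built here, angle-axis rotation followed by translation, can be illustrated with a short sketch; PackOneCamera is a hypothetical helper, not a function from this file, and it assumes "ceres/rotation.h" plus the Mat3/Vec3/Vec6 aliases and EuclideanCamera struct defined above:

  // Sketch: pack a single camera into the 6-vector layout described above
  // (entries 0-2 hold the angle-axis rotation, entries 3-5 the translation).
  Vec6 PackOneCamera(const EuclideanCamera& camera) {
    Vec6 R_t;
    ceres::RotationMatrixToAngleAxis(
        ceres::ColumnMajorAdapter3x3(camera.R.data()), R_t.data());
    R_t.tail<3>() = camera.t;
    return R_t;
  }
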
@@ -633,9 +637,10 @@
}
 // Convert camera rotations from angle axis back to rotation matrix.
-void UnpackCamerasRotationAndTranslation(const vector<Marker>& all_markers,
- const vector<Vec6>& all_cameras_R_t,
- vector<EuclideanCamera>* all_cameras) {
+void UnpackCamerasRotationAndTranslation(
+ const std::vector<Marker>& all_markers,
+ const std::vector<Vec6>& all_cameras_R_t,
+ std::vector<EuclideanCamera>* all_cameras) {
int max_image = MaxImage(all_markers);
for (int i = 0; i <= max_image; i++) {
@@ -650,12 +655,12 @@
}
}
-void EuclideanBundleCommonIntrinsics(const vector<Marker>& all_markers,
+void EuclideanBundleCommonIntrinsics(const std::vector<Marker>& all_markers,
const int bundle_intrinsics,
const int bundle_constraints,
double* camera_intrinsics,
- vector<EuclideanCamera>* all_cameras,
- vector<EuclideanPoint>* all_points) {
+ std::vector<EuclideanCamera>* all_cameras,
+ std::vector<EuclideanPoint>* all_points) {
PrintCameraIntrinsics("Original intrinsics: ", camera_intrinsics);
ceres::Problem::Options problem_options;
@@ -667,11 +672,11 @@
//
// Block for minimization has got the following structure:
// <3 elements for angle-axis> <3 elements for translation>
- vector<Vec6> all_cameras_R_t =
+ std::vector<Vec6> all_cameras_R_t =
PackCamerasRotationAndTranslation(all_markers, *all_cameras);
- // Parameterization used to restrict camera motion for modal solvers.
- ceres::SubsetParameterization* constant_transform_parameterization = NULL;
+ // Manifold used to restrict camera motion for modal solvers.
+ ceres::SubsetManifold* constant_transform_manifold = nullptr;
if (bundle_constraints & BUNDLE_NO_TRANSLATION) {
std::vector<int> constant_translation;
@@ -680,8 +685,8 @@
constant_translation.push_back(4);
constant_translation.push_back(5);
- constant_transform_parameterization =
- new ceres::SubsetParameterization(6, constant_translation);
+ constant_transform_manifold =
+ new ceres::SubsetManifold(6, constant_translation);
}
std::vector<OpenCVReprojectionError> errors;
@@ -692,11 +697,10 @@
int num_residuals = 0;
bool have_locked_camera = false;
- for (int i = 0; i < all_markers.size(); ++i) {
- const Marker& marker = all_markers[i];
+ for (const auto& marker : all_markers) {
EuclideanCamera* camera = CameraForImage(all_cameras, marker.image);
EuclideanPoint* point = PointForTrack(all_points, marker.track);
- if (camera == NULL || point == NULL) {
+ if (camera == nullptr || point == nullptr) {
continue;
}
@@ -708,7 +712,7 @@
costFunctions.emplace_back(&errors.back(), ceres::DO_NOT_TAKE_OWNERSHIP);
problem.AddResidualBlock(&costFunctions.back(),
- NULL,
+ nullptr,
camera_intrinsics,
current_camera_R_t,
&point->X(0));
@@ -720,8 +724,7 @@
}
if (bundle_constraints & BUNDLE_NO_TRANSLATION) {
- problem.SetParameterization(current_camera_R_t,
- constant_transform_parameterization);
+ problem.SetManifold(current_camera_R_t, constant_transform_manifold);
}
num_residuals++;
@@ -760,10 +763,8 @@
// Always set K3 constant, it's not used at the moment.
constant_intrinsics.push_back(OFFSET_K3);
- ceres::SubsetParameterization* subset_parameterization =
- new ceres::SubsetParameterization(8, constant_intrinsics);
-
- problem.SetParameterization(camera_intrinsics, subset_parameterization);
+ auto* subset_manifold = new ceres::SubsetManifold(8, constant_intrinsics);
+ problem.SetManifold(camera_intrinsics, subset_manifold);
}
// Configure the solver.
@@ -793,18 +794,18 @@
GFLAGS_NAMESPACE::ParseCommandLineFlags(&argc, &argv, true);
google::InitGoogleLogging(argv[0]);
- if (FLAGS_input.empty()) {
+ if (CERES_GET_FLAG(FLAGS_input).empty()) {
LOG(ERROR) << "Usage: libmv_bundle_adjuster --input=blender_problem";
return EXIT_FAILURE;
}
double camera_intrinsics[8];
- vector<EuclideanCamera> all_cameras;
- vector<EuclideanPoint> all_points;
+ std::vector<EuclideanCamera> all_cameras;
+ std::vector<EuclideanPoint> all_points;
bool is_image_space;
- vector<Marker> all_markers;
+ std::vector<Marker> all_markers;
- if (!ReadProblemFromFile(FLAGS_input,
+ if (!ReadProblemFromFile(CERES_GET_FLAG(FLAGS_input),
camera_intrinsics,
&all_cameras,
&all_points,
@@ -828,14 +829,14 @@
// declare which intrinsics need to be refined and in this case
   // refining flags do not depend on the problem at all.
int bundle_intrinsics = BUNDLE_NO_INTRINSICS;
- if (FLAGS_refine_intrinsics.empty()) {
+ if (CERES_GET_FLAG(FLAGS_refine_intrinsics).empty()) {
if (is_image_space) {
bundle_intrinsics = BUNDLE_FOCAL_LENGTH | BUNDLE_RADIAL;
}
} else {
- if (FLAGS_refine_intrinsics == "radial") {
+ if (CERES_GET_FLAG(FLAGS_refine_intrinsics) == "radial") {
bundle_intrinsics = BUNDLE_FOCAL_LENGTH | BUNDLE_RADIAL;
- } else if (FLAGS_refine_intrinsics != "none") {
+ } else if (CERES_GET_FLAG(FLAGS_refine_intrinsics) != "none") {
LOG(ERROR) << "Unsupported value for refine-intrinsics";
return EXIT_FAILURE;
}
diff --git a/examples/libmv_homography.cc b/examples/libmv_homography.cc
index 55f3b70..b7c9eda 100644
--- a/examples/libmv_homography.cc
+++ b/examples/libmv_homography.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -60,17 +60,19 @@
// This example demonstrates custom exit criterion by having a callback check
// for image-space error.
+#include <utility>
+
#include "ceres/ceres.h"
#include "glog/logging.h"
-typedef Eigen::NumTraits<double> EigenDouble;
+using EigenDouble = Eigen::NumTraits<double>;
-typedef Eigen::MatrixXd Mat;
-typedef Eigen::VectorXd Vec;
-typedef Eigen::Matrix<double, 3, 3> Mat3;
-typedef Eigen::Matrix<double, 2, 1> Vec2;
-typedef Eigen::Matrix<double, Eigen::Dynamic, 8> MatX8;
-typedef Eigen::Vector3d Vec3;
+using Mat = Eigen::MatrixXd;
+using Vec = Eigen::VectorXd;
+using Mat3 = Eigen::Matrix<double, 3, 3>;
+using Vec2 = Eigen::Matrix<double, 2, 1>;
+using MatX8 = Eigen::Matrix<double, Eigen::Dynamic, 8>;
+using Vec3 = Eigen::Vector3d;
namespace {
@@ -82,11 +84,10 @@
struct EstimateHomographyOptions {
// Default settings for homography estimation which should be suitable
// for a wide range of use cases.
- EstimateHomographyOptions()
- : max_num_iterations(50), expected_average_symmetric_distance(1e-16) {}
+ EstimateHomographyOptions() = default;
// Maximal number of iterations for the refinement step.
- int max_num_iterations;
+ int max_num_iterations{50};
// Expected average of symmetric geometric distance between
// actual destination points and original ones transformed by
@@ -96,7 +97,7 @@
// geometric distance is less or equal to this value.
//
// This distance is measured in the same units as input points are.
- double expected_average_symmetric_distance;
+ double expected_average_symmetric_distance{1e-16};
};
// Calculate symmetric geometric cost terms:
@@ -111,7 +112,7 @@
const Eigen::Matrix<T, 2, 1>& x2,
T forward_error[2],
T backward_error[2]) {
- typedef Eigen::Matrix<T, 3, 1> Vec3;
+ using Vec3 = Eigen::Matrix<T, 3, 1>;
Vec3 x(x1(0), x1(1), T(1.0));
Vec3 y(x2(0), x2(1), T(1.0));
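
In the notation used here, with pi([u v w]^T) = [u/w v/w]^T denoting dehomogenization, the two outputs filled in below are, up to sign conventions,

  forward_error  = pi(H * x1)      - x2
  backward_error = pi(H^(-1) * x2) - x1

and the symmetric geometric distance driving the custom exit criterion described at the top of the file is, up to the exact scaling in the code, the sum of the squared norms of these two vectors.
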
@@ -152,8 +153,8 @@
template <typename T = double>
class Homography2DNormalizedParameterization {
public:
- typedef Eigen::Matrix<T, 8, 1> Parameters; // a, b, ... g, h
- typedef Eigen::Matrix<T, 3, 3> Parameterized; // H
+ using Parameters = Eigen::Matrix<T, 8, 1>; // a, b, ... g, h
+ using Parameterized = Eigen::Matrix<T, 3, 3>; // H
// Convert from the 8 parameters to a H matrix.
static void To(const Parameters& p, Parameterized* h) {
@@ -202,11 +203,11 @@
assert(x1.rows() == x2.rows());
assert(x1.cols() == x2.cols());
- int n = x1.cols();
+ const int64_t n = x1.cols();
MatX8 L = Mat::Zero(n * 3, 8);
Mat b = Mat::Zero(n * 3, 1);
- for (int i = 0; i < n; ++i) {
- int j = 3 * i;
+ for (int64_t i = 0; i < n; ++i) {
+ int64_t j = 3 * i;
L(j, 0) = x1(0, i); // a
L(j, 1) = x1(1, i); // b
L(j, 2) = 1.0; // c
@@ -242,13 +243,13 @@
// used for homography matrix refinement.
class HomographySymmetricGeometricCostFunctor {
public:
- HomographySymmetricGeometricCostFunctor(const Vec2& x, const Vec2& y)
- : x_(x), y_(y) {}
+ HomographySymmetricGeometricCostFunctor(Vec2 x, Vec2 y)
+ : x_(std::move(x)), y_(std::move(y)) {}
template <typename T>
bool operator()(const T* homography_parameters, T* residuals) const {
- typedef Eigen::Matrix<T, 3, 3> Mat3;
- typedef Eigen::Matrix<T, 2, 1> Vec2;
+ using Mat3 = Eigen::Matrix<T, 3, 3>;
+ using Vec2 = Eigen::Matrix<T, 2, 1>;
Mat3 H(homography_parameters);
Vec2 x(T(x_(0)), T(x_(1)));
@@ -277,8 +278,8 @@
Mat3* H)
: options_(options), x1_(x1), x2_(x2), H_(H) {}
- virtual ceres::CallbackReturnType operator()(
- const ceres::IterationSummary& summary) {
+ ceres::CallbackReturnType operator()(
+ const ceres::IterationSummary& summary) override {
// If the step wasn't successful, there's nothing to do.
if (!summary.step_is_successful) {
return ceres::SOLVER_CONTINUE;
@@ -326,16 +327,11 @@
// Step 2: Refine matrix using Ceres minimizer.
ceres::Problem problem;
for (int i = 0; i < x1.cols(); i++) {
- HomographySymmetricGeometricCostFunctor*
- homography_symmetric_geometric_cost_function =
- new HomographySymmetricGeometricCostFunctor(x1.col(i), x2.col(i));
-
problem.AddResidualBlock(
new ceres::AutoDiffCostFunction<HomographySymmetricGeometricCostFunctor,
4, // num_residuals
- 9>(
- homography_symmetric_geometric_cost_function),
- NULL,
+ 9>(x1.col(i), x2.col(i)),
+ nullptr,
H->data());
}
@@ -380,10 +376,10 @@
Mat x2 = x1;
for (int i = 0; i < x2.cols(); ++i) {
- Vec3 homogenous_x1 = Vec3(x1(0, i), x1(1, i), 1.0);
- Vec3 homogenous_x2 = homography_matrix * homogenous_x1;
- x2(0, i) = homogenous_x2(0) / homogenous_x2(2);
- x2(1, i) = homogenous_x2(1) / homogenous_x2(2);
+ Vec3 homogeneous_x1 = Vec3(x1(0, i), x1(1, i), 1.0);
+ Vec3 homogeneous_x2 = homography_matrix * homogeneous_x1;
+ x2(0, i) = homogeneous_x2(0) / homogeneous_x2(2);
+ x2(1, i) = homogeneous_x2(1) / homogeneous_x2(2);
// Apply some noise so algebraic estimation is not good enough.
x2(0, i) += static_cast<double>(rand() % 1000) / 5000.0;
diff --git a/examples/more_garbow_hillstrom.cc b/examples/more_garbow_hillstrom.cc
index e39d23c..f15e576 100644
--- a/examples/more_garbow_hillstrom.cc
+++ b/examples/more_garbow_hillstrom.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -72,55 +72,56 @@
3,
"Maximal number of extrapolations in Ridders' method.");
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
const double kDoubleMax = std::numeric_limits<double>::max();
static void SetNumericDiffOptions(ceres::NumericDiffOptions* options) {
- options->max_num_ridders_extrapolations = FLAGS_ridders_extrapolations;
+ options->max_num_ridders_extrapolations =
+ CERES_GET_FLAG(FLAGS_ridders_extrapolations);
}
-#define BEGIN_MGH_PROBLEM(name, num_parameters, num_residuals) \
- struct name { \
- static constexpr int kNumParameters = num_parameters; \
- static const double initial_x[kNumParameters]; \
- static const double lower_bounds[kNumParameters]; \
- static const double upper_bounds[kNumParameters]; \
- static const double constrained_optimal_cost; \
- static const double unconstrained_optimal_cost; \
- static CostFunction* Create() { \
- if (FLAGS_use_numeric_diff) { \
- ceres::NumericDiffOptions options; \
- SetNumericDiffOptions(&options); \
- if (FLAGS_numeric_diff_method == "central") { \
- return new NumericDiffCostFunction<name, \
- ceres::CENTRAL, \
- num_residuals, \
- num_parameters>( \
- new name, ceres::TAKE_OWNERSHIP, num_residuals, options); \
- } else if (FLAGS_numeric_diff_method == "forward") { \
- return new NumericDiffCostFunction<name, \
- ceres::FORWARD, \
- num_residuals, \
- num_parameters>( \
- new name, ceres::TAKE_OWNERSHIP, num_residuals, options); \
- } else if (FLAGS_numeric_diff_method == "ridders") { \
- return new NumericDiffCostFunction<name, \
- ceres::RIDDERS, \
- num_residuals, \
- num_parameters>( \
- new name, ceres::TAKE_OWNERSHIP, num_residuals, options); \
- } else { \
- LOG(ERROR) << "Invalid numeric diff method specified"; \
- return NULL; \
- } \
- } else { \
- return new AutoDiffCostFunction<name, num_residuals, num_parameters>( \
- new name); \
- } \
- } \
- template <typename T> \
+#define BEGIN_MGH_PROBLEM(name, num_parameters, num_residuals) \
+ struct name { \
+ static constexpr int kNumParameters = num_parameters; \
+ static const double initial_x[kNumParameters]; \
+ static const double lower_bounds[kNumParameters]; \
+ static const double upper_bounds[kNumParameters]; \
+ static const double constrained_optimal_cost; \
+ static const double unconstrained_optimal_cost; \
+ static CostFunction* Create() { \
+ if (CERES_GET_FLAG(FLAGS_use_numeric_diff)) { \
+ ceres::NumericDiffOptions options; \
+ SetNumericDiffOptions(&options); \
+ if (CERES_GET_FLAG(FLAGS_numeric_diff_method) == "central") { \
+ return new NumericDiffCostFunction<name, \
+ ceres::CENTRAL, \
+ num_residuals, \
+ num_parameters>( \
+ new name, ceres::TAKE_OWNERSHIP, num_residuals, options); \
+ } else if (CERES_GET_FLAG(FLAGS_numeric_diff_method) == "forward") { \
+ return new NumericDiffCostFunction<name, \
+ ceres::FORWARD, \
+ num_residuals, \
+ num_parameters>( \
+ new name, ceres::TAKE_OWNERSHIP, num_residuals, options); \
+ } else if (CERES_GET_FLAG(FLAGS_numeric_diff_method) == "ridders") { \
+ return new NumericDiffCostFunction<name, \
+ ceres::RIDDERS, \
+ num_residuals, \
+ num_parameters>( \
+ new name, ceres::TAKE_OWNERSHIP, num_residuals, options); \
+ } else { \
+ LOG(ERROR) << "Invalid numeric diff method specified"; \
+ return nullptr; \
+ } \
+ } else { \
+ return new AutoDiffCostFunction<name, \
+ num_residuals, \
+ num_parameters>(); \
+ } \
+ } \
+ template <typename T> \
bool operator()(const T* const x, T* residual) const {
// clang-format off
@@ -223,7 +224,7 @@
const T x1 = x[0];
const T x2 = x[1];
const T x3 = x[2];
- const T theta = (0.5 / M_PI) * atan(x2 / x1) + (x1 > 0.0 ? 0.0 : 0.5);
+ const T theta = (0.5 / constants::pi) * atan(x2 / x1) + (x1 > 0.0 ? 0.0 : 0.5);
residual[0] = 10.0 * (x3 - 10.0 * theta);
residual[1] = 10.0 * (sqrt(x1 * x1 + x2 * x2) - 1.0);
residual[2] = x3;
@@ -548,7 +549,7 @@
}
Problem problem;
- problem.AddResidualBlock(TestProblem::Create(), NULL, x);
+ problem.AddResidualBlock(TestProblem::Create(), nullptr, x);
double optimal_cost = TestProblem::unconstrained_optimal_cost;
if (is_constrained) {
@@ -580,13 +581,11 @@
return success;
}
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
int main(int argc, char** argv) {
GFLAGS_NAMESPACE::ParseCommandLineFlags(&argc, &argv, true);
google::InitGoogleLogging(argv[0]);
-
using ceres::examples::Solve;
int unconstrained_problems = 0;
@@ -597,7 +596,8 @@
#define UNCONSTRAINED_SOLVE(n) \
ss << "Unconstrained Problem " << n << " : "; \
- if (FLAGS_problem == #n || FLAGS_problem == "all") { \
+ if (CERES_GET_FLAG(FLAGS_problem) == #n || \
+ CERES_GET_FLAG(FLAGS_problem) == "all") { \
unconstrained_problems += 3; \
if (Solve<ceres::examples::TestProblem##n>(false, 0)) { \
unconstrained_successes += 1; \
@@ -645,7 +645,8 @@
#define CONSTRAINED_SOLVE(n) \
ss << "Constrained Problem " << n << " : "; \
- if (FLAGS_problem == #n || FLAGS_problem == "all") { \
+ if (CERES_GET_FLAG(FLAGS_problem) == #n || \
+ CERES_GET_FLAG(FLAGS_problem) == "all") { \
constrained_problems += 1; \
if (Solve<ceres::examples::TestProblem##n>(true, 0)) { \
constrained_successes += 1; \
diff --git a/examples/nist.cc b/examples/nist.cc
index 977b69d..b92c918 100644
--- a/examples/nist.cc
+++ b/examples/nist.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -71,9 +71,12 @@
// Average LRE 2.3 4.3 4.0 6.8 4.4 9.4
// Winner 0 0 5 11 2 41
+#include <cstdlib>
#include <fstream>
#include <iostream>
#include <iterator>
+#include <string>
+#include <vector>
#include "Eigen/Core"
#include "ceres/ceres.h"
@@ -99,6 +102,9 @@
"dense_qr",
"Options are: sparse_cholesky, dense_qr, dense_normal_cholesky "
"and cgnr");
+DEFINE_string(dense_linear_algebra_library,
+ "eigen",
+ "Options are: eigen, lapack, and cuda.");
DEFINE_string(preconditioner, "jacobi", "Options are: identity, jacobi");
DEFINE_string(line_search,
"wolfe",
@@ -114,7 +120,7 @@
"Maximum number of restarts of line search direction algorithm.");
DEFINE_string(line_search_interpolation,
"cubic",
- "Degree of polynomial aproximation in line search, choices are: "
+ "Degree of polynomial approximation in line search, choices are: "
"bisection, quadratic & cubic.");
DEFINE_int32(lbfgs_rank,
20,
@@ -148,26 +154,18 @@
3,
"Maximal number of Ridders extrapolations.");
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
namespace {
using Eigen::Dynamic;
using Eigen::RowMajor;
-typedef Eigen::Matrix<double, Dynamic, 1> Vector;
-typedef Eigen::Matrix<double, Dynamic, Dynamic, RowMajor> Matrix;
+using Vector = Eigen::Matrix<double, Dynamic, 1>;
+using Matrix = Eigen::Matrix<double, Dynamic, Dynamic, RowMajor>;
-using std::atof;
-using std::atoi;
-using std::cout;
-using std::ifstream;
-using std::string;
-using std::vector;
-
-void SplitStringUsingChar(const string& full,
+void SplitStringUsingChar(const std::string& full,
const char delim,
- vector<string>* result) {
- std::back_insert_iterator<vector<string>> it(*result);
+ std::vector<std::string>* result) {
+ std::back_insert_iterator<std::vector<std::string>> it(*result);
const char* p = full.data();
const char* end = p + full.size();
@@ -177,22 +175,22 @@
} else {
const char* start = p;
while (++p != end && *p != delim) {
- // Skip to the next occurence of the delimiter.
+ // Skip to the next occurrence of the delimiter.
}
- *it++ = string(start, p - start);
+ *it++ = std::string(start, p - start);
}
}
}
-bool GetAndSplitLine(ifstream& ifs, vector<string>* pieces) {
+bool GetAndSplitLine(std::ifstream& ifs, std::vector<std::string>* pieces) {
pieces->clear();
char buf[256];
ifs.getline(buf, 256);
- SplitStringUsingChar(string(buf), ' ', pieces);
+ SplitStringUsingChar(std::string(buf), ' ', pieces);
return true;
}
-void SkipLines(ifstream& ifs, int num_lines) {
+void SkipLines(std::ifstream& ifs, int num_lines) {
char buf[256];
for (int i = 0; i < num_lines; ++i) {
ifs.getline(buf, 256);
@@ -201,24 +199,24 @@
class NISTProblem {
public:
- explicit NISTProblem(const string& filename) {
- ifstream ifs(filename.c_str(), ifstream::in);
+ explicit NISTProblem(const std::string& filename) {
+ std::ifstream ifs(filename.c_str(), std::ifstream::in);
CHECK(ifs) << "Unable to open : " << filename;
- vector<string> pieces;
+ std::vector<std::string> pieces;
SkipLines(ifs, 24);
GetAndSplitLine(ifs, &pieces);
- const int kNumResponses = atoi(pieces[1].c_str());
+ const int kNumResponses = std::atoi(pieces[1].c_str());
GetAndSplitLine(ifs, &pieces);
- const int kNumPredictors = atoi(pieces[0].c_str());
+ const int kNumPredictors = std::atoi(pieces[0].c_str());
GetAndSplitLine(ifs, &pieces);
- const int kNumObservations = atoi(pieces[0].c_str());
+ const int kNumObservations = std::atoi(pieces[0].c_str());
SkipLines(ifs, 4);
GetAndSplitLine(ifs, &pieces);
- const int kNumParameters = atoi(pieces[0].c_str());
+ const int kNumParameters = std::atoi(pieces[0].c_str());
SkipLines(ifs, 8);
// Get the first line of initial and final parameter values to
@@ -234,24 +232,26 @@
// Parse the line for parameter b1.
int parameter_id = 0;
for (int i = 0; i < kNumTries; ++i) {
- initial_parameters_(i, parameter_id) = atof(pieces[i + 2].c_str());
+ initial_parameters_(i, parameter_id) = std::atof(pieces[i + 2].c_str());
}
- final_parameters_(0, parameter_id) = atof(pieces[2 + kNumTries].c_str());
+ final_parameters_(0, parameter_id) =
+ std::atof(pieces[2 + kNumTries].c_str());
// Parse the remaining parameter lines.
for (int parameter_id = 1; parameter_id < kNumParameters; ++parameter_id) {
GetAndSplitLine(ifs, &pieces);
// b2, b3, ....
for (int i = 0; i < kNumTries; ++i) {
- initial_parameters_(i, parameter_id) = atof(pieces[i + 2].c_str());
+ initial_parameters_(i, parameter_id) = std::atof(pieces[i + 2].c_str());
}
- final_parameters_(0, parameter_id) = atof(pieces[2 + kNumTries].c_str());
+ final_parameters_(0, parameter_id) =
+ std::atof(pieces[2 + kNumTries].c_str());
}
- // Certfied cost
+ // Certified cost
SkipLines(ifs, 1);
GetAndSplitLine(ifs, &pieces);
- certified_cost_ = atof(pieces[4].c_str()) / 2.0;
+ certified_cost_ = std::atof(pieces[4].c_str()) / 2.0;
// Read the observations.
SkipLines(ifs, 18 - kNumParameters);
@@ -259,12 +259,12 @@
GetAndSplitLine(ifs, &pieces);
// Response.
for (int j = 0; j < kNumResponses; ++j) {
- response_(i, j) = atof(pieces[j].c_str());
+ response_(i, j) = std::atof(pieces[j].c_str());
}
// Predictor variables.
for (int j = 0; j < kNumPredictors; ++j) {
- predictor_(i, j) = atof(pieces[j + kNumResponses].c_str());
+ predictor_(i, j) = std::atof(pieces[j + kNumResponses].c_str());
}
}
}
@@ -455,47 +455,57 @@
// clang-format on
static void SetNumericDiffOptions(ceres::NumericDiffOptions* options) {
- options->max_num_ridders_extrapolations = FLAGS_ridders_extrapolations;
- options->ridders_relative_initial_step_size = FLAGS_ridders_step_size;
+ options->max_num_ridders_extrapolations =
+ CERES_GET_FLAG(FLAGS_ridders_extrapolations);
+ options->ridders_relative_initial_step_size =
+ CERES_GET_FLAG(FLAGS_ridders_step_size);
}
void SetMinimizerOptions(ceres::Solver::Options* options) {
- CHECK(
- ceres::StringToMinimizerType(FLAGS_minimizer, &options->minimizer_type));
- CHECK(ceres::StringToLinearSolverType(FLAGS_linear_solver,
+ CHECK(ceres::StringToMinimizerType(CERES_GET_FLAG(FLAGS_minimizer),
+ &options->minimizer_type));
+ CHECK(ceres::StringToLinearSolverType(CERES_GET_FLAG(FLAGS_linear_solver),
&options->linear_solver_type));
- CHECK(ceres::StringToPreconditionerType(FLAGS_preconditioner,
+ CHECK(StringToDenseLinearAlgebraLibraryType(
+ CERES_GET_FLAG(FLAGS_dense_linear_algebra_library),
+ &options->dense_linear_algebra_library_type));
+ CHECK(ceres::StringToPreconditionerType(CERES_GET_FLAG(FLAGS_preconditioner),
&options->preconditioner_type));
CHECK(ceres::StringToTrustRegionStrategyType(
- FLAGS_trust_region_strategy, &options->trust_region_strategy_type));
- CHECK(ceres::StringToDoglegType(FLAGS_dogleg, &options->dogleg_type));
+ CERES_GET_FLAG(FLAGS_trust_region_strategy),
+ &options->trust_region_strategy_type));
+ CHECK(ceres::StringToDoglegType(CERES_GET_FLAG(FLAGS_dogleg),
+ &options->dogleg_type));
CHECK(ceres::StringToLineSearchDirectionType(
- FLAGS_line_search_direction, &options->line_search_direction_type));
- CHECK(ceres::StringToLineSearchType(FLAGS_line_search,
+ CERES_GET_FLAG(FLAGS_line_search_direction),
+ &options->line_search_direction_type));
+ CHECK(ceres::StringToLineSearchType(CERES_GET_FLAG(FLAGS_line_search),
&options->line_search_type));
CHECK(ceres::StringToLineSearchInterpolationType(
- FLAGS_line_search_interpolation,
+ CERES_GET_FLAG(FLAGS_line_search_interpolation),
&options->line_search_interpolation_type));
- options->max_num_iterations = FLAGS_num_iterations;
- options->use_nonmonotonic_steps = FLAGS_nonmonotonic_steps;
- options->initial_trust_region_radius = FLAGS_initial_trust_region_radius;
- options->max_lbfgs_rank = FLAGS_lbfgs_rank;
- options->line_search_sufficient_function_decrease = FLAGS_sufficient_decrease;
+ options->max_num_iterations = CERES_GET_FLAG(FLAGS_num_iterations);
+ options->use_nonmonotonic_steps = CERES_GET_FLAG(FLAGS_nonmonotonic_steps);
+ options->initial_trust_region_radius =
+ CERES_GET_FLAG(FLAGS_initial_trust_region_radius);
+ options->max_lbfgs_rank = CERES_GET_FLAG(FLAGS_lbfgs_rank);
+ options->line_search_sufficient_function_decrease =
+ CERES_GET_FLAG(FLAGS_sufficient_decrease);
options->line_search_sufficient_curvature_decrease =
- FLAGS_sufficient_curvature_decrease;
+ CERES_GET_FLAG(FLAGS_sufficient_curvature_decrease);
options->max_num_line_search_step_size_iterations =
- FLAGS_max_line_search_iterations;
+ CERES_GET_FLAG(FLAGS_max_line_search_iterations);
options->max_num_line_search_direction_restarts =
- FLAGS_max_line_search_restarts;
+ CERES_GET_FLAG(FLAGS_max_line_search_restarts);
options->use_approximate_eigenvalue_bfgs_scaling =
- FLAGS_approximate_eigenvalue_bfgs_scaling;
+ CERES_GET_FLAG(FLAGS_approximate_eigenvalue_bfgs_scaling);
options->function_tolerance = std::numeric_limits<double>::epsilon();
options->gradient_tolerance = std::numeric_limits<double>::epsilon();
options->parameter_tolerance = std::numeric_limits<double>::epsilon();
}
-string JoinPath(const string& dirname, const string& basename) {
+std::string JoinPath(const std::string& dirname, const std::string& basename) {
#ifdef _WIN32
static const char separator = '\\';
#else
@@ -507,7 +517,7 @@
} else if (dirname[dirname.size() - 1] == separator) {
return dirname + basename;
} else {
- return dirname + string(&separator, 1) + basename;
+ return dirname + std::string(&separator, 1) + basename;
}
}
@@ -515,24 +525,24 @@
CostFunction* CreateCostFunction(const Matrix& predictor,
const Matrix& response,
const int num_observations) {
- Model* model = new Model(predictor.data(), response.data(), num_observations);
- ceres::CostFunction* cost_function = NULL;
- if (FLAGS_use_numeric_diff) {
+ auto* model = new Model(predictor.data(), response.data(), num_observations);
+ ceres::CostFunction* cost_function = nullptr;
+ if (CERES_GET_FLAG(FLAGS_use_numeric_diff)) {
ceres::NumericDiffOptions options;
SetNumericDiffOptions(&options);
- if (FLAGS_numeric_diff_method == "central") {
+ if (CERES_GET_FLAG(FLAGS_numeric_diff_method) == "central") {
cost_function = new NumericDiffCostFunction<Model,
ceres::CENTRAL,
ceres::DYNAMIC,
num_parameters>(
model, ceres::TAKE_OWNERSHIP, num_observations, options);
- } else if (FLAGS_numeric_diff_method == "forward") {
+ } else if (CERES_GET_FLAG(FLAGS_numeric_diff_method) == "forward") {
cost_function = new NumericDiffCostFunction<Model,
ceres::FORWARD,
ceres::DYNAMIC,
num_parameters>(
model, ceres::TAKE_OWNERSHIP, num_observations, options);
- } else if (FLAGS_numeric_diff_method == "ridders") {
+ } else if (CERES_GET_FLAG(FLAGS_numeric_diff_method) == "ridders") {
cost_function = new NumericDiffCostFunction<Model,
ceres::RIDDERS,
ceres::DYNAMIC,
@@ -540,7 +550,7 @@
model, ceres::TAKE_OWNERSHIP, num_observations, options);
} else {
LOG(ERROR) << "Invalid numeric diff method specified";
- return 0;
+ return nullptr;
}
} else {
cost_function =
@@ -571,8 +581,9 @@
}
template <typename Model, int num_parameters>
-int RegressionDriver(const string& filename) {
- NISTProblem nist_problem(JoinPath(FLAGS_nist_data_dir, filename));
+int RegressionDriver(const std::string& filename) {
+ NISTProblem nist_problem(
+ JoinPath(CERES_GET_FLAG(FLAGS_nist_data_dir), filename));
CHECK_EQ(num_parameters, nist_problem.num_parameters());
Matrix predictor = nist_problem.predictor();
@@ -593,9 +604,10 @@
double initial_cost;
double final_cost;
- if (!FLAGS_use_tiny_solver) {
+ if (!CERES_GET_FLAG(FLAGS_use_tiny_solver)) {
ceres::Problem problem;
- problem.AddResidualBlock(cost_function, NULL, initial_parameters.data());
+ problem.AddResidualBlock(
+ cost_function, nullptr, initial_parameters.data());
ceres::Solver::Summary summary;
ceres::Solver::Options options;
SetMinimizerOptions(&options);
@@ -605,15 +617,15 @@
} else {
ceres::TinySolverCostFunctionAdapter<Eigen::Dynamic, num_parameters> cfa(
*cost_function);
- typedef ceres::TinySolver<
- ceres::TinySolverCostFunctionAdapter<Eigen::Dynamic, num_parameters>>
- Solver;
+ using Solver = ceres::TinySolver<
+ ceres::TinySolverCostFunctionAdapter<Eigen::Dynamic, num_parameters>>;
Solver solver;
- solver.options.max_num_iterations = FLAGS_num_iterations;
+ solver.options.max_num_iterations = CERES_GET_FLAG(FLAGS_num_iterations);
solver.options.gradient_tolerance =
std::numeric_limits<double>::epsilon();
solver.options.parameter_tolerance =
std::numeric_limits<double>::epsilon();
+ solver.options.function_tolerance = 0.0;
Eigen::Matrix<double, num_parameters, 1> x;
x = initial_parameters.transpose();
@@ -645,11 +657,11 @@
}
void SolveNISTProblems() {
- if (FLAGS_nist_data_dir.empty()) {
+ if (CERES_GET_FLAG(FLAGS_nist_data_dir).empty()) {
LOG(FATAL) << "Must specify the directory containing the NIST problems";
}
- cout << "Lower Difficulty\n";
+ std::cout << "Lower Difficulty\n";
int easy_success = 0;
easy_success += RegressionDriver<Misra1a, 2>("Misra1a.dat");
easy_success += RegressionDriver<Chwirut, 3>("Chwirut1.dat");
@@ -660,7 +672,7 @@
easy_success += RegressionDriver<DanWood, 2>("DanWood.dat");
easy_success += RegressionDriver<Misra1b, 2>("Misra1b.dat");
- cout << "\nMedium Difficulty\n";
+ std::cout << "\nMedium Difficulty\n";
int medium_success = 0;
medium_success += RegressionDriver<Kirby2, 5>("Kirby2.dat");
medium_success += RegressionDriver<Hahn1, 7>("Hahn1.dat");
@@ -674,7 +686,7 @@
medium_success += RegressionDriver<Roszman1, 4>("Roszman1.dat");
medium_success += RegressionDriver<ENSO, 9>("ENSO.dat");
- cout << "\nHigher Difficulty\n";
+ std::cout << "\nHigher Difficulty\n";
int hard_success = 0;
hard_success += RegressionDriver<MGH09, 4>("MGH09.dat");
hard_success += RegressionDriver<Thurber, 7>("Thurber.dat");
@@ -685,17 +697,16 @@
hard_success += RegressionDriver<Rat43, 4>("Rat43.dat");
hard_success += RegressionDriver<Bennet5, 3>("Bennett5.dat");
- cout << "\n";
- cout << "Easy : " << easy_success << "/16\n";
- cout << "Medium : " << medium_success << "/22\n";
- cout << "Hard : " << hard_success << "/16\n";
- cout << "Total : " << easy_success + medium_success + hard_success
- << "/54\n";
+ std::cout << "\n";
+ std::cout << "Easy : " << easy_success << "/16\n";
+ std::cout << "Medium : " << medium_success << "/22\n";
+ std::cout << "Hard : " << hard_success << "/16\n";
+ std::cout << "Total : " << easy_success + medium_success + hard_success
+ << "/54\n";
}
} // namespace
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
int main(int argc, char** argv) {
GFLAGS_NAMESPACE::ParseCommandLineFlags(&argc, &argv, true);
diff --git a/examples/pgm_image.h b/examples/pgm_image.h
index 3d2df63..033ab4d 100644
--- a/examples/pgm_image.h
+++ b/examples/pgm_image.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -43,8 +43,7 @@
#include "glog/logging.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
template <typename Real>
class PGMImage {
@@ -311,7 +310,6 @@
return data_;
}
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
#endif // CERES_EXAMPLES_PGM_IMAGE_H_
diff --git a/examples/powell.cc b/examples/powell.cc
index c75ad24..a4ca1b7 100644
--- a/examples/powell.cc
+++ b/examples/powell.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -50,12 +50,6 @@
#include "gflags/gflags.h"
#include "glog/logging.h"
-using ceres::AutoDiffCostFunction;
-using ceres::CostFunction;
-using ceres::Problem;
-using ceres::Solve;
-using ceres::Solver;
-
struct F1 {
template <typename T>
bool operator()(const T* const x1, const T* const x2, T* residual) const {
@@ -105,24 +99,24 @@
double x3 = 0.0;
double x4 = 1.0;
- Problem problem;
- // Add residual terms to the problem using the using the autodiff
+ ceres::Problem problem;
+ // Add residual terms to the problem using the autodiff
// wrapper to get the derivatives automatically. The parameters, x1 through
// x4, are modified in place.
problem.AddResidualBlock(
- new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), NULL, &x1, &x2);
+ new ceres::AutoDiffCostFunction<F1, 1, 1, 1>(), nullptr, &x1, &x2);
problem.AddResidualBlock(
- new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), NULL, &x3, &x4);
+ new ceres::AutoDiffCostFunction<F2, 1, 1, 1>(), nullptr, &x3, &x4);
problem.AddResidualBlock(
- new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), NULL, &x2, &x3);
+ new ceres::AutoDiffCostFunction<F3, 1, 1, 1>(), nullptr, &x2, &x3);
problem.AddResidualBlock(
- new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), NULL, &x1, &x4);
+ new ceres::AutoDiffCostFunction<F4, 1, 1, 1>(), nullptr, &x1, &x4);
- Solver::Options options;
- LOG_IF(
- FATAL,
- !ceres::StringToMinimizerType(FLAGS_minimizer, &options.minimizer_type))
- << "Invalid minimizer: " << FLAGS_minimizer
+ ceres::Solver::Options options;
+ LOG_IF(FATAL,
+ !ceres::StringToMinimizerType(CERES_GET_FLAG(FLAGS_minimizer),
+ &options.minimizer_type))
+ << "Invalid minimizer: " << CERES_GET_FLAG(FLAGS_minimizer)
<< ", valid options are: trust_region and line_search.";
options.max_num_iterations = 100;
@@ -138,8 +132,8 @@
// clang-format on
// Run the solver!
- Solver::Summary summary;
- Solve(options, &problem, &summary);
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
std::cout << summary.FullReport() << "\n";
// clang-format off
diff --git a/examples/random.h b/examples/random.h
deleted file mode 100644
index ace0711..0000000
--- a/examples/random.h
+++ /dev/null
@@ -1,64 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
-
-#ifndef CERES_EXAMPLES_RANDOM_H_
-#define CERES_EXAMPLES_RANDOM_H_
-
-#include <math.h>
-#include <stdlib.h>
-
-namespace ceres {
-namespace examples {
-
-// Return a random number sampled from a uniform distribution in the range
-// [0,1].
-inline double RandDouble() {
- double r = static_cast<double>(rand());
- return r / RAND_MAX;
-}
-
-// Marsaglia Polar method for generation standard normal (pseudo)
-// random numbers http://en.wikipedia.org/wiki/Marsaglia_polar_method
-inline double RandNormal() {
- double x1, x2, w;
- do {
- x1 = 2.0 * RandDouble() - 1.0;
- x2 = 2.0 * RandDouble() - 1.0;
- w = x1 * x1 + x2 * x2;
- } while (w >= 1.0 || w == 0.0);
-
- w = sqrt((-2.0 * log(w)) / w);
- return x1 * w;
-}
-
-} // namespace examples
-} // namespace ceres
-
-#endif // CERES_EXAMPLES_RANDOM_H_
diff --git a/examples/robot_pose_mle.cc b/examples/robot_pose_mle.cc
index ab9a098..cc60e14 100644
--- a/examples/robot_pose_mle.cc
+++ b/examples/robot_pose_mle.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -50,7 +50,7 @@
//
// There are two types of residuals in this problem:
// 1) The OdometryConstraint residual, that accounts for the odometry readings
-// between successive pose estimatess of the robot.
+// between successive pose estimates of the robot.
// 2) The RangeConstraint residual, that accounts for the errors in the observed
// range readings from each pose.
//
@@ -97,14 +97,14 @@
// timesteps 0 to i for that variable, both inclusive.
//
// Bayes' rule is used to derive eq. 3 from 2, and the independence of
-// odometry observations and range readings is expolited to derive 4 from 3.
+// odometry observations and range readings is exploited to derive 4 from 3.
//
// Thus, the Belief, up to scale, is factored as a product of a number of
// terms, two for each pose, where for each pose term there is one term for the
// range reading, P(y_i | u*_(0:i) and one term for the odometry reading,
// P(u*_i | u_i) . Note that the term for the range reading is dependent on all
// odometry values u*_(0:i), while the odometry term, P(u*_i | u_i) depends only
-// on a single value, u_i. Both the range reading as well as odoemtry
+// on a single value, u_i. Both the range reading as well as odometry
// probability terms are modeled as the Normal distribution, and have the form:
//
// p(x) \propto \exp{-((x - x_mean) / x_stddev)^2}
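
Taking the negative log of this density turns each factor, up to an additive constant, into ((x - x_mean) / x_stddev)^2, so maximizing the Belief is the same as minimizing a sum of squared, stddev-scaled residuals of the form

  residual = (x - x_mean) / x_stddev

which, up to Ceres' constant factor of 1/2, is the least-squares objective that the OdometryConstraint and RangeConstraint functors below are built to provide.
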
@@ -123,30 +123,18 @@
// variable, and will be computed by an AutoDiffCostFunction, while the term
// for the range reading will depend on all previous odometry observations, and
// will be computed by a DynamicAutoDiffCostFunction since the number of
-// odoemtry observations will only be known at run time.
+// odometry observations will only be known at run time.
-#include <math.h>
-
+#include <algorithm>
+#include <cmath>
#include <cstdio>
+#include <random>
#include <vector>
#include "ceres/ceres.h"
#include "ceres/dynamic_autodiff_cost_function.h"
#include "gflags/gflags.h"
#include "glog/logging.h"
-#include "random.h"
-
-using ceres::AutoDiffCostFunction;
-using ceres::CauchyLoss;
-using ceres::CostFunction;
-using ceres::DynamicAutoDiffCostFunction;
-using ceres::LossFunction;
-using ceres::Problem;
-using ceres::Solve;
-using ceres::Solver;
-using ceres::examples::RandNormal;
-using std::min;
-using std::vector;
DEFINE_double(corridor_length,
30.0,
@@ -169,7 +157,8 @@
static constexpr int kStride = 10;
struct OdometryConstraint {
- typedef AutoDiffCostFunction<OdometryConstraint, 1, 1> OdometryCostFunction;
+ using OdometryCostFunction =
+ ceres::AutoDiffCostFunction<OdometryConstraint, 1, 1>;
OdometryConstraint(double odometry_mean, double odometry_stddev)
: odometry_mean(odometry_mean), odometry_stddev(odometry_stddev) {}
@@ -181,8 +170,8 @@
}
static OdometryCostFunction* Create(const double odometry_value) {
- return new OdometryCostFunction(
- new OdometryConstraint(odometry_value, FLAGS_odometry_stddev));
+ return new OdometryCostFunction(new OdometryConstraint(
+ odometry_value, CERES_GET_FLAG(FLAGS_odometry_stddev)));
}
const double odometry_mean;
@@ -190,8 +179,8 @@
};
struct RangeConstraint {
- typedef DynamicAutoDiffCostFunction<RangeConstraint, kStride>
- RangeCostFunction;
+ using RangeCostFunction =
+ ceres::DynamicAutoDiffCostFunction<RangeConstraint, kStride>;
RangeConstraint(int pose_index,
double range_reading,
@@ -217,11 +206,14 @@
// conveniently add to a ceres problem.
static RangeCostFunction* Create(const int pose_index,
const double range_reading,
- vector<double>* odometry_values,
- vector<double*>* parameter_blocks) {
- RangeConstraint* constraint = new RangeConstraint(
- pose_index, range_reading, FLAGS_range_stddev, FLAGS_corridor_length);
- RangeCostFunction* cost_function = new RangeCostFunction(constraint);
+ std::vector<double>* odometry_values,
+ std::vector<double*>* parameter_blocks) {
+ auto* constraint =
+ new RangeConstraint(pose_index,
+ range_reading,
+ CERES_GET_FLAG(FLAGS_range_stddev),
+ CERES_GET_FLAG(FLAGS_corridor_length));
+ auto* cost_function = new RangeCostFunction(constraint);
// Add all the parameter blocks that affect this constraint.
parameter_blocks->clear();
for (int i = 0; i <= pose_index; ++i) {
@@ -240,37 +232,45 @@
namespace {
-void SimulateRobot(vector<double>* odometry_values,
- vector<double>* range_readings) {
+void SimulateRobot(std::vector<double>* odometry_values,
+ std::vector<double>* range_readings) {
const int num_steps =
- static_cast<int>(ceil(FLAGS_corridor_length / FLAGS_pose_separation));
+ static_cast<int>(ceil(CERES_GET_FLAG(FLAGS_corridor_length) /
+ CERES_GET_FLAG(FLAGS_pose_separation)));
+ std::mt19937 prng;
+ std::normal_distribution<double> odometry_noise(
+ 0.0, CERES_GET_FLAG(FLAGS_odometry_stddev));
+ std::normal_distribution<double> range_noise(
+ 0.0, CERES_GET_FLAG(FLAGS_range_stddev));
// The robot starts out at the origin.
double robot_location = 0.0;
for (int i = 0; i < num_steps; ++i) {
const double actual_odometry_value =
- min(FLAGS_pose_separation, FLAGS_corridor_length - robot_location);
+ std::min(CERES_GET_FLAG(FLAGS_pose_separation),
+ CERES_GET_FLAG(FLAGS_corridor_length) - robot_location);
robot_location += actual_odometry_value;
- const double actual_range = FLAGS_corridor_length - robot_location;
+ const double actual_range =
+ CERES_GET_FLAG(FLAGS_corridor_length) - robot_location;
const double observed_odometry =
- RandNormal() * FLAGS_odometry_stddev + actual_odometry_value;
- const double observed_range =
- RandNormal() * FLAGS_range_stddev + actual_range;
+ actual_odometry_value + odometry_noise(prng);
+ const double observed_range = actual_range + range_noise(prng);
odometry_values->push_back(observed_odometry);
range_readings->push_back(observed_range);
}
}
-void PrintState(const vector<double>& odometry_readings,
- const vector<double>& range_readings) {
+void PrintState(const std::vector<double>& odometry_readings,
+ const std::vector<double>& range_readings) {
CHECK_EQ(odometry_readings.size(), range_readings.size());
double robot_location = 0.0;
printf("pose: location odom range r.error o.error\n");
for (int i = 0; i < odometry_readings.size(); ++i) {
robot_location += odometry_readings[i];
- const double range_error =
- robot_location + range_readings[i] - FLAGS_corridor_length;
- const double odometry_error = FLAGS_pose_separation - odometry_readings[i];
+ const double range_error = robot_location + range_readings[i] -
+ CERES_GET_FLAG(FLAGS_corridor_length);
+ const double odometry_error =
+ CERES_GET_FLAG(FLAGS_pose_separation) - odometry_readings[i];
printf("%4d: %8.3f %8.3f %8.3f %8.3f %8.3f\n",
static_cast<int>(i),
robot_location,
@@ -287,13 +287,13 @@
google::InitGoogleLogging(argv[0]);
GFLAGS_NAMESPACE::ParseCommandLineFlags(&argc, &argv, true);
// Make sure that the arguments parsed are all positive.
- CHECK_GT(FLAGS_corridor_length, 0.0);
- CHECK_GT(FLAGS_pose_separation, 0.0);
- CHECK_GT(FLAGS_odometry_stddev, 0.0);
- CHECK_GT(FLAGS_range_stddev, 0.0);
+ CHECK_GT(CERES_GET_FLAG(FLAGS_corridor_length), 0.0);
+ CHECK_GT(CERES_GET_FLAG(FLAGS_pose_separation), 0.0);
+ CHECK_GT(CERES_GET_FLAG(FLAGS_odometry_stddev), 0.0);
+ CHECK_GT(CERES_GET_FLAG(FLAGS_range_stddev), 0.0);
- vector<double> odometry_values;
- vector<double> range_readings;
+ std::vector<double> odometry_values;
+ std::vector<double> range_readings;
SimulateRobot(&odometry_values, &range_readings);
printf("Initial values:\n");
@@ -303,25 +303,25 @@
for (int i = 0; i < odometry_values.size(); ++i) {
// Create and add a DynamicAutoDiffCostFunction for the RangeConstraint from
// pose i.
- vector<double*> parameter_blocks;
+ std::vector<double*> parameter_blocks;
RangeConstraint::RangeCostFunction* range_cost_function =
RangeConstraint::Create(
i, range_readings[i], &odometry_values, &parameter_blocks);
- problem.AddResidualBlock(range_cost_function, NULL, parameter_blocks);
+ problem.AddResidualBlock(range_cost_function, nullptr, parameter_blocks);
// Create and add an AutoDiffCostFunction for the OdometryConstraint for
// pose i.
problem.AddResidualBlock(OdometryConstraint::Create(odometry_values[i]),
- NULL,
+ nullptr,
&(odometry_values[i]));
}
ceres::Solver::Options solver_options;
solver_options.minimizer_progress_to_stdout = true;
- Solver::Summary summary;
+ ceres::Solver::Summary summary;
printf("Solving...\n");
- Solve(solver_options, &problem, &summary);
+ ceres::Solve(solver_options, &problem, &summary);
printf("Done.\n");
std::cout << summary.FullReport() << "\n";
printf("Final values:\n");
diff --git a/examples/robust_curve_fitting.cc b/examples/robust_curve_fitting.cc
index 9b526c5..e08b0df 100644
--- a/examples/robust_curve_fitting.cc
+++ b/examples/robust_curve_fitting.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -27,6 +27,12 @@
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
+//
+// This example fits the curve f(x;m,c) = e^(m * x + c) to data. However unlike
+// the data in curve_fitting.cc, the data here has outliers in it, so minimizing
+// the sum squared loss will result in a bad fit. So this example illustrates
+// the use of a robust loss function (CauchyLoss) to reduce the influence of the
+// outliers on the fit.
#include "ceres/ceres.h"
#include "glog/logging.h"
@@ -115,13 +121,6 @@
};
// clang-format on
-using ceres::AutoDiffCostFunction;
-using ceres::CauchyLoss;
-using ceres::CostFunction;
-using ceres::Problem;
-using ceres::Solve;
-using ceres::Solver;
-
struct ExponentialResidual {
ExponentialResidual(double x, double y) : x_(x), y_(y) {}
@@ -139,25 +138,28 @@
int main(int argc, char** argv) {
google::InitGoogleLogging(argv[0]);
- double m = 0.0;
- double c = 0.0;
+ const double initial_m = 0.0;
+ const double initial_c = 0.0;
+ double m = initial_m;
+ double c = initial_c;
- Problem problem;
+ ceres::Problem problem;
for (int i = 0; i < kNumObservations; ++i) {
- CostFunction* cost_function =
- new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
- new ExponentialResidual(data[2 * i], data[2 * i + 1]));
- problem.AddResidualBlock(cost_function, new CauchyLoss(0.5), &m, &c);
+ ceres::CostFunction* cost_function =
+ new ceres::AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
+ data[2 * i], data[2 * i + 1]);
+ problem.AddResidualBlock(cost_function, new ceres::CauchyLoss(0.5), &m, &c);
}
- Solver::Options options;
+ ceres::Solver::Options options;
+ options.max_num_iterations = 25;
options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = true;
- Solver::Summary summary;
- Solve(options, &problem, &summary);
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";
- std::cout << "Initial m: " << 0.0 << " c: " << 0.0 << "\n";
+ std::cout << "Initial m: " << initial_m << " c: " << initial_c << "\n";
std::cout << "Final m: " << m << " c: " << c << "\n";
return 0;
}
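For contrast, a short sketch of the two ways a loss can be attached to the residual block above (alternatives, pick one per residual block): passing nullptr gives the plain squared loss, while ceres::CauchyLoss(0.5) down-weights the outliers this example is built around.

  // Plain squared loss: outliers pull the fit strongly.
  problem.AddResidualBlock(cost_function, nullptr, &m, &c);
  // Robust Cauchy loss with scale 0.5: large residuals are down-weighted.
  problem.AddResidualBlock(cost_function, new ceres::CauchyLoss(0.5), &m, &c);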
diff --git a/examples/rosenbrock.cc b/examples/rosenbrock.cc
index 1b9aef6..a382ccd 100644
--- a/examples/rosenbrock.cc
+++ b/examples/rosenbrock.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -27,30 +27,29 @@
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
+//
+// Example of minimizing the Rosenbrock function
+// (https://en.wikipedia.org/wiki/Rosenbrock_function) using
+// GradientProblemSolver using automatically computed derivatives.
#include "ceres/ceres.h"
#include "glog/logging.h"
// f(x,y) = (1-x)^2 + 100(y - x^2)^2;
-class Rosenbrock : public ceres::FirstOrderFunction {
- public:
- virtual ~Rosenbrock() {}
-
- virtual bool Evaluate(const double* parameters,
- double* cost,
- double* gradient) const {
- const double x = parameters[0];
- const double y = parameters[1];
-
+struct Rosenbrock {
+ template <typename T>
+ bool operator()(const T* parameters, T* cost) const {
+ const T x = parameters[0];
+ const T y = parameters[1];
cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
- if (gradient != NULL) {
- gradient[0] = -2.0 * (1.0 - x) - 200.0 * (y - x * x) * 2.0 * x;
- gradient[1] = 200.0 * (y - x * x);
- }
return true;
}
- virtual int NumParameters() const { return 2; }
+ static ceres::FirstOrderFunction* Create() {
+ constexpr int kNumParameters = 2;
+ return new ceres::AutoDiffFirstOrderFunction<Rosenbrock, kNumParameters>(
+ new Rosenbrock);
+ }
};
int main(int argc, char** argv) {
@@ -62,7 +61,7 @@
options.minimizer_progress_to_stdout = true;
ceres::GradientProblemSolver::Summary summary;
- ceres::GradientProblem problem(new Rosenbrock());
+ ceres::GradientProblem problem(Rosenbrock::Create());
ceres::Solve(options, problem, parameters, &summary);
std::cout << summary.FullReport() << "\n";
diff --git a/examples/rosenbrock_analytic_diff.cc b/examples/rosenbrock_analytic_diff.cc
new file mode 100644
index 0000000..65e49eb
--- /dev/null
+++ b/examples/rosenbrock_analytic_diff.cc
@@ -0,0 +1,77 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+//
+// Example of minimizing the Rosenbrock function
+// (https://en.wikipedia.org/wiki/Rosenbrock_function) using
+// GradientProblemSolver using analytic derivatives.
+
+#include "ceres/ceres.h"
+#include "glog/logging.h"
+
+// f(x,y) = (1-x)^2 + 100(y - x^2)^2;
+class Rosenbrock final : public ceres::FirstOrderFunction {
+ public:
+ bool Evaluate(const double* parameters,
+ double* cost,
+ double* gradient) const override {
+ const double x = parameters[0];
+ const double y = parameters[1];
+
+ cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
+
+ if (gradient) {
+ gradient[0] = -2.0 * (1.0 - x) - 200.0 * (y - x * x) * 2.0 * x;
+ gradient[1] = 200.0 * (y - x * x);
+ }
+
+ return true;
+ }
+
+ int NumParameters() const override { return 2; }
+};
+
+int main(int argc, char** argv) {
+ google::InitGoogleLogging(argv[0]);
+
+ double parameters[2] = {-1.2, 1.0};
+
+ ceres::GradientProblemSolver::Options options;
+ options.minimizer_progress_to_stdout = true;
+
+ ceres::GradientProblemSolver::Summary summary;
+ ceres::GradientProblem problem(new Rosenbrock());
+ ceres::Solve(options, problem, parameters, &summary);
+
+ std::cout << summary.FullReport() << "\n";
+ std::cout << "Initial x: " << -1.2 << " y: " << 1.0 << "\n";
+ std::cout << "Final x: " << parameters[0] << " y: " << parameters[1]
+ << "\n";
+ return 0;
+}
diff --git a/examples/rosenbrock_numeric_diff.cc b/examples/rosenbrock_numeric_diff.cc
new file mode 100644
index 0000000..a711b2f
--- /dev/null
+++ b/examples/rosenbrock_numeric_diff.cc
@@ -0,0 +1,74 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+//
+// Example of minimizing the Rosenbrock function
+// (https://en.wikipedia.org/wiki/Rosenbrock_function) using
+// GradientProblemSolver using derivatives computed using numeric
+// differentiation.
+
+#include "ceres/ceres.h"
+#include "glog/logging.h"
+
+// f(x,y) = (1-x)^2 + 100(y - x^2)^2;
+struct Rosenbrock {
+ bool operator()(const double* parameters, double* cost) const {
+ const double x = parameters[0];
+ const double y = parameters[1];
+ cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
+ return true;
+ }
+
+ static ceres::FirstOrderFunction* Create() {
+ constexpr int kNumParameters = 2;
+ return new ceres::NumericDiffFirstOrderFunction<Rosenbrock,
+ ceres::CENTRAL,
+ kNumParameters>(
+ new Rosenbrock);
+ }
+};
+
+int main(int argc, char** argv) {
+ google::InitGoogleLogging(argv[0]);
+
+ double parameters[2] = {-1.2, 1.0};
+
+ ceres::GradientProblemSolver::Options options;
+ options.minimizer_progress_to_stdout = true;
+
+ ceres::GradientProblemSolver::Summary summary;
+ ceres::GradientProblem problem(Rosenbrock::Create());
+ ceres::Solve(options, problem, parameters, &summary);
+
+ std::cout << summary.FullReport() << "\n";
+ std::cout << "Initial x: " << -1.2 << " y: " << 1.0 << "\n";
+ std::cout << "Final x: " << parameters[0] << " y: " << parameters[1]
+ << "\n";
+ return 0;
+}
diff --git a/examples/sampled_function/CMakeLists.txt b/examples/sampled_function/CMakeLists.txt
index 8a17cad..1013753 100644
--- a/examples/sampled_function/CMakeLists.txt
+++ b/examples/sampled_function/CMakeLists.txt
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -29,4 +29,4 @@
# Author: vitus@google.com (Michael Vitus)
add_executable(sampled_function sampled_function.cc)
-target_link_libraries(sampled_function Ceres::ceres)
+target_link_libraries(sampled_function PRIVATE Ceres::ceres)
diff --git a/examples/sampled_function/README.md b/examples/sampled_function/README.md
index ef1af43..5fde415 100644
--- a/examples/sampled_function/README.md
+++ b/examples/sampled_function/README.md
@@ -32,7 +32,7 @@
```c++
bool Evaluate(double const* const* parameters, double* residuals, double** jacobians) const {
- if (jacobians == NULL || jacobians[0] == NULL)
+ if (jacobians == nullptr || jacobians[0] == nullptr)
interpolator_.Evaluate(parameters[0][0], residuals);
else
interpolator_.Evaluate(parameters[0][0], residuals, jacobians[0]);
diff --git a/examples/sampled_function/sampled_function.cc b/examples/sampled_function/sampled_function.cc
index e96018d..40e9c1f 100644
--- a/examples/sampled_function/sampled_function.cc
+++ b/examples/sampled_function/sampled_function.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,35 +35,27 @@
#include "ceres/cubic_interpolation.h"
#include "glog/logging.h"
-using ceres::AutoDiffCostFunction;
-using ceres::CostFunction;
-using ceres::CubicInterpolator;
-using ceres::Grid1D;
-using ceres::Problem;
-using ceres::Solve;
-using ceres::Solver;
+using Interpolator = ceres::CubicInterpolator<ceres::Grid1D<double>>;
// A simple cost functor that interfaces an interpolated table of
// values with automatic differentiation.
struct InterpolatedCostFunctor {
- explicit InterpolatedCostFunctor(
- const CubicInterpolator<Grid1D<double>>& interpolator)
- : interpolator_(interpolator) {}
+ explicit InterpolatedCostFunctor(const Interpolator& interpolator)
+ : interpolator(interpolator) {}
template <typename T>
bool operator()(const T* x, T* residuals) const {
- interpolator_.Evaluate(*x, residuals);
+ interpolator.Evaluate(*x, residuals);
return true;
}
- static CostFunction* Create(
- const CubicInterpolator<Grid1D<double>>& interpolator) {
- return new AutoDiffCostFunction<InterpolatedCostFunctor, 1, 1>(
- new InterpolatedCostFunctor(interpolator));
+ static ceres::CostFunction* Create(const Interpolator& interpolator) {
+ return new ceres::AutoDiffCostFunction<InterpolatedCostFunctor, 1, 1>(
+ interpolator);
}
private:
- const CubicInterpolator<Grid1D<double>>& interpolator_;
+ const Interpolator& interpolator;
};
int main(int argc, char** argv) {
@@ -76,18 +68,19 @@
values[i] = (i - 4.5) * (i - 4.5);
}
- Grid1D<double> array(values, 0, kNumSamples);
- CubicInterpolator<Grid1D<double>> interpolator(array);
+ ceres::Grid1D<double> array(values, 0, kNumSamples);
+ Interpolator interpolator(array);
double x = 1.0;
- Problem problem;
- CostFunction* cost_function = InterpolatedCostFunctor::Create(interpolator);
- problem.AddResidualBlock(cost_function, NULL, &x);
+ ceres::Problem problem;
+ ceres::CostFunction* cost_function =
+ InterpolatedCostFunctor::Create(interpolator);
+ problem.AddResidualBlock(cost_function, nullptr, &x);
- Solver::Options options;
+ ceres::Solver::Options options;
options.minimizer_progress_to_stdout = true;
- Solver::Summary summary;
- Solve(options, &problem, &summary);
+ ceres::Solver::Summary summary;
+ ceres::Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";
std::cout << "Expected x: 4.5. Actual x : " << x << std::endl;
return 0;
diff --git a/examples/simple_bundle_adjuster.cc b/examples/simple_bundle_adjuster.cc
index 8180d73..bb0ba1c 100644
--- a/examples/simple_bundle_adjuster.cc
+++ b/examples/simple_bundle_adjuster.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -66,7 +66,7 @@
bool LoadFile(const char* filename) {
FILE* fptr = fopen(filename, "r");
- if (fptr == NULL) {
+ if (fptr == nullptr) {
return false;
};
@@ -164,8 +164,8 @@
// the client code.
static ceres::CostFunction* Create(const double observed_x,
const double observed_y) {
- return (new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
- new SnavelyReprojectionError(observed_x, observed_y)));
+ return new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
+ observed_x, observed_y);
}
double observed_x;
@@ -198,7 +198,7 @@
ceres::CostFunction* cost_function = SnavelyReprojectionError::Create(
observations[2 * i + 0], observations[2 * i + 1]);
problem.AddResidualBlock(cost_function,
- NULL /* squared loss */,
+ nullptr /* squared loss */,
bal_problem.mutable_camera_for_observation(i),
bal_problem.mutable_point_for_observation(i));
}
diff --git a/examples/slam/CMakeLists.txt b/examples/slam/CMakeLists.txt
index c72aa16..d0b03d7 100644
--- a/examples/slam/CMakeLists.txt
+++ b/examples/slam/CMakeLists.txt
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2016 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
diff --git a/examples/slam/common/read_g2o.h b/examples/slam/common/read_g2o.h
index fea32e9..490b054 100644
--- a/examples/slam/common/read_g2o.h
+++ b/examples/slam/common/read_g2o.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,8 +38,7 @@
#include "glog/logging.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
// Reads a single pose from the input and inserts it into the map. Returns false
// if there is a duplicate entry.
@@ -60,7 +59,7 @@
return true;
}
-// Reads the contraints between two vertices in the pose graph
+// Reads the constraints between two vertices in the pose graph
template <typename Constraint, typename Allocator>
void ReadConstraint(std::ifstream* infile,
std::vector<Constraint, Allocator>* constraints) {
@@ -104,8 +103,8 @@
bool ReadG2oFile(const std::string& filename,
std::map<int, Pose, std::less<int>, MapAllocator>* poses,
std::vector<Constraint, VectorAllocator>* constraints) {
- CHECK(poses != NULL);
- CHECK(constraints != NULL);
+ CHECK(poses != nullptr);
+ CHECK(constraints != nullptr);
poses->clear();
constraints->clear();
@@ -137,7 +136,6 @@
return true;
}
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
#endif // EXAMPLES_CERES_READ_G2O_H_
diff --git a/examples/slam/pose_graph_2d/CMakeLists.txt b/examples/slam/pose_graph_2d/CMakeLists.txt
index 20af056..87943ec 100644
--- a/examples/slam/pose_graph_2d/CMakeLists.txt
+++ b/examples/slam/pose_graph_2d/CMakeLists.txt
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2016 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -30,10 +30,10 @@
if (GFLAGS)
add_executable(pose_graph_2d
- angle_local_parameterization.h
+ angle_manifold.h
normalize_angle.h
pose_graph_2d.cc
pose_graph_2d_error_term.h
types.h)
- target_link_libraries(pose_graph_2d Ceres::ceres gflags)
+ target_link_libraries(pose_graph_2d PRIVATE Ceres::ceres gflags)
endif (GFLAGS)
diff --git a/examples/slam/pose_graph_2d/angle_local_parameterization.h b/examples/slam/pose_graph_2d/angle_manifold.h
similarity index 63%
rename from examples/slam/pose_graph_2d/angle_local_parameterization.h
rename to examples/slam/pose_graph_2d/angle_manifold.h
index a81637c..456d923 100644
--- a/examples/slam/pose_graph_2d/angle_local_parameterization.h
+++ b/examples/slam/pose_graph_2d/angle_manifold.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -27,38 +27,42 @@
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: vitus@google.com (Michael Vitus)
+// sameeragarwal@google.com (Sameer Agarwal)
-#ifndef CERES_EXAMPLES_POSE_GRAPH_2D_ANGLE_LOCAL_PARAMETERIZATION_H_
-#define CERES_EXAMPLES_POSE_GRAPH_2D_ANGLE_LOCAL_PARAMETERIZATION_H_
+#ifndef CERES_EXAMPLES_POSE_GRAPH_2D_ANGLE_MANIFOLD_H_
+#define CERES_EXAMPLES_POSE_GRAPH_2D_ANGLE_MANIFOLD_H_
-#include "ceres/local_parameterization.h"
+#include "ceres/autodiff_manifold.h"
#include "normalize_angle.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
-// Defines a local parameterization for updating the angle to be constrained in
-// [-pi to pi).
-class AngleLocalParameterization {
+// Defines a manifold for updating the angle to be constrained in [-pi to pi).
+class AngleManifold {
public:
template <typename T>
- bool operator()(const T* theta_radians,
- const T* delta_theta_radians,
- T* theta_radians_plus_delta) const {
- *theta_radians_plus_delta =
- NormalizeAngle(*theta_radians + *delta_theta_radians);
+ bool Plus(const T* x_radians,
+ const T* delta_radians,
+ T* x_plus_delta_radians) const {
+ *x_plus_delta_radians = NormalizeAngle(*x_radians + *delta_radians);
+ return true;
+ }
+
+ template <typename T>
+ bool Minus(const T* y_radians,
+ const T* x_radians,
+ T* y_minus_x_radians) const {
+ *y_minus_x_radians =
+ NormalizeAngle(*y_radians) - NormalizeAngle(*x_radians);
return true;
}
- static ceres::LocalParameterization* Create() {
- return (new ceres::AutoDiffLocalParameterization<AngleLocalParameterization,
- 1,
- 1>);
+ static ceres::Manifold* Create() {
+ return new ceres::AutoDiffManifold<AngleManifold, 1, 1>;
}
};
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
-#endif // CERES_EXAMPLES_POSE_GRAPH_2D_ANGLE_LOCAL_PARAMETERIZATION_H_
+#endif // CERES_EXAMPLES_POSE_GRAPH_2D_ANGLE_MANIFOLD_H_
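A minimal usage sketch for the manifold defined above, assuming a scalar yaw parameter block inside an already constructed ceres::Problem; SetManifold keeps the yaw within [-pi, pi) across solver updates. The value 3.0 is an assumed example.

  double yaw_radians = 3.0;
  problem.AddParameterBlock(&yaw_radians, 1);
  problem.SetManifold(&yaw_radians, ceres::examples::AngleManifold::Create());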
diff --git a/examples/slam/pose_graph_2d/normalize_angle.h b/examples/slam/pose_graph_2d/normalize_angle.h
index c215671..0602878 100644
--- a/examples/slam/pose_graph_2d/normalize_angle.h
+++ b/examples/slam/pose_graph_2d/normalize_angle.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,19 +35,17 @@
#include "ceres/ceres.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
// Normalizes the angle in radians between [-pi and pi).
template <typename T>
inline T NormalizeAngle(const T& angle_radians) {
// Use ceres::floor because it is specialized for double and Jet types.
- T two_pi(2.0 * M_PI);
+ T two_pi(2.0 * constants::pi);
return angle_radians -
- two_pi * ceres::floor((angle_radians + T(M_PI)) / two_pi);
+ two_pi * ceres::floor((angle_radians + T(constants::pi)) / two_pi);
}
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
#endif // CERES_EXAMPLES_POSE_GRAPH_2D_NORMALIZE_ANGLE_H_
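A short worked example of the formula above (values assumed): for an input of 3*pi/2, (3*pi/2 + pi) / (2*pi) = 1.25, floor gives 1, so the result is 3*pi/2 - 2*pi = -pi/2, which lies in [-pi, pi).

  // NormalizeAngle(3.0 * pi / 2.0) == -pi / 2.0
  // NormalizeAngle(pi)             == -pi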
diff --git a/examples/slam/pose_graph_2d/pose_graph_2d.cc b/examples/slam/pose_graph_2d/pose_graph_2d.cc
index 1172123..3ebae3f 100644
--- a/examples/slam/pose_graph_2d/pose_graph_2d.cc
+++ b/examples/slam/pose_graph_2d/pose_graph_2d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,7 +39,7 @@
#include <string>
#include <vector>
-#include "angle_local_parameterization.h"
+#include "angle_manifold.h"
#include "ceres/ceres.h"
#include "common/read_g2o.h"
#include "gflags/gflags.h"
@@ -49,8 +49,7 @@
DEFINE_string(input, "", "The pose graph definition filename in g2o format.");
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
namespace {
// Constructs the nonlinear least squares optimization problem from the pose
@@ -58,29 +57,21 @@
void BuildOptimizationProblem(const std::vector<Constraint2d>& constraints,
std::map<int, Pose2d>* poses,
ceres::Problem* problem) {
- CHECK(poses != NULL);
- CHECK(problem != NULL);
+ CHECK(poses != nullptr);
+ CHECK(problem != nullptr);
if (constraints.empty()) {
LOG(INFO) << "No constraints, no problem to optimize.";
return;
}
- ceres::LossFunction* loss_function = NULL;
- ceres::LocalParameterization* angle_local_parameterization =
- AngleLocalParameterization::Create();
+ ceres::LossFunction* loss_function = nullptr;
+ ceres::Manifold* angle_manifold = AngleManifold::Create();
- for (std::vector<Constraint2d>::const_iterator constraints_iter =
- constraints.begin();
- constraints_iter != constraints.end();
- ++constraints_iter) {
- const Constraint2d& constraint = *constraints_iter;
-
- std::map<int, Pose2d>::iterator pose_begin_iter =
- poses->find(constraint.id_begin);
+ for (const auto& constraint : constraints) {
+ auto pose_begin_iter = poses->find(constraint.id_begin);
CHECK(pose_begin_iter != poses->end())
<< "Pose with ID: " << constraint.id_begin << " not found.";
- std::map<int, Pose2d>::iterator pose_end_iter =
- poses->find(constraint.id_end);
+ auto pose_end_iter = poses->find(constraint.id_end);
CHECK(pose_end_iter != poses->end())
<< "Pose with ID: " << constraint.id_end << " not found.";
@@ -98,10 +89,8 @@
&pose_end_iter->second.y,
&pose_end_iter->second.yaw_radians);
- problem->SetParameterization(&pose_begin_iter->second.yaw_radians,
- angle_local_parameterization);
- problem->SetParameterization(&pose_end_iter->second.yaw_radians,
- angle_local_parameterization);
+ problem->SetManifold(&pose_begin_iter->second.yaw_radians, angle_manifold);
+ problem->SetManifold(&pose_end_iter->second.yaw_radians, angle_manifold);
}
// The pose graph optimization problem has three DOFs that are not fully
@@ -111,7 +100,7 @@
// internal damping which mitigate this issue, but it is better to properly
// constrain the gauge freedom. This can be done by setting one of the poses
// as constant so the optimizer cannot change it.
- std::map<int, Pose2d>::iterator pose_start_iter = poses->begin();
+ auto pose_start_iter = poses->begin();
CHECK(pose_start_iter != poses->end()) << "There are no poses.";
problem->SetParameterBlockConstant(&pose_start_iter->second.x);
problem->SetParameterBlockConstant(&pose_start_iter->second.y);
@@ -120,7 +109,7 @@
// Returns true if the solve was successful.
bool SolveOptimizationProblem(ceres::Problem* problem) {
- CHECK(problem != NULL);
+ CHECK(problem != nullptr);
ceres::Solver::Options options;
options.max_num_iterations = 100;
@@ -143,10 +132,7 @@
std::cerr << "Error opening the file: " << filename << '\n';
return false;
}
- for (std::map<int, Pose2d>::const_iterator poses_iter = poses.begin();
- poses_iter != poses.end();
- ++poses_iter) {
- const std::map<int, Pose2d>::value_type& pair = *poses_iter;
+ for (const auto& pair : poses) {
outfile << pair.first << " " << pair.second.x << " " << pair.second.y << ' '
<< pair.second.yaw_radians << '\n';
}
@@ -154,8 +140,7 @@
}
} // namespace
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
int main(int argc, char** argv) {
google::InitGoogleLogging(argv[0]);
diff --git a/examples/slam/pose_graph_2d/pose_graph_2d_error_term.h b/examples/slam/pose_graph_2d/pose_graph_2d_error_term.h
index 2df31f6..3d34f8d 100644
--- a/examples/slam/pose_graph_2d/pose_graph_2d_error_term.h
+++ b/examples/slam/pose_graph_2d/pose_graph_2d_error_term.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,9 +34,9 @@
#define CERES_EXAMPLES_POSE_GRAPH_2D_POSE_GRAPH_2D_ERROR_TERM_H_
#include "Eigen/Core"
+#include "ceres/autodiff_cost_function.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
template <typename T>
Eigen::Matrix<T, 2, 2> RotationMatrix2D(T yaw_radians) {
@@ -96,10 +96,9 @@
double y_ab,
double yaw_ab_radians,
const Eigen::Matrix3d& sqrt_information) {
- return (new ceres::
- AutoDiffCostFunction<PoseGraph2dErrorTerm, 3, 1, 1, 1, 1, 1, 1>(
- new PoseGraph2dErrorTerm(
- x_ab, y_ab, yaw_ab_radians, sqrt_information)));
+ return new ceres::
+ AutoDiffCostFunction<PoseGraph2dErrorTerm, 3, 1, 1, 1, 1, 1, 1>(
+ x_ab, y_ab, yaw_ab_radians, sqrt_information);
}
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
@@ -113,7 +112,6 @@
const Eigen::Matrix3d sqrt_information_;
};
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
#endif // CERES_EXAMPLES_POSE_GRAPH_2D_POSE_GRAPH_2D_ERROR_TERM_H_
diff --git a/examples/slam/pose_graph_2d/types.h b/examples/slam/pose_graph_2d/types.h
index 3c13824..caf4ccb 100644
--- a/examples/slam/pose_graph_2d/types.h
+++ b/examples/slam/pose_graph_2d/types.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,8 +40,7 @@
#include "Eigen/Core"
#include "normalize_angle.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
// The state for each vertex in the pose graph.
struct Pose2d {
@@ -95,7 +94,6 @@
return input;
}
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
#endif // CERES_EXAMPLES_POSE_GRAPH_2D_TYPES_H_
diff --git a/examples/slam/pose_graph_3d/CMakeLists.txt b/examples/slam/pose_graph_3d/CMakeLists.txt
index b6421cc..544b00e 100644
--- a/examples/slam/pose_graph_3d/CMakeLists.txt
+++ b/examples/slam/pose_graph_3d/CMakeLists.txt
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2016 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -30,5 +30,5 @@
if (GFLAGS)
add_executable(pose_graph_3d pose_graph_3d.cc)
- target_link_libraries(pose_graph_3d Ceres::ceres gflags)
+ target_link_libraries(pose_graph_3d PRIVATE Ceres::ceres gflags)
endif (GFLAGS)
diff --git a/examples/slam/pose_graph_3d/pose_graph_3d.cc b/examples/slam/pose_graph_3d/pose_graph_3d.cc
index 2f8d6a4..522e2a1 100644
--- a/examples/slam/pose_graph_3d/pose_graph_3d.cc
+++ b/examples/slam/pose_graph_3d/pose_graph_3d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -41,8 +41,7 @@
DEFINE_string(input, "", "The pose graph definition filename in g2o format.");
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
namespace {
// Constructs the nonlinear least squares optimization problem from the pose
@@ -50,27 +49,21 @@
void BuildOptimizationProblem(const VectorOfConstraints& constraints,
MapOfPoses* poses,
ceres::Problem* problem) {
- CHECK(poses != NULL);
- CHECK(problem != NULL);
+ CHECK(poses != nullptr);
+ CHECK(problem != nullptr);
if (constraints.empty()) {
LOG(INFO) << "No constraints, no problem to optimize.";
return;
}
- ceres::LossFunction* loss_function = NULL;
- ceres::LocalParameterization* quaternion_local_parameterization =
- new EigenQuaternionParameterization;
+ ceres::LossFunction* loss_function = nullptr;
+ ceres::Manifold* quaternion_manifold = new EigenQuaternionManifold;
- for (VectorOfConstraints::const_iterator constraints_iter =
- constraints.begin();
- constraints_iter != constraints.end();
- ++constraints_iter) {
- const Constraint3d& constraint = *constraints_iter;
-
- MapOfPoses::iterator pose_begin_iter = poses->find(constraint.id_begin);
+ for (const auto& constraint : constraints) {
+ auto pose_begin_iter = poses->find(constraint.id_begin);
CHECK(pose_begin_iter != poses->end())
<< "Pose with ID: " << constraint.id_begin << " not found.";
- MapOfPoses::iterator pose_end_iter = poses->find(constraint.id_end);
+ auto pose_end_iter = poses->find(constraint.id_end);
CHECK(pose_end_iter != poses->end())
<< "Pose with ID: " << constraint.id_end << " not found.";
@@ -87,10 +80,10 @@
pose_end_iter->second.p.data(),
pose_end_iter->second.q.coeffs().data());
- problem->SetParameterization(pose_begin_iter->second.q.coeffs().data(),
- quaternion_local_parameterization);
- problem->SetParameterization(pose_end_iter->second.q.coeffs().data(),
- quaternion_local_parameterization);
+ problem->SetManifold(pose_begin_iter->second.q.coeffs().data(),
+ quaternion_manifold);
+ problem->SetManifold(pose_end_iter->second.q.coeffs().data(),
+ quaternion_manifold);
}
// The pose graph optimization problem has six DOFs that are not fully
@@ -100,7 +93,7 @@
// internal damping which mitigates this issue, but it is better to properly
// constrain the gauge freedom. This can be done by setting one of the poses
// as constant so the optimizer cannot change it.
- MapOfPoses::iterator pose_start_iter = poses->begin();
+ auto pose_start_iter = poses->begin();
CHECK(pose_start_iter != poses->end()) << "There are no poses.";
problem->SetParameterBlockConstant(pose_start_iter->second.p.data());
problem->SetParameterBlockConstant(pose_start_iter->second.q.coeffs().data());
@@ -108,7 +101,7 @@
// Returns true if the solve was successful.
bool SolveOptimizationProblem(ceres::Problem* problem) {
- CHECK(problem != NULL);
+ CHECK(problem != nullptr);
ceres::Solver::Options options;
options.max_num_iterations = 200;
@@ -130,18 +123,7 @@
LOG(ERROR) << "Error opening the file: " << filename;
return false;
}
- for (std::map<int,
- Pose3d,
- std::less<int>,
- Eigen::aligned_allocator<std::pair<const int, Pose3d>>>::
- const_iterator poses_iter = poses.begin();
- poses_iter != poses.end();
- ++poses_iter) {
- const std::map<int,
- Pose3d,
- std::less<int>,
- Eigen::aligned_allocator<std::pair<const int, Pose3d>>>::
- value_type& pair = *poses_iter;
+ for (const auto& pair : poses) {
outfile << pair.first << " " << pair.second.p.transpose() << " "
<< pair.second.q.x() << " " << pair.second.q.y() << " "
<< pair.second.q.z() << " " << pair.second.q.w() << '\n';
@@ -150,8 +132,7 @@
}
} // namespace
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
int main(int argc, char** argv) {
google::InitGoogleLogging(argv[0]);
diff --git a/examples/slam/pose_graph_3d/pose_graph_3d_error_term.h b/examples/slam/pose_graph_3d/pose_graph_3d_error_term.h
index 1f3e8de..b1c0138 100644
--- a/examples/slam/pose_graph_3d/pose_graph_3d_error_term.h
+++ b/examples/slam/pose_graph_3d/pose_graph_3d_error_term.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,12 +31,13 @@
#ifndef EXAMPLES_CERES_POSE_GRAPH_3D_ERROR_TERM_H_
#define EXAMPLES_CERES_POSE_GRAPH_3D_ERROR_TERM_H_
+#include <utility>
+
#include "Eigen/Core"
#include "ceres/autodiff_cost_function.h"
#include "types.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
// Computes the error term for two poses that have a relative pose measurement
// between them. Let the hat variables be the measurement. We have two poses x_a
@@ -69,9 +70,10 @@
// where I is the information matrix which is the inverse of the covariance.
class PoseGraph3dErrorTerm {
public:
- PoseGraph3dErrorTerm(const Pose3d& t_ab_measured,
- const Eigen::Matrix<double, 6, 6>& sqrt_information)
- : t_ab_measured_(t_ab_measured), sqrt_information_(sqrt_information) {}
+ PoseGraph3dErrorTerm(Pose3d t_ab_measured,
+ Eigen::Matrix<double, 6, 6> sqrt_information)
+ : t_ab_measured_(std::move(t_ab_measured)),
+ sqrt_information_(std::move(sqrt_information)) {}
template <typename T>
bool operator()(const T* const p_a_ptr,
@@ -114,7 +116,7 @@
const Pose3d& t_ab_measured,
const Eigen::Matrix<double, 6, 6>& sqrt_information) {
return new ceres::AutoDiffCostFunction<PoseGraph3dErrorTerm, 6, 3, 4, 3, 4>(
- new PoseGraph3dErrorTerm(t_ab_measured, sqrt_information));
+ t_ab_measured, sqrt_information);
}
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
@@ -126,7 +128,6 @@
const Eigen::Matrix<double, 6, 6> sqrt_information_;
};
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
#endif // EXAMPLES_CERES_POSE_GRAPH_3D_ERROR_TERM_H_
diff --git a/examples/slam/pose_graph_3d/types.h b/examples/slam/pose_graph_3d/types.h
index d3f19ed..207dd5d 100644
--- a/examples/slam/pose_graph_3d/types.h
+++ b/examples/slam/pose_graph_3d/types.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,6 +31,7 @@
#ifndef EXAMPLES_CERES_TYPES_H_
#define EXAMPLES_CERES_TYPES_H_
+#include <functional>
#include <istream>
#include <map>
#include <string>
@@ -39,8 +40,7 @@
#include "Eigen/Core"
#include "Eigen/Geometry"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
struct Pose3d {
Eigen::Vector3d p;
@@ -61,11 +61,11 @@
return input;
}
-typedef std::map<int,
- Pose3d,
- std::less<int>,
- Eigen::aligned_allocator<std::pair<const int, Pose3d>>>
- MapOfPoses;
+using MapOfPoses =
+ std::map<int,
+ Pose3d,
+ std::less<int>,
+ Eigen::aligned_allocator<std::pair<const int, Pose3d>>>;
// The constraint between two vertices in the pose graph. The constraint is the
// transformation from vertex id_begin to vertex id_end.
@@ -103,10 +103,9 @@
return input;
}
-typedef std::vector<Constraint3d, Eigen::aligned_allocator<Constraint3d>>
- VectorOfConstraints;
+using VectorOfConstraints =
+ std::vector<Constraint3d, Eigen::aligned_allocator<Constraint3d>>;
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
#endif // EXAMPLES_CERES_TYPES_H_
diff --git a/examples/snavely_reprojection_error.h b/examples/snavely_reprojection_error.h
index eb39d23..aaf0c6c 100644
--- a/examples/snavely_reprojection_error.h
+++ b/examples/snavely_reprojection_error.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -41,10 +41,10 @@
#ifndef CERES_EXAMPLES_SNAVELY_REPROJECTION_ERROR_H_
#define CERES_EXAMPLES_SNAVELY_REPROJECTION_ERROR_H_
+#include "ceres/autodiff_cost_function.h"
#include "ceres/rotation.h"
-namespace ceres {
-namespace examples {
+namespace ceres::examples {
// Templated pinhole camera model for used with Ceres. The camera is
// parameterized using 9 parameters: 3 for rotation, 3 for translation, 1 for
@@ -95,8 +95,8 @@
// the client code.
static ceres::CostFunction* Create(const double observed_x,
const double observed_y) {
- return (new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
- new SnavelyReprojectionError(observed_x, observed_y)));
+ return new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
+ observed_x, observed_y);
}
double observed_x;
@@ -123,7 +123,7 @@
// We use QuaternionRotatePoint as it does not assume that the
// quaternion is normalized, since one of the ways to run the
// bundle adjuster is to let Ceres optimize all 4 quaternion
- // parameters without a local parameterization.
+ // parameters without using a Quaternion manifold.
T p[3];
QuaternionRotatePoint(camera, point, p);
@@ -160,20 +160,15 @@
// the client code.
static ceres::CostFunction* Create(const double observed_x,
const double observed_y) {
- return (
- new ceres::AutoDiffCostFunction<SnavelyReprojectionErrorWithQuaternions,
- 2,
- 10,
- 3>(
- new SnavelyReprojectionErrorWithQuaternions(observed_x,
- observed_y)));
+ return new ceres::
+ AutoDiffCostFunction<SnavelyReprojectionErrorWithQuaternions, 2, 10, 3>(
+ observed_x, observed_y);
}
double observed_x;
double observed_y;
};
-} // namespace examples
-} // namespace ceres
+} // namespace ceres::examples
#endif // CERES_EXAMPLES_SNAVELY_REPROJECTION_ERROR_H_
diff --git a/include/ceres/autodiff_cost_function.h b/include/ceres/autodiff_cost_function.h
index 207f0a4..878b2ec 100644
--- a/include/ceres/autodiff_cost_function.h
+++ b/include/ceres/autodiff_cost_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -82,9 +82,9 @@
// Then given this class definition, the auto differentiated cost function for
// it can be constructed as follows.
//
-// CostFunction* cost_function
-// = new AutoDiffCostFunction<MyScalarCostFunctor, 1, 2, 2>(
-// new MyScalarCostFunctor(1.0)); ^ ^ ^
+// auto* cost_function
+// = new AutoDiffCostFunction<MyScalarCostFunctor, 1, 2, 2>(1.0);
+// ^ ^ ^
// | | |
// Dimension of residual -----+ | |
// Dimension of x ---------------+ |
@@ -99,9 +99,11 @@
// AutoDiffCostFunction also supports cost functions with a
// runtime-determined number of residuals. For example:
//
-// CostFunction* cost_function
-// = new AutoDiffCostFunction<MyScalarCostFunctor, DYNAMIC, 2, 2>(
-// new CostFunctorWithDynamicNumResiduals(1.0), ^ ^ ^
+// auto functor = std::make_unique<CostFunctorWithDynamicNumResiduals>(1.0);
+// auto* cost_function
+// = new AutoDiffCostFunction<CostFunctorWithDynamicNumResiduals,
+// DYNAMIC, 2, 2>(
+// std::move(functor), ^ ^ ^
// runtime_number_of_residuals); <----+ | | |
// | | | |
// | | | |
@@ -126,11 +128,11 @@
#define CERES_PUBLIC_AUTODIFF_COST_FUNCTION_H_
#include <memory>
+#include <type_traits>
#include "ceres/internal/autodiff.h"
#include "ceres/sized_cost_function.h"
#include "ceres/types.h"
-#include "glog/logging.h"
namespace ceres {
@@ -151,17 +153,36 @@
template <typename CostFunctor,
int kNumResiduals, // Number of residuals, or ceres::DYNAMIC.
int... Ns> // Number of parameters in each parameter block.
-class AutoDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
+class AutoDiffCostFunction final
+ : public SizedCostFunction<kNumResiduals, Ns...> {
public:
// Takes ownership of functor by default. Uses the template-provided
// value for the number of residuals ("kNumResiduals").
+ explicit AutoDiffCostFunction(std::unique_ptr<CostFunctor> functor)
+ : AutoDiffCostFunction{std::move(functor), TAKE_OWNERSHIP, FIXED_INIT} {}
+
+ // Constructs the CostFunctor on the heap and takes the ownership.
+ // Invocable only if the number of residuals is known at compile-time.
+ template <class... Args,
+ bool kIsDynamic = kNumResiduals == DYNAMIC,
+ std::enable_if_t<!kIsDynamic &&
+ std::is_constructible_v<CostFunctor, Args&&...>>* =
+ nullptr>
+ explicit AutoDiffCostFunction(Args&&... args)
+ // NOTE We explicitly use direct initialization using parentheses instead
+ // of uniform initialization using braces to avoid narrowing conversion
+ // warnings.
+ : AutoDiffCostFunction{
+ std::make_unique<CostFunctor>(std::forward<Args>(args)...)} {}
+
+ AutoDiffCostFunction(std::unique_ptr<CostFunctor> functor, int num_residuals)
+ : AutoDiffCostFunction{
+ std::move(functor), num_residuals, TAKE_OWNERSHIP, DYNAMIC_INIT} {}
+
explicit AutoDiffCostFunction(CostFunctor* functor,
Ownership ownership = TAKE_OWNERSHIP)
- : functor_(functor), ownership_(ownership) {
- static_assert(kNumResiduals != DYNAMIC,
- "Can't run the fixed-size constructor if the number of "
- "residuals is set to ceres::DYNAMIC.");
- }
+ : AutoDiffCostFunction{
+ std::unique_ptr<CostFunctor>{functor}, ownership, FIXED_INIT} {}
// Takes ownership of functor by default. Ignores the template-provided
// kNumResiduals in favor of the "num_residuals" argument provided.
@@ -171,17 +192,18 @@
AutoDiffCostFunction(CostFunctor* functor,
int num_residuals,
Ownership ownership = TAKE_OWNERSHIP)
- : functor_(functor), ownership_(ownership) {
- static_assert(kNumResiduals == DYNAMIC,
- "Can't run the dynamic-size constructor if the number of "
- "residuals is not ceres::DYNAMIC.");
- SizedCostFunction<kNumResiduals, Ns...>::set_num_residuals(num_residuals);
- }
+ : AutoDiffCostFunction{std::unique_ptr<CostFunctor>{functor},
+ num_residuals,
+ ownership,
+ DYNAMIC_INIT} {}
- explicit AutoDiffCostFunction(AutoDiffCostFunction&& other)
- : functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
+ AutoDiffCostFunction(AutoDiffCostFunction&& other) noexcept = default;
+ AutoDiffCostFunction& operator=(AutoDiffCostFunction&& other) noexcept =
+ default;
+ AutoDiffCostFunction(const AutoDiffCostFunction& other) = delete;
+ AutoDiffCostFunction& operator=(const AutoDiffCostFunction& other) = delete;
- virtual ~AutoDiffCostFunction() {
+ ~AutoDiffCostFunction() override {
// Manually release pointer if configured to not take ownership rather than
// deleting only if ownership is taken.
// This is to stay maximally compatible to old user code which may have
@@ -203,7 +225,7 @@
using ParameterDims =
typename SizedCostFunction<kNumResiduals, Ns...>::ParameterDims;
- if (!jacobians) {
+ if (jacobians == nullptr) {
return internal::VariadicEvaluate<ParameterDims>(
*functor_, parameters, residuals);
}
@@ -215,7 +237,36 @@
jacobians);
};
+ const CostFunctor& functor() const { return *functor_; }
+
private:
+ // Tags used to differentiate between dynamic and fixed size constructor
+ // delegate invocations.
+ static constexpr std::integral_constant<int, DYNAMIC> DYNAMIC_INIT{};
+ static constexpr std::integral_constant<int, kNumResiduals> FIXED_INIT{};
+
+ template <class InitTag>
+ AutoDiffCostFunction(std::unique_ptr<CostFunctor> functor,
+ int num_residuals,
+ Ownership ownership,
+ InitTag /*unused*/)
+ : functor_{std::move(functor)}, ownership_{ownership} {
+ static_assert(kNumResiduals == FIXED_INIT,
+ "Can't run the fixed-size constructor if the number of "
+ "residuals is set to ceres::DYNAMIC.");
+
+ if constexpr (InitTag::value == DYNAMIC_INIT) {
+ SizedCostFunction<kNumResiduals, Ns...>::set_num_residuals(num_residuals);
+ }
+ }
+
+ template <class InitTag>
+ AutoDiffCostFunction(std::unique_ptr<CostFunctor> functor,
+ Ownership ownership,
+ InitTag tag)
+ : AutoDiffCostFunction{
+ std::move(functor), kNumResiduals, ownership, tag} {}
+
std::unique_ptr<CostFunctor> functor_;
Ownership ownership_;
};
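A brief sketch of the construction forms the constructors above enable. MyFunctor is a hypothetical user-defined cost functor taking a single double, and num_residuals is a runtime value; neither name comes from the sources.

  // Fixed-size residual, functor constructed in place from forwarded args.
  auto* c1 = new ceres::AutoDiffCostFunction<MyFunctor, 1, 2, 2>(1.0);

  // Fixed-size residual, ownership transferred via std::unique_ptr.
  auto* c2 = new ceres::AutoDiffCostFunction<MyFunctor, 1, 2, 2>(
      std::make_unique<MyFunctor>(1.0));

  // Runtime-determined number of residuals: unique_ptr plus residual count.
  auto* c3 = new ceres::AutoDiffCostFunction<MyFunctor, ceres::DYNAMIC, 2, 2>(
      std::make_unique<MyFunctor>(1.0), num_residuals);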
diff --git a/include/ceres/autodiff_first_order_function.h b/include/ceres/autodiff_first_order_function.h
index b98d845..6cd1b13 100644
--- a/include/ceres/autodiff_first_order_function.h
+++ b/include/ceres/autodiff_first_order_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,6 +32,7 @@
#define CERES_PUBLIC_AUTODIFF_FIRST_ORDER_FUNCTION_H_
#include <memory>
+#include <type_traits>
#include "ceres/first_order_function.h"
#include "ceres/internal/eigen.h"
@@ -102,15 +103,25 @@
// seen where instead of using a_ directly, a_ is wrapped with T(a_).
template <typename FirstOrderFunctor, int kNumParameters>
-class AutoDiffFirstOrderFunction : public FirstOrderFunction {
+class AutoDiffFirstOrderFunction final : public FirstOrderFunction {
public:
// Takes ownership of functor.
explicit AutoDiffFirstOrderFunction(FirstOrderFunctor* functor)
- : functor_(functor) {
+ : AutoDiffFirstOrderFunction{
+ std::unique_ptr<FirstOrderFunctor>{functor}} {}
+
+ explicit AutoDiffFirstOrderFunction(
+ std::unique_ptr<FirstOrderFunctor> functor)
+ : functor_(std::move(functor)) {
static_assert(kNumParameters > 0, "kNumParameters must be positive");
}
- virtual ~AutoDiffFirstOrderFunction() {}
+ template <class... Args,
+ std::enable_if_t<std::is_constructible_v<FirstOrderFunctor,
+ Args&&...>>* = nullptr>
+ explicit AutoDiffFirstOrderFunction(Args&&... args)
+ : AutoDiffFirstOrderFunction{
+ std::make_unique<FirstOrderFunctor>(std::forward<Args>(args)...)} {}
bool Evaluate(const double* const parameters,
double* cost,
@@ -119,7 +130,7 @@
return (*functor_)(parameters, cost);
}
- typedef Jet<double, kNumParameters> JetT;
+ using JetT = Jet<double, kNumParameters>;
internal::FixedArray<JetT, (256 * 7) / sizeof(JetT)> x(kNumParameters);
for (int i = 0; i < kNumParameters; ++i) {
x[i].a = parameters[i];
@@ -142,6 +153,8 @@
int NumParameters() const override { return kNumParameters; }
+ const FirstOrderFunctor& functor() const { return *functor_; }
+
private:
std::unique_ptr<FirstOrderFunctor> functor_;
};
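Analogously, a sketch of constructing the first-order function above, reusing the Rosenbrock functor from the earlier example; both forms yield an object suitable for ceres::GradientProblem.

  // Ownership of a heap-allocated functor is taken (pre-existing form).
  ceres::FirstOrderFunction* f1 =
      new ceres::AutoDiffFirstOrderFunction<Rosenbrock, 2>(new Rosenbrock);

  // Equivalent construction from a std::unique_ptr (overload added above).
  ceres::FirstOrderFunction* f2 =
      new ceres::AutoDiffFirstOrderFunction<Rosenbrock, 2>(
          std::make_unique<Rosenbrock>());

  ceres::GradientProblem problem(f1);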
diff --git a/include/ceres/autodiff_local_parameterization.h b/include/ceres/autodiff_local_parameterization.h
deleted file mode 100644
index d694376..0000000
--- a/include/ceres/autodiff_local_parameterization.h
+++ /dev/null
@@ -1,152 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sergey.vfx@gmail.com (Sergey Sharybin)
-// mierle@gmail.com (Keir Mierle)
-// sameeragarwal@google.com (Sameer Agarwal)
-
-#ifndef CERES_PUBLIC_AUTODIFF_LOCAL_PARAMETERIZATION_H_
-#define CERES_PUBLIC_AUTODIFF_LOCAL_PARAMETERIZATION_H_
-
-#include <memory>
-
-#include "ceres/internal/autodiff.h"
-#include "ceres/local_parameterization.h"
-
-namespace ceres {
-
-// Create local parameterization with Jacobians computed via automatic
-// differentiation. For more information on local parameterizations,
-// see include/ceres/local_parameterization.h
-//
-// To get an auto differentiated local parameterization, you must define
-// a class with a templated operator() (a functor) that computes
-//
-// x_plus_delta = Plus(x, delta);
-//
-// the template parameter T. The autodiff framework substitutes appropriate
-// "Jet" objects for T in order to compute the derivative when necessary, but
-// this is hidden, and you should write the function as if T were a scalar type
-// (e.g. a double-precision floating point number).
-//
-// The function must write the computed value in the last argument (the only
-// non-const one) and return true to indicate success.
-//
-// For example, Quaternions have a three dimensional local
-// parameterization. Its plus operation can be implemented as (taken
-// from internal/ceres/auto_diff_local_parameterization_test.cc)
-//
-// struct QuaternionPlus {
-// template<typename T>
-// bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
-// const T squared_norm_delta =
-// delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
-//
-// T q_delta[4];
-// if (squared_norm_delta > T(0.0)) {
-// T norm_delta = sqrt(squared_norm_delta);
-// const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
-// q_delta[0] = cos(norm_delta);
-// q_delta[1] = sin_delta_by_delta * delta[0];
-// q_delta[2] = sin_delta_by_delta * delta[1];
-// q_delta[3] = sin_delta_by_delta * delta[2];
-// } else {
-// // We do not just use q_delta = [1,0,0,0] here because that is a
-// // constant and when used for automatic differentiation will
-// // lead to a zero derivative. Instead we take a first order
-// // approximation and evaluate it at zero.
-// q_delta[0] = T(1.0);
-// q_delta[1] = delta[0];
-// q_delta[2] = delta[1];
-// q_delta[3] = delta[2];
-// }
-//
-// QuaternionProduct(q_delta, x, x_plus_delta);
-// return true;
-// }
-// };
-//
-// Then given this struct, the auto differentiated local
-// parameterization can now be constructed as
-//
-// LocalParameterization* local_parameterization =
-// new AutoDiffLocalParameterization<QuaternionPlus, 4, 3>;
-// | |
-// Global Size ---------------+ |
-// Local Size -------------------+
-//
-// WARNING: Since the functor will get instantiated with different types for
-// T, you must convert from other numeric types to T before mixing
-// computations with other variables of type T. In the example above, this is
-// seen where instead of using k_ directly, k_ is wrapped with T(k_).
-
-template <typename Functor, int kGlobalSize, int kLocalSize>
-class AutoDiffLocalParameterization : public LocalParameterization {
- public:
- AutoDiffLocalParameterization() : functor_(new Functor()) {}
-
- // Takes ownership of functor.
- explicit AutoDiffLocalParameterization(Functor* functor)
- : functor_(functor) {}
-
- virtual ~AutoDiffLocalParameterization() {}
- bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const override {
- return (*functor_)(x, delta, x_plus_delta);
- }
-
- bool ComputeJacobian(const double* x, double* jacobian) const override {
- double zero_delta[kLocalSize];
- for (int i = 0; i < kLocalSize; ++i) {
- zero_delta[i] = 0.0;
- }
-
- double x_plus_delta[kGlobalSize];
- for (int i = 0; i < kGlobalSize; ++i) {
- x_plus_delta[i] = 0.0;
- }
-
- const double* parameter_ptrs[2] = {x, zero_delta};
- double* jacobian_ptrs[2] = {NULL, jacobian};
- return internal::AutoDifferentiate<
- kGlobalSize,
- internal::StaticParameterDims<kGlobalSize, kLocalSize>>(
- *functor_, parameter_ptrs, kGlobalSize, x_plus_delta, jacobian_ptrs);
- }
-
- int GlobalSize() const override { return kGlobalSize; }
- int LocalSize() const override { return kLocalSize; }
-
- private:
- std::unique_ptr<Functor> functor_;
-};
-
-} // namespace ceres
-
-#endif // CERES_PUBLIC_AUTODIFF_LOCAL_PARAMETERIZATION_H_
diff --git a/include/ceres/autodiff_manifold.h b/include/ceres/autodiff_manifold.h
new file mode 100644
index 0000000..4bf7e56
--- /dev/null
+++ b/include/ceres/autodiff_manifold.h
@@ -0,0 +1,259 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#ifndef CERES_PUBLIC_AUTODIFF_MANIFOLD_H_
+#define CERES_PUBLIC_AUTODIFF_MANIFOLD_H_
+
+#include <memory>
+
+#include "ceres/internal/autodiff.h"
+#include "ceres/manifold.h"
+
+namespace ceres {
+
+// Create a Manifold with Jacobians computed via automatic differentiation. For
+// more information on manifolds, see include/ceres/manifold.h
+//
+// To get an auto differentiated manifold, you must define a class/struct with
+// templated Plus and Minus functions that compute
+//
+// x_plus_delta = Plus(x, delta);
+// y_minus_x = Minus(y, x);
+//
+// Where x, y and x_plus_delta are vectors on the manifold in the ambient space
+// (so they are kAmbientSize vectors) and delta, y_minus_x are vectors in the
+// tangent space (so they are kTangentSize vectors).
+//
+// The Functor should have the signature:
+//
+// struct Functor {
+// template <typename T>
+// bool Plus(const T* x, const T* delta, T* x_plus_delta) const;
+//
+// template <typename T>
+// bool Minus(const T* y, const T* x, T* y_minus_x) const;
+// };
+//
+// Observe that the Plus and Minus operations are templated on the parameter T.
+// The autodiff framework substitutes appropriate "Jet" objects for T in order
+// to compute the derivative when necessary. This is the same mechanism that is
+// used to compute derivatives when using AutoDiffCostFunction.
+//
+// Plus and Minus should return true if the computation is successful and false
+// otherwise, in which case the result will not be used.
+//
+// Given this Functor, the corresponding Manifold can be constructed as:
+//
+// AutoDiffManifold<Functor, kAmbientSize, kTangentSize> manifold;
+//
+// As a concrete example consider the case of Quaternions. Quaternions form a
+// three dimensional manifold embedded in R^4, i.e. they have an ambient
+// dimension of 4 and their tangent space has dimension 3. The following Functor
+// (taken from autodiff_manifold_test.cc) defines the Plus and Minus operations
+// on the Quaternion manifold:
+//
+// NOTE: The following is only used for illustration purposes. Ceres Solver
+// ships with an optimized production grade QuaternionManifold implementation. See
+// manifold.h.
+//
+// This functor assumes that the quaternions are laid out as [w,x,y,z] in
+// memory, i.e. the real or scalar part is the first coordinate.
+//
+// struct QuaternionFunctor {
+// template <typename T>
+// bool Plus(const T* x, const T* delta, T* x_plus_delta) const {
+// const T squared_norm_delta =
+// delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
+//
+// T q_delta[4];
+// if (squared_norm_delta > T(0.0)) {
+// T norm_delta = sqrt(squared_norm_delta);
+// const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
+// q_delta[0] = cos(norm_delta);
+// q_delta[1] = sin_delta_by_delta * delta[0];
+// q_delta[2] = sin_delta_by_delta * delta[1];
+// q_delta[3] = sin_delta_by_delta * delta[2];
+// } else {
+// // We do not just use q_delta = [1,0,0,0] here because that is a
+// // constant and when used for automatic differentiation will
+// // lead to a zero derivative. Instead we take a first order
+// // approximation and evaluate it at zero.
+// q_delta[0] = T(1.0);
+// q_delta[1] = delta[0];
+// q_delta[2] = delta[1];
+// q_delta[3] = delta[2];
+// }
+//
+// QuaternionProduct(q_delta, x, x_plus_delta);
+// return true;
+// }
+//
+// template <typename T>
+// bool Minus(const T* y, const T* x, T* y_minus_x) const {
+// T minus_x[4] = {x[0], -x[1], -x[2], -x[3]};
+// T ambient_y_minus_x[4];
+// QuaternionProduct(y, minus_x, ambient_y_minus_x);
+// T u_norm = sqrt(ambient_y_minus_x[1] * ambient_y_minus_x[1] +
+// ambient_y_minus_x[2] * ambient_y_minus_x[2] +
+// ambient_y_minus_x[3] * ambient_y_minus_x[3]);
+// if (u_norm > 0.0) {
+// T theta = atan2(u_norm, ambient_y_minus_x[0]);
+// y_minus_x[0] = theta * ambient_y_minus_x[1] / u_norm;
+// y_minus_x[1] = theta * ambient_y_minus_x[2] / u_norm;
+// y_minus_x[2] = theta * ambient_y_minus_x[3] / u_norm;
+// } else {
+// // We do not use [0,0,0] here because even though the value part is
+// // a constant, the derivative part is not.
+// y_minus_x[0] = ambient_y_minus_x[1];
+// y_minus_x[1] = ambient_y_minus_x[2];
+// y_minus_x[2] = ambient_y_minus_x[3];
+// }
+// return true;
+// }
+// };
+//
+// Then given this struct, the auto differentiated Quaternion Manifold can now
+// be constructed as
+//
+// Manifold* manifold = new AutoDiffManifold<QuaternionFunctor, 4, 3>;
+
+template <typename Functor, int kAmbientSize, int kTangentSize>
+class AutoDiffManifold final : public Manifold {
+ public:
+ AutoDiffManifold() : functor_(std::make_unique<Functor>()) {}
+
+ // Takes ownership of functor.
+ explicit AutoDiffManifold(Functor* functor) : functor_(functor) {}
+
+ int AmbientSize() const override { return kAmbientSize; }
+ int TangentSize() const override { return kTangentSize; }
+
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const override {
+ return functor_->Plus(x, delta, x_plus_delta);
+ }
+
+ bool PlusJacobian(const double* x, double* jacobian) const override;
+
+ bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const override {
+ return functor_->Minus(y, x, y_minus_x);
+ }
+
+ bool MinusJacobian(const double* x, double* jacobian) const override;
+
+ const Functor& functor() const { return *functor_; }
+
+ private:
+ std::unique_ptr<Functor> functor_;
+};
+
+namespace internal {
+
+// The following two helper structs are needed to interface the Plus and Minus
+// methods of the ManifoldFunctor with the automatic differentiation which
+// expects a Functor with operator().
+template <typename Functor>
+struct PlusWrapper {
+ explicit PlusWrapper(const Functor& functor) : functor(functor) {}
+ template <typename T>
+ bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
+ return functor.Plus(x, delta, x_plus_delta);
+ }
+ const Functor& functor;
+};
+
+template <typename Functor>
+struct MinusWrapper {
+ explicit MinusWrapper(const Functor& functor) : functor(functor) {}
+ template <typename T>
+ bool operator()(const T* y, const T* x, T* y_minus_x) const {
+ return functor.Minus(y, x, y_minus_x);
+ }
+ const Functor& functor;
+};
+} // namespace internal
+
+template <typename Functor, int kAmbientSize, int kTangentSize>
+bool AutoDiffManifold<Functor, kAmbientSize, kTangentSize>::PlusJacobian(
+ const double* x, double* jacobian) const {
+ double zero_delta[kTangentSize];
+ for (int i = 0; i < kTangentSize; ++i) {
+ zero_delta[i] = 0.0;
+ }
+
+ double x_plus_delta[kAmbientSize];
+ for (int i = 0; i < kAmbientSize; ++i) {
+ x_plus_delta[i] = 0.0;
+ }
+
+ const double* parameter_ptrs[2] = {x, zero_delta};
+
+ // PlusJacobian is D_2 Plus(x,0) so we only need to compute the Jacobian
+ // w.r.t. the second argument.
+ double* jacobian_ptrs[2] = {nullptr, jacobian};
+ return internal::AutoDifferentiate<
+ kAmbientSize,
+ internal::StaticParameterDims<kAmbientSize, kTangentSize>>(
+ internal::PlusWrapper<Functor>(*functor_),
+ parameter_ptrs,
+ kAmbientSize,
+ x_plus_delta,
+ jacobian_ptrs);
+}
+
+template <typename Functor, int kAmbientSize, int kTangentSize>
+bool AutoDiffManifold<Functor, kAmbientSize, kTangentSize>::MinusJacobian(
+ const double* x, double* jacobian) const {
+ double y_minus_x[kTangentSize];
+ for (int i = 0; i < kTangentSize; ++i) {
+ y_minus_x[i] = 0.0;
+ }
+
+ const double* parameter_ptrs[2] = {x, x};
+
+ // MinusJacobian is D_1 Minus(x,x), so we only need to compute the Jacobian
+ // w.r.t. the first argument.
+ double* jacobian_ptrs[2] = {jacobian, nullptr};
+ return internal::AutoDifferentiate<
+ kTangentSize,
+ internal::StaticParameterDims<kAmbientSize, kAmbientSize>>(
+ internal::MinusWrapper<Functor>(*functor_),
+ parameter_ptrs,
+ kTangentSize,
+ y_minus_x,
+ jacobian_ptrs);
+}
+
+} // namespace ceres
+
+#endif // CERES_PUBLIC_AUTODIFF_MANIFOLD_H_
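
A minimal usage sketch for the class above, assuming the QuaternionFunctor (and the
QuaternionProduct helper it calls) from the comment block is available in the same
translation unit:

  #include "ceres/autodiff_manifold.h"

  void Example() {
    // Quaternions: ambient size 4, tangent size 3.
    ceres::AutoDiffManifold<QuaternionFunctor, 4, 3> manifold;

    const double x[4] = {1.0, 0.0, 0.0, 0.0};  // identity quaternion [w,x,y,z]
    const double delta[3] = {0.1, 0.0, 0.0};   // small step in the tangent space

    double x_plus_delta[4];
    manifold.Plus(x, delta, x_plus_delta);  // forwards to QuaternionFunctor::Plus

    // PlusJacobian is D_2 Plus(x, 0): a 4 x 3 Jacobian computed via autodiff.
    double plus_jacobian[4 * 3];
    manifold.PlusJacobian(x, plus_jacobian);

    // Minus recovers the tangent-space difference between two ambient points.
    double y_minus_x[3];
    manifold.Minus(x_plus_delta, x, y_minus_x);
  }
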
diff --git a/include/ceres/c_api.h b/include/ceres/c_api.h
index 91b82bf..30bcaaf 100644
--- a/include/ceres/c_api.h
+++ b/include/ceres/c_api.h
@@ -1,5 +1,5 @@
/* Ceres Solver - A fast non-linear least squares minimizer
- * Copyright 2019 Google Inc. All rights reserved.
+ * Copyright 2023 Google Inc. All rights reserved.
* http://ceres-solver.org/
*
* Redistribution and use in source and binary forms, with or without
@@ -39,7 +39,7 @@
#define CERES_PUBLIC_C_API_H_
// clang-format off
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/internal/disable_warnings.h"
// clang-format on
diff --git a/include/ceres/ceres.h b/include/ceres/ceres.h
index d249351..51f9d89 100644
--- a/include/ceres/ceres.h
+++ b/include/ceres/ceres.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,9 +34,12 @@
#ifndef CERES_PUBLIC_CERES_H_
#define CERES_PUBLIC_CERES_H_
+// IWYU pragma: begin_exports
#include "ceres/autodiff_cost_function.h"
-#include "ceres/autodiff_local_parameterization.h"
+#include "ceres/autodiff_first_order_function.h"
+#include "ceres/autodiff_manifold.h"
#include "ceres/conditioned_cost_function.h"
+#include "ceres/constants.h"
#include "ceres/context.h"
#include "ceres/cost_function.h"
#include "ceres/cost_function_to_functor.h"
@@ -47,20 +50,26 @@
#include "ceres/dynamic_cost_function_to_functor.h"
#include "ceres/dynamic_numeric_diff_cost_function.h"
#include "ceres/evaluation_callback.h"
+#include "ceres/first_order_function.h"
#include "ceres/gradient_checker.h"
#include "ceres/gradient_problem.h"
#include "ceres/gradient_problem_solver.h"
#include "ceres/iteration_callback.h"
#include "ceres/jet.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/line_manifold.h"
#include "ceres/loss_function.h"
+#include "ceres/manifold.h"
#include "ceres/numeric_diff_cost_function.h"
+#include "ceres/numeric_diff_first_order_function.h"
#include "ceres/numeric_diff_options.h"
#include "ceres/ordered_groups.h"
#include "ceres/problem.h"
+#include "ceres/product_manifold.h"
#include "ceres/sized_cost_function.h"
#include "ceres/solver.h"
+#include "ceres/sphere_manifold.h"
#include "ceres/types.h"
#include "ceres/version.h"
+// IWYU pragma: end_exports
#endif // CERES_PUBLIC_CERES_H_
diff --git a/include/ceres/conditioned_cost_function.h b/include/ceres/conditioned_cost_function.h
index a57ee20..1edc006 100644
--- a/include/ceres/conditioned_cost_function.h
+++ b/include/ceres/conditioned_cost_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -71,18 +71,18 @@
// ccf_residual[i] = f_i(my_cost_function_residual[i])
//
// and the Jacobian will be affected appropriately.
-class CERES_EXPORT ConditionedCostFunction : public CostFunction {
+class CERES_EXPORT ConditionedCostFunction final : public CostFunction {
public:
// Builds a cost function based on a wrapped cost function, and a
// per-residual conditioner. Takes ownership of all of the wrapped cost
// functions, or not, depending on the ownership parameter. Conditioners
- // may be NULL, in which case the corresponding residual is not modified.
+ // may be nullptr, in which case the corresponding residual is not modified.
//
// The conditioners can repeat.
ConditionedCostFunction(CostFunction* wrapped_cost_function,
const std::vector<CostFunction*>& conditioners,
Ownership ownership);
- virtual ~ConditionedCostFunction();
+ ~ConditionedCostFunction() override;
bool Evaluate(double const* const* parameters,
double* residuals,
diff --git a/internal/ceres/float_cxsparse.cc b/include/ceres/constants.h
similarity index 74%
copy from internal/ceres/float_cxsparse.cc
copy to include/ceres/constants.h
index 6c68830..584b669 100644
--- a/internal/ceres/float_cxsparse.cc
+++ b/include/ceres/constants.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -26,22 +26,17 @@
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
+// Author: hellston20a@gmail.com (H S Helson Go)
-#include "ceres/float_cxsparse.h"
+#ifndef CERES_PUBLIC_CONSTANTS_H_
+#define CERES_PUBLIC_CONSTANTS_H_
-#if !defined(CERES_NO_CXSPARSE)
+// TODO(HSHelson): This header should no longer be necessary once C++20's
+// <numbers> (e.g. std::numbers::pi_v) becomes usable
+namespace ceres::constants {
+template <typename T>
+inline constexpr T pi_v(3.141592653589793238462643383279502884);
+inline constexpr double pi = pi_v<double>;
+} // namespace ceres::constants
-namespace ceres {
-namespace internal {
-
-std::unique_ptr<SparseCholesky> FloatCXSparseCholesky::Create(
- OrderingType ordering_type) {
- LOG(FATAL) << "FloatCXSparseCholesky is not available.";
- return std::unique_ptr<SparseCholesky>();
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // !defined(CERES_NO_CXSPARSE)
+#endif // CERES_PUBLIC_CONSTANTS_H_
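
A short sketch of how the new constant is meant to be used in templated code; the
DegToRad helper below is purely illustrative:

  #include "ceres/constants.h"

  // pi_v<T> provides pi at the precision of the scalar type T, which avoids
  // relying on the non-standard POSIX M_PI macro.
  template <typename T>
  constexpr T DegToRad(T degrees) {
    return degrees * ceres::constants::pi_v<T> / T(180);
  }

  // DegToRad(90.0) uses pi_v<double>; DegToRad(90.0f) uses pi_v<float>.
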
diff --git a/include/ceres/context.h b/include/ceres/context.h
index d08e32b..fe18726 100644
--- a/include/ceres/context.h
+++ b/include/ceres/context.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,6 +31,8 @@
#ifndef CERES_PUBLIC_CONTEXT_H_
#define CERES_PUBLIC_CONTEXT_H_
+#include "ceres/internal/export.h"
+
namespace ceres {
// A global context for processing data in Ceres. This provides a mechanism to
@@ -39,13 +41,13 @@
// Problems, either serially or in parallel. When using it with multiple
// Problems at the same time, they may end up contending for resources
// (e.g. threads) managed by the Context.
-class Context {
+class CERES_EXPORT Context {
public:
- Context() {}
+ Context();
Context(const Context&) = delete;
void operator=(const Context&) = delete;
- virtual ~Context() {}
+ virtual ~Context();
// Creates a context object and the caller takes ownership.
static Context* Create();
diff --git a/include/ceres/cost_function.h b/include/ceres/cost_function.h
index d1550c1..2e5b1dd 100644
--- a/include/ceres/cost_function.h
+++ b/include/ceres/cost_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -48,7 +48,7 @@
#include <vector>
#include "ceres/internal/disable_warnings.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
namespace ceres {
@@ -63,11 +63,11 @@
// when added with AddResidualBlock().
class CERES_EXPORT CostFunction {
public:
- CostFunction() : num_residuals_(0) {}
+ CostFunction();
CostFunction(const CostFunction&) = delete;
- void operator=(const CostFunction&) = delete;
+ CostFunction& operator=(const CostFunction&) = delete;
- virtual ~CostFunction() {}
+ virtual ~CostFunction();
// Inputs:
//
@@ -92,8 +92,8 @@
// jacobians[i][r*parameter_block_size_[i] + c] =
// d residual[r] / d parameters[i][c]
//
- // If jacobians is NULL, then no derivatives are returned; this is
- // the case when computing cost only. If jacobians[i] is NULL, then
+ // If jacobians is nullptr, then no derivatives are returned; this is
+ // the case when computing cost only. If jacobians[i] is nullptr, then
// the jacobian block corresponding to the i'th parameter block must
   // not be returned.
//
@@ -124,6 +124,10 @@
int num_residuals() const { return num_residuals_; }
protected:
+ // Prevent moving through the base class
+ CostFunction(CostFunction&& other) noexcept;
+ CostFunction& operator=(CostFunction&& other) noexcept;
+
std::vector<int32_t>* mutable_parameter_block_sizes() {
     return &parameter_block_sizes_;
}
diff --git a/include/ceres/cost_function_to_functor.h b/include/ceres/cost_function_to_functor.h
index 9364293..573508e 100644
--- a/include/ceres/cost_function_to_functor.h
+++ b/include/ceres/cost_function_to_functor.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -94,10 +94,9 @@
#include "ceres/cost_function.h"
#include "ceres/dynamic_cost_function_to_functor.h"
-#include "ceres/internal/fixed_array.h"
#include "ceres/internal/parameter_dims.h"
-#include "ceres/internal/port.h"
#include "ceres/types.h"
+#include "glog/logging.h"
namespace ceres {
@@ -106,12 +105,16 @@
public:
// Takes ownership of cost_function.
explicit CostFunctionToFunctor(CostFunction* cost_function)
- : cost_functor_(cost_function) {
- CHECK(cost_function != nullptr);
+ : CostFunctionToFunctor{std::unique_ptr<CostFunction>{cost_function}} {}
+
+ // Takes ownership of cost_function.
+ explicit CostFunctionToFunctor(std::unique_ptr<CostFunction> cost_function)
+ : cost_functor_(std::move(cost_function)) {
+ CHECK(cost_functor_.function() != nullptr);
CHECK(kNumResiduals > 0 || kNumResiduals == DYNAMIC);
const std::vector<int32_t>& parameter_block_sizes =
- cost_function->parameter_block_sizes();
+ cost_functor_.function()->parameter_block_sizes();
const int num_parameter_blocks = ParameterDims::kNumParameterBlocks;
CHECK_EQ(static_cast<int>(parameter_block_sizes.size()),
num_parameter_blocks);
@@ -119,7 +122,7 @@
if (parameter_block_sizes.size() == num_parameter_blocks) {
for (int block = 0; block < num_parameter_blocks; ++block) {
CHECK_EQ(ParameterDims::GetDim(block), parameter_block_sizes[block])
- << "Parameter block size missmatch. The specified static parameter "
+ << "Parameter block size mismatch. The specified static parameter "
"block dimension does not match the one from the cost function.";
}
}
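
A sketch of the new std::unique_ptr overload introduced in this hunk, following the
usual CostFunctionToFunctor example; IntrinsicProjection (2 residuals, parameter
blocks of sizes 5 and 3) is a hypothetical CostFunction:

  #include <memory>
  #include <utility>

  #include "ceres/cost_function_to_functor.h"

  class CameraProjection {
   public:
    // Ownership of the wrapped cost function is now expressed explicitly.
    explicit CameraProjection(
        std::unique_ptr<ceres::CostFunction> intrinsic_projection)
        : intrinsic_projection_(std::move(intrinsic_projection)) {}

    template <typename T>
    bool operator()(const T* intrinsics, const T* point, T* projection) const {
      return intrinsic_projection_(intrinsics, point, projection);
    }

   private:
    ceres::CostFunctionToFunctor<2, 5, 3> intrinsic_projection_;
  };

  // Usage (IntrinsicProjection is hypothetical):
  //   CameraProjection projection(std::make_unique<IntrinsicProjection>());
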
diff --git a/include/ceres/covariance.h b/include/ceres/covariance.h
index 2fe025d..d477f31 100644
--- a/include/ceres/covariance.h
+++ b/include/ceres/covariance.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,8 +35,9 @@
#include <utility>
#include <vector>
+#include "ceres/internal/config.h"
#include "ceres/internal/disable_warnings.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/types.h"
namespace ceres {
@@ -145,7 +146,7 @@
// a. The rank deficiency arises from overparameterization. e.g., a
// four dimensional quaternion used to parameterize SO(3), which is
// a three dimensional manifold. In cases like this, the user should
-// use an appropriate LocalParameterization. Not only will this lead
+// use an appropriate Manifold. Not only will this lead
// to better numerical behaviour of the Solver, it will also expose
// the rank deficiency to the Covariance object so that it can
// handle it correctly.
@@ -245,6 +246,20 @@
// used.
CovarianceAlgorithmType algorithm_type = SPARSE_QR;
+ // During QR factorization, if a column with Euclidean norm less
+ // than column_pivot_threshold is encountered it is treated as
+ // zero.
+ //
+ // If column_pivot_threshold < 0, then an automatic default value
+  // of 20*(m+n)*eps*sqrt(max(diag(J'*J))) is used. Here m and n are
+ // the number of rows and columns of the Jacobian (J)
+ // respectively.
+ //
+ // This is an advanced option meant for users who know enough
+ // about their Jacobian matrices that they can determine a value
+ // better than the default.
+ double column_pivot_threshold = -1;
+
// If the Jacobian matrix is near singular, then inverting J'J
// will result in unreliable results, e.g, if
//
@@ -265,7 +280,7 @@
//
// min_sigma / max_sigma < sqrt(min_reciprocal_condition_number)
//
- // where min_sigma and max_sigma are the minimum and maxiumum
+ // where min_sigma and max_sigma are the minimum and maximum
// singular values of J respectively.
//
// 2. SPARSE_QR
@@ -393,11 +408,9 @@
const double* parameter_block2,
double* covariance_block) const;
- // Return the block of the cross-covariance matrix corresponding to
- // parameter_block1 and parameter_block2.
- // Returns cross-covariance in the tangent space if a local
- // parameterization is associated with either parameter block;
- // else returns cross-covariance in the ambient space.
+ // Returns the block of the cross-covariance in the tangent space if a
+ // manifold is associated with either parameter block; else returns
+ // cross-covariance in the ambient space.
//
// Compute must be called before the first call to
// GetCovarianceBlock and the pair <parameter_block1,
@@ -429,9 +442,8 @@
double* covariance_matrix) const;
// Return the covariance matrix corresponding to parameter_blocks
- // in the tangent space if a local parameterization is associated
- // with one of the parameter blocks else returns the covariance
- // matrix in the ambient space.
+ // in the tangent space if a manifold is associated with one of the parameter
+ // blocks else returns the covariance matrix in the ambient space.
//
// Compute must be called before calling GetCovarianceMatrix and all
// parameter_blocks must have been present in the vector
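
A sketch of how the new column_pivot_threshold option fits into the existing
Covariance workflow; problem, x and y are assumed to be an already solved
ceres::Problem and two of its parameter blocks (sizes 3 and 2, purely illustrative):

  #include <utility>
  #include <vector>

  #include "ceres/covariance.h"
  #include "ceres/problem.h"

  void EstimateCovariance(ceres::Problem& problem, double* x, double* y) {
    ceres::Covariance::Options options;
    options.algorithm_type = ceres::SPARSE_QR;
    // Negative means: use the automatic default described above; a positive
    // value forces columns with smaller norms to be treated as zero.
    options.column_pivot_threshold = -1;

    ceres::Covariance covariance(options);

    std::vector<std::pair<const double*, const double*>> blocks;
    blocks.emplace_back(x, x);
    blocks.emplace_back(x, y);

    if (covariance.Compute(blocks, &problem)) {
      double cov_xx[3 * 3];
      covariance.GetCovarianceBlock(x, x, cov_xx);
    }
  }
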
diff --git a/include/ceres/crs_matrix.h b/include/ceres/crs_matrix.h
index bc618fa..787b6a3 100644
--- a/include/ceres/crs_matrix.h
+++ b/include/ceres/crs_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,17 +34,17 @@
#include <vector>
#include "ceres/internal/disable_warnings.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
namespace ceres {
// A compressed row sparse matrix used primarily for communicating the
// Jacobian matrix to the user.
struct CERES_EXPORT CRSMatrix {
- CRSMatrix() : num_rows(0), num_cols(0) {}
+ CRSMatrix() = default;
- int num_rows;
- int num_cols;
+ int num_rows{0};
+ int num_cols{0};
// A compressed row matrix stores its contents in three arrays,
// rows, cols and values.
diff --git a/include/ceres/cubic_interpolation.h b/include/ceres/cubic_interpolation.h
index 9b9ea4a..f165d2b 100644
--- a/include/ceres/cubic_interpolation.h
+++ b/include/ceres/cubic_interpolation.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,7 +32,7 @@
#define CERES_PUBLIC_CUBIC_INTERPOLATION_H_
#include "Eigen/Core"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "glog/logging.h"
namespace ceres {
@@ -59,8 +59,8 @@
// http://en.wikipedia.org/wiki/Cubic_Hermite_spline
// http://en.wikipedia.org/wiki/Bicubic_interpolation
//
-// f if not NULL will contain the interpolated function values.
-// dfdx if not NULL will contain the interpolated derivative values.
+// f if not nullptr will contain the interpolated function values.
+// dfdx if not nullptr will contain the interpolated derivative values.
template <int kDataDimension>
void CubicHermiteSpline(const Eigen::Matrix<double, kDataDimension, 1>& p0,
const Eigen::Matrix<double, kDataDimension, 1>& p1,
@@ -69,7 +69,7 @@
const double x,
double* f,
double* dfdx) {
- typedef Eigen::Matrix<double, kDataDimension, 1> VType;
+ using VType = Eigen::Matrix<double, kDataDimension, 1>;
const VType a = 0.5 * (-p0 + 3.0 * p1 - 3.0 * p2 + p3);
const VType b = 0.5 * (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3);
const VType c = 0.5 * (-p0 + p2);
@@ -79,12 +79,12 @@
// derivative.
// f = ax^3 + bx^2 + cx + d
- if (f != NULL) {
+ if (f != nullptr) {
Eigen::Map<VType>(f, kDataDimension) = d + x * (c + x * (b + x * a));
}
// dfdx = 3ax^2 + 2bx + c
- if (dfdx != NULL) {
+ if (dfdx != nullptr) {
Eigen::Map<VType>(dfdx, kDataDimension) = c + x * (2.0 * b + 3.0 * a * x);
}
}
@@ -143,7 +143,7 @@
// The following two Evaluate overloads are needed for interfacing
// with automatic differentiation. The first is for when a scalar
// evaluation is done, and the second one is for when Jets are used.
- void Evaluate(const double& x, double* f) const { Evaluate(x, f, NULL); }
+ void Evaluate(const double& x, double* f) const { Evaluate(x, f, nullptr); }
template <typename JetT>
void Evaluate(const JetT& x, JetT* f) const {
@@ -191,7 +191,7 @@
}
EIGEN_STRONG_INLINE void GetValue(const int n, double* f) const {
- const int idx = std::min(std::max(begin_, n), end_ - 1) - begin_;
+ const int idx = (std::min)((std::max)(begin_, n), end_ - 1) - begin_;
if (kInterleaved) {
for (int i = 0; i < kDataDimension; ++i) {
f[i] = static_cast<double>(data_[kDataDimension * idx + i]);
@@ -317,10 +317,10 @@
// Interpolate vertically the interpolated value from each row and
// compute the derivative along the columns.
CubicHermiteSpline<Grid::DATA_DIMENSION>(f0, f1, f2, f3, r - row, f, dfdr);
- if (dfdc != NULL) {
+ if (dfdc != nullptr) {
// Interpolate vertically the derivative along the columns.
CubicHermiteSpline<Grid::DATA_DIMENSION>(
- df0dc, df1dc, df2dc, df3dc, r - row, dfdc, NULL);
+ df0dc, df1dc, df2dc, df3dc, r - row, dfdc, nullptr);
}
}
@@ -328,7 +328,7 @@
// with automatic differentiation. The first is for when a scalar
// evaluation is done, and the second one is for when Jets are used.
void Evaluate(const double& r, const double& c, double* f) const {
- Evaluate(r, c, f, NULL, NULL);
+ Evaluate(r, c, f, nullptr, nullptr);
}
template <typename JetT>
@@ -368,7 +368,7 @@
//
// f001, f002, f011, f012, ...
//
-// A commonly occuring example are color images (RGB) where the three
+// A commonly occurring example are color images (RGB) where the three
// channels are stored interleaved.
//
// If kInterleaved = false, then it is stored as
@@ -402,9 +402,9 @@
EIGEN_STRONG_INLINE void GetValue(const int r, const int c, double* f) const {
const int row_idx =
- std::min(std::max(row_begin_, r), row_end_ - 1) - row_begin_;
+ (std::min)((std::max)(row_begin_, r), row_end_ - 1) - row_begin_;
const int col_idx =
- std::min(std::max(col_begin_, c), col_end_ - 1) - col_begin_;
+ (std::min)((std::max)(col_begin_, c), col_end_ - 1) - col_begin_;
const int n = (kRowMajor) ? num_cols_ * row_idx + col_idx
: num_rows_ * col_idx + row_idx;
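
A minimal sketch of calling the CubicHermiteSpline helper shown above directly; the
sample values are arbitrary:

  #include "Eigen/Core"
  #include "ceres/cubic_interpolation.h"

  void Example() {
    // Scalar data (kDataDimension == 1) sampled at four consecutive grid points.
    using V1 = Eigen::Matrix<double, 1, 1>;
    V1 p0, p1, p2, p3;
    p0 << 1.0;
    p1 << 2.0;
    p2 << 4.0;
    p3 << 8.0;

    // x in [0, 1] measures the position between p1 and p2.
    double f = 0.0;     // interpolated value
    double dfdx = 0.0;  // interpolated derivative
    ceres::CubicHermiteSpline<1>(p0, p1, p2, p3, 0.5, &f, &dfdx);

    // Passing nullptr skips the derivative computation, as documented above.
    ceres::CubicHermiteSpline<1>(p0, p1, p2, p3, 0.5, &f, nullptr);
  }
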
diff --git a/include/ceres/dynamic_autodiff_cost_function.h b/include/ceres/dynamic_autodiff_cost_function.h
index 7ccf6a8..2b8724d 100644
--- a/include/ceres/dynamic_autodiff_cost_function.h
+++ b/include/ceres/dynamic_autodiff_cost_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,6 +35,7 @@
#include <cmath>
#include <memory>
#include <numeric>
+#include <type_traits>
#include <vector>
#include "ceres/dynamic_cost_function.h"
@@ -65,8 +66,7 @@
// also specify the sizes after creating the dynamic autodiff cost
// function. For example:
//
-// DynamicAutoDiffCostFunction<MyCostFunctor, 3> cost_function(
-// new MyCostFunctor());
+// DynamicAutoDiffCostFunction<MyCostFunctor, 3> cost_function;
// cost_function.AddParameterBlock(5);
// cost_function.AddParameterBlock(10);
// cost_function.SetNumResiduals(21);
@@ -77,17 +77,38 @@
// pass. There is a tradeoff with the size of the passes; you may want
// to experiment with the stride.
template <typename CostFunctor, int Stride = 4>
-class DynamicAutoDiffCostFunction : public DynamicCostFunction {
+class DynamicAutoDiffCostFunction final : public DynamicCostFunction {
public:
+ // Constructs the CostFunctor on the heap and takes the ownership.
+ template <class... Args,
+ std::enable_if_t<std::is_constructible_v<CostFunctor, Args&&...>>* =
+ nullptr>
+ explicit DynamicAutoDiffCostFunction(Args&&... args)
+ // NOTE We explicitly use direct initialization using parentheses instead
+ // of uniform initialization using braces to avoid narrowing conversion
+ // warnings.
+ : DynamicAutoDiffCostFunction{
+ std::make_unique<CostFunctor>(std::forward<Args>(args)...)} {}
+
// Takes ownership by default.
- DynamicAutoDiffCostFunction(CostFunctor* functor,
- Ownership ownership = TAKE_OWNERSHIP)
- : functor_(functor), ownership_(ownership) {}
+ explicit DynamicAutoDiffCostFunction(CostFunctor* functor,
+ Ownership ownership = TAKE_OWNERSHIP)
+ : DynamicAutoDiffCostFunction{std::unique_ptr<CostFunctor>{functor},
+ ownership} {}
- explicit DynamicAutoDiffCostFunction(DynamicAutoDiffCostFunction&& other)
- : functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
+ explicit DynamicAutoDiffCostFunction(std::unique_ptr<CostFunctor> functor)
+ : DynamicAutoDiffCostFunction{std::move(functor), TAKE_OWNERSHIP} {}
- virtual ~DynamicAutoDiffCostFunction() {
+ DynamicAutoDiffCostFunction(const DynamicAutoDiffCostFunction& other) =
+ delete;
+ DynamicAutoDiffCostFunction& operator=(
+ const DynamicAutoDiffCostFunction& other) = delete;
+ DynamicAutoDiffCostFunction(DynamicAutoDiffCostFunction&& other) noexcept =
+ default;
+ DynamicAutoDiffCostFunction& operator=(
+ DynamicAutoDiffCostFunction&& other) noexcept = default;
+
+ ~DynamicAutoDiffCostFunction() override {
// Manually release pointer if configured to not take ownership
// rather than deleting only if ownership is taken. This is to
// stay maximally compatible to old user code which may have
@@ -105,7 +126,7 @@
<< "You must call DynamicAutoDiffCostFunction::SetNumResiduals() "
<< "before DynamicAutoDiffCostFunction::Evaluate().";
- if (jacobians == NULL) {
+ if (jacobians == nullptr) {
return (*functor_)(parameters, residuals);
}
@@ -150,7 +171,7 @@
jet_parameters[i] = &input_jets[parameter_cursor];
const int parameter_block_size = parameter_block_sizes()[i];
- if (jacobians[i] != NULL) {
+ if (jacobians[i] != nullptr) {
if (!in_derivative_section) {
start_derivative_section.push_back(parameter_cursor);
in_derivative_section = true;
@@ -209,7 +230,7 @@
parameter_cursor >=
(start_derivative_section[current_derivative_section] +
current_derivative_section_cursor)) {
- if (jacobians[i] != NULL) {
+ if (jacobians[i] != nullptr) {
input_jets[parameter_cursor].v[active_parameter_count] = 1.0;
++active_parameter_count;
++current_derivative_section_cursor;
@@ -238,7 +259,7 @@
parameter_cursor >=
(start_derivative_section[current_derivative_section] +
current_derivative_section_cursor)) {
- if (jacobians[i] != NULL) {
+ if (jacobians[i] != nullptr) {
for (int k = 0; k < num_residuals(); ++k) {
jacobians[i][k * parameter_block_sizes()[i] + j] =
output_jets[k].v[active_parameter_count];
@@ -264,11 +285,34 @@
return true;
}
+ const CostFunctor& functor() const { return *functor_; }
+
private:
+ explicit DynamicAutoDiffCostFunction(std::unique_ptr<CostFunctor> functor,
+ Ownership ownership)
+ : functor_(std::move(functor)), ownership_(ownership) {}
+
std::unique_ptr<CostFunctor> functor_;
Ownership ownership_;
};
+// Deduction guide that allows the user to avoid explicitly specifying the
+// template parameter of DynamicAutoDiffCostFunction. The class can instead be
+// instantiated as follows:
+//
+// new DynamicAutoDiffCostFunction{new MyCostFunctor{}};
+// new DynamicAutoDiffCostFunction{std::make_unique<MyCostFunctor>()};
+//
+template <typename CostFunctor>
+DynamicAutoDiffCostFunction(CostFunctor* functor)
+ -> DynamicAutoDiffCostFunction<CostFunctor>;
+template <typename CostFunctor>
+DynamicAutoDiffCostFunction(CostFunctor* functor, Ownership ownership)
+ -> DynamicAutoDiffCostFunction<CostFunctor>;
+template <typename CostFunctor>
+DynamicAutoDiffCostFunction(std::unique_ptr<CostFunctor> functor)
+ -> DynamicAutoDiffCostFunction<CostFunctor>;
+
} // namespace ceres
#endif // CERES_PUBLIC_DYNAMIC_AUTODIFF_COST_FUNCTION_H_
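
Putting the pieces above together, a sketch of in-place construction and of the new
deduction guide; MyCostFunctor is a placeholder functor:

  #include <memory>

  #include "ceres/dynamic_autodiff_cost_function.h"

  struct MyCostFunctor {
    template <typename T>
    bool operator()(T const* const* parameters, T* residuals) const {
      // Dummy residuals over a 5-dimensional and a 10-dimensional block.
      for (int i = 0; i < 21; ++i) {
        residuals[i] = parameters[0][i % 5] - parameters[1][i % 10];
      }
      return true;
    }
  };

  void Example() {
    // In-place construction: the functor is created on the heap internally.
    ceres::DynamicAutoDiffCostFunction<MyCostFunctor, 3> cost_function;
    cost_function.AddParameterBlock(5);
    cost_function.AddParameterBlock(10);
    cost_function.SetNumResiduals(21);

    // With the deduction guide the functor type does not have to be repeated
    // (the default Stride of 4 is used).
    auto* deduced = new ceres::DynamicAutoDiffCostFunction{
        std::make_unique<MyCostFunctor>()};
    deduced->AddParameterBlock(5);
    deduced->AddParameterBlock(10);
    deduced->SetNumResiduals(21);
    delete deduced;
  }
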
diff --git a/include/ceres/dynamic_cost_function.h b/include/ceres/dynamic_cost_function.h
index 6e8a076..02ce1e9 100644
--- a/include/ceres/dynamic_cost_function.h
+++ b/include/ceres/dynamic_cost_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,6 +32,7 @@
#define CERES_PUBLIC_DYNAMIC_COST_FUNCTION_H_
#include "ceres/cost_function.h"
+#include "ceres/internal/disable_warnings.h"
namespace ceres {
@@ -40,8 +41,6 @@
// parameter blocks and set the number of residuals at run time.
class CERES_EXPORT DynamicCostFunction : public CostFunction {
public:
- ~DynamicCostFunction() {}
-
virtual void AddParameterBlock(int size) {
mutable_parameter_block_sizes()->push_back(size);
}
@@ -53,4 +52,6 @@
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_PUBLIC_DYNAMIC_COST_FUNCTION_H_
diff --git a/include/ceres/dynamic_cost_function_to_functor.h b/include/ceres/dynamic_cost_function_to_functor.h
index 8d174d8..45ed90f 100644
--- a/include/ceres/dynamic_cost_function_to_functor.h
+++ b/include/ceres/dynamic_cost_function_to_functor.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,8 +37,10 @@
#include <vector>
#include "ceres/dynamic_cost_function.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/internal/fixed_array.h"
-#include "ceres/internal/port.h"
+#include "glog/logging.h"
namespace ceres {
@@ -100,16 +102,22 @@
// private:
// DynamicCostFunctionToFunctor intrinsic_projection_;
// };
-class DynamicCostFunctionToFunctor {
+class CERES_EXPORT DynamicCostFunctionToFunctor {
public:
// Takes ownership of cost_function.
explicit DynamicCostFunctionToFunctor(CostFunction* cost_function)
- : cost_function_(cost_function) {
- CHECK(cost_function != nullptr);
+ : DynamicCostFunctionToFunctor{
+ std::unique_ptr<CostFunction>{cost_function}} {}
+
+ // Takes ownership of cost_function.
+ explicit DynamicCostFunctionToFunctor(
+ std::unique_ptr<CostFunction> cost_function)
+ : cost_function_(std::move(cost_function)) {
+ CHECK(cost_function_ != nullptr);
}
bool operator()(double const* const* parameters, double* residuals) const {
- return cost_function_->Evaluate(parameters, residuals, NULL);
+ return cost_function_->Evaluate(parameters, residuals, nullptr);
}
template <typename JetT>
@@ -181,10 +189,14 @@
return true;
}
+ CostFunction* function() const noexcept { return cost_function_.get(); }
+
private:
std::unique_ptr<CostFunction> cost_function_;
};
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_PUBLIC_DYNAMIC_COST_FUNCTION_TO_FUNCTOR_H_
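
A brief sketch of the new std::unique_ptr constructor and the function() accessor
added in this hunk; the wrapped cost function is assumed to already exist:

  #include <memory>
  #include <utility>

  #include "ceres/dynamic_cost_function_to_functor.h"

  void Example(std::unique_ptr<ceres::CostFunction> wrapped,
               double const* const* parameters,
               double* residuals) {
    // Takes ownership of the wrapped, dynamically sized cost function.
    ceres::DynamicCostFunctionToFunctor functor(std::move(wrapped));

    // Evaluates residuals only; jacobians are requested as nullptr internally.
    functor(parameters, residuals);

    // The wrapped function stays reachable, e.g. to query its block sizes.
    const ceres::CostFunction* inner = functor.function();
    (void)inner;
  }
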
diff --git a/include/ceres/dynamic_numeric_diff_cost_function.h b/include/ceres/dynamic_numeric_diff_cost_function.h
index ccc8f66..1ce384f 100644
--- a/include/ceres/dynamic_numeric_diff_cost_function.h
+++ b/include/ceres/dynamic_numeric_diff_cost_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,6 +37,7 @@
#include <cmath>
#include <memory>
#include <numeric>
+#include <type_traits>
#include <vector>
#include "ceres/dynamic_cost_function.h"
@@ -71,25 +72,47 @@
// also specify the sizes after creating the
// DynamicNumericDiffCostFunction. For example:
//
-// DynamicAutoDiffCostFunction<MyCostFunctor, CENTRAL> cost_function(
-// new MyCostFunctor());
+// DynamicAutoDiffCostFunction<MyCostFunctor, CENTRAL> cost_function;
// cost_function.AddParameterBlock(5);
// cost_function.AddParameterBlock(10);
// cost_function.SetNumResiduals(21);
-template <typename CostFunctor, NumericDiffMethodType method = CENTRAL>
-class DynamicNumericDiffCostFunction : public DynamicCostFunction {
+template <typename CostFunctor, NumericDiffMethodType kMethod = CENTRAL>
+class DynamicNumericDiffCostFunction final : public DynamicCostFunction {
public:
explicit DynamicNumericDiffCostFunction(
const CostFunctor* functor,
Ownership ownership = TAKE_OWNERSHIP,
const NumericDiffOptions& options = NumericDiffOptions())
- : functor_(functor), ownership_(ownership), options_(options) {}
+ : DynamicNumericDiffCostFunction{
+ std::unique_ptr<const CostFunctor>{functor}, ownership, options} {}
explicit DynamicNumericDiffCostFunction(
- DynamicNumericDiffCostFunction&& other)
- : functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
+ std::unique_ptr<const CostFunctor> functor,
+ const NumericDiffOptions& options = NumericDiffOptions())
+ : DynamicNumericDiffCostFunction{
+ std::move(functor), TAKE_OWNERSHIP, options} {}
- virtual ~DynamicNumericDiffCostFunction() {
+ // Constructs the CostFunctor on the heap and takes the ownership.
+ template <class... Args,
+ std::enable_if_t<std::is_constructible_v<CostFunctor, Args&&...>>* =
+ nullptr>
+ explicit DynamicNumericDiffCostFunction(Args&&... args)
+ // NOTE We explicitly use direct initialization using parentheses instead
+ // of uniform initialization using braces to avoid narrowing conversion
+ // warnings.
+ : DynamicNumericDiffCostFunction{
+ std::make_unique<CostFunctor>(std::forward<Args>(args)...)} {}
+
+ DynamicNumericDiffCostFunction(const DynamicNumericDiffCostFunction&) =
+ delete;
+ DynamicNumericDiffCostFunction& operator=(
+ const DynamicNumericDiffCostFunction&) = delete;
+ DynamicNumericDiffCostFunction(
+ DynamicNumericDiffCostFunction&& other) noexcept = default;
+ DynamicNumericDiffCostFunction& operator=(
+ DynamicNumericDiffCostFunction&& other) noexcept = default;
+
+ ~DynamicNumericDiffCostFunction() override {
if (ownership_ != TAKE_OWNERSHIP) {
functor_.release();
}
@@ -111,7 +134,7 @@
const bool status =
internal::VariadicEvaluate<internal::DynamicParameterDims>(
*functor_.get(), parameters, residuals);
- if (jacobians == NULL || !status) {
+ if (jacobians == nullptr || !status) {
return status;
}
@@ -119,7 +142,7 @@
int parameters_size = accumulate(block_sizes.begin(), block_sizes.end(), 0);
std::vector<double> parameters_copy(parameters_size);
std::vector<double*> parameters_references_copy(block_sizes.size());
-    parameters_references_copy[0] = &parameters_copy[0];
+ parameters_references_copy[0] = parameters_copy.data();
for (size_t block = 1; block < block_sizes.size(); ++block) {
parameters_references_copy[block] =
parameters_references_copy[block - 1] + block_sizes[block - 1];
@@ -133,21 +156,22 @@
}
for (size_t block = 0; block < block_sizes.size(); ++block) {
- if (jacobians[block] != NULL &&
+ if (jacobians[block] != nullptr &&
!NumericDiff<CostFunctor,
- method,
+ kMethod,
ceres::DYNAMIC,
internal::DynamicParameterDims,
ceres::DYNAMIC,
ceres::DYNAMIC>::
- EvaluateJacobianForParameterBlock(functor_.get(),
- residuals,
- options_,
- this->num_residuals(),
- block,
- block_sizes[block],
-                                                &parameters_references_copy[0],
- jacobians[block])) {
+ EvaluateJacobianForParameterBlock(
+ functor_.get(),
+ residuals,
+ options_,
+ this->num_residuals(),
+ block,
+ block_sizes[block],
+ parameters_references_copy.data(),
+ jacobians[block])) {
return false;
}
}
@@ -155,11 +179,45 @@
}
private:
+ explicit DynamicNumericDiffCostFunction(
+ std::unique_ptr<const CostFunctor> functor,
+ Ownership ownership,
+ const NumericDiffOptions& options)
+ : functor_(std::move(functor)),
+ ownership_(ownership),
+ options_(options) {}
+
std::unique_ptr<const CostFunctor> functor_;
Ownership ownership_;
NumericDiffOptions options_;
};
+// Deduction guide that allows the user to avoid explicitly specifying the
+// template parameter of DynamicNumericDiffCostFunction. The class can instead
+// be instantiated as follows:
+//
+// new DynamicNumericDiffCostFunction{new MyCostFunctor{}};
+// new DynamicNumericDiffCostFunction{std::make_unique<MyCostFunctor>()};
+//
+template <typename CostFunctor>
+DynamicNumericDiffCostFunction(CostFunctor* functor)
+ -> DynamicNumericDiffCostFunction<CostFunctor>;
+template <typename CostFunctor>
+DynamicNumericDiffCostFunction(CostFunctor* functor, Ownership ownership)
+ -> DynamicNumericDiffCostFunction<CostFunctor>;
+template <typename CostFunctor>
+DynamicNumericDiffCostFunction(CostFunctor* functor,
+ Ownership ownership,
+ const NumericDiffOptions& options)
+ -> DynamicNumericDiffCostFunction<CostFunctor>;
+template <typename CostFunctor>
+DynamicNumericDiffCostFunction(std::unique_ptr<CostFunctor> functor)
+ -> DynamicNumericDiffCostFunction<CostFunctor>;
+template <typename CostFunctor>
+DynamicNumericDiffCostFunction(std::unique_ptr<CostFunctor> functor,
+ const NumericDiffOptions& options)
+ -> DynamicNumericDiffCostFunction<CostFunctor>;
+
} // namespace ceres
#endif // CERES_PUBLIC_DYNAMIC_AUTODIFF_COST_FUNCTION_H_
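
The numeric-diff variant follows the same pattern; a brief sketch of the
std::unique_ptr constructor together with NumericDiffOptions, where MyCostFunctor is
again a placeholder:

  #include <memory>

  #include "ceres/dynamic_numeric_diff_cost_function.h"
  #include "ceres/numeric_diff_options.h"

  // Numeric differentiation only needs the plain double interface.
  struct MyCostFunctor {
    bool operator()(double const* const* parameters, double* residuals) const {
      residuals[0] = parameters[0][0] * parameters[0][0];
      return true;
    }
  };

  void Example() {
    ceres::NumericDiffOptions options;
    options.relative_step_size = 1e-6;

    // CENTRAL differences by default; ownership is expressed via unique_ptr.
    ceres::DynamicNumericDiffCostFunction<MyCostFunctor> cost_function(
        std::make_unique<MyCostFunctor>(), options);
    cost_function.AddParameterBlock(1);
    cost_function.SetNumResiduals(1);
  }
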
diff --git a/include/ceres/evaluation_callback.h b/include/ceres/evaluation_callback.h
index b9f5bbb..e582dc8 100644
--- a/include/ceres/evaluation_callback.h
+++ b/include/ceres/evaluation_callback.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,7 +31,7 @@
#ifndef CERES_PUBLIC_EVALUATION_CALLBACK_H_
#define CERES_PUBLIC_EVALUATION_CALLBACK_H_
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
namespace ceres {
@@ -62,12 +62,16 @@
// execute faster.
class CERES_EXPORT EvaluationCallback {
public:
- virtual ~EvaluationCallback() {}
+ virtual ~EvaluationCallback();
// Called before Ceres requests residuals or jacobians for a given setting of
// the parameters. User parameters (the double* values provided to the cost
- // functions) are fixed until the next call to PrepareForEvaluation(). If
- // new_evaluation_point == true, then this is a new point that is different
+ // functions) are fixed until the next call to PrepareForEvaluation().
+ //
+ // If evaluate_jacobians == true, then the user provided CostFunctions will be
+ // asked to evaluate one or more of their Jacobians.
+ //
+ // If new_evaluation_point == true, then this is a new point that is different
// from the last evaluated point. Otherwise, it is the same point that was
// evaluated previously (either jacobian or residual) and the user can use
// cached results from previous evaluations.
diff --git a/include/ceres/first_order_function.h b/include/ceres/first_order_function.h
index 1420153..ea42732 100644
--- a/include/ceres/first_order_function.h
+++ b/include/ceres/first_order_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,7 +31,7 @@
#ifndef CERES_PUBLIC_FIRST_ORDER_FUNCTION_H_
#define CERES_PUBLIC_FIRST_ORDER_FUNCTION_H_
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
namespace ceres {
@@ -39,7 +39,7 @@
// and its gradient.
class CERES_EXPORT FirstOrderFunction {
public:
- virtual ~FirstOrderFunction() {}
+ virtual ~FirstOrderFunction();
// cost is never null. gradient may be null. The return value
// indicates whether the evaluation was successful or not.
diff --git a/include/ceres/gradient_checker.h b/include/ceres/gradient_checker.h
index b79cf86..77f2c8e 100644
--- a/include/ceres/gradient_checker.h
+++ b/include/ceres/gradient_checker.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -25,7 +25,7 @@
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
-// Copyright 2007 Google Inc. All Rights Reserved.
+// Copyright 2023 Google Inc. All Rights Reserved.
//
// Authors: wjr@google.com (William Rucklidge),
// keir@google.com (Keir Mierle),
@@ -40,9 +40,11 @@
#include "ceres/cost_function.h"
#include "ceres/dynamic_numeric_diff_cost_function.h"
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
#include "ceres/internal/fixed_array.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
#include "glog/logging.h"
namespace ceres {
@@ -56,27 +58,27 @@
// ------------------------------------ < relative_precision
// max(J_actual(i, j), J_numeric(i, j))
//
-// where J_actual(i, j) is the jacobian as computed by the supplied cost
-// function (by the user) multiplied by the local parameterization Jacobian
-// and J_numeric is the jacobian as computed by finite differences, multiplied
-// by the local parameterization Jacobian as well.
+// where J_actual(i, j) is the Jacobian as computed by the supplied cost
+// function (by the user) multiplied by the manifold Jacobian and J_numeric is
+// the Jacobian as computed by finite differences, multiplied by the manifold
+// Jacobian as well.
//
// How to use: Fill in an array of pointers to parameter blocks for your
// CostFunction, and then call Probe(). Check that the return value is 'true'.
class CERES_EXPORT GradientChecker {
public:
- // This will not take ownership of the cost function or local
- // parameterizations.
+ // This will not take ownership of the cost function or manifolds.
//
// function: The cost function to probe.
- // local_parameterizations: A vector of local parameterizations for each
- // parameter. May be NULL or contain NULL pointers to indicate that the
- // respective parameter does not have a local parameterization.
+ //
+ // manifolds: A vector of manifolds for each parameter. May be nullptr or
+ // contain nullptrs to indicate that the respective parameter blocks are
+ // Euclidean.
+ //
// options: Options to use for numerical differentiation.
- GradientChecker(
- const CostFunction* function,
- const std::vector<const LocalParameterization*>* local_parameterizations,
- const NumericDiffOptions& options);
+ GradientChecker(const CostFunction* function,
+ const std::vector<const Manifold*>* manifolds,
+ const NumericDiffOptions& options);
// Contains results from a call to Probe for later inspection.
struct CERES_EXPORT ProbeResults {
@@ -87,11 +89,11 @@
Vector residuals;
// The sizes of the Jacobians below are dictated by the cost function's
- // parameter block size and residual block sizes. If a parameter block
- // has a local parameterization associated with it, the size of the "local"
- // Jacobian will be determined by the local parameterization dimension and
- // residual block size, otherwise it will be identical to the regular
- // Jacobian.
+ // parameter block size and residual block sizes. If a parameter block has a
+ // manifold associated with it, the size of the "local" Jacobian will be
+ // determined by the dimension of the manifold (which is the same as the
+ // dimension of the tangent space) and residual block size, otherwise it
+ // will be identical to the regular Jacobian.
// Derivatives as computed by the cost function.
std::vector<Matrix> jacobians;
@@ -114,20 +116,20 @@
};
// Call the cost function, compute alternative Jacobians using finite
- // differencing and compare results. If local parameterizations are given,
- // the Jacobians will be multiplied by the local parameterization Jacobians
- // before performing the check, which effectively means that all errors along
- // the null space of the local parameterization will be ignored.
- // Returns false if the Jacobians don't match, the cost function return false,
- // or if the cost function returns different residual when called with a
- // Jacobian output argument vs. calling it without. Otherwise returns true.
+ // differencing and compare results. If manifolds are given, the Jacobians
+ // will be multiplied by the manifold Jacobians before performing the check,
+ // which effectively means that all errors along the null space of the
+ // manifold will be ignored. Returns false if the Jacobians don't match, the
+ // cost function return false, or if a cost function returns a different
+ // residual when called with a Jacobian output argument vs. calling it
+ // without. Otherwise returns true.
//
// parameters: The parameter values at which to probe.
// relative_precision: A threshold for the relative difference between the
// Jacobians. If the Jacobians differ by more than this amount, then the
// probe fails.
// results: On return, the Jacobians (and other information) will be stored
- // here. May be NULL.
+ // here. May be nullptr.
//
// Returns true if no problems are detected and the difference between the
// Jacobians is less than error_tolerance.
@@ -140,11 +142,13 @@
GradientChecker(const GradientChecker&) = delete;
void operator=(const GradientChecker&) = delete;
- std::vector<const LocalParameterization*> local_parameterizations_;
+ std::vector<const Manifold*> manifolds_;
const CostFunction* function_;
std::unique_ptr<CostFunction> finite_diff_cost_function_;
};
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_PUBLIC_GRADIENT_CHECKER_H_
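
A sketch of probing a cost function against the Manifold based interface above;
my_cost_function is assumed to take a single 4-dimensional quaternion parameter
block, and Probe is called with the parameter values, a relative precision and an
optional ProbeResults output as described in the comments:

  #include <vector>

  #include "ceres/gradient_checker.h"
  #include "ceres/manifold.h"
  #include "ceres/numeric_diff_options.h"

  bool CheckGradients(const ceres::CostFunction* my_cost_function,
                      const double* quaternion) {
    // A nullptr entry would mark the corresponding block as Euclidean instead.
    ceres::QuaternionManifold manifold;
    std::vector<const ceres::Manifold*> manifolds = {&manifold};
    ceres::NumericDiffOptions options;

    ceres::GradientChecker checker(my_cost_function, &manifolds, options);

    const double* parameters[] = {quaternion};
    ceres::GradientChecker::ProbeResults results;
    return checker.Probe(parameters, 1e-9, &results);
  }
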
diff --git a/include/ceres/gradient_problem.h b/include/ceres/gradient_problem.h
index 49d605e..96d6493 100644
--- a/include/ceres/gradient_problem.h
+++ b/include/ceres/gradient_problem.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,8 +34,9 @@
#include <memory>
#include "ceres/first_order_function.h"
-#include "ceres/internal/port.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
+#include "ceres/manifold.h"
namespace ceres {
@@ -43,23 +44,22 @@
// Instances of GradientProblem represent general non-linear
// optimization problems that must be solved using just the value of
-// the objective function and its gradient. Unlike the Problem class,
-// which can only be used to model non-linear least squares problems,
-// instances of GradientProblem not restricted in the form of the
-// objective function.
+// the objective function and its gradient.
+
+// Unlike the Problem class, which can only be used to model non-linear least
+// squares problems, instances of GradientProblem are not restricted in the form
+// of the objective function.
//
-// Structurally GradientProblem is a composition of a
-// FirstOrderFunction and optionally a LocalParameterization.
+// Structurally GradientProblem is a composition of a FirstOrderFunction and
+// optionally a Manifold.
//
-// The FirstOrderFunction is responsible for evaluating the cost and
-// gradient of the objective function.
+// The FirstOrderFunction is responsible for evaluating the cost and gradient of
+// the objective function.
//
-// The LocalParameterization is responsible for going back and forth
-// between the ambient space and the local tangent space. (See
-// local_parameterization.h for more details). When a
-// LocalParameterization is not provided, then the tangent space is
-// assumed to coincide with the ambient Euclidean space that the
-// gradient vector lives in.
+// The Manifold is responsible for going back and forth between the ambient
+// space and the local tangent space. (See manifold.h for more details). When a
+// Manifold is not provided, then the tangent space is assumed to coincide with
+// the ambient Euclidean space that the gradient vector lives in.
//
// Example usage:
//
@@ -78,7 +78,7 @@
// const double y = parameters[1];
//
// cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
-// if (gradient != NULL) {
+// if (gradient != nullptr) {
// gradient[0] = -2.0 * (1.0 - x) - 200.0 * (y - x * x) * 2.0 * x;
// gradient[1] = 200.0 * (y - x * x);
// }
@@ -94,23 +94,32 @@
// Takes ownership of the function.
explicit GradientProblem(FirstOrderFunction* function);
- // Takes ownership of the function and the parameterization.
- GradientProblem(FirstOrderFunction* function,
- LocalParameterization* parameterization);
+ // Takes ownership of the function and the manifold.
+ GradientProblem(FirstOrderFunction* function, Manifold* manifold);
int NumParameters() const;
- int NumLocalParameters() const;
+
+ // Dimension of the manifold (and its tangent space).
+ int NumTangentParameters() const;
// This call is not thread safe.
bool Evaluate(const double* parameters, double* cost, double* gradient) const;
bool Plus(const double* x, const double* delta, double* x_plus_delta) const;
+ const FirstOrderFunction* function() const { return function_.get(); }
+ FirstOrderFunction* mutable_function() { return function_.get(); }
+
+ const Manifold* manifold() const { return manifold_.get(); }
+ Manifold* mutable_manifold() { return manifold_.get(); }
+
private:
std::unique_ptr<FirstOrderFunction> function_;
- std::unique_ptr<LocalParameterization> parameterization_;
+ std::unique_ptr<Manifold> manifold_;
std::unique_ptr<double[]> scratch_;
};
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_PUBLIC_GRADIENT_PROBLEM_H_
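To complement the Manifold-based GradientProblem above, here is a compact
end-to-end sketch of the Rosenbrock example from the header comment. It assumes
the FirstOrderFunction interface (NumParameters/Evaluate) declared in
first_order_function.h and the ceres::Solve overload for gradient problems from
gradient_problem_solver.h.

#include "ceres/gradient_problem.h"
#include "ceres/gradient_problem_solver.h"

class Rosenbrock final : public ceres::FirstOrderFunction {
 public:
  bool Evaluate(const double* parameters,
                double* cost,
                double* gradient) const override {
    const double x = parameters[0];
    const double y = parameters[1];
    cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
    if (gradient != nullptr) {
      gradient[0] = -2.0 * (1.0 - x) - 200.0 * (y - x * x) * 2.0 * x;
      gradient[1] = 200.0 * (y - x * x);
    }
    return true;
  }
  int NumParameters() const override { return 2; }
};

int main() {
  double parameters[2] = {-1.2, 1.0};
  // The problem takes ownership of the function. No Manifold is passed, so the
  // tangent space coincides with the ambient Euclidean space.
  ceres::GradientProblem problem(new Rosenbrock);
  ceres::GradientProblemSolver::Options options;
  ceres::GradientProblemSolver::Summary summary;
  ceres::Solve(options, problem, parameters, &summary);
  return 0;
}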
diff --git a/include/ceres/gradient_problem_solver.h b/include/ceres/gradient_problem_solver.h
index 9fab62e..f4c392f 100644
--- a/include/ceres/gradient_problem_solver.h
+++ b/include/ceres/gradient_problem_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,6 +36,7 @@
#include <vector>
#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/internal/port.h"
#include "ceres/iteration_callback.h"
#include "ceres/types.h"
@@ -305,7 +306,7 @@
int num_parameters = -1;
// Dimension of the tangent space of the problem.
- int num_local_parameters = -1;
+ int num_tangent_parameters = -1;
// Type of line search direction used.
LineSearchDirectionType line_search_direction_type = LBFGS;
diff --git a/include/ceres/internal/array_selector.h b/include/ceres/internal/array_selector.h
index 841797f..9480146 100644
--- a/include/ceres/internal/array_selector.h
+++ b/include/ceres/internal/array_selector.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2020 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,8 +38,7 @@
#include "ceres/internal/fixed_array.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// StaticFixedArray selects the best array implementation based on template
// arguments. If the size is not known at compile-time, pass
@@ -73,23 +72,24 @@
true,
fits_on_stack>
: ceres::internal::FixedArray<T, max_num_elements_on_stack> {
- ArraySelector(int s)
+ explicit ArraySelector(int s)
: ceres::internal::FixedArray<T, max_num_elements_on_stack>(s) {}
};
template <typename T, int num_elements, int max_num_elements_on_stack>
struct ArraySelector<T, num_elements, max_num_elements_on_stack, false, true>
: std::array<T, num_elements> {
- ArraySelector(int s) { CHECK_EQ(s, num_elements); }
+ explicit ArraySelector(int s) { CHECK_EQ(s, num_elements); }
};
template <typename T, int num_elements, int max_num_elements_on_stack>
struct ArraySelector<T, num_elements, max_num_elements_on_stack, false, false>
: std::vector<T> {
- ArraySelector(int s) : std::vector<T>(s) { CHECK_EQ(s, num_elements); }
+ explicit ArraySelector(int s) : std::vector<T>(s) {
+ CHECK_EQ(s, num_elements);
+ }
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_PUBLIC_INTERNAL_ARRAY_SELECTOR_H_
diff --git a/include/ceres/internal/autodiff.h b/include/ceres/internal/autodiff.h
index 9d7de75..8b02a2b 100644
--- a/include/ceres/internal/autodiff.h
+++ b/include/ceres/internal/autodiff.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -132,17 +132,16 @@
// respectively. This is how autodiff works for functors taking multiple vector
// valued arguments (up to 6).
//
-// Jacobian NULL pointers
-// ----------------------
-// In general, the functions below will accept NULL pointers for all or some of
-// the Jacobian parameters, meaning that those Jacobians will not be computed.
+// Jacobian null pointers (nullptr)
+// --------------------------------
+// In general, the functions below will accept nullptr for all or some of the
+// Jacobian parameters, meaning that those Jacobians will not be computed.
#ifndef CERES_PUBLIC_INTERNAL_AUTODIFF_H_
#define CERES_PUBLIC_INTERNAL_AUTODIFF_H_
-#include <stddef.h>
-
#include <array>
+#include <cstddef>
#include <utility>
#include "ceres/internal/array_selector.h"
@@ -165,8 +164,7 @@
#define CERES_AUTODIFF_MAX_RESIDUALS_ON_STACK 20
#endif
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Extends src by a 1st order perturbation for every dimension and puts it in
// dst. The size of src is N. Since this is also used for perturbations in
@@ -198,7 +196,7 @@
template <int N, int Offset, typename T, typename JetT>
struct Make1stOrderPerturbation<N, N, Offset, T, JetT> {
public:
- static void Apply(const T* src, JetT* dst) {}
+ static void Apply(const T* /* NOT USED */, JetT* /* NOT USED */) {}
};
// Calls Make1stOrderPerturbation for every parameter block.
@@ -311,7 +309,7 @@
int dynamic_num_outputs,
T* function_value,
T** jacobians) {
- typedef Jet<T, ParameterDims::kNumParameters> JetT;
+ using JetT = Jet<T, ParameterDims::kNumParameters>;
using Parameters = typename ParameterDims::Parameters;
if (kNumResiduals != DYNAMIC) {
@@ -360,7 +358,6 @@
return true;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_PUBLIC_INTERNAL_AUTODIFF_H_
diff --git a/include/ceres/internal/disable_warnings.h b/include/ceres/internal/disable_warnings.h
index d7766a0..b6e38aa 100644
--- a/include/ceres/internal/disable_warnings.h
+++ b/include/ceres/internal/disable_warnings.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
diff --git a/include/ceres/internal/eigen.h b/include/ceres/internal/eigen.h
index b6d0b7f..fee6b52 100644
--- a/include/ceres/internal/eigen.h
+++ b/include/ceres/internal/eigen.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,39 +35,39 @@
namespace ceres {
-typedef Eigen::Matrix<double, Eigen::Dynamic, 1> Vector;
-typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>
- Matrix;
-typedef Eigen::Map<Vector> VectorRef;
-typedef Eigen::Map<Matrix> MatrixRef;
-typedef Eigen::Map<const Vector> ConstVectorRef;
-typedef Eigen::Map<const Matrix> ConstMatrixRef;
+using Vector = Eigen::Matrix<double, Eigen::Dynamic, 1>;
+using Matrix =
+ Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;
+using VectorRef = Eigen::Map<Vector>;
+using MatrixRef = Eigen::Map<Matrix>;
+using ConstVectorRef = Eigen::Map<const Vector>;
+using ConstMatrixRef = Eigen::Map<const Matrix>;
// Column major matrices for DenseSparseMatrix/DenseQRSolver
-typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::ColMajor>
- ColMajorMatrix;
+using ColMajorMatrix =
+ Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::ColMajor>;
-typedef Eigen::Map<ColMajorMatrix, 0, Eigen::Stride<Eigen::Dynamic, 1>>
- ColMajorMatrixRef;
+using ColMajorMatrixRef =
+ Eigen::Map<ColMajorMatrix, 0, Eigen::Stride<Eigen::Dynamic, 1>>;
-typedef Eigen::Map<const ColMajorMatrix, 0, Eigen::Stride<Eigen::Dynamic, 1>>
- ConstColMajorMatrixRef;
+using ConstColMajorMatrixRef =
+ Eigen::Map<const ColMajorMatrix, 0, Eigen::Stride<Eigen::Dynamic, 1>>;
 // C++ does not support templated typedefs, thus the need for this
// struct so that we can support statically sized Matrix and Maps.
template <int num_rows = Eigen::Dynamic, int num_cols = Eigen::Dynamic>
struct EigenTypes {
- typedef Eigen::Matrix<double,
- num_rows,
- num_cols,
- num_cols == 1 ? Eigen::ColMajor : Eigen::RowMajor>
- Matrix;
+ using Matrix =
+ Eigen::Matrix<double,
+ num_rows,
+ num_cols,
+ num_cols == 1 ? Eigen::ColMajor : Eigen::RowMajor>;
- typedef Eigen::Map<Matrix> MatrixRef;
- typedef Eigen::Map<const Matrix> ConstMatrixRef;
- typedef Eigen::Matrix<double, num_rows, 1> Vector;
- typedef Eigen::Map<Eigen::Matrix<double, num_rows, 1>> VectorRef;
- typedef Eigen::Map<const Eigen::Matrix<double, num_rows, 1>> ConstVectorRef;
+ using MatrixRef = Eigen::Map<Matrix>;
+ using ConstMatrixRef = Eigen::Map<const Matrix>;
+ using Vector = Eigen::Matrix<double, num_rows, 1>;
+ using VectorRef = Eigen::Map<Eigen::Matrix<double, num_rows, 1>>;
+ using ConstVectorRef = Eigen::Map<const Eigen::Matrix<double, num_rows, 1>>;
};
} // namespace ceres
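A short illustration of the statically sized EigenTypes aliases above used to
map caller-owned raw buffers; ScaleBlock and the buffer names are invented for
the example.

#include "Eigen/Core"
#include "ceres/internal/eigen.h"

// Map a row-major 2x3 Jacobian block and a 3-vector without copying, then
// scale the block in place.
void ScaleBlock(double* jacobian_data, const double* x_data) {
  ceres::EigenTypes<2, 3>::MatrixRef jacobian(jacobian_data);
  ceres::EigenTypes<3>::ConstVectorRef x(x_data);
  jacobian *= x.norm();
}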
diff --git a/include/ceres/internal/euler_angles.h b/include/ceres/internal/euler_angles.h
new file mode 100644
index 0000000..38f2702
--- /dev/null
+++ b/include/ceres/internal/euler_angles.h
@@ -0,0 +1,199 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+
+#ifndef CERES_PUBLIC_INTERNAL_EULER_ANGLES_H_
+#define CERES_PUBLIC_INTERNAL_EULER_ANGLES_H_
+
+#include <type_traits>
+
+namespace ceres {
+namespace internal {
+
+// The EulerSystem struct represents an Euler Angle Convention in compile time.
+// It acts like a trait structure and is also used as a tag for dispatching
+// Euler angle conversion function templates
+//
+// Internally, it implements the convention laid out in "Euler angle
+// conversion", Ken Shoemake, Graphics Gems IV, where a choice of axis for the
+// first rotation (out of 3) and 3 binary choices compactly specify all 24
+// rotation conventions
+//
+// - InnerAxis: Axis for the first rotation. This is specified by struct tags
+// axis::X, axis::Y, and axis::Z
+//
+// - Parity: Defines the parity of the axis permutation. The axis sequence has
+// Even parity if the second axis of rotation is 'greater-than' the first axis
+// of rotation according to the order X<Y<Z<X, otherwise it has Odd parity.
+// This is specified by struct tags Even and Odd
+//
+// - AngleConvention: Defines whether Proper Euler Angles (originally defined
+// by Euler, which has the last axis repeated, i.e. ZYZ, ZXZ, etc), or
+// Tait-Bryan Angles (introduced by the nautical and aerospace fields, i.e.
+// using ZYX for roll-pitch-yaw) are used. This is specified by struct tags
+// ProperEuler and TaitBryan.
+//
+// - FrameConvention: Defines whether the three rotations are in a global
+// frame of reference (extrinsic) or in a body centred frame of reference
+// (intrinsic). This is specified by struct tags Extrinsic and Intrinsic
+
+namespace axis {
+struct X : std::integral_constant<int, 0> {};
+struct Y : std::integral_constant<int, 1> {};
+struct Z : std::integral_constant<int, 2> {};
+} // namespace axis
+
+struct Even;
+struct Odd;
+
+struct ProperEuler;
+struct TaitBryan;
+
+struct Extrinsic;
+struct Intrinsic;
+
+template <typename InnerAxisType,
+ typename ParityType,
+ typename AngleConventionType,
+ typename FrameConventionType>
+struct EulerSystem {
+ static constexpr bool kIsParityOdd = std::is_same_v<ParityType, Odd>;
+ static constexpr bool kIsProperEuler =
+ std::is_same_v<AngleConventionType, ProperEuler>;
+ static constexpr bool kIsIntrinsic =
+ std::is_same_v<FrameConventionType, Intrinsic>;
+
+ static constexpr int kAxes[3] = {
+ InnerAxisType::value,
+ (InnerAxisType::value + 1 + static_cast<int>(kIsParityOdd)) % 3,
+ (InnerAxisType::value + 2 - static_cast<int>(kIsParityOdd)) % 3};
+};
+
+} // namespace internal
+
+// Define human-readable aliases for the Euler angle convention types
+using ExtrinsicXYZ = internal::EulerSystem<internal::axis::X,
+ internal::Even,
+ internal::TaitBryan,
+ internal::Extrinsic>;
+using ExtrinsicXYX = internal::EulerSystem<internal::axis::X,
+ internal::Even,
+ internal::ProperEuler,
+ internal::Extrinsic>;
+using ExtrinsicXZY = internal::EulerSystem<internal::axis::X,
+ internal::Odd,
+ internal::TaitBryan,
+ internal::Extrinsic>;
+using ExtrinsicXZX = internal::EulerSystem<internal::axis::X,
+ internal::Odd,
+ internal::ProperEuler,
+ internal::Extrinsic>;
+using ExtrinsicYZX = internal::EulerSystem<internal::axis::Y,
+ internal::Even,
+ internal::TaitBryan,
+ internal::Extrinsic>;
+using ExtrinsicYZY = internal::EulerSystem<internal::axis::Y,
+ internal::Even,
+ internal::ProperEuler,
+ internal::Extrinsic>;
+using ExtrinsicYXZ = internal::EulerSystem<internal::axis::Y,
+ internal::Odd,
+ internal::TaitBryan,
+ internal::Extrinsic>;
+using ExtrinsicYXY = internal::EulerSystem<internal::axis::Y,
+ internal::Odd,
+ internal::ProperEuler,
+ internal::Extrinsic>;
+using ExtrinsicZXY = internal::EulerSystem<internal::axis::Z,
+ internal::Even,
+ internal::TaitBryan,
+ internal::Extrinsic>;
+using ExtrinsicZXZ = internal::EulerSystem<internal::axis::Z,
+ internal::Even,
+ internal::ProperEuler,
+ internal::Extrinsic>;
+using ExtrinsicZYX = internal::EulerSystem<internal::axis::Z,
+ internal::Odd,
+ internal::TaitBryan,
+ internal::Extrinsic>;
+using ExtrinsicZYZ = internal::EulerSystem<internal::axis::Z,
+ internal::Odd,
+ internal::ProperEuler,
+ internal::Extrinsic>;
+/* Rotating axes */
+using IntrinsicZYX = internal::EulerSystem<internal::axis::X,
+ internal::Even,
+ internal::TaitBryan,
+ internal::Intrinsic>;
+using IntrinsicXYX = internal::EulerSystem<internal::axis::X,
+ internal::Even,
+ internal::ProperEuler,
+ internal::Intrinsic>;
+using IntrinsicYZX = internal::EulerSystem<internal::axis::X,
+ internal::Odd,
+ internal::TaitBryan,
+ internal::Intrinsic>;
+using IntrinsicXZX = internal::EulerSystem<internal::axis::X,
+ internal::Odd,
+ internal::ProperEuler,
+ internal::Intrinsic>;
+using IntrinsicXZY = internal::EulerSystem<internal::axis::Y,
+ internal::Even,
+ internal::TaitBryan,
+ internal::Intrinsic>;
+using IntrinsicYZY = internal::EulerSystem<internal::axis::Y,
+ internal::Even,
+ internal::ProperEuler,
+ internal::Intrinsic>;
+using IntrinsicZXY = internal::EulerSystem<internal::axis::Y,
+ internal::Odd,
+ internal::TaitBryan,
+ internal::Intrinsic>;
+using IntrinsicYXY = internal::EulerSystem<internal::axis::Y,
+ internal::Odd,
+ internal::ProperEuler,
+ internal::Intrinsic>;
+using IntrinsicYXZ = internal::EulerSystem<internal::axis::Z,
+ internal::Even,
+ internal::TaitBryan,
+ internal::Intrinsic>;
+using IntrinsicZXZ = internal::EulerSystem<internal::axis::Z,
+ internal::Even,
+ internal::ProperEuler,
+ internal::Intrinsic>;
+using IntrinsicXYZ = internal::EulerSystem<internal::axis::Z,
+ internal::Odd,
+ internal::TaitBryan,
+ internal::Intrinsic>;
+using IntrinsicZYZ = internal::EulerSystem<internal::axis::Z,
+ internal::Odd,
+ internal::ProperEuler,
+ internal::Intrinsic>;
+
+} // namespace ceres
+
+#endif // CERES_PUBLIC_INTERNAL_EULER_ANGLES_H_
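A compile-time sketch of what the EulerSystem traits above encode; the asserted
values follow directly from the kAxes formula and the tag definitions in this
hunk.

#include "ceres/internal/euler_angles.h"

// ExtrinsicXYZ starts at axis X (0) with even parity, so the axis sequence is
// {0, 1, 2}; ExtrinsicXZY has odd parity, which swaps the second and third
// axes to give {0, 2, 1}.
static_assert(ceres::ExtrinsicXYZ::kAxes[0] == 0);
static_assert(ceres::ExtrinsicXYZ::kAxes[1] == 1);
static_assert(ceres::ExtrinsicXYZ::kAxes[2] == 2);
static_assert(ceres::ExtrinsicXZY::kAxes[1] == 2);
static_assert(!ceres::ExtrinsicXYZ::kIsProperEuler);
static_assert(ceres::IntrinsicZYX::kIsIntrinsic);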
diff --git a/include/ceres/internal/fixed_array.h b/include/ceres/internal/fixed_array.h
index dcbddcd..0e35f63 100644
--- a/include/ceres/internal/fixed_array.h
+++ b/include/ceres/internal/fixed_array.h
@@ -41,8 +41,7 @@
#include "ceres/internal/memory.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
constexpr static auto kFixedArrayUseDefault = static_cast<size_t>(-1);
@@ -372,8 +371,8 @@
return std::addressof(ptr->array);
}
- static_assert(sizeof(StorageElement) == sizeof(value_type), "");
- static_assert(alignof(StorageElement) == alignof(value_type), "");
+ static_assert(sizeof(StorageElement) == sizeof(value_type));
+ static_assert(alignof(StorageElement) == alignof(value_type));
class NonEmptyInlinedStorage {
public:
@@ -461,7 +460,6 @@
constexpr typename FixedArray<T, N, A>::size_type
FixedArray<T, N, A>::inline_elements;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_PUBLIC_INTERNAL_FIXED_ARRAY_H_
diff --git a/include/ceres/internal/householder_vector.h b/include/ceres/internal/householder_vector.h
index 55f68e5..dd8361c 100644
--- a/include/ceres/internal/householder_vector.h
+++ b/include/ceres/internal/householder_vector.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://code.google.com/p/ceres-solver/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,8 +34,7 @@
#include "Eigen/Core"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Algorithm 5.1.1 from 'Matrix Computations' by Golub et al. (Johns Hopkins
// Studies in Mathematical Sciences) but using the nth element of the input
@@ -82,7 +81,14 @@
v->head(v->rows() - 1) /= v_pivot;
}
-} // namespace internal
-} // namespace ceres
+template <typename XVectorType, typename Derived>
+typename Derived::PlainObject ApplyHouseholderVector(
+ const XVectorType& y,
+ const Eigen::MatrixBase<Derived>& v,
+ const typename Derived::Scalar& beta) {
+ return (y - v * (beta * (v.transpose() * y)));
+}
+
+} // namespace ceres::internal
#endif // CERES_PUBLIC_INTERNAL_HOUSEHOLDER_VECTOR_H_
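A brief sketch pairing the new ApplyHouseholderVector helper with
ComputeHouseholderVector; the explicit template-argument form used here is
assumed to match the calls made in sphere_manifold_functions.h later in this
patch, and ReflectOntoLastAxis is an illustrative name.

#include "Eigen/Core"
#include "ceres/internal/householder_vector.h"

// The Householder transformation (v, beta) built from x maps x onto the last
// coordinate axis: the returned vector is (up to sign) ||x|| times the last
// basis vector, with the remaining entries numerically zero.
inline Eigen::Vector3d ReflectOntoLastAxis(const Eigen::Vector3d& x) {
  Eigen::Vector3d v;
  double beta;
  ceres::internal::ComputeHouseholderVector<Eigen::Vector3d, double, 3>(
      x, &v, &beta);
  return ceres::internal::ApplyHouseholderVector(x, v, beta);
}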
diff --git a/include/ceres/internal/integer_sequence_algorithm.h b/include/ceres/internal/integer_sequence_algorithm.h
index 8c0f3bc..0c27d72 100644
--- a/include/ceres/internal/integer_sequence_algorithm.h
+++ b/include/ceres/internal/integer_sequence_algorithm.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -27,6 +27,7 @@
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: jodebo_beck@gmx.de (Johannes Beck)
+// sergiu.deitsch@gmail.com (Sergiu Deitsch)
//
// Algorithms to be used together with integer_sequence, like computing the sum
// or the exclusive scan (sometimes called exclusive prefix sum) at compile
@@ -37,70 +38,9 @@
#include <utility>
-namespace ceres {
-namespace internal {
+#include "ceres/jet_fwd.h"
-// Implementation of calculating the sum of an integer sequence.
-// Recursively instantiate SumImpl and calculate the sum of the N first
-// numbers. This reduces the number of instantiations and speeds up
-// compilation.
-//
-// Examples:
-// 1) integer_sequence<int, 5>:
-// Value = 5
-//
-// 2) integer_sequence<int, 4, 2>:
-// Value = 4 + 2 + SumImpl<integer_sequence<int>>::Value
-// Value = 4 + 2 + 0
-//
-// 3) integer_sequence<int, 2, 1, 4>:
-// Value = 2 + 1 + SumImpl<integer_sequence<int, 4>>::Value
-// Value = 2 + 1 + 4
-template <typename Seq>
-struct SumImpl;
-
-// Strip of and sum the first number.
-template <typename T, T N, T... Ns>
-struct SumImpl<std::integer_sequence<T, N, Ns...>> {
- static constexpr T Value =
- N + SumImpl<std::integer_sequence<T, Ns...>>::Value;
-};
-
-// Strip of and sum the first two numbers.
-template <typename T, T N1, T N2, T... Ns>
-struct SumImpl<std::integer_sequence<T, N1, N2, Ns...>> {
- static constexpr T Value =
- N1 + N2 + SumImpl<std::integer_sequence<T, Ns...>>::Value;
-};
-
-// Strip of and sum the first four numbers.
-template <typename T, T N1, T N2, T N3, T N4, T... Ns>
-struct SumImpl<std::integer_sequence<T, N1, N2, N3, N4, Ns...>> {
- static constexpr T Value =
- N1 + N2 + N3 + N4 + SumImpl<std::integer_sequence<T, Ns...>>::Value;
-};
-
-// Only one number is left. 'Value' is just that number ('recursion' ends).
-template <typename T, T N>
-struct SumImpl<std::integer_sequence<T, N>> {
- static constexpr T Value = N;
-};
-
-// No number is left. 'Value' is the identity element (for sum this is zero).
-template <typename T>
-struct SumImpl<std::integer_sequence<T>> {
- static constexpr T Value = T(0);
-};
-
-// Calculate the sum of an integer sequence. The resulting sum will be stored in
-// 'Value'.
-template <typename Seq>
-class Sum {
- using T = typename Seq::value_type;
-
- public:
- static constexpr T Value = SumImpl<Seq>::Value;
-};
+namespace ceres::internal {
// Implementation of calculating an exclusive scan (exclusive prefix sum) of an
// integer sequence. Exclusive means that the i-th input element is not included
@@ -164,7 +104,96 @@
template <typename Seq>
using ExclusiveScan = typename ExclusiveScanT<Seq>::Type;
-} // namespace internal
-} // namespace ceres
+// Removes all elements from an integer sequence that are equal to the
+// specified ValueToRemove.
+//
+// This type should not be used directly; use RemoveValue instead.
+template <typename T, T ValueToRemove, typename... Sequence>
+struct RemoveValueImpl;
+
+// Final filtered sequence
+template <typename T, T ValueToRemove, T... Values>
+struct RemoveValueImpl<T,
+ ValueToRemove,
+ std::integer_sequence<T, Values...>,
+ std::integer_sequence<T>> {
+ using type = std::integer_sequence<T, Values...>;
+};
+
+// Found a matching value
+template <typename T, T ValueToRemove, T... Head, T... Tail>
+struct RemoveValueImpl<T,
+ ValueToRemove,
+ std::integer_sequence<T, Head...>,
+ std::integer_sequence<T, ValueToRemove, Tail...>>
+ : RemoveValueImpl<T,
+ ValueToRemove,
+ std::integer_sequence<T, Head...>,
+ std::integer_sequence<T, Tail...>> {};
+
+// Move one element from the tail to the head
+template <typename T, T ValueToRemove, T... Head, T MiddleValue, T... Tail>
+struct RemoveValueImpl<T,
+ ValueToRemove,
+ std::integer_sequence<T, Head...>,
+ std::integer_sequence<T, MiddleValue, Tail...>>
+ : RemoveValueImpl<T,
+ ValueToRemove,
+ std::integer_sequence<T, Head..., MiddleValue>,
+ std::integer_sequence<T, Tail...>> {};
+
+// Start recursion by splitting the integer sequence into two separate ones
+template <typename T, T ValueToRemove, T... Tail>
+struct RemoveValueImpl<T, ValueToRemove, std::integer_sequence<T, Tail...>>
+ : RemoveValueImpl<T,
+ ValueToRemove,
+ std::integer_sequence<T>,
+ std::integer_sequence<T, Tail...>> {};
+
+// RemoveValue takes an integer Sequence of arbitrary type and removes all
+// elements matching ValueToRemove.
+//
+// In contrast to RemoveValueImpl, this implementation deduces the value type
+// eliminating the need to specify it explicitly.
+//
+// As an example, RemoveValue<std::integer_sequence<int, 1, 2, 3>, 4>::type will
+// not transform the type of the original sequence. However,
+// RemoveValue<std::integer_sequence<int, 0, 0, 2>, 2>::type will generate a new
+// sequence of type std::integer_sequence<int, 0, 0> by removing the value 2.
+template <typename Sequence, typename Sequence::value_type ValueToRemove>
+struct RemoveValue
+ : RemoveValueImpl<typename Sequence::value_type, ValueToRemove, Sequence> {
+};
+
+// Convenience template alias for RemoveValue.
+template <typename Sequence, typename Sequence::value_type ValueToRemove>
+using RemoveValue_t = typename RemoveValue<Sequence, ValueToRemove>::type;
+
+// Returns true if all elements of Values are equal to HeadValue.
+//
+// Also returns true if Values is empty.
+template <typename T, T HeadValue, T... Values>
+inline constexpr bool AreAllEqual_v = ((HeadValue == Values) && ...);
+
+// Predicate determining whether an integer sequence is either empty or all
+// values are equal.
+template <typename Sequence>
+struct IsEmptyOrAreAllEqual;
+
+// Empty case.
+template <typename T>
+struct IsEmptyOrAreAllEqual<std::integer_sequence<T>> : std::true_type {};
+
+// General case for sequences containing at least one value.
+template <typename T, T HeadValue, T... Values>
+struct IsEmptyOrAreAllEqual<std::integer_sequence<T, HeadValue, Values...>>
+ : std::integral_constant<bool, AreAllEqual_v<T, HeadValue, Values...>> {};
+
+// Convenience variable template for IsEmptyOrAreAllEqual.
+template <class Sequence>
+inline constexpr bool IsEmptyOrAreAllEqual_v =
+ IsEmptyOrAreAllEqual<Sequence>::value;
+
+} // namespace ceres::internal
#endif // CERES_PUBLIC_INTERNAL_INTEGER_SEQUENCE_ALGORITHM_H_
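A few compile-time checks illustrating the new sequence utilities; the results
follow directly from the definitions above.

#include <type_traits>
#include <utility>

#include "ceres/internal/integer_sequence_algorithm.h"

// RemoveValue_t strips every element equal to the given value while preserving
// order; IsEmptyOrAreAllEqual_v accepts empty sequences and sequences whose
// elements are all identical.
static_assert(
    std::is_same_v<
        ceres::internal::RemoveValue_t<std::integer_sequence<int, 0, 2, 0, 3>,
                                       0>,
        std::integer_sequence<int, 2, 3>>);
static_assert(ceres::internal::IsEmptyOrAreAllEqual_v<
              std::integer_sequence<int, 4, 4, 4>>);
static_assert(!ceres::internal::IsEmptyOrAreAllEqual_v<
              std::integer_sequence<int, 4, 5>>);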
diff --git a/include/ceres/internal/jet_traits.h b/include/ceres/internal/jet_traits.h
new file mode 100644
index 0000000..f504a61
--- /dev/null
+++ b/include/ceres/internal/jet_traits.h
@@ -0,0 +1,195 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sergiu.deitsch@gmail.com (Sergiu Deitsch)
+//
+
+#ifndef CERES_PUBLIC_INTERNAL_JET_TRAITS_H_
+#define CERES_PUBLIC_INTERNAL_JET_TRAITS_H_
+
+#include <tuple>
+#include <type_traits>
+#include <utility>
+
+#include "ceres/internal/integer_sequence_algorithm.h"
+#include "ceres/jet_fwd.h"
+
+namespace ceres {
+namespace internal {
+
+// Predicate that determines whether any of the Types is a Jet.
+template <typename... Types>
+struct AreAnyJet : std::false_type {};
+
+template <typename T, typename... Types>
+struct AreAnyJet<T, Types...> : AreAnyJet<Types...> {};
+
+template <typename T, int N, typename... Types>
+struct AreAnyJet<Jet<T, N>, Types...> : std::true_type {};
+
+// Convenience variable template for AreAnyJet.
+template <typename... Types>
+inline constexpr bool AreAnyJet_v = AreAnyJet<Types...>::value;
+
+// Extracts the underlying floating-point from a type T.
+template <typename T, typename E = void>
+struct UnderlyingScalar {
+ using type = T;
+};
+
+template <typename T, int N>
+struct UnderlyingScalar<Jet<T, N>> : UnderlyingScalar<T> {};
+
+// Convenience template alias for UnderlyingScalar type trait.
+template <typename T>
+using UnderlyingScalar_t = typename UnderlyingScalar<T>::type;
+
+// Predicate determining whether all Types in the pack are the same.
+//
+// Specifically, the predicate applies std::is_same recursively to pairs of
+// Types in the pack.
+template <typename T1, typename... Types>
+inline constexpr bool AreAllSame_v = (std::is_same<T1, Types>::value && ...);
+
+// Determines the rank of a type. This makes it possible to ensure that types
+// passed as arguments are compatible with each other. The rank of a Jet is
+// determined by the dimension of its dual part. The rank of a scalar is always
+// 0. Non-specialized types default to a rank of -1.
+template <typename T, typename E = void>
+struct Rank : std::integral_constant<int, -1> {};
+
+// The rank of a scalar is 0.
+template <typename T>
+struct Rank<T, std::enable_if_t<std::is_scalar<T>::value>>
+ : std::integral_constant<int, 0> {};
+
+// The rank of a Jet is given by its dimensionality.
+template <typename T, int N>
+struct Rank<Jet<T, N>> : std::integral_constant<int, N> {};
+
+// Convenience variable template for Rank.
+template <typename T>
+inline constexpr int Rank_v = Rank<T>::value;
+
+// Constructs an integer sequence of ranks for each of the Types in the pack.
+template <typename... Types>
+using Ranks_t = std::integer_sequence<int, Rank_v<Types>...>;
+
+// Returns the scalar part of a type. This overload acts as an identity.
+template <typename T>
+constexpr decltype(auto) AsScalar(T&& value) noexcept {
+ return std::forward<T>(value);
+}
+
+// Recursively unwraps the scalar part of a Jet until a non-Jet scalar type is
+// encountered.
+template <typename T, int N>
+constexpr decltype(auto) AsScalar(const Jet<T, N>& value) noexcept(
+ noexcept(AsScalar(value.a))) {
+ return AsScalar(value.a);
+}
+
+} // namespace internal
+
+// Type trait ensuring at least one of the types is a Jet,
+// the underlying scalar types are the same and Jet dimensions match.
+//
+// The type trait can be further specialized if necessary.
+//
+// This trait is a candidate for a concept definition once C++20 features can
+// be used.
+template <typename... Types>
+// clang-format off
+struct CompatibleJetOperands : std::integral_constant
+<
+ bool,
+ // At least one of the types is a Jet
+ internal::AreAnyJet_v<Types...> &&
+ // The underlying floating-point types are exactly the same
+ internal::AreAllSame_v<internal::UnderlyingScalar_t<Types>...> &&
+ // Non-zero ranks of types are equal
+ internal::IsEmptyOrAreAllEqual_v<internal::RemoveValue_t<internal::Ranks_t<Types...>, 0>>
+>
+// clang-format on
+{};
+
+// Single Jet operand is always compatible.
+template <typename T, int N>
+struct CompatibleJetOperands<Jet<T, N>> : std::true_type {};
+
+// Single non-Jet operand is always incompatible.
+template <typename T>
+struct CompatibleJetOperands<T> : std::false_type {};
+
+// Empty operands are always incompatible.
+template <>
+struct CompatibleJetOperands<> : std::false_type {};
+
+// Convenience variable template ensuring at least one of the types is a Jet,
+// the underlying scalar types are the same and Jet dimensions match.
+//
+// This trait is a candidate for a concept definition once C++20 features can
+// be used.
+template <typename... Types>
+inline constexpr bool CompatibleJetOperands_v =
+ CompatibleJetOperands<Types...>::value;
+
+// Type trait ensuring at least one of the types is a Jet,
+// the underlying scalar types are compatible among each other and Jet
+// dimensions match.
+//
+// The type trait can be further specialized if necessary.
+//
+// This trait is a candidate for a concept definition once C++20 features can
+// be used.
+template <typename... Types>
+// clang-format off
+struct PromotableJetOperands : std::integral_constant
+<
+ bool,
+ // Types can be compatible among each other
+ internal::AreAnyJet_v<Types...> &&
+ // Non-zero ranks of types are equal
+ internal::IsEmptyOrAreAllEqual_v<internal::RemoveValue_t<internal::Ranks_t<Types...>, 0>>
+>
+// clang-format on
+{};
+
+// Convenience variable template ensuring at least one of the types is a Jet,
+// the underlying scalar types are compatible among each other and Jet
+// dimensions match.
+//
+// This trait is a candidate for a concept definition once C++20 features can
+// be used.
+template <typename... Types>
+inline constexpr bool PromotableJetOperands_v =
+ PromotableJetOperands<Types...>::value;
+
+} // namespace ceres
+
+#endif // CERES_PUBLIC_INTERNAL_JET_TRAITS_H_
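Compile-time examples of the operand-compatibility traits above. The outcomes
follow from the rank and underlying-scalar rules in this hunk; the full Jet
definition is pulled in via ceres/jet.h.

#include "ceres/internal/jet_traits.h"
#include "ceres/jet.h"

// Mixing a Jet with its underlying scalar is allowed; mismatched Jet sizes, a
// lone scalar, or differing scalar types are rejected. Promotable only
// requires the non-zero ranks to agree.
static_assert(ceres::CompatibleJetOperands_v<ceres::Jet<double, 3>, double>);
static_assert(!ceres::CompatibleJetOperands_v<ceres::Jet<double, 3>,
                                              ceres::Jet<double, 4>>);
static_assert(!ceres::CompatibleJetOperands_v<double>);
static_assert(!ceres::CompatibleJetOperands_v<ceres::Jet<double, 3>, float>);
static_assert(ceres::PromotableJetOperands_v<ceres::Jet<double, 3>, float>);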
diff --git a/include/ceres/internal/line_parameterization.h b/include/ceres/internal/line_parameterization.h
index eda3901..f50603d 100644
--- a/include/ceres/internal/line_parameterization.h
+++ b/include/ceres/internal/line_parameterization.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2020 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
diff --git a/include/ceres/internal/memory.h b/include/ceres/internal/memory.h
index 45c5b67..e54cf2b 100644
--- a/include/ceres/internal/memory.h
+++ b/include/ceres/internal/memory.h
@@ -40,8 +40,7 @@
} while (false)
#endif // CERES_HAVE_EXCEPTIONS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template <typename Allocator, typename Iterator, typename... Args>
void ConstructRange(Allocator& alloc,
@@ -84,7 +83,6 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_PUBLIC_INTERNAL_MEMORY_H_
diff --git a/include/ceres/internal/numeric_diff.h b/include/ceres/internal/numeric_diff.h
index ff7a2c3..ba28bec 100644
--- a/include/ceres/internal/numeric_diff.h
+++ b/include/ceres/internal/numeric_diff.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -47,8 +47,7 @@
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// This is split from the main class because C++ doesn't allow partial template
// specializations for member functions. The alternative is to repeat the main
@@ -86,18 +85,18 @@
(kParameterBlockSize != ceres::DYNAMIC ? kParameterBlockSize
: parameter_block_size);
- typedef Matrix<double, kNumResiduals, 1> ResidualVector;
- typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
+ using ResidualVector = Matrix<double, kNumResiduals, 1>;
+ using ParameterVector = Matrix<double, kParameterBlockSize, 1>;
// The convoluted reasoning for choosing the Row/Column major
// ordering of the matrix is an artifact of the restrictions in
// Eigen that prevent it from creating RowMajor matrices with a
// single column. In these cases, we ask for a ColMajor matrix.
- typedef Matrix<double,
- kNumResiduals,
- kParameterBlockSize,
- (kParameterBlockSize == 1) ? ColMajor : RowMajor>
- JacobianMatrix;
+ using JacobianMatrix =
+ Matrix<double,
+ kNumResiduals,
+ kParameterBlockSize,
+ (kParameterBlockSize == 1) ? ColMajor : RowMajor>;
Map<JacobianMatrix> parameter_jacobian(
jacobian, num_residuals_internal, parameter_block_size_internal);
@@ -121,7 +120,7 @@
// thus ridders_relative_initial_step_size is used.
if (kMethod == RIDDERS) {
min_step_size =
- std::max(min_step_size, options.ridders_relative_initial_step_size);
+ (std::max)(min_step_size, options.ridders_relative_initial_step_size);
}
// For each parameter in the parameter block, use finite differences to
@@ -132,7 +131,7 @@
num_residuals_internal);
for (int j = 0; j < parameter_block_size_internal; ++j) {
- const double delta = std::max(min_step_size, step_size(j));
+ const double delta = (std::max)(min_step_size, step_size(j));
if (kMethod == RIDDERS) {
if (!EvaluateRiddersJacobianColumn(functor,
@@ -184,8 +183,8 @@
using Eigen::Map;
using Eigen::Matrix;
- typedef Matrix<double, kNumResiduals, 1> ResidualVector;
- typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
+ using ResidualVector = Matrix<double, kNumResiduals, 1>;
+ using ParameterVector = Matrix<double, kParameterBlockSize, 1>;
Map<const ParameterVector> x(x_ptr, parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr, parameter_block_size);
@@ -260,10 +259,10 @@
using Eigen::Map;
using Eigen::Matrix;
- typedef Matrix<double, kNumResiduals, 1> ResidualVector;
- typedef Matrix<double, kNumResiduals, Eigen::Dynamic>
- ResidualCandidateMatrix;
- typedef Matrix<double, kParameterBlockSize, 1> ParameterVector;
+ using ResidualVector = Matrix<double, kNumResiduals, 1>;
+ using ResidualCandidateMatrix =
+ Matrix<double, kNumResiduals, Eigen::Dynamic>;
+ using ParameterVector = Matrix<double, kParameterBlockSize, 1>;
Map<const ParameterVector> x(x_ptr, parameter_block_size);
Map<ParameterVector> x_plus_delta(x_plus_delta_ptr, parameter_block_size);
@@ -296,7 +295,7 @@
// norm_error is supposed to decrease as the finite difference tableau
// generation progresses, serving both as an estimate for differentiation
// error and as a measure of differentiation numerical stability.
- double norm_error = std::numeric_limits<double>::max();
+ double norm_error = (std::numeric_limits<double>::max)();
// Loop over decreasing step sizes until:
// 1. Error is smaller than a given value (ridders_epsilon),
@@ -342,7 +341,7 @@
options.ridders_step_shrink_factor;
// Compute the difference between the previous value and the current.
- double candidate_error = std::max(
+ double candidate_error = (std::max)(
(current_candidates->col(k) - current_candidates->col(k - 1))
.norm(),
(current_candidates->col(k) - previous_candidates->col(k - 1))
@@ -502,7 +501,6 @@
}
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_PUBLIC_INTERNAL_NUMERIC_DIFF_H_
diff --git a/include/ceres/internal/parameter_dims.h b/include/ceres/internal/parameter_dims.h
index 2402106..b7cf935 100644
--- a/include/ceres/internal/parameter_dims.h
+++ b/include/ceres/internal/parameter_dims.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,22 +36,7 @@
#include "ceres/internal/integer_sequence_algorithm.h"
-namespace ceres {
-namespace internal {
-
-// Checks, whether the given parameter block sizes are valid. Valid means every
-// dimension is bigger than zero.
-constexpr bool IsValidParameterDimensionSequence(std::integer_sequence<int>) {
- return true;
-}
-
-template <int N, int... Ts>
-constexpr bool IsValidParameterDimensionSequence(
- std::integer_sequence<int, N, Ts...>) {
- return (N <= 0) ? false
- : IsValidParameterDimensionSequence(
- std::integer_sequence<int, Ts...>());
-}
+namespace ceres::internal {
// Helper class that represents the parameter dimensions. The parameter
// dimensions are either dynamic or the sizes are known at compile time. It is
@@ -70,8 +55,7 @@
// The parameter dimensions are only valid if all parameter block dimensions
// are greater than zero.
- static constexpr bool kIsValid =
- IsValidParameterDimensionSequence(Parameters());
+ static constexpr bool kIsValid = ((Ns > 0) && ...);
static_assert(kIsValid,
"Invalid parameter block dimension detected. Each parameter "
"block dimension must be bigger than zero.");
@@ -81,8 +65,7 @@
static_assert(kIsDynamic || kNumParameterBlocks > 0,
"At least one parameter block must be specified.");
- static constexpr int kNumParameters =
- Sum<std::integer_sequence<int, Ns...>>::Value;
+ static constexpr int kNumParameters = (Ns + ... + 0);
static constexpr int GetDim(int dim) { return params_[dim]; }
@@ -118,7 +101,6 @@
using StaticParameterDims = ParameterDims<false, Ns...>;
using DynamicParameterDims = ParameterDims<true>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_PUBLIC_INTERNAL_PARAMETER_DIMS_H_
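A short compile-time illustration of the fold-expression based ParameterDims
above.

#include "ceres/internal/parameter_dims.h"

// Two parameter blocks of sizes 3 and 4: the fold expressions yield the
// validity flag and the total parameter count; GetDim(1) would return 4.
using Dims = ceres::internal::StaticParameterDims<3, 4>;
static_assert(Dims::kIsValid);
static_assert(Dims::kNumParameterBlocks == 2);
static_assert(Dims::kNumParameters == 7);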
diff --git a/include/ceres/internal/port.h b/include/ceres/internal/port.h
index 040a1ef..d78ed51 100644
--- a/include/ceres/internal/port.h
+++ b/include/ceres/internal/port.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,80 +31,81 @@
#ifndef CERES_PUBLIC_INTERNAL_PORT_H_
#define CERES_PUBLIC_INTERNAL_PORT_H_
-// This file needs to compile as c code.
-#include "ceres/internal/config.h"
+#include <cmath> // Necessary for __cpp_lib_math_special_functions feature test
-#if defined(CERES_USE_OPENMP)
-#if defined(CERES_USE_CXX_THREADS) || defined(CERES_NO_THREADS)
-#error CERES_USE_OPENMP is mutually exclusive to CERES_USE_CXX_THREADS and CERES_NO_THREADS
-#endif
-#elif defined(CERES_USE_CXX_THREADS)
-#if defined(CERES_USE_OPENMP) || defined(CERES_NO_THREADS)
-#error CERES_USE_CXX_THREADS is mutually exclusive to CERES_USE_OPENMP, CERES_USE_CXX_THREADS and CERES_NO_THREADS
-#endif
-#elif defined(CERES_NO_THREADS)
-#if defined(CERES_USE_OPENMP) || defined(CERES_USE_CXX_THREADS)
-#error CERES_NO_THREADS is mutually exclusive to CERES_USE_OPENMP and CERES_USE_CXX_THREADS
-#endif
-#else
-# error One of CERES_USE_OPENMP, CERES_USE_CXX_THREADS or CERES_NO_THREADS must be defined.
-#endif
-
-// CERES_NO_SPARSE should be automatically defined by config.h if Ceres was
-// compiled without any sparse back-end. Verify that it has not subsequently
-// been inconsistently redefined.
-#if defined(CERES_NO_SPARSE)
-#if !defined(CERES_NO_SUITESPARSE)
-#error CERES_NO_SPARSE requires CERES_NO_SUITESPARSE.
-#endif
-#if !defined(CERES_NO_CXSPARSE)
-#error CERES_NO_SPARSE requires CERES_NO_CXSPARSE
-#endif
-#if !defined(CERES_NO_ACCELERATE_SPARSE)
-#error CERES_NO_SPARSE requires CERES_NO_ACCELERATE_SPARSE
-#endif
-#if defined(CERES_USE_EIGEN_SPARSE)
-#error CERES_NO_SPARSE requires !CERES_USE_EIGEN_SPARSE
-#endif
-#endif
-
-// A macro to signal which functions and classes are exported when
-// building a shared library.
+// A macro to mark a function/variable/class as deprecated.
+// We use compiler-specific attributes rather than the C++ [[deprecated]]
+// attribute because they do not mix well with each other.
#if defined(_MSC_VER)
-#define CERES_API_SHARED_IMPORT __declspec(dllimport)
-#define CERES_API_SHARED_EXPORT __declspec(dllexport)
+#define CERES_DEPRECATED_WITH_MSG(message) __declspec(deprecated(message))
#elif defined(__GNUC__)
-#define CERES_API_SHARED_IMPORT __attribute__((visibility("default")))
-#define CERES_API_SHARED_EXPORT __attribute__((visibility("default")))
+#define CERES_DEPRECATED_WITH_MSG(message) __attribute__((deprecated(message)))
#else
-#define CERES_API_SHARED_IMPORT
-#define CERES_API_SHARED_EXPORT
+// In the worst case, fall back to the C++ [[deprecated]] attribute.
+#define CERES_DEPRECATED_WITH_MSG(message) [[deprecated(message)]]
#endif
-// CERES_BUILDING_SHARED_LIBRARY is only defined locally when Ceres itself is
-// compiled as a shared library, it is never exported to users. In order that
-// we do not have to configure config.h separately when building Ceres as either
-// a static or dynamic library, we define both CERES_USING_SHARED_LIBRARY and
-// CERES_BUILDING_SHARED_LIBRARY when building as a shared library.
-#if defined(CERES_USING_SHARED_LIBRARY)
-#if defined(CERES_BUILDING_SHARED_LIBRARY)
-// Compiling Ceres itself as a shared library.
-#define CERES_EXPORT CERES_API_SHARED_EXPORT
-#else
-// Using Ceres as a shared library.
-#define CERES_EXPORT CERES_API_SHARED_IMPORT
-#endif
-#else
-// Ceres was compiled as a static library, export everything.
-#define CERES_EXPORT
+#ifndef CERES_GET_FLAG
+#define CERES_GET_FLAG(X) X
#endif
-// Unit tests reach in and test internal functionality so we need a way to make
-// those symbols visible
-#ifdef CERES_EXPORT_INTERNAL_SYMBOLS
-#define CERES_EXPORT_INTERNAL CERES_EXPORT
-#else
-#define CERES_EXPORT_INTERNAL
+// Indicates whether C++20 is currently active
+#ifndef CERES_HAS_CPP20
+#if __cplusplus >= 202002L || (defined(_MSVC_LANG) && _MSVC_LANG >= 202002L)
+#define CERES_HAS_CPP20
+#endif // __cplusplus >= 202002L || (defined(_MSVC_LANG) && _MSVC_LANG >=
+ // 202002L)
+#endif // !defined(CERES_HAS_CPP20)
+
+// Prevents symbols from being substituted by the corresponding macro definition
+// under the same name. For instance, min and max are defined as macros on
+// Windows (unless NOMINMAX is defined) which causes compilation errors when
+// defining or referencing symbols under the same name.
+//
+// To be robust in all cases, particularly when NOMINMAX cannot be used, use
+// this macro to annotate min/max declarations/definitions. Examples:
+//
+// int max CERES_PREVENT_MACRO_SUBSTITUTION();
+// min CERES_PREVENT_MACRO_SUBSTITUTION(a, b);
+// max CERES_PREVENT_MACRO_SUBSTITUTION(a, b);
+//
+// NOTE: In case the symbols for which the substitution must be prevented are
+// used within another macro, the substitution must be inhibited using parens as
+//
+// (std::numeric_limits<double>::max)()
+//
+// since the helper macro will not work here. Do not use this technique in the
+// general case, because it will prevent argument-dependent lookup (ADL).
+//
+#define CERES_PREVENT_MACRO_SUBSTITUTION // Yes, it's empty
+
+// CERES_DISABLE_DEPRECATED_WARNING and CERES_RESTORE_DEPRECATED_WARNING make
+// it possible to temporarily disable deprecation warnings.
+#if defined(_MSC_VER)
+#define CERES_DISABLE_DEPRECATED_WARNING \
+ _Pragma("warning(push)") _Pragma("warning(disable : 4996)")
+#define CERES_RESTORE_DEPRECATED_WARNING _Pragma("warning(pop)")
+#else // defined(_MSC_VER)
+#define CERES_DISABLE_DEPRECATED_WARNING
+#define CERES_RESTORE_DEPRECATED_WARNING
+#endif // defined(_MSC_VER)
+
+#if defined(__cpp_lib_math_special_functions) && \
+ ((__cpp_lib_math_special_functions >= 201603L) || \
+ defined(__STDCPP_MATH_SPEC_FUNCS__) && \
+ (__STDCPP_MATH_SPEC_FUNCS__ >= 201003L))
+// If defined, indicates whether C++17 Bessel functions (of the first kind) are
+// available. Some standard library implementations, such as libc++ (Android
+// NDK, Apple, Clang) do not yet provide these functions. Implementations that
+// do not support C++17, but support ISO 29124:2010, provide the functions if
+// __STDCPP_MATH_SPEC_FUNCS__ is defined by the implementation to a value at
+// least 201003L and if the user defines __STDCPP_WANT_MATH_SPEC_FUNCS__ before
+// including any standard library headers. Standard library Bessel functions are
+// preferred over any other implementation.
+#define CERES_HAS_CPP17_BESSEL_FUNCTIONS
+#elif defined(_SVID_SOURCE) || defined(_BSD_SOURCE) || defined(_XOPEN_SOURCE)
+// If defined, indicates that j0, j1, and jn from <math.h> are available.
+#define CERES_HAS_POSIX_BESSEL_FUNCTIONS
#endif
#endif // CERES_PUBLIC_INTERNAL_PORT_H_
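A sketch of the macro-substitution guidance above in practice; ClampToFinite
and the example namespace are invented names used only for illustration.

#include <algorithm>
#include <limits>

#include "ceres/internal/port.h"

// Parenthesizing the calls stops a Windows-style min/max macro from expanding.
inline double ClampToFinite(double x) {
  constexpr double kMax = (std::numeric_limits<double>::max)();
  return (std::min)((std::max)(x, -kMax), kMax);
}

namespace example {
// The empty annotation macro protects declarations that use the names min/max.
inline double max CERES_PREVENT_MACRO_SUBSTITUTION(double a, double b) {
  return a < b ? b : a;
}
}  // namespace example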
diff --git a/include/ceres/internal/reenable_warnings.h b/include/ceres/internal/reenable_warnings.h
index 2c5db06..a183c25 100644
--- a/include/ceres/internal/reenable_warnings.h
+++ b/include/ceres/internal/reenable_warnings.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
diff --git a/include/ceres/internal/sphere_manifold_functions.h b/include/ceres/internal/sphere_manifold_functions.h
new file mode 100644
index 0000000..4793442
--- /dev/null
+++ b/include/ceres/internal/sphere_manifold_functions.h
@@ -0,0 +1,163 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: vitus@google.com (Mike Vitus)
+// jodebo_beck@gmx.de (Johannes Beck)
+
+#ifndef CERES_PUBLIC_INTERNAL_SPHERE_MANIFOLD_HELPERS_H_
+#define CERES_PUBLIC_INTERNAL_SPHERE_MANIFOLD_HELPERS_H_
+
+#include "ceres/constants.h"
+#include "ceres/internal/householder_vector.h"
+
+// This module contains functions to compute the SphereManifold plus and minus
+// operator and their Jacobians.
+//
+// As the parameters to these functions are shared between them, they are
+// described here. The following variable names are used:
+// Plus(x, delta) = x + delta = x_plus_delta,
+// Minus(y, x) = y - x = y_minus_x.
+//
+// The remaining ones are v and beta which describe the Householder
+// transformation of x, and norm_delta which is the norm of delta.
+//
+// The types of x, y, x_plus_delta and y_minus_x need to be equivalent to
+// Eigen::Matrix<double, AmbientSpaceDimension, 1> and the type of delta needs
+// to be equivalent to Eigen::Matrix<double, TangentSpaceDimension, 1>.
+//
+// The type of Jacobian plus needs to be equivalent to Eigen::Matrix<double,
+// AmbientSpaceDimension, TangentSpaceDimension, Eigen::RowMajor> and for
+// Jacobian minus Eigen::Matrix<double, TangentSpaceDimension,
+// AmbientSpaceDimension, Eigen::RowMajor>.
+//
+// For all vector / matrix inputs and outputs, template parameters are
+// used in order to allow also Eigen::Ref and Eigen block expressions to
+// be passed to the function.
+
+namespace ceres::internal {
+
+template <typename VT, typename XT, typename DeltaT, typename XPlusDeltaT>
+inline void ComputeSphereManifoldPlus(const VT& v,
+ double beta,
+ const XT& x,
+ const DeltaT& delta,
+ const double norm_delta,
+ XPlusDeltaT* x_plus_delta) {
+ constexpr int AmbientDim = VT::RowsAtCompileTime;
+
+ // Map the delta from the minimum representation to the over-parameterized
+ // homogeneous vector. See B.2 p.25 equation (106) - (107) for more details.
+ const double sin_delta_by_delta = std::sin(norm_delta) / norm_delta;
+
+ Eigen::Matrix<double, AmbientDim, 1> y(v.size());
+ y << sin_delta_by_delta * delta, std::cos(norm_delta);
+
+ // Apply the delta update to remain on the sphere.
+ *x_plus_delta = x.norm() * ApplyHouseholderVector(y, v, beta);
+}
+
+template <typename VT, typename JacobianT>
+inline void ComputeSphereManifoldPlusJacobian(const VT& x,
+ JacobianT* jacobian) {
+ constexpr int AmbientSpaceDim = VT::RowsAtCompileTime;
+ using AmbientVector = Eigen::Matrix<double, AmbientSpaceDim, 1>;
+ const int ambient_size = x.size();
+ const int tangent_size = x.size() - 1;
+
+ AmbientVector v(ambient_size);
+ double beta;
+
+ // NOTE: The explicit template arguments are needed here because
+ // ComputeHouseholderVector is templated and some versions of MSVC
+ // have trouble deducing the type of v automatically.
+ ComputeHouseholderVector<VT, double, AmbientSpaceDim>(x, &v, &beta);
+
+ // The Jacobian is equal to J = H.leftCols(size_ - 1) where H is the
+ // Householder matrix (H = I - beta * v * v').
+ for (int i = 0; i < tangent_size; ++i) {
+ (*jacobian).col(i) = -beta * v(i) * v;
+ (*jacobian)(i, i) += 1.0;
+ }
+ (*jacobian) *= x.norm();
+}
+
+template <typename VT, typename XT, typename YT, typename YMinusXT>
+inline void ComputeSphereManifoldMinus(
+ const VT& v, double beta, const XT& x, const YT& y, YMinusXT* y_minus_x) {
+ constexpr int AmbientSpaceDim = VT::RowsAtCompileTime;
+ constexpr int TangentSpaceDim =
+ AmbientSpaceDim == Eigen::Dynamic ? Eigen::Dynamic : AmbientSpaceDim - 1;
+ using AmbientVector = Eigen::Matrix<double, AmbientSpaceDim, 1>;
+
+ const int tangent_size = v.size() - 1;
+
+ const AmbientVector hy = ApplyHouseholderVector(y, v, beta) / x.norm();
+
+ // Calculate y - x. See B.2 p.25 equation (108).
+ const double y_last = hy[tangent_size];
+ const double hy_norm = hy.template head<TangentSpaceDim>(tangent_size).norm();
+ if (hy_norm == 0.0) {
+ y_minus_x->setZero();
+ y_minus_x->data()[tangent_size - 1] = y_last >= 0 ? 0.0 : constants::pi;
+ } else {
+ *y_minus_x = std::atan2(hy_norm, y_last) / hy_norm *
+ hy.template head<TangentSpaceDim>(tangent_size);
+ }
+}
+
+template <typename VT, typename JacobianT>
+inline void ComputeSphereManifoldMinusJacobian(const VT& x,
+ JacobianT* jacobian) {
+ constexpr int AmbientSpaceDim = VT::RowsAtCompileTime;
+ using AmbientVector = Eigen::Matrix<double, AmbientSpaceDim, 1>;
+ const int ambient_size = x.size();
+ const int tangent_size = x.size() - 1;
+
+ AmbientVector v(ambient_size);
+ double beta;
+
+ // NOTE: The explicit template arguments are needed here because
+ // ComputeHouseholderVector is templated and some versions of MSVC
+ // have trouble deducing the type of v automatically.
+ ComputeHouseholderVector<VT, double, AmbientSpaceDim>(x, &v, &beta);
+
+ // The Jacobian is equal to J = H.leftCols(size_ - 1) where H is the
+ // Householder matrix (H = I - beta * v * v').
+ for (int i = 0; i < tangent_size; ++i) {
+ // NOTE: The transpose is used for correctness (the product is expected to
+ // be a row vector), although in practice Eigen appears to produce the same
+ // result with or without it (possibly resolved automatically at compile time).
+ (*jacobian).row(i) = -beta * v(i) * v.transpose();
+ (*jacobian)(i, i) += 1.0;
+ }
+ (*jacobian) /= x.norm();
+}
+
+} // namespace ceres::internal
+
+#endif
diff --git a/include/ceres/internal/variadic_evaluate.h b/include/ceres/internal/variadic_evaluate.h
index 47ff6b1..61af6b2 100644
--- a/include/ceres/internal/variadic_evaluate.h
+++ b/include/ceres/internal/variadic_evaluate.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,16 +33,14 @@
#ifndef CERES_PUBLIC_INTERNAL_VARIADIC_EVALUATE_H_
#define CERES_PUBLIC_INTERNAL_VARIADIC_EVALUATE_H_
-#include <stddef.h>
-
+#include <cstddef>
#include <type_traits>
#include <utility>
#include "ceres/cost_function.h"
#include "ceres/internal/parameter_dims.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// For fixed size cost functors
template <typename Functor, typename T, int... Indices>
@@ -51,7 +49,7 @@
T* output,
std::false_type /*is_dynamic*/,
std::integer_sequence<int, Indices...>) {
- static_assert(sizeof...(Indices),
+ static_assert(sizeof...(Indices) > 0,
"Invalid number of parameter blocks. At least one parameter "
"block must be specified.");
return functor(input[Indices]..., output);
@@ -108,7 +106,29 @@
return VariadicEvaluateImpl<ParameterDims>(functor, input, output, &functor);
}
-} // namespace internal
-} // namespace ceres
+// When differentiating dynamically sized CostFunctions, VariadicEvaluate
+// expects a functor with the signature:
+//
+// bool operator()(double const* const* parameters, double* cost) const
+//
+// However, for NumericDiffFirstOrderFunction, the functor has the signature
+//
+// bool operator()(double const* parameters, double* cost) const
+//
+// This thin wrapper adapts the latter to the former.
+template <typename Functor>
+class FirstOrderFunctorAdapter {
+ public:
+ explicit FirstOrderFunctorAdapter(const Functor& functor)
+ : functor_(functor) {}
+ bool operator()(double const* const* parameters, double* cost) const {
+ return functor_(*parameters, cost);
+ }
+
+ private:
+ const Functor& functor_;
+};
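A brief, hypothetical usage sketch of the adapter (the Rosenbrock functor below is an illustration, not part of the patch):

    #include "ceres/internal/variadic_evaluate.h"

    // A functor with the NumericDiffFirstOrderFunction-style signature.
    struct Rosenbrock {
      bool operator()(const double* x, double* cost) const {
        const double a = 1.0 - x[0];
        const double b = x[1] - x[0] * x[0];
        cost[0] = a * a + 100.0 * b * b;
        return true;
      }
    };

    void AdapterSketch() {
      Rosenbrock functor;
      // Present the single parameter block through the pointer-to-pointers
      // convention expected by VariadicEvaluate for dynamic functors.
      ceres::internal::FirstOrderFunctorAdapter<Rosenbrock> adapter(functor);

      double x[2] = {-1.2, 1.0};
      double* parameters[1] = {x};
      double cost = 0.0;
      adapter(parameters, &cost);  // Forwards parameters[0] to the functor.
    }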
+
+} // namespace ceres::internal
#endif // CERES_PUBLIC_INTERNAL_VARIADIC_EVALUATE_H_
diff --git a/include/ceres/iteration_callback.h b/include/ceres/iteration_callback.h
index 4507fdf..955e2ad 100644
--- a/include/ceres/iteration_callback.h
+++ b/include/ceres/iteration_callback.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,6 +36,7 @@
#define CERES_PUBLIC_ITERATION_CALLBACK_H_
#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/types.h"
namespace ceres {
@@ -164,8 +165,6 @@
// explicit LoggingCallback(bool log_to_stdout)
// : log_to_stdout_(log_to_stdout) {}
//
-// ~LoggingCallback() {}
-//
// CallbackReturnType operator()(const IterationSummary& summary) {
// const char* kReportRowFormat =
// "% 4d: f:% 8e d:% 3.2e g:% 3.2e h:% 3.2e "
@@ -194,7 +193,7 @@
//
class CERES_EXPORT IterationCallback {
public:
- virtual ~IterationCallback() {}
+ virtual ~IterationCallback();
virtual CallbackReturnType operator()(const IterationSummary& summary) = 0;
};
diff --git a/include/ceres/jet.h b/include/ceres/jet.h
index da49f32..3b7f23f 100644
--- a/include/ceres/jet.h
+++ b/include/ceres/jet.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -158,20 +158,59 @@
#define CERES_PUBLIC_JET_H_
#include <cmath>
+#include <complex>
#include <iosfwd>
#include <iostream> // NOLINT
#include <limits>
+#include <numeric>
#include <string>
+#include <type_traits>
#include "Eigen/Core"
+#include "ceres/internal/jet_traits.h"
#include "ceres/internal/port.h"
+#include "ceres/jet_fwd.h"
+
+// Here we provide partial specializations of std::common_type for the Jet class
+// to allow determining a Jet type with a common underlying arithmetic type.
+// Such an arithmetic type can be either a scalar or another Jet. An example
+// of a common type, say, between a float and a Jet<double, N> is a Jet<double,
+// N> (i.e., std::common_type_t<float, ceres::Jet<double, N>> and
+// ceres::Jet<double, N> refer to the same type).
+//
+// The partial specializations are also used for determining compatible types by
+// means of SFINAE and thus allow such types to be expressed as operands of
+// logical comparison operators. Missing (partial) specialization of
+// std::common_type for a particular (custom) type will therefore disable the
+// use of comparison operators defined by Ceres.
+//
+// Since these partial specializations are used as SFINAE constraints, they
+// enable standard promotion rules between various scalar types and consequently
+// their use in comparison against a Jet without providing implicit
+// conversions from a scalar, such as an int, to a Jet (see the implementation
+// of logical comparison operators below).
+
+template <typename T, int N, typename U>
+struct std::common_type<T, ceres::Jet<U, N>> {
+ using type = ceres::Jet<common_type_t<T, U>, N>;
+};
+
+template <typename T, int N, typename U>
+struct std::common_type<ceres::Jet<T, N>, U> {
+ using type = ceres::Jet<common_type_t<T, U>, N>;
+};
+
+template <typename T, int N, typename U>
+struct std::common_type<ceres::Jet<T, N>, ceres::Jet<U, N>> {
+ using type = ceres::Jet<common_type_t<T, U>, N>;
+};
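As a quick editorial illustration of the promotion rules encoded by these specializations (not part of the patch; uses C++17 std::is_same_v and the single-argument static_assert):

    #include <type_traits>
    #include "ceres/jet.h"

    // Mixed scalar/Jet operands collapse to a Jet over the promoted scalar type.
    static_assert(std::is_same_v<std::common_type_t<float, ceres::Jet<double, 3>>,
                                 ceres::Jet<double, 3>>);
    static_assert(
        std::is_same_v<
            std::common_type_t<ceres::Jet<float, 3>, ceres::Jet<double, 3>>,
            ceres::Jet<double, 3>>);

    // This is also what lets the comparison operators defined further below in
    // this header accept a promotable literal directly, e.g.
    //   ceres::Jet<double, 3> f(0.5, 0);
    //   bool below_one = f < 1;  // compares f.a against the promoted literal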
namespace ceres {
template <typename T, int N>
struct Jet {
enum { DIMENSION = N };
- typedef T Scalar;
+ using Scalar = T;
// Default-construct "a" because otherwise this can lead to false errors about
// uninitialized uses when other classes relying on default constructed T
@@ -352,19 +391,21 @@
return Jet<T, N>(f.a * s_inverse, f.v * s_inverse);
}
-// Binary comparison operators for both scalars and jets.
-#define CERES_DEFINE_JET_COMPARISON_OPERATOR(op) \
- template <typename T, int N> \
- inline bool operator op(const Jet<T, N>& f, const Jet<T, N>& g) { \
- return f.a op g.a; \
- } \
- template <typename T, int N> \
- inline bool operator op(const T& s, const Jet<T, N>& g) { \
- return s op g.a; \
- } \
- template <typename T, int N> \
- inline bool operator op(const Jet<T, N>& f, const T& s) { \
- return f.a op s; \
+// Binary comparison operators for both scalars and jets. At least one of the
+// operands must be a Jet. Promotable scalars (e.g., int, float, double etc.)
+// can appear on either side of the operator. std::common_type_t is used as an
+// SFINAE constraint to selectively enable compatible operand types. This allows
+// comparison, for instance, against int literals without implicit conversion.
+// In case the Jet arithmetic type is itself a Jet, a recursive expansion of the
+// Jet value is performed.
+#define CERES_DEFINE_JET_COMPARISON_OPERATOR(op) \
+ template <typename Lhs, \
+ typename Rhs, \
+ std::enable_if_t<PromotableJetOperands_v<Lhs, Rhs>>* = nullptr> \
+ constexpr bool operator op(const Lhs& f, const Rhs& g) noexcept( \
+ noexcept(internal::AsScalar(f) op internal::AsScalar(g))) { \
+ using internal::AsScalar; \
+ return AsScalar(f) op AsScalar(g); \
}
CERES_DEFINE_JET_COMPARISON_OPERATOR(<) // NOLINT
CERES_DEFINE_JET_COMPARISON_OPERATOR(<=) // NOLINT
@@ -386,43 +427,141 @@
using std::atan2;
using std::cbrt;
using std::ceil;
+using std::copysign;
using std::cos;
using std::cosh;
+#ifdef CERES_HAS_CPP17_BESSEL_FUNCTIONS
+using std::cyl_bessel_j;
+#endif // CERES_HAS_CPP17_BESSEL_FUNCTIONS
using std::erf;
using std::erfc;
using std::exp;
using std::exp2;
+using std::expm1;
+using std::fdim;
using std::floor;
+using std::fma;
using std::fmax;
using std::fmin;
+using std::fpclassify;
using std::hypot;
using std::isfinite;
using std::isinf;
using std::isnan;
using std::isnormal;
using std::log;
+using std::log10;
+using std::log1p;
using std::log2;
+using std::norm;
using std::pow;
+using std::signbit;
using std::sin;
using std::sinh;
using std::sqrt;
using std::tan;
using std::tanh;
+// MSVC (up to 1930) defines quiet comparison functions as template functions
+// which causes compilation errors due to ambiguity in the template parameter
+// type resolution for using declarations in the ceres namespace. Work around
+// the issue by defining specific overloads and bypassing the MSVC standard
+// library definitions.
+#if defined(_MSC_VER)
+inline bool isgreater(double lhs,
+ double rhs) noexcept(noexcept(std::isgreater(lhs, rhs))) {
+ return std::isgreater(lhs, rhs);
+}
+inline bool isless(double lhs,
+ double rhs) noexcept(noexcept(std::isless(lhs, rhs))) {
+ return std::isless(lhs, rhs);
+}
+inline bool islessequal(double lhs,
+ double rhs) noexcept(noexcept(std::islessequal(lhs,
+ rhs))) {
+ return std::islessequal(lhs, rhs);
+}
+inline bool isgreaterequal(double lhs, double rhs) noexcept(
+ noexcept(std::isgreaterequal(lhs, rhs))) {
+ return std::isgreaterequal(lhs, rhs);
+}
+inline bool islessgreater(double lhs, double rhs) noexcept(
+ noexcept(std::islessgreater(lhs, rhs))) {
+ return std::islessgreater(lhs, rhs);
+}
+inline bool isunordered(double lhs,
+ double rhs) noexcept(noexcept(std::isunordered(lhs,
+ rhs))) {
+ return std::isunordered(lhs, rhs);
+}
+#else
+using std::isgreater;
+using std::isgreaterequal;
+using std::isless;
+using std::islessequal;
+using std::islessgreater;
+using std::isunordered;
+#endif
+
+#ifdef CERES_HAS_CPP20
+using std::lerp;
+using std::midpoint;
+#endif // defined(CERES_HAS_CPP20)
+
// Legacy names from pre-C++11 days.
// clang-format off
+CERES_DEPRECATED_WITH_MSG("ceres::IsFinite will be removed in a future Ceres Solver release. Please use ceres::isfinite.")
inline bool IsFinite(double x) { return std::isfinite(x); }
+CERES_DEPRECATED_WITH_MSG("ceres::IsInfinite will be removed in a future Ceres Solver release. Please use ceres::isinf.")
inline bool IsInfinite(double x) { return std::isinf(x); }
+CERES_DEPRECATED_WITH_MSG("ceres::IsNaN will be removed in a future Ceres Solver release. Please use ceres::isnan.")
inline bool IsNaN(double x) { return std::isnan(x); }
+CERES_DEPRECATED_WITH_MSG("ceres::IsNormal will be removed in a future Ceres Solver release. Please use ceres::isnormal.")
inline bool IsNormal(double x) { return std::isnormal(x); }
// clang-format on
// In general, f(a + h) ~= f(a) + f'(a) h, via the chain rule.
-// abs(x + h) ~= x + h or -(x + h)
+// abs(x + h) ~= abs(x) + sgn(x)h
template <typename T, int N>
inline Jet<T, N> abs(const Jet<T, N>& f) {
- return (f.a < T(0.0) ? -f : f);
+ return Jet<T, N>(abs(f.a), copysign(T(1), f.a) * f.v);
+}
+
+// copysign(a, b) composes a float with the magnitude of a and the sign of b.
+// Therefore, the function can be formally defined as
+//
+// copysign(a, b) = sgn(b)|a|
+//
+// where
+//
+// d/dx |x| = sgn(x)
+// d/dx sgn(x) = 2δ(x)
+//
+// sgn(x) being the signum function. Differentiating copysign(a, b) with respect
+// to a and b gives:
+//
+// d/da sgn(b)|a| = sgn(a) sgn(b)
+// d/db sgn(b)|a| = 2|a|δ(b)
+//
+// with the dual representation given by
+//
+// copysign(a + da, b + db) ~= sgn(b)|a| + (sgn(a)sgn(b) da + 2|a|δ(b) db)
+//
+// where δ(b) is the Dirac delta function.
+template <typename T, int N>
+inline Jet<T, N> copysign(const Jet<T, N>& f, const Jet<T, N> g) {
+ // The Dirac delta function δ(b) is undefined at b=0 (here it's
+ // infinite) and 0 everywhere else.
+ T d = fpclassify(g) == FP_ZERO ? std::numeric_limits<T>::infinity() : T(0);
+ T sa = copysign(T(1), f.a); // sgn(a)
+ T sb = copysign(T(1), g.a); // sgn(b)
+ // The second part of the infinitesimal is 2|a|δ(b) which is either infinity
+ // or 0 unless a or any of the values of the b infinitesimal are 0. In the
+ // latter case, the corresponding values become NaNs (multiplying 0 by
+ // infinity gives NaN). We drop the constant factor 2 since it does not change
+ // the result (its values will still be either 0, infinity or NaN).
+ return Jet<T, N>(copysign(f.a, g.a), sa * sb * f.v + abs(f.a) * d * g.v);
}
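A concrete instance of the formula above, as an editorial sketch (assumes "ceres/jet.h" is included): with f = -3 + df and g = 2 + dg, the value takes the magnitude of f and the sign of g, the df coefficient is sgn(-3) sgn(2) = -1, and the dg coefficient vanishes because delta(2) = 0.

    void CopysignJetExample() {
      ceres::Jet<double, 2> f(-3.0, 0);  // f.a = -3, f.v = [1, 0]
      ceres::Jet<double, 2> g(2.0, 1);   // g.a =  2, g.v = [0, 1]
      const ceres::Jet<double, 2> c = ceres::copysign(f, g);
      // c.a == 3 and c.v == [-1, 0].
    }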
// log(a + h) ~= log(a) + h / a
@@ -432,6 +571,21 @@
return Jet<T, N>(log(f.a), f.v * a_inverse);
}
+// log10(a + h) ~= log10(a) + h / (a log(10))
+template <typename T, int N>
+inline Jet<T, N> log10(const Jet<T, N>& f) {
+ // Most compilers will expand log(10) to a constant.
+ const T a_inverse = T(1.0) / (f.a * log(T(10.0)));
+ return Jet<T, N>(log10(f.a), f.v * a_inverse);
+}
+
+// log1p(a + h) ~= log1p(a) + h / (1 + a)
+template <typename T, int N>
+inline Jet<T, N> log1p(const Jet<T, N>& f) {
+ const T a_inverse = T(1.0) / (T(1.0) + f.a);
+ return Jet<T, N>(log1p(f.a), f.v * a_inverse);
+}
+
// exp(a + h) ~= exp(a) + exp(a) h
template <typename T, int N>
inline Jet<T, N> exp(const Jet<T, N>& f) {
@@ -439,6 +593,14 @@
return Jet<T, N>(tmp, tmp * f.v);
}
+// expm1(a + h) ~= expm1(a) + exp(a) h
+template <typename T, int N>
+inline Jet<T, N> expm1(const Jet<T, N>& f) {
+ const T tmp = expm1(f.a);
+ const T expa = tmp + T(1.0); // exp(a) = expm1(a) + 1
+ return Jet<T, N>(tmp, expa * f.v);
+}
+
// sqrt(a + h) ~= sqrt(a) + h / (2 sqrt(a))
template <typename T, int N>
inline Jet<T, N> sqrt(const Jet<T, N>& f) {
@@ -565,31 +727,152 @@
return Jet<T, N>(tmp, x.a / tmp * x.v + y.a / tmp * y.v);
}
+// Like sqrt(x^2 + y^2 + z^2),
+// but acts to prevent underflow/overflow for small/large x/y/z.
+// Note that the function is non-smooth at x=y=z=0,
+// so the derivative is undefined there.
template <typename T, int N>
-inline Jet<T, N> fmax(const Jet<T, N>& x, const Jet<T, N>& y) {
- return x < y ? y : x;
+inline Jet<T, N> hypot(const Jet<T, N>& x,
+ const Jet<T, N>& y,
+ const Jet<T, N>& z) {
+ // d/da sqrt(a) = 0.5 / sqrt(a)
+ // d/dx x^2 + y^2 + z^2 = 2x
+ // So by the chain rule:
+ // d/dx sqrt(x^2 + y^2 + z^2)
+ // = 0.5 / sqrt(x^2 + y^2 + z^2) * 2x
+ // = x / sqrt(x^2 + y^2 + z^2)
+ // d/dy sqrt(x^2 + y^2 + z^2) = y / sqrt(x^2 + y^2 + z^2)
+ // d/dz sqrt(x^2 + y^2 + z^2) = z / sqrt(x^2 + y^2 + z^2)
+ const T tmp = hypot(x.a, y.a, z.a);
+ return Jet<T, N>(tmp, x.a / tmp * x.v + y.a / tmp * y.v + z.a / tmp * z.v);
}
+// Like x * y + z but rounded only once.
template <typename T, int N>
-inline Jet<T, N> fmin(const Jet<T, N>& x, const Jet<T, N>& y) {
- return y < x ? y : x;
+inline Jet<T, N> fma(const Jet<T, N>& x,
+ const Jet<T, N>& y,
+ const Jet<T, N>& z) {
+ // d/dx fma(x, y, z) = y
+ // d/dy fma(x, y, z) = x
+ // d/dz fma(x, y, z) = 1
+ return Jet<T, N>(fma(x.a, y.a, z.a), y.a * x.v + x.a * y.v + z.v);
}
-// erf is defined as an integral that cannot be expressed analyticaly
+// Return value of fmax() and fmin() on equality
+// ---------------------------------------------
+//
+// There is arguably no good answer to what fmax() & fmin() should return on
+// equality, which for Jets by definition ONLY compares the scalar parts. We
+// choose what we think is the least worst option (averaging as Jets) which
+// minimises undesirable/unexpected behaviour as used, and also supports client
+// code written against Ceres versions prior to type promotion being supported
+// in Jet comparisons (< v2.1).
+//
+// The std::max() convention of returning the first argument on equality is
+// problematic, as it means that the derivative component may or may not be
+// preserved (when comparing a Jet with a scalar) depending upon the ordering.
+//
+// Always returning the Jet in {Jet, scalar} cases on equality is problematic
+// as it is inconsistent with the behaviour that would be obtained if the scalar
+// was first cast to Jet and the {Jet, Jet} case was used. Prior to type
+// promotion (Ceres v2.1) client code would typically cast constants to Jets
+// e.g: fmax(x, T(2.0)) which means the {Jet, Jet} case predominates, and we
+// still want the result to be order independent.
+//
+// Our intuition is that preserving a non-zero derivative is best, even if
+// its value does not match either of the inputs. Averaging achieves this
+// whilst ensuring argument ordering independence. This is also the approach
+// used by the Jax library, and TensorFlow's reduce_max().
+
+// Returns the larger of the two arguments, with Jet averaging on equality.
+// NaNs are treated as missing data.
+//
+// NOTE: This function is NOT subject to any of the error conditions specified
+// in `math_errhandling`.
+template <typename Lhs,
+ typename Rhs,
+ std::enable_if_t<CompatibleJetOperands_v<Lhs, Rhs>>* = nullptr>
+inline decltype(auto) fmax(const Lhs& x, const Rhs& y) {
+ using J = std::common_type_t<Lhs, Rhs>;
+ // As x == y may set FP exceptions in the presence of NaNs when used with
+ // non-default compiler options, we avoid its use here.
+ if (isnan(x) || isnan(y) || islessgreater(x, y)) {
+ return isnan(x) || isless(x, y) ? J{y} : J{x};
+ }
+ // x == y (scalar parts) return the average of their Jet representations.
+#if defined(CERES_HAS_CPP20)
+ return midpoint(J{x}, J{y});
+#else
+ return (J{x} + J{y}) * typename J::Scalar(0.5);
+#endif // defined(CERES_HAS_CPP20)
+}
+
+// Returns the smaller of the two arguments, with Jet averaging on equality.
+// NaNs are treated as missing data.
+//
+// NOTE: This function is NOT subject to any of the error conditions specified
+// in `math_errhandling`.
+template <typename Lhs,
+ typename Rhs,
+ std::enable_if_t<CompatibleJetOperands_v<Lhs, Rhs>>* = nullptr>
+inline decltype(auto) fmin(const Lhs& x, const Rhs& y) {
+ using J = std::common_type_t<Lhs, Rhs>;
+ // As x == y may set FP exceptions in the presence of NaNs when used with
+ // non-default compiler options, we avoid its use here.
+ if (isnan(x) || isnan(y) || islessgreater(x, y)) {
+ return isnan(x) || isgreater(x, y) ? J{y} : J{x};
+ }
+ // x == y (scalar parts) return the average of their Jet representations.
+#if defined(CERES_HAS_CPP20)
+ return midpoint(J{x}, J{y});
+#else
+ return (J{x} + J{y}) * typename J::Scalar(0.5);
+#endif // defined(CERES_HAS_CPP20)
+}
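To make the averaging convention concrete, an editorial sketch (assuming the Jet(value, k) constructor that seeds the k-th derivative):

    void FmaxAveragingExample() {
      ceres::Jet<double, 1> x(2.0, 0);  // x.a = 2.0, x.v = [1.0]
      const double y = 2.0;

      // The scalar parts compare equal, so the result is the average of the
      // two operands viewed as Jets: the value stays 2.0 and the derivative
      // becomes (1.0 + 0.0) / 2 = 0.5, independent of argument order.
      const auto m = ceres::fmax(x, y);  // m.a == 2.0, m.v == [0.5]
      const auto n = ceres::fmax(y, x);  // identical result
    }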
+
+// Returns the positive difference (f - g) of two arguments and zero if f <= g.
+// If at least one argument is NaN, a NaN is returned.
+//
+// NOTE: At least one of the argument types must be a Jet, the other one can be a
+// scalar. In case both arguments are Jets, their dimensionality must match.
+template <typename Lhs,
+ typename Rhs,
+ std::enable_if_t<CompatibleJetOperands_v<Lhs, Rhs>>* = nullptr>
+inline decltype(auto) fdim(const Lhs& f, const Rhs& g) {
+ using J = std::common_type_t<Lhs, Rhs>;
+ if (isnan(f) || isnan(g)) {
+ return std::numeric_limits<J>::quiet_NaN();
+ }
+ return isgreater(f, g) ? J{f - g} : J{};
+}
+
+// erf is defined as an integral that cannot be expressed analytically
// however, the derivative is trivial to compute
// erf(x + h) = erf(x) + h * 2*exp(-x^2)/sqrt(pi)
template <typename T, int N>
inline Jet<T, N> erf(const Jet<T, N>& x) {
- return Jet<T, N>(erf(x.a), x.v * M_2_SQRTPI * exp(-x.a * x.a));
+ // We evaluate the constant as follows:
+ // 2 / sqrt(pi) = 1 / sqrt(atan(1.))
+ // On POSIX systems it is defined as M_2_SQRTPI, but this is not
+ // portable and the type may not be T. The above expression
+ // evaluates to full precision with IEEE arithmetic and, since it's
+ // constant, the compiler can generate exactly the same code. gcc
+ // does so even at -O0.
+ return Jet<T, N>(erf(x.a), x.v * exp(-x.a * x.a) * (T(1) / sqrt(atan(T(1)))));
}
// erfc(x) = 1-erf(x)
// erfc(x + h) = erfc(x) + h * (-2*exp(-x^2)/sqrt(pi))
template <typename T, int N>
inline Jet<T, N> erfc(const Jet<T, N>& x) {
- return Jet<T, N>(erfc(x.a), -x.v * M_2_SQRTPI * exp(-x.a * x.a));
+ // See in erf() above for the evaluation of the constant in the derivative.
+ return Jet<T, N>(erfc(x.a),
+ -x.v * exp(-x.a * x.a) * (T(1) / sqrt(atan(T(1)))));
}
+#if defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS) || \
+ defined(CERES_HAS_POSIX_BESSEL_FUNCTIONS)
+
// Bessel functions of the first kind with integer order equal to 0, 1, n.
//
// Microsoft has deprecated the j[0,1,n]() POSIX Bessel functions in favour of
@@ -597,25 +880,33 @@
// function errors in client code (the specific warning is suppressed when
// Ceres itself is built).
inline double BesselJ0(double x) {
-#if defined(CERES_MSVC_USE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS)
- return _j0(x);
+#ifdef CERES_HAS_CPP17_BESSEL_FUNCTIONS
+ return cyl_bessel_j(0, x);
#else
+ CERES_DISABLE_DEPRECATED_WARNING
return j0(x);
-#endif
+ CERES_RESTORE_DEPRECATED_WARNING
+#endif // defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS)
}
+
inline double BesselJ1(double x) {
-#if defined(CERES_MSVC_USE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS)
- return _j1(x);
+#ifdef CERES_HAS_CPP17_BESSEL_FUNCTIONS
+ return cyl_bessel_j(1, x);
#else
+ CERES_DISABLE_DEPRECATED_WARNING
return j1(x);
-#endif
+ CERES_RESTORE_DEPRECATED_WARNING
+#endif // defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS)
}
+
inline double BesselJn(int n, double x) {
-#if defined(CERES_MSVC_USE_UNDERSCORE_PREFIXED_BESSEL_FUNCTIONS)
- return _jn(n, x);
+#ifdef CERES_HAS_CPP17_BESSEL_FUNCTIONS
+ return cyl_bessel_j(static_cast<double>(n), x);
#else
+ CERES_DISABLE_DEPRECATED_WARNING
return jn(n, x);
-#endif
+ CERES_RESTORE_DEPRECATED_WARNING
+#endif // defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS)
}
// For the formulae of the derivatives of the Bessel functions see the book:
@@ -628,100 +919,264 @@
// j0(a + h) ~= j0(a) - j1(a) h
template <typename T, int N>
inline Jet<T, N> BesselJ0(const Jet<T, N>& f) {
+#ifdef CERES_HAS_CPP17_BESSEL_FUNCTIONS
+ return cyl_bessel_j(0, f);
+#else
return Jet<T, N>(BesselJ0(f.a), -BesselJ1(f.a) * f.v);
+#endif // defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS)
}
// See formula http://dlmf.nist.gov/10.6#E1
// j1(a + h) ~= j1(a) + 0.5 ( j0(a) - j2(a) ) h
template <typename T, int N>
inline Jet<T, N> BesselJ1(const Jet<T, N>& f) {
+#ifdef CERES_HAS_CPP17_BESSEL_FUNCTIONS
+ return cyl_bessel_j(1, f);
+#else
return Jet<T, N>(BesselJ1(f.a),
T(0.5) * (BesselJ0(f.a) - BesselJn(2, f.a)) * f.v);
+#endif // defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS)
}
// See formula http://dlmf.nist.gov/10.6#E1
// j_n(a + h) ~= j_n(a) + 0.5 ( j_{n-1}(a) - j_{n+1}(a) ) h
template <typename T, int N>
inline Jet<T, N> BesselJn(int n, const Jet<T, N>& f) {
+#ifdef CERES_HAS_CPP17_BESSEL_FUNCTIONS
+ return cyl_bessel_j(n, f);
+#else
return Jet<T, N>(
BesselJn(n, f.a),
T(0.5) * (BesselJn(n - 1, f.a) - BesselJn(n + 1, f.a)) * f.v);
+#endif // defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS)
}
-// Jet Classification. It is not clear what the appropriate semantics are for
-// these classifications. This picks that std::isfinite and std::isnormal are
-// "all" operations, i.e. all elements of the jet must be finite for the jet
-// itself to be finite (or normal). For IsNaN and IsInfinite, the answer is less
-// clear. This takes a "any" approach for IsNaN and IsInfinite such that if any
-// part of a jet is nan or inf, then the entire jet is nan or inf. This leads
-// to strange situations like a jet can be both IsInfinite and IsNaN, but in
-// practice the "any" semantics are the most useful for e.g. checking that
-// derivatives are sane.
+#endif // defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS) ||
+ // defined(CERES_HAS_POSIX_BESSEL_FUNCTIONS)
-// The jet is finite if all parts of the jet are finite.
+#ifdef CERES_HAS_CPP17_BESSEL_FUNCTIONS
+
+// See formula http://dlmf.nist.gov/10.6#E1
+// j_n(a + h) ~= j_n(a) + 0.5 ( j_{n-1}(a) - j_{n+1}(a) ) h
+template <typename T, int N>
+inline Jet<T, N> cyl_bessel_j(double v, const Jet<T, N>& f) {
+ // See formula http://dlmf.nist.gov/10.6#E3
+ // j0(a + h) ~= j0(a) - j1(a) h
+ if (fpclassify(v) == FP_ZERO) {
+ return Jet<T, N>(cyl_bessel_j(0, f.a), -cyl_bessel_j(1, f.a) * f.v);
+ }
+
+ return Jet<T, N>(
+ cyl_bessel_j(v, f.a),
+ T(0.5) * (cyl_bessel_j(v - 1, f.a) - cyl_bessel_j(v + 1, f.a)) * f.v);
+}
+
+#endif // CERES_HAS_CPP17_BESSEL_FUNCTIONS
+
+// Classification and comparison functionality referencing only the scalar part
+// of a Jet. To classify the derivatives (e.g., for sanity checks), the dual
+// part should be referenced explicitly. For instance, to check whether the
+// derivatives of a Jet 'f' are reasonable, one can use
+//
+// isfinite(f.v.array()).all()
+// !isnan(f.v.array()).any()
+//
+// etc., depending on the desired semantics.
+//
+// NOTE: Floating-point classification and comparison functions and operators
+// should be used with care as no derivatives can be propagated by such
+// functions directly but only by expressions resulting from corresponding
+// conditional statements. At the same time, conditional statements can possibly
+// introduce a discontinuity in the cost function making it impossible to
+// evaluate its derivative and thus the optimization problem intractable.
+
+// Determines whether the scalar part of the Jet is finite.
template <typename T, int N>
inline bool isfinite(const Jet<T, N>& f) {
- // Branchless implementation. This is more efficient for the false-case and
- // works with the codegen system.
- auto result = isfinite(f.a);
- for (int i = 0; i < N; ++i) {
- result = result & isfinite(f.v[i]);
- }
- return result;
+ return isfinite(f.a);
}
-// The jet is infinite if any part of the Jet is infinite.
+// Determines whether the scalar part of the Jet is infinite.
template <typename T, int N>
inline bool isinf(const Jet<T, N>& f) {
- auto result = isinf(f.a);
- for (int i = 0; i < N; ++i) {
- result = result | isinf(f.v[i]);
- }
- return result;
+ return isinf(f.a);
}
-// The jet is NaN if any part of the jet is NaN.
+// Determines whether the scalar part of the Jet is NaN.
template <typename T, int N>
inline bool isnan(const Jet<T, N>& f) {
- auto result = isnan(f.a);
- for (int i = 0; i < N; ++i) {
- result = result | isnan(f.v[i]);
- }
- return result;
+ return isnan(f.a);
}
-// The jet is normal if all parts of the jet are normal.
+// Determines whether the scalar part of the Jet is neither zero, subnormal,
+// infinite, nor NaN.
template <typename T, int N>
inline bool isnormal(const Jet<T, N>& f) {
- auto result = isnormal(f.a);
- for (int i = 0; i < N; ++i) {
- result = result & isnormal(f.v[i]);
- }
- return result;
+ return isnormal(f.a);
+}
+
+// Determines whether the scalar part of the Jet f is less than the scalar
+// part of g.
+//
+// NOTE: This function does NOT set any floating-point exceptions.
+template <typename Lhs,
+ typename Rhs,
+ std::enable_if_t<CompatibleJetOperands_v<Lhs, Rhs>>* = nullptr>
+inline bool isless(const Lhs& f, const Rhs& g) {
+ using internal::AsScalar;
+ return isless(AsScalar(f), AsScalar(g));
+}
+
+// Determines whether the scalar part of the Jet f is greater than the scalar
+// part of g.
+//
+// NOTE: This function does NOT set any floating-point exceptions.
+template <typename Lhs,
+ typename Rhs,
+ std::enable_if_t<CompatibleJetOperands_v<Lhs, Rhs>>* = nullptr>
+inline bool isgreater(const Lhs& f, const Rhs& g) {
+ using internal::AsScalar;
+ return isgreater(AsScalar(f), AsScalar(g));
+}
+
+// Determines whether the scalar part of the Jet f is less than or equal to the
+// scalar part of g.
+//
+// NOTE: This function does NOT set any floating-point exceptions.
+template <typename Lhs,
+ typename Rhs,
+ std::enable_if_t<CompatibleJetOperands_v<Lhs, Rhs>>* = nullptr>
+inline bool islessequal(const Lhs& f, const Rhs& g) {
+ using internal::AsScalar;
+ return islessequal(AsScalar(f), AsScalar(g));
+}
+
+// Determines whether the scalar part of the Jet f is less than or greater than
+// (f < g || f > g) the scalar part of g.
+//
+// NOTE: This function does NOT set any floating-point exceptions.
+template <typename Lhs,
+ typename Rhs,
+ std::enable_if_t<CompatibleJetOperands_v<Lhs, Rhs>>* = nullptr>
+inline bool islessgreater(const Lhs& f, const Rhs& g) {
+ using internal::AsScalar;
+ return islessgreater(AsScalar(f), AsScalar(g));
+}
+
+// Determines whether the scalar part of the Jet f is greater than or equal to
+// the scalar part of g.
+//
+// NOTE: This function does NOT set any floating-point exceptions.
+template <typename Lhs,
+ typename Rhs,
+ std::enable_if_t<CompatibleJetOperands_v<Lhs, Rhs>>* = nullptr>
+inline bool isgreaterequal(const Lhs& f, const Rhs& g) {
+ using internal::AsScalar;
+ return isgreaterequal(AsScalar(f), AsScalar(g));
+}
+
+// Determines if either of the scalar parts of the arguments are NaN and
+// thus cannot be ordered with respect to each other.
+template <typename Lhs,
+ typename Rhs,
+ std::enable_if_t<CompatibleJetOperands_v<Lhs, Rhs>>* = nullptr>
+inline bool isunordered(const Lhs& f, const Rhs& g) {
+ using internal::AsScalar;
+ return isunordered(AsScalar(f), AsScalar(g));
+}
+
+// Categorize scalar part as zero, subnormal, normal, infinite, NaN, or
+// implementation-defined.
+template <typename T, int N>
+inline int fpclassify(const Jet<T, N>& f) {
+ return fpclassify(f.a);
+}
+
+// Determines whether the scalar part of the argument is negative.
+template <typename T, int N>
+inline bool signbit(const Jet<T, N>& f) {
+ return signbit(f.a);
}
// Legacy functions from the pre-C++11 days.
template <typename T, int N>
+CERES_DEPRECATED_WITH_MSG(
+ "ceres::IsFinite will be removed in a future Ceres Solver release. Please "
+ "use ceres::isfinite.")
inline bool IsFinite(const Jet<T, N>& f) {
return isfinite(f);
}
template <typename T, int N>
+CERES_DEPRECATED_WITH_MSG(
+ "ceres::IsNaN will be removed in a future Ceres Solver release. Please use "
+ "ceres::isnan.")
inline bool IsNaN(const Jet<T, N>& f) {
return isnan(f);
}
template <typename T, int N>
+CERES_DEPRECATED_WITH_MSG(
+ "ceres::IsNormal will be removed in a future Ceres Solver release. Please "
+ "use ceres::isnormal.")
inline bool IsNormal(const Jet<T, N>& f) {
return isnormal(f);
}
// The jet is infinite if any part of the jet is infinite.
template <typename T, int N>
+CERES_DEPRECATED_WITH_MSG(
+ "ceres::IsInfinite will be removed in a future Ceres Solver release. "
+ "Please use ceres::isinf.")
inline bool IsInfinite(const Jet<T, N>& f) {
return isinf(f);
}
+#ifdef CERES_HAS_CPP20
+// Computes the linear interpolation a + t(b - a) between a and b at the value
+// t. For arguments outside of the range 0 <= t <= 1, the values are
+// extrapolated.
+//
+// Differentiating lerp(a, b, t) with respect to a, b, and t gives:
+//
+// d/da lerp(a, b, t) = 1 - t
+// d/db lerp(a, b, t) = t
+// d/dt lerp(a, b, t) = b - a
+//
+// with the dual representation given by
+//
+// lerp(a + da, b + db, t + dt)
+// ~= lerp(a, b, t) + (1 - t) da + t db + (b - a) dt .
+template <typename T, int N>
+inline Jet<T, N> lerp(const Jet<T, N>& a,
+ const Jet<T, N>& b,
+ const Jet<T, N>& t) {
+ return Jet<T, N>{lerp(a.a, b.a, t.a),
+ (T(1) - t.a) * a.v + t.a * b.v + (b.a - a.a) * t.v};
+}
+
+// Computes the midpoint a + (b - a) / 2.
+//
+// Differentiating midpoint(a, b) with respect to a and b gives:
+//
+// d/da midpoint(a, b) = 1/2
+// d/db midpoint(a, b) = 1/2
+//
+// with the dual representation given by
+//
+// midpoint(a + da, b + db) ~= midpoint(a, b) + (da + db) / 2 .
+template <typename T, int N>
+inline Jet<T, N> midpoint(const Jet<T, N>& a, const Jet<T, N>& b) {
+ Jet<T, N> result{midpoint(a.a, b.a)};
+ // To avoid overflow in the differential, compute
+ // (da + db) / 2 using midpoint.
+ for (int i = 0; i < N; ++i) {
+ result.v[i] = midpoint(a.v[i], b.v[i]);
+ }
+ return result;
+}
+#endif // defined(CERES_HAS_CPP20)
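A small editorial sketch of the lerp dual formula above (requires CERES_HAS_CPP20, since the scalar path forwards to std::lerp):

    void LerpJetExample() {
      // Seed a, b and t in their own derivative slots.
      ceres::Jet<double, 3> a(1.0, 0), b(3.0, 1), t(0.25, 2);
      const auto l = ceres::lerp(a, b, t);
      // l.a == 1.0 + 0.25 * (3.0 - 1.0) == 1.5 and
      // l.v == [1 - t, t, b - a] == [0.75, 0.25, 2.0].
    }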
+
// atan2(b + db, a + da) ~= atan2(b, a) + (- b da + a db) / (a^2 + b^2)
//
// In words: the rate of change of theta is 1/r times the rate of
@@ -737,6 +1192,22 @@
return Jet<T, N>(atan2(g.a, f.a), tmp * (-g.a * f.v + f.a * g.v));
}
+// Computes the square x^2 of a real number x (not the Euclidean L^2 norm as
+// the name might suggest).
+//
+// NOTE: While std::norm is primarily intended for computing the squared
+// magnitude of a std::complex<> number, the current Jet implementation does not
+// support mixing a scalar T in its real part and std::complex<T> and in the
+// infinitesimal. Mixed Jet support is necessary for the type decay from
+// std::complex<T> to T (the squared magnitude of a complex number is always
+// real) performed by std::norm.
+//
+// norm(x + h) ~= norm(x) + 2x h
+template <typename T, int N>
+inline Jet<T, N> norm(const Jet<T, N>& f) {
+ return Jet<T, N>(norm(f.a), T(2) * f.a * f.v);
+}
+
// pow -- base is a differentiable function, exponent is a constant.
// (a+da)^p ~= a^p + p*a^(p-1) da
template <typename T, int N>
@@ -760,14 +1231,14 @@
inline Jet<T, N> pow(T f, const Jet<T, N>& g) {
Jet<T, N> result;
- if (f == T(0) && g.a > T(0)) {
+ if (fpclassify(f) == FP_ZERO && g > 0) {
// Handle case 2.
result = Jet<T, N>(T(0.0));
} else {
- if (f < 0 && g.a == floor(g.a)) { // Handle case 3.
+ if (f < 0 && g == floor(g.a)) { // Handle case 3.
result = Jet<T, N>(pow(f, g.a));
for (int i = 0; i < N; i++) {
- if (g.v[i] != T(0.0)) {
+ if (fpclassify(g.v[i]) != FP_ZERO) {
// Return a NaN when g.v != 0.
result.v[i] = std::numeric_limits<T>::quiet_NaN();
}
@@ -822,21 +1293,21 @@
inline Jet<T, N> pow(const Jet<T, N>& f, const Jet<T, N>& g) {
Jet<T, N> result;
- if (f.a == T(0) && g.a >= T(1)) {
+ if (fpclassify(f) == FP_ZERO && g >= 1) {
// Handle cases 2 and 3.
- if (g.a > T(1)) {
+ if (g > 1) {
result = Jet<T, N>(T(0.0));
} else {
result = f;
}
} else {
- if (f.a < T(0) && g.a == floor(g.a)) {
+ if (f < 0 && g == floor(g.a)) {
// Handle cases 7 and 8.
T const tmp = g.a * pow(f.a, g.a - T(1.0));
result = Jet<T, N>(pow(f.a, g.a), tmp * f.v);
for (int i = 0; i < N; i++) {
- if (g.v[i] != T(0.0)) {
+ if (fpclassify(g.v[i]) != FP_ZERO) {
// Return a NaN when g.v != 0.
result.v[i] = T(std::numeric_limits<double>::quiet_NaN());
}
@@ -887,8 +1358,13 @@
static constexpr bool is_bounded = std::numeric_limits<T>::is_bounded;
static constexpr bool is_modulo = std::numeric_limits<T>::is_modulo;
+ // has_denorm (and has_denorm_loss, not defined for Jet) has been deprecated
+ // in C++23, though without an intent to remove the declaration. Disable
+ // deprecation warnings temporarily just for the corresponding symbols.
+ CERES_DISABLE_DEPRECATED_WARNING
static constexpr std::float_denorm_style has_denorm =
std::numeric_limits<T>::has_denorm;
+ CERES_RESTORE_DEPRECATED_WARNING
static constexpr std::float_round_style round_style =
std::numeric_limits<T>::round_style;
@@ -904,8 +1380,9 @@
static constexpr bool tinyness_before =
std::numeric_limits<T>::tinyness_before;
- static constexpr ceres::Jet<T, N> min() noexcept {
- return ceres::Jet<T, N>(std::numeric_limits<T>::min());
+ static constexpr ceres::Jet<T, N> min
+ CERES_PREVENT_MACRO_SUBSTITUTION() noexcept {
+ return ceres::Jet<T, N>((std::numeric_limits<T>::min)());
}
static constexpr ceres::Jet<T, N> lowest() noexcept {
return ceres::Jet<T, N>(std::numeric_limits<T>::lowest());
@@ -929,8 +1406,9 @@
return ceres::Jet<T, N>(std::numeric_limits<T>::denorm_min());
}
- static constexpr ceres::Jet<T, N> max() noexcept {
- return ceres::Jet<T, N>(std::numeric_limits<T>::max());
+ static constexpr ceres::Jet<T, N> max
+ CERES_PREVENT_MACRO_SUBSTITUTION() noexcept {
+ return ceres::Jet<T, N>((std::numeric_limits<T>::max)());
}
};
@@ -942,10 +1420,10 @@
// Eigen arrays, getting all the goodness of Eigen combined with autodiff.
template <typename T, int N>
struct NumTraits<ceres::Jet<T, N>> {
- typedef ceres::Jet<T, N> Real;
- typedef ceres::Jet<T, N> NonInteger;
- typedef ceres::Jet<T, N> Nested;
- typedef ceres::Jet<T, N> Literal;
+ using Real = ceres::Jet<T, N>;
+ using NonInteger = ceres::Jet<T, N>;
+ using Nested = ceres::Jet<T, N>;
+ using Literal = ceres::Jet<T, N>;
static typename ceres::Jet<T, N> dummy_precision() {
return ceres::Jet<T, N>(1e-12);
@@ -956,6 +1434,7 @@
}
static inline int digits10() { return NumTraits<T>::digits10(); }
+ static inline int max_digits10() { return NumTraits<T>::max_digits10(); }
enum {
IsComplex = 0,
@@ -984,8 +1463,8 @@
};
};
- static inline Real highest() { return Real(std::numeric_limits<T>::max()); }
- static inline Real lowest() { return Real(-std::numeric_limits<T>::max()); }
+ static inline Real highest() { return Real((std::numeric_limits<T>::max)()); }
+ static inline Real lowest() { return Real(-(std::numeric_limits<T>::max)()); }
};
// Specifying the return type of binary operations between Jets and scalar types
@@ -996,11 +1475,11 @@
// is only available on Eigen versions >= 3.3
template <typename BinaryOp, typename T, int N>
struct ScalarBinaryOpTraits<ceres::Jet<T, N>, T, BinaryOp> {
- typedef ceres::Jet<T, N> ReturnType;
+ using ReturnType = ceres::Jet<T, N>;
};
template <typename BinaryOp, typename T, int N>
struct ScalarBinaryOpTraits<T, ceres::Jet<T, N>, BinaryOp> {
- typedef ceres::Jet<T, N> ReturnType;
+ using ReturnType = ceres::Jet<T, N>;
};
} // namespace Eigen
diff --git a/internal/ceres/float_cxsparse.cc b/include/ceres/jet_fwd.h
similarity index 77%
copy from internal/ceres/float_cxsparse.cc
copy to include/ceres/jet_fwd.h
index 6c68830..b5216da 100644
--- a/internal/ceres/float_cxsparse.cc
+++ b/include/ceres/jet_fwd.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -26,22 +26,19 @@
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
+// Author: sergiu.deitsch@gmail.com (Sergiu Deitsch)
+//
-#include "ceres/float_cxsparse.h"
-
-#if !defined(CERES_NO_CXSPARSE)
+#ifndef CERES_PUBLIC_JET_FWD_H_
+#define CERES_PUBLIC_JET_FWD_H_
namespace ceres {
-namespace internal {
-std::unique_ptr<SparseCholesky> FloatCXSparseCholesky::Create(
- OrderingType ordering_type) {
- LOG(FATAL) << "FloatCXSparseCholesky is not available.";
- return std::unique_ptr<SparseCholesky>();
-}
+// Jet forward declaration necessary for the following partial specialization of
+// std::common_type and type traits.
+template <typename T, int N>
+struct Jet;
-} // namespace internal
} // namespace ceres
-#endif // !defined(CERES_NO_CXSPARSE)
+#endif // CERES_PUBLIC_JET_FWD_H_
diff --git a/include/ceres/line_manifold.h b/include/ceres/line_manifold.h
new file mode 100644
index 0000000..dad9737
--- /dev/null
+++ b/include/ceres/line_manifold.h
@@ -0,0 +1,301 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: jodebo_beck@gmx.de (Johannes Beck)
+//
+
+#ifndef CERES_PUBLIC_LINE_MANIFOLD_H_
+#define CERES_PUBLIC_LINE_MANIFOLD_H_
+
+#include <Eigen/Core>
+#include <algorithm>
+#include <array>
+#include <memory>
+#include <vector>
+
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
+#include "ceres/internal/householder_vector.h"
+#include "ceres/internal/sphere_manifold_functions.h"
+#include "ceres/manifold.h"
+#include "ceres/types.h"
+#include "glog/logging.h"
+
+namespace ceres {
+// This provides a manifold for lines, where the line is
+// over-parameterized by an origin point and a direction vector. So the
+// parameter vector size needs to be two times the ambient space dimension,
+// where the first half is interpreted as the origin point and the second half
+// as the direction.
+//
+// The plus operator for the line direction is the same as for the
+// SphereManifold. The update of the origin point is
+// perpendicular to the line direction before the update.
+//
+// This manifold is a special case of the affine Grassmannian
+// manifold (see https://en.wikipedia.org/wiki/Affine_Grassmannian_(manifold))
+// for the case Graff_1(R^n).
+//
+// The class works with dynamic and static ambient space dimensions. If the
+// ambient space dimension is known at compile time, use
+//
+// LineManifold<3> manifold;
+//
+// If the ambient space dimension is not known at compile time, the template
+// parameter needs to be set to ceres::DYNAMIC and the actual dimension needs
+// to be provided as a constructor argument:
+//
+// LineManifold<ceres::DYNAMIC> manifold(ambient_dim);
+//
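A short editorial usage sketch (not part of the patch), assuming a parameter block laid out as [origin; direction] and the Problem::AddParameterBlock overload that accepts a Manifold*:

    #include "ceres/line_manifold.h"
    #include "ceres/problem.h"

    void AddLineBlock(ceres::Problem& problem) {
      // A line in R^3: 2 * 3 = 6 ambient parameters (origin, then direction)
      // and 2 * (3 - 1) = 4 tangent parameters. The block must outlive the
      // problem; a static array keeps the sketch self-contained.
      static double line[6] = {0.0, 0.0, 0.0,   // origin
                               0.0, 0.0, 1.0};  // unit direction along z
      problem.AddParameterBlock(line, 6, new ceres::LineManifold<3>());
      // Residual blocks referencing `line` can now be added as usual.
    }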
+template <int AmbientSpaceDimension>
+class LineManifold final : public Manifold {
+ public:
+ static_assert(AmbientSpaceDimension == DYNAMIC || AmbientSpaceDimension >= 2,
+ "The ambient space must be at least 2.");
+ static_assert(ceres::DYNAMIC == Eigen::Dynamic,
+ "ceres::DYNAMIC needs to be the same as Eigen::Dynamic.");
+
+ LineManifold();
+ explicit LineManifold(int size);
+
+ int AmbientSize() const override { return 2 * size_; }
+ int TangentSize() const override { return 2 * (size_ - 1); }
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const override;
+ bool PlusJacobian(const double* x, double* jacobian) const override;
+ bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const override;
+ bool MinusJacobian(const double* x, double* jacobian) const override;
+
+ private:
+ static constexpr bool IsDynamic = (AmbientSpaceDimension == ceres::DYNAMIC);
+ static constexpr int TangentSpaceDimension =
+ IsDynamic ? ceres::DYNAMIC : AmbientSpaceDimension - 1;
+
+ static constexpr int DAmbientSpaceDimension =
+ IsDynamic ? ceres::DYNAMIC : 2 * AmbientSpaceDimension;
+ static constexpr int DTangentSpaceDimension =
+ IsDynamic ? ceres::DYNAMIC : 2 * TangentSpaceDimension;
+
+ using AmbientVector = Eigen::Matrix<double, AmbientSpaceDimension, 1>;
+ using TangentVector = Eigen::Matrix<double, TangentSpaceDimension, 1>;
+ using MatrixPlusJacobian = Eigen::Matrix<double,
+ DAmbientSpaceDimension,
+ DTangentSpaceDimension,
+ Eigen::RowMajor>;
+ using MatrixMinusJacobian = Eigen::Matrix<double,
+ DTangentSpaceDimension,
+ DAmbientSpaceDimension,
+ Eigen::RowMajor>;
+
+ const int size_{AmbientSpaceDimension};
+};
+
+template <int AmbientSpaceDimension>
+LineManifold<AmbientSpaceDimension>::LineManifold()
+ : size_{AmbientSpaceDimension} {
+ static_assert(
+ AmbientSpaceDimension != Eigen::Dynamic,
+ "The size is set to dynamic. Please call the constructor with a size.");
+}
+
+template <int AmbientSpaceDimension>
+LineManifold<AmbientSpaceDimension>::LineManifold(int size) : size_{size} {
+ if (AmbientSpaceDimension != Eigen::Dynamic) {
+ CHECK_EQ(AmbientSpaceDimension, size)
+ << "Specified size by template parameter differs from the supplied "
+ "one.";
+ } else {
+ CHECK_GT(size_, 1)
+ << "The size of the manifold needs to be greater than 1.";
+ }
+}
+
+template <int AmbientSpaceDimension>
+bool LineManifold<AmbientSpaceDimension>::Plus(const double* x_ptr,
+ const double* delta_ptr,
+ double* x_plus_delta_ptr) const {
+ // We seek a box plus operator of the form
+ //
+ // [o*, d*] = Plus([o, d], [delta_o, delta_d])
+ //
+ // where o is the origin point, d is the direction vector, delta_o is
+ // the delta of the origin point and delta_d the delta of the direction and
+ // o* and d* is the updated origin point and direction.
+ //
+ // We separate the Plus operator into the origin point and directional part
+ // d* = Plus_d(d, delta_d)
+ // o* = Plus_o(o, d, delta_o)
+ //
+ // The direction update function Plus_d is the same as for the SphereManifold:
+ //
+ // d* = H_{v(d)} [sinc(|delta_d|) delta_d, cos(|delta_d|)]^T
+ //
+ // where H is the householder matrix
+ // H_{v} = I - (2 / |v|^2) v v^T
+ // and
+ // v(d) = d - sign(d_n) |d| e_n.
+ //
+ // The origin point update function Plus_o is defined as
+ //
+ // o* = o + H_{v(d)} [delta_o, 0]^T.
+
+ Eigen::Map<const AmbientVector> o(x_ptr, size_);
+ Eigen::Map<const AmbientVector> d(x_ptr + size_, size_);
+
+ Eigen::Map<const TangentVector> delta_o(delta_ptr, size_ - 1);
+ Eigen::Map<const TangentVector> delta_d(delta_ptr + size_ - 1, size_ - 1);
+ Eigen::Map<AmbientVector> o_plus_delta(x_plus_delta_ptr, size_);
+ Eigen::Map<AmbientVector> d_plus_delta(x_plus_delta_ptr + size_, size_);
+
+ const double norm_delta_d = delta_d.norm();
+
+ o_plus_delta = o;
+
+ // Shortcut for zero delta direction.
+ if (norm_delta_d == 0.0) {
+ d_plus_delta = d;
+
+ if (delta_o.isZero(0.0)) {
+ return true;
+ }
+ }
+
+ // Calculate the householder transformation which is needed for f_d and f_o.
+ AmbientVector v(size_);
+ double beta;
+
+ // NOTE: The explicit template arguments are needed here because
+ // ComputeHouseholderVector is templated and some versions of MSVC
+ // have trouble deducing the type of v automatically.
+ internal::ComputeHouseholderVector<Eigen::Map<const AmbientVector>,
+ double,
+ AmbientSpaceDimension>(d, &v, &beta);
+
+ if (norm_delta_d != 0.0) {
+ internal::ComputeSphereManifoldPlus(
+ v, beta, d, delta_d, norm_delta_d, &d_plus_delta);
+ }
+
+ // The null space is in the direction of the line, so the tangent space is
+ // perpendicular to the line direction. This is achieved by using the
+ // Householder matrix of the direction and allowing only movements
+ // perpendicular to e_n.
+ AmbientVector y(size_);
+ y << delta_o, 0;
+ o_plus_delta += internal::ApplyHouseholderVector(y, v, beta);
+
+ return true;
+}
+
+template <int AmbientSpaceDimension>
+bool LineManifold<AmbientSpaceDimension>::PlusJacobian(
+ const double* x_ptr, double* jacobian_ptr) const {
+ Eigen::Map<const AmbientVector> d(x_ptr + size_, size_);
+ Eigen::Map<MatrixPlusJacobian> jacobian(
+ jacobian_ptr, 2 * size_, 2 * (size_ - 1));
+
+ // Clear the Jacobian as only half of the matrix is not zero.
+ jacobian.setZero();
+
+ auto jacobian_d =
+ jacobian
+ .template topLeftCorner<AmbientSpaceDimension, TangentSpaceDimension>(
+ size_, size_ - 1);
+ auto jacobian_o = jacobian.template bottomRightCorner<AmbientSpaceDimension,
+ TangentSpaceDimension>(
+ size_, size_ - 1);
+ internal::ComputeSphereManifoldPlusJacobian(d, &jacobian_d);
+ jacobian_o = jacobian_d;
+ return true;
+}
+
+template <int AmbientSpaceDimension>
+bool LineManifold<AmbientSpaceDimension>::Minus(const double* y_ptr,
+ const double* x_ptr,
+ double* y_minus_x) const {
+ Eigen::Map<const AmbientVector> y_o(y_ptr, size_);
+ Eigen::Map<const AmbientVector> y_d(y_ptr + size_, size_);
+ Eigen::Map<const AmbientVector> x_o(x_ptr, size_);
+ Eigen::Map<const AmbientVector> x_d(x_ptr + size_, size_);
+
+ Eigen::Map<TangentVector> y_minus_x_o(y_minus_x, size_ - 1);
+ Eigen::Map<TangentVector> y_minus_x_d(y_minus_x + size_ - 1, size_ - 1);
+
+ AmbientVector v(size_);
+ double beta;
+
+ // NOTE: The explicit template arguments are needed here because
+ // ComputeHouseholderVector is templated and some versions of MSVC
+ // have trouble deducing the type of v automatically.
+ internal::ComputeHouseholderVector<Eigen::Map<const AmbientVector>,
+ double,
+ AmbientSpaceDimension>(x_d, &v, &beta);
+
+ internal::ComputeSphereManifoldMinus(v, beta, x_d, y_d, &y_minus_x_d);
+
+ AmbientVector delta_o = y_o - x_o;
+ const AmbientVector h_delta_o =
+ internal::ApplyHouseholderVector(delta_o, v, beta);
+ y_minus_x_o = h_delta_o.template head<TangentSpaceDimension>(size_ - 1);
+
+ return true;
+}
+
+template <int AmbientSpaceDimension>
+bool LineManifold<AmbientSpaceDimension>::MinusJacobian(
+ const double* x_ptr, double* jacobian_ptr) const {
+ Eigen::Map<const AmbientVector> d(x_ptr + size_, size_);
+ Eigen::Map<MatrixMinusJacobian> jacobian(
+ jacobian_ptr, 2 * (size_ - 1), 2 * size_);
+
+ // Clear the Jacobian as only half of the matrix is not zero.
+ jacobian.setZero();
+
+ auto jacobian_d =
+ jacobian
+ .template topLeftCorner<TangentSpaceDimension, AmbientSpaceDimension>(
+ size_ - 1, size_);
+ auto jacobian_o = jacobian.template bottomRightCorner<TangentSpaceDimension,
+ AmbientSpaceDimension>(
+ size_ - 1, size_);
+ internal::ComputeSphereManifoldMinusJacobian(d, &jacobian_d);
+ jacobian_o = jacobian_d;
+
+ return true;
+}
+
+} // namespace ceres
+
+// clang-format off
+#include "ceres/internal/reenable_warnings.h"
+// clang-format on
+
+#endif // CERES_PUBLIC_LINE_MANIFOLD_H_
diff --git a/include/ceres/local_parameterization.h b/include/ceres/local_parameterization.h
deleted file mode 100644
index ba7579d..0000000
--- a/include/ceres/local_parameterization.h
+++ /dev/null
@@ -1,363 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: keir@google.com (Keir Mierle)
-// sameeragarwal@google.com (Sameer Agarwal)
-
-#ifndef CERES_PUBLIC_LOCAL_PARAMETERIZATION_H_
-#define CERES_PUBLIC_LOCAL_PARAMETERIZATION_H_
-
-#include <array>
-#include <memory>
-#include <vector>
-
-#include "ceres/internal/disable_warnings.h"
-#include "ceres/internal/port.h"
-
-namespace ceres {
-
-// Purpose: Sometimes parameter blocks x can overparameterize a problem
-//
-// min f(x)
-// x
-//
-// In that case it is desirable to choose a parameterization for the
-// block itself to remove the null directions of the cost. More
-// generally, if x lies on a manifold of a smaller dimension than the
-// ambient space that it is embedded in, then it is numerically and
-// computationally more effective to optimize it using a
-// parameterization that lives in the tangent space of that manifold
-// at each point.
-//
-// For example, a sphere in three dimensions is a 2 dimensional
-// manifold, embedded in a three dimensional space. At each point on
-// the sphere, the plane tangent to it defines a two dimensional
-// tangent space. For a cost function defined on this sphere, given a
-// point x, moving in the direction normal to the sphere at that point
-// is not useful. Thus a better way to do a local optimization is to
-// optimize over two dimensional vector delta in the tangent space at
-// that point and then "move" to the point x + delta, where the move
-// operation involves projecting back onto the sphere. Doing so
-// removes a redundant dimension from the optimization, making it
-// numerically more robust and efficient.
-//
-// More generally we can define a function
-//
-// x_plus_delta = Plus(x, delta),
-//
-// where x_plus_delta has the same size as x, and delta is of size
-// less than or equal to x. The function Plus, generalizes the
-// definition of vector addition. Thus it satisfies the identify
-//
-// Plus(x, 0) = x, for all x.
-//
-// A trivial version of Plus is when delta is of the same size as x
-// and
-//
-// Plus(x, delta) = x + delta
-//
-// A more interesting case if x is two dimensional vector, and the
-// user wishes to hold the first coordinate constant. Then, delta is a
-// scalar and Plus is defined as
-//
-// Plus(x, delta) = x + [0] * delta
-// [1]
-//
-// An example that occurs commonly in Structure from Motion problems
-// is when camera rotations are parameterized using Quaternion. There,
-// it is useful to only make updates orthogonal to that 4-vector
-// defining the quaternion. One way to do this is to let delta be a 3
-// dimensional vector and define Plus to be
-//
-// Plus(x, delta) = [cos(|delta|), sin(|delta|) delta / |delta|] * x
-//
-// The multiplication between the two 4-vectors on the RHS is the
-// standard quaternion product.
-//
-// Given f and a point x, optimizing f can now be restated as
-//
-// min f(Plus(x, delta))
-// delta
-//
-// Given a solution delta to this problem, the optimal value is then
-// given by
-//
-// x* = Plus(x, delta)
-//
-// The class LocalParameterization defines the function Plus and its
-// Jacobian which is needed to compute the Jacobian of f w.r.t delta.
-class CERES_EXPORT LocalParameterization {
- public:
- virtual ~LocalParameterization();
-
- // Generalization of the addition operation,
- //
- // x_plus_delta = Plus(x, delta)
- //
- // with the condition that Plus(x, 0) = x.
- virtual bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const = 0;
-
- // The jacobian of Plus(x, delta) w.r.t delta at delta = 0.
- //
- // jacobian is a row-major GlobalSize() x LocalSize() matrix.
- virtual bool ComputeJacobian(const double* x, double* jacobian) const = 0;
-
- // local_matrix = global_matrix * jacobian
- //
- // global_matrix is a num_rows x GlobalSize row major matrix.
- // local_matrix is a num_rows x LocalSize row major matrix.
- // jacobian(x) is the matrix returned by ComputeJacobian at x.
- //
- // This is only used by GradientProblem. For most normal uses, it is
- // okay to use the default implementation.
- virtual bool MultiplyByJacobian(const double* x,
- const int num_rows,
- const double* global_matrix,
- double* local_matrix) const;
-
- // Size of x.
- virtual int GlobalSize() const = 0;
-
- // Size of delta.
- virtual int LocalSize() const = 0;
-};
-
-// Some basic parameterizations
-
-// Identity Parameterization: Plus(x, delta) = x + delta
-class CERES_EXPORT IdentityParameterization : public LocalParameterization {
- public:
- explicit IdentityParameterization(int size);
- virtual ~IdentityParameterization() {}
- bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const override;
- bool ComputeJacobian(const double* x, double* jacobian) const override;
- bool MultiplyByJacobian(const double* x,
- const int num_cols,
- const double* global_matrix,
- double* local_matrix) const override;
- int GlobalSize() const override { return size_; }
- int LocalSize() const override { return size_; }
-
- private:
- const int size_;
-};
-
-// Hold a subset of the parameters inside a parameter block constant.
-class CERES_EXPORT SubsetParameterization : public LocalParameterization {
- public:
- explicit SubsetParameterization(int size,
- const std::vector<int>& constant_parameters);
- virtual ~SubsetParameterization() {}
- bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const override;
- bool ComputeJacobian(const double* x, double* jacobian) const override;
- bool MultiplyByJacobian(const double* x,
- const int num_cols,
- const double* global_matrix,
- double* local_matrix) const override;
- int GlobalSize() const override {
- return static_cast<int>(constancy_mask_.size());
- }
- int LocalSize() const override { return local_size_; }
-
- private:
- const int local_size_;
- std::vector<char> constancy_mask_;
-};
-
-// Plus(x, delta) = [cos(|delta|), sin(|delta|) delta / |delta|] * x
-// with * being the quaternion multiplication operator. Here we assume
-// that the first element of the quaternion vector is the real (cos
-// theta) part.
-class CERES_EXPORT QuaternionParameterization : public LocalParameterization {
- public:
- virtual ~QuaternionParameterization() {}
- bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const override;
- bool ComputeJacobian(const double* x, double* jacobian) const override;
- int GlobalSize() const override { return 4; }
- int LocalSize() const override { return 3; }
-};
-
-// Implements the quaternion local parameterization for Eigen's representation
-// of the quaternion. Eigen uses a different internal memory layout for the
-// elements of the quaternion than what is commonly used. Specifically, Eigen
-// stores the elements in memory as [x, y, z, w] where the real part is last
-// whereas it is typically stored first. Note, when creating an Eigen quaternion
-// through the constructor the elements are accepted in w, x, y, z order. Since
-// Ceres operates on parameter blocks which are raw double pointers this
-// difference is important and requires a different parameterization.
-//
-// Plus(x, delta) = [sin(|delta|) delta / |delta|, cos(|delta|)] * x
-// with * being the quaternion multiplication operator.
-class CERES_EXPORT EigenQuaternionParameterization
- : public ceres::LocalParameterization {
- public:
- virtual ~EigenQuaternionParameterization() {}
- bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const override;
- bool ComputeJacobian(const double* x, double* jacobian) const override;
- int GlobalSize() const override { return 4; }
- int LocalSize() const override { return 3; }
-};
-
-// This provides a parameterization for homogeneous vectors which are commonly
-// used in Structure from Motion problems. One example where they are used is
-// in representing points whose triangulation is ill-conditioned. Here
-// it is advantageous to use an over-parameterization since homogeneous vectors
-// can represent points at infinity.
-//
-// The plus operator is defined as
-// Plus(x, delta) =
-// [sin(0.5 * |delta|) * delta / |delta|, cos(0.5 * |delta|)] * x
-// with * defined as an operator which applies the update orthogonal to x to
-// remain on the sphere. We assume that the last element of x is the scalar
-// component. The size of the homogeneous vector is required to be greater than
-// 1.
-class CERES_EXPORT HomogeneousVectorParameterization
- : public LocalParameterization {
- public:
- explicit HomogeneousVectorParameterization(int size);
- virtual ~HomogeneousVectorParameterization() {}
- bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const override;
- bool ComputeJacobian(const double* x, double* jacobian) const override;
- int GlobalSize() const override { return size_; }
- int LocalSize() const override { return size_ - 1; }
-
- private:
- const int size_;
-};
-
-// This provides a parameterization for lines, where the line is
-// over-parameterized by an origin point and a direction vector. So the
-// parameter vector size needs to be two times the ambient space dimension,
-// where the first half is interpreted as the origin point and the second half
-// as the direction.
-//
-// The plus operator for the line direction is the same as for the
-// HomogeneousVectorParameterization. The update of the origin point is
-// perpendicular to the line direction before the update.
-//
-// This local parameterization is a special case of the affine Grassmannian
-// manifold (see https://en.wikipedia.org/wiki/Affine_Grassmannian_(manifold))
-// for the case Graff_1(R^n).
-template <int AmbientSpaceDimension>
-class LineParameterization : public LocalParameterization {
- public:
- static_assert(AmbientSpaceDimension >= 2,
- "The ambient space must be at least 2");
-
- bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const override;
- bool ComputeJacobian(const double* x, double* jacobian) const override;
- int GlobalSize() const override { return 2 * AmbientSpaceDimension; }
- int LocalSize() const override { return 2 * (AmbientSpaceDimension - 1); }
-};
-
-// Construct a local parameterization by taking the Cartesian product
-// of a number of other local parameterizations. This is useful, when
-// a parameter block is the cartesian product of two or more
-// manifolds. For example the parameters of a camera consist of a
-// rotation and a translation, i.e., SO(3) x R^3.
-//
-// Example usage:
-//
-//   ProductParameterization product_param(new QuaternionParameterization(),
-// new IdentityParameterization(3));
-//
-// is the local parameterization for a rigid transformation, where the
-// rotation is represented using a quaternion.
-class CERES_EXPORT ProductParameterization : public LocalParameterization {
- public:
- ProductParameterization(const ProductParameterization&) = delete;
- ProductParameterization& operator=(const ProductParameterization&) = delete;
- virtual ~ProductParameterization() {}
- //
- // NOTE: The constructor takes ownership of the input local
- // parameterizations.
- //
- template <typename... LocalParams>
- ProductParameterization(LocalParams*... local_params)
- : local_params_(sizeof...(LocalParams)),
- local_size_{0},
- global_size_{0},
- buffer_size_{0} {
- constexpr int kNumLocalParams = sizeof...(LocalParams);
- static_assert(kNumLocalParams >= 2,
- "At least two local parameterizations must be specified.");
-
- using LocalParameterizationPtr = std::unique_ptr<LocalParameterization>;
-
- // Wrap all raw pointers into std::unique_ptr for exception safety.
- std::array<LocalParameterizationPtr, kNumLocalParams> local_params_array{
- LocalParameterizationPtr(local_params)...};
-
- // Initialize internal state.
- for (int i = 0; i < kNumLocalParams; ++i) {
- LocalParameterizationPtr& param = local_params_[i];
- param = std::move(local_params_array[i]);
-
- buffer_size_ =
- std::max(buffer_size_, param->LocalSize() * param->GlobalSize());
- global_size_ += param->GlobalSize();
- local_size_ += param->LocalSize();
- }
- }
-
- bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const override;
- bool ComputeJacobian(const double* x,
- double* jacobian) const override;
- int GlobalSize() const override { return global_size_; }
- int LocalSize() const override { return local_size_; }
-
- private:
- std::vector<std::unique_ptr<LocalParameterization>> local_params_;
- int local_size_;
- int global_size_;
- int buffer_size_;
-};
-
-} // namespace ceres
-
-// clang-format off
-#include "ceres/internal/reenable_warnings.h"
-#include "ceres/internal/line_parameterization.h"
-
-#endif // CERES_PUBLIC_LOCAL_PARAMETERIZATION_H_
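For reference, a minimal sketch of the interface removed above, written against the "hold the first coordinate constant" example from its comments; the class name and the 2/1 sizes are illustrative only, and the Manifold API introduced later in this patch is the intended replacement.

    // Sketch only: a LocalParameterization for a 2-vector whose first
    // coordinate is held constant, i.e. Plus(x, delta) = x + [0, 1]' * delta.
    class FirstCoordinateConstantParameterization
        : public ceres::LocalParameterization {
     public:
      bool Plus(const double* x,
                const double* delta,
                double* x_plus_delta) const override {
        x_plus_delta[0] = x[0];
        x_plus_delta[1] = x[1] + delta[0];
        return true;
      }
      // Row-major GlobalSize() x LocalSize() (2 x 1) Jacobian of Plus at delta = 0.
      bool ComputeJacobian(const double* x, double* jacobian) const override {
        jacobian[0] = 0.0;
        jacobian[1] = 1.0;
        return true;
      }
      int GlobalSize() const override { return 2; }
      int LocalSize() const override { return 1; }
    };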
diff --git a/include/ceres/loss_function.h b/include/ceres/loss_function.h
index 7aabf7d..b8582f8 100644
--- a/include/ceres/loss_function.h
+++ b/include/ceres/loss_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,7 +35,7 @@
//
// For least squares problem where there are no outliers and standard
// squared loss is expected, it is not necessary to create a loss
-// function; instead passing a NULL to the problem when adding
+// function; instead passing a nullptr to the problem when adding
// residuals implies a standard squared loss.
//
// For least squares problems where the minimization may encounter
@@ -78,6 +78,7 @@
#include <memory>
#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/types.h"
#include "glog/logging.h"
@@ -85,7 +86,7 @@
class CERES_EXPORT LossFunction {
public:
- virtual ~LossFunction() {}
+ virtual ~LossFunction();
// For a residual vector with squared 2-norm 'sq_norm', this method
// is required to fill in the value and derivatives of the loss
@@ -125,10 +126,10 @@
//
// At s = 0: rho = [0, 1, 0].
//
-// It is not normally necessary to use this, as passing NULL for the
+// It is not normally necessary to use this, as passing nullptr for the
// loss function when building the problem accomplishes the same
// thing.
-class CERES_EXPORT TrivialLoss : public LossFunction {
+class CERES_EXPORT TrivialLoss final : public LossFunction {
public:
void Evaluate(double, double*) const override;
};
@@ -171,7 +172,7 @@
//
// The scaling parameter 'a' corresponds to 'delta' on this page:
// http://en.wikipedia.org/wiki/Huber_Loss_Function
-class CERES_EXPORT HuberLoss : public LossFunction {
+class CERES_EXPORT HuberLoss final : public LossFunction {
public:
explicit HuberLoss(double a) : a_(a), b_(a * a) {}
void Evaluate(double, double*) const override;
@@ -187,7 +188,7 @@
// rho(s) = 2 (sqrt(1 + s) - 1).
//
// At s = 0: rho = [0, 1, -1 / (2 * a^2)].
-class CERES_EXPORT SoftLOneLoss : public LossFunction {
+class CERES_EXPORT SoftLOneLoss final : public LossFunction {
public:
explicit SoftLOneLoss(double a) : b_(a * a), c_(1 / b_) {}
void Evaluate(double, double*) const override;
@@ -204,7 +205,7 @@
// rho(s) = log(1 + s).
//
// At s = 0: rho = [0, 1, -1 / a^2].
-class CERES_EXPORT CauchyLoss : public LossFunction {
+class CERES_EXPORT CauchyLoss final : public LossFunction {
public:
explicit CauchyLoss(double a) : b_(a * a), c_(1 / b_) {}
void Evaluate(double, double*) const override;
@@ -225,7 +226,7 @@
// rho(s) = a atan(s / a).
//
// At s = 0: rho = [0, 1, 0].
-class CERES_EXPORT ArctanLoss : public LossFunction {
+class CERES_EXPORT ArctanLoss final : public LossFunction {
public:
explicit ArctanLoss(double a) : a_(a), b_(1 / (a * a)) {}
void Evaluate(double, double*) const override;
@@ -264,7 +265,7 @@
// concentrated in the range a - b to a + b.
//
// At s = 0: rho = [0, ~0, ~0].
-class CERES_EXPORT TolerantLoss : public LossFunction {
+class CERES_EXPORT TolerantLoss final : public LossFunction {
public:
explicit TolerantLoss(double a, double b);
void Evaluate(double, double*) const override;
@@ -283,7 +284,7 @@
// rho(s) = a^2 / 3 for s > a^2.
//
// At s = 0: rho = [0, 1, -2 / a^2]
-class CERES_EXPORT TukeyLoss : public ceres::LossFunction {
+class CERES_EXPORT TukeyLoss final : public ceres::LossFunction {
public:
explicit TukeyLoss(double a) : a_squared_(a * a) {}
void Evaluate(double, double*) const override;
@@ -294,14 +295,14 @@
// Composition of two loss functions. The error is the result of first
// evaluating g followed by f to yield the composition f(g(s)).
-// The loss functions must not be NULL.
-class CERES_EXPORT ComposedLoss : public LossFunction {
+// The loss functions must not be nullptr.
+class CERES_EXPORT ComposedLoss final : public LossFunction {
public:
explicit ComposedLoss(const LossFunction* f,
Ownership ownership_f,
const LossFunction* g,
Ownership ownership_g);
- virtual ~ComposedLoss();
+ ~ComposedLoss() override;
void Evaluate(double, double*) const override;
private:
@@ -322,11 +323,11 @@
// s -> a * rho'(s)
// s -> a * rho''(s)
//
-// Since we treat the a NULL Loss function as the Identity loss
-// function, rho = NULL is a valid input and will result in the input
+// Since we treat a nullptr Loss function as the Identity loss
+// function, rho = nullptr is a valid input and will result in the input
// being scaled by a. This provides a simple way of implementing a
// scaled ResidualBlock.
-class CERES_EXPORT ScaledLoss : public LossFunction {
+class CERES_EXPORT ScaledLoss final : public LossFunction {
public:
// Constructs a ScaledLoss wrapping another loss function. Takes
// ownership of the wrapped loss function or not depending on the
@@ -336,7 +337,7 @@
ScaledLoss(const ScaledLoss&) = delete;
void operator=(const ScaledLoss&) = delete;
- virtual ~ScaledLoss() {
+ ~ScaledLoss() override {
if (ownership_ == DO_NOT_TAKE_OWNERSHIP) {
rho_.release();
}
@@ -361,8 +362,8 @@
// whose scale can be mutated after an optimization problem has been
// constructed.
//
-// Since we treat the a NULL Loss function as the Identity loss
-// function, rho = NULL is a valid input.
+// Since we treat a nullptr Loss function as the Identity loss
+// function, rho = nullptr is a valid input.
//
// Example usage
//
@@ -374,7 +375,8 @@
// new AutoDiffCostFunction < UW_Camera_Mapper, 2, 9, 3>(
// new UW_Camera_Mapper(feature_x, feature_y));
//
-// LossFunctionWrapper* loss_function(new HuberLoss(1.0), TAKE_OWNERSHIP);
+// LossFunctionWrapper* loss_function = new LossFunctionWrapper(
+// new HuberLoss(1.0), TAKE_OWNERSHIP);
//
// problem.AddResidualBlock(cost_function, loss_function, parameters);
//
@@ -387,7 +389,7 @@
//
// Solve(options, &problem, &summary)
//
-class CERES_EXPORT LossFunctionWrapper : public LossFunction {
+class CERES_EXPORT LossFunctionWrapper final : public LossFunction {
public:
LossFunctionWrapper(LossFunction* rho, Ownership ownership)
: rho_(rho), ownership_(ownership) {}
@@ -395,14 +397,14 @@
LossFunctionWrapper(const LossFunctionWrapper&) = delete;
void operator=(const LossFunctionWrapper&) = delete;
- virtual ~LossFunctionWrapper() {
+ ~LossFunctionWrapper() override {
if (ownership_ == DO_NOT_TAKE_OWNERSHIP) {
rho_.release();
}
}
void Evaluate(double sq_norm, double out[3]) const override {
- if (rho_.get() == NULL) {
+ if (rho_.get() == nullptr) {
out[0] = sq_norm;
out[1] = 1.0;
out[2] = 0.0;
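As a quick illustration of the out[3] contract used throughout this header (value, first and second derivative of rho at s), a hand-checked sketch using HuberLoss; the numeric values assume the standard Huber definition rho(s) = s for s <= a^2 and rho(s) = 2*a*sqrt(s) - a^2 otherwise.

    #include "ceres/loss_function.h"

    void LossExample() {
      ceres::HuberLoss loss(1.0);  // a = 1, so the inlier region is s <= 1
      double rho[3];
      loss.Evaluate(4.0, rho);
      // Outlier region (s = 4 > a^2): rho[0] = 2*sqrt(4) - 1 = 3,
      // rho[1] = 1/sqrt(4) = 0.5, rho[2] = -rho[1]/(2*4) = -0.0625.
    }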
diff --git a/include/ceres/manifold.h b/include/ceres/manifold.h
new file mode 100644
index 0000000..9bd6459
--- /dev/null
+++ b/include/ceres/manifold.h
@@ -0,0 +1,411 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#ifndef CERES_PUBLIC_MANIFOLD_H_
+#define CERES_PUBLIC_MANIFOLD_H_
+
+#include <Eigen/Core>
+#include <algorithm>
+#include <array>
+#include <memory>
+#include <utility>
+#include <vector>
+
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
+#include "ceres/types.h"
+#include "glog/logging.h"
+
+namespace ceres {
+
+// In sensor fusion problems, often we have to model quantities that live in
+// spaces known as Manifolds, for example the rotation/orientation of a sensor
+// that is represented by a quaternion.
+//
+// Manifolds are spaces which locally look like Euclidean spaces. More
+// precisely, at each point on the manifold there is a linear space that is
+// tangent to the manifold. It has dimension equal to the intrinsic dimension of
+// the manifold itself, which is less than or equal to the dimension of the
+// ambient space in which the manifold is embedded.
+//
+// For example, the tangent space to a point on a sphere in three dimensions is
+// the two dimensional plane that is tangent to the sphere at that point. There
+// are two reasons tangent spaces are interesting:
+//
+// 1. They are Euclidean spaces so the usual vector space operations apply there,
+// which makes numerical operations easy.
+// 2. Movements in the tangent space translate into movements along the manifold.
+// Movements perpendicular to the tangent space do not translate into
+// movements on the manifold.
+//
+// Returning to our sphere example, moving in the 2 dimensional plane
+// tangent to the sphere and projecting back onto the sphere will move you away
+// from the point you started from but moving along the normal at the same point
+// and then projecting back onto the sphere brings you back to the point.
+//
+// The Manifold interface defines two operations (and their derivatives)
+// involving the tangent space, allowing filtering and optimization to be
+// performed on said manifold:
+//
+// 1. x_plus_delta = Plus(x, delta)
+// 2. delta = Minus(x_plus_delta, x)
+//
+// "Plus" computes the result of moving along delta in the tangent space at x,
+// and then projecting back onto the manifold that x belongs to. In Differential
+// Geometry this is known as a "Retraction". It is a generalization of vector
+// addition in Euclidean spaces.
+//
+// Given two points on the manifold, "Minus" computes the change delta to x in
+// the tangent space at x, that will take it to x_plus_delta.
+//
+// Let us now consider two examples.
+//
+// The Euclidean space R^n is the simplest example of a manifold. It has
+// dimension n (and so does its tangent space) and Plus and Minus are the
+// familiar vector sum and difference operations.
+//
+// Plus(x, delta) = x + delta = y,
+// Minus(y, x) = y - x = delta.
+//
+// A more interesting case is SO(3), the special orthogonal group in three
+// dimensions - the space of 3x3 rotation matrices. SO(3) is a three dimensional
+// manifold embedded in R^9 or R^(3x3). So points on SO(3) are represented using
+// 9 dimensional vectors or 3x3 matrices, and points in its tangent spaces are
+// represented by 3 dimensional vectors.
+//
+// Plus and Minus are defined in terms of the matrix Exp and Log
+// operations as follows:
+//
+// Let Exp(p, q, r) = [cos(theta) + cp^2, -sr + cpq , sq + cpr ]
+// [sr + cpq , cos(theta) + cq^2, -sp + cqr ]
+// [-sq + cpr , sp + cqr , cos(theta) + cr^2]
+//
+// where: theta = sqrt(p^2 + q^2 + r^2)
+// s = sinc(theta)
+// c = (1 - cos(theta))/theta^2
+//
+// and Log(x) = 1/(2 sinc(theta))[x_32 - x_23, x_13 - x_31, x_21 - x_12]
+//
+// where: theta = acos((Trace(x) - 1)/2)
+//
+// Then,
+//
+// Plus(x, delta) = x Exp(delta)
+// Minus(y, x) = Log(x^T y)
+//
+// For Plus and Minus to be mathematically consistent, the following identities
+// must be satisfied at all points x on the manifold:
+//
+// 1. Plus(x, 0) = x.
+// 2. For all y, Plus(x, Minus(y, x)) = y.
+// 3. For all delta, Minus(Plus(x, delta), x) = delta.
+// 4. For all delta_1, delta_2
+//    |Minus(Plus(x, delta_1), Plus(x, delta_2))| <= |delta_1 - delta_2|
+//
+// Briefly:
+// (1) Ensures that the tangent space is "centered" at x, and the zero vector is
+// the identity element.
+// (2) Ensures that any y can be reached from x.
+// (3) Ensures that Plus is an injective (one-to-one) map.
+// (4) Allows us to define a metric on the manifold.
+//
+// Additionally we require that Plus and Minus be sufficiently smooth. In
+// particular they need to be differentiable everywhere on the manifold.
+//
+// For more details, please see
+//
+// "Integrating Generic Sensor Fusion Algorithms with Sound State
+// Representations through Encapsulation of Manifolds"
+// By C. Hertzberg, R. Wagner, U. Frese and L. Schroder
+// https://arxiv.org/pdf/1107.1119.pdf
+class CERES_EXPORT Manifold {
+ public:
+ virtual ~Manifold();
+
+ // Dimension of the ambient space in which the manifold is embedded.
+ virtual int AmbientSize() const = 0;
+
+ // Dimension of the manifold/tangent space.
+ virtual int TangentSize() const = 0;
+
+ // x_plus_delta = Plus(x, delta),
+ //
+ // A generalization of vector addition in Euclidean space, Plus computes the
+ // result of moving along delta in the tangent space at x, and then projecting
+ // back onto the manifold that x belongs to.
+ //
+ // x and x_plus_delta are AmbientSize() vectors.
+ // delta is a TangentSize() vector.
+ //
+ // Return value indicates if the operation was successful or not.
+ virtual bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const = 0;
+
+ // Compute the derivative of Plus(x, delta) w.r.t delta at delta = 0, i.e.
+ //
+ // (D_2 Plus)(x, 0)
+ //
+ // jacobian is a row-major AmbientSize() x TangentSize() matrix.
+ //
+ // Return value indicates whether the operation was successful or not.
+ virtual bool PlusJacobian(const double* x, double* jacobian) const = 0;
+
+ // tangent_matrix = ambient_matrix * (D_2 Plus)(x, 0)
+ //
+ // ambient_matrix is a row-major num_rows x AmbientSize() matrix.
+ // tangent_matrix is a row-major num_rows x TangentSize() matrix.
+ //
+ // Return value indicates whether the operation was successful or not.
+ //
+ // This function is only used by the GradientProblemSolver, where the
+ // dimension of the parameter block can be large and it may be more efficient
+ // to compute this product directly rather than first evaluating the Jacobian
+ // into a matrix and then doing a matrix vector product.
+ //
+ // Because this is not an often used function, we provide a default
+ // implementation for convenience. If performance becomes an issue then the
+ // user should consider implementing a specialization.
+ virtual bool RightMultiplyByPlusJacobian(const double* x,
+ const int num_rows,
+ const double* ambient_matrix,
+ double* tangent_matrix) const;
+
+ // y_minus_x = Minus(y, x)
+ //
+ // Given two points on the manifold, Minus computes the change to x in the
+ // tangent space at x, that will take it to y.
+ //
+ // x and y are AmbientSize() vectors.
+ // y_minus_x is a TangentSize() vector.
+ //
+ // Return value indicates if the operation was successful or not.
+ virtual bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const = 0;
+
+ // Compute the derivative of Minus(y, x) w.r.t y at y = x, i.e
+ //
+ // (D_1 Minus) (x, x)
+ //
+ // Jacobian is a row-major TangentSize() x AmbientSize() matrix.
+ //
+ // Return value indicates whether the operation was successful or not.
+ virtual bool MinusJacobian(const double* x, double* jacobian) const = 0;
+};
+
+// The Euclidean manifold is another name for the ordinary vector space R^size,
+// where the plus and minus operations are the usual vector addition and
+// subtraction:
+// Plus(x, delta) = x + delta
+// Minus(y, x) = y - x.
+//
+// The class works with dynamic and static ambient space dimensions. If the
+// ambient space dimension is known at compile time, use
+//
+// EuclideanManifold<3> manifold;
+//
+// If the ambient space dimension is not known at compile time, the template
+// parameter needs to be set to ceres::DYNAMIC and the actual dimension needs
+// to be provided as a constructor argument:
+//
+// EuclideanManifold<ceres::DYNAMIC> manifold(ambient_dim);
+template <int Size>
+class EuclideanManifold final : public Manifold {
+ public:
+ static_assert(Size == ceres::DYNAMIC || Size >= 0,
+ "The size of the manifold needs to be non-negative.");
+ static_assert(ceres::DYNAMIC == Eigen::Dynamic,
+ "ceres::DYNAMIC needs to be the same as Eigen::Dynamic.");
+
+ EuclideanManifold() : size_{Size} {
+ static_assert(
+ Size != ceres::DYNAMIC,
+ "The size is set to dynamic. Please call the constructor with a size.");
+ }
+
+ explicit EuclideanManifold(int size) : size_(size) {
+ if (Size != ceres::DYNAMIC) {
+ CHECK_EQ(Size, size)
+ << "Specified size by template parameter differs from the supplied "
+ "one.";
+ } else {
+ CHECK_GE(size_, 0)
+ << "The size of the manifold needs to be non-negative.";
+ }
+ }
+
+ int AmbientSize() const override { return size_; }
+ int TangentSize() const override { return size_; }
+
+ bool Plus(const double* x_ptr,
+ const double* delta_ptr,
+ double* x_plus_delta_ptr) const override {
+ Eigen::Map<const AmbientVector> x(x_ptr, size_);
+ Eigen::Map<const AmbientVector> delta(delta_ptr, size_);
+ Eigen::Map<AmbientVector> x_plus_delta(x_plus_delta_ptr, size_);
+ x_plus_delta = x + delta;
+ return true;
+ }
+
+ bool PlusJacobian(const double* x_ptr, double* jacobian_ptr) const override {
+ Eigen::Map<MatrixJacobian> jacobian(jacobian_ptr, size_, size_);
+ jacobian.setIdentity();
+ return true;
+ }
+
+ bool RightMultiplyByPlusJacobian(const double* x,
+ const int num_rows,
+ const double* ambient_matrix,
+ double* tangent_matrix) const override {
+ std::copy_n(ambient_matrix, num_rows * size_, tangent_matrix);
+ return true;
+ }
+
+ bool Minus(const double* y_ptr,
+ const double* x_ptr,
+ double* y_minus_x_ptr) const override {
+ Eigen::Map<const AmbientVector> x(x_ptr, size_);
+ Eigen::Map<const AmbientVector> y(y_ptr, size_);
+ Eigen::Map<AmbientVector> y_minus_x(y_minus_x_ptr, size_);
+ y_minus_x = y - x;
+ return true;
+ }
+
+ bool MinusJacobian(const double* x_ptr, double* jacobian_ptr) const override {
+ Eigen::Map<MatrixJacobian> jacobian(jacobian_ptr, size_, size_);
+ jacobian.setIdentity();
+ return true;
+ }
+
+ private:
+ static constexpr bool IsDynamic = (Size == ceres::DYNAMIC);
+ using AmbientVector = Eigen::Matrix<double, Size, 1>;
+ using MatrixJacobian = Eigen::Matrix<double, Size, Size, Eigen::RowMajor>;
+
+ int size_{};
+};
+
+// Hold a subset of the parameters inside a parameter block constant.
+class CERES_EXPORT SubsetManifold final : public Manifold {
+ public:
+ SubsetManifold(int size, const std::vector<int>& constant_parameters);
+ int AmbientSize() const override;
+ int TangentSize() const override;
+
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const override;
+ bool PlusJacobian(const double* x, double* jacobian) const override;
+ bool RightMultiplyByPlusJacobian(const double* x,
+ const int num_rows,
+ const double* ambient_matrix,
+ double* tangent_matrix) const override;
+ bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const override;
+ bool MinusJacobian(const double* x, double* jacobian) const override;
+
+ private:
+ const int tangent_size_ = 0;
+ std::vector<bool> constancy_mask_;
+};
+
+// Implements the manifold for a Hamilton quaternion as defined in
+// https://en.wikipedia.org/wiki/Quaternion. Quaternions are represented as
+// unit norm 4-vectors, i.e.
+//
+// q = [q0; q1; q2; q3], |q| = 1
+//
+// is the ambient space representation.
+//
+// q0 scalar part.
+// q1 coefficient of i.
+// q2 coefficient of j.
+// q3 coefficient of k.
+//
+// where: i*i = j*j = k*k = -1 and i*j = k, j*k = i, k*i = j.
+//
+// The tangent space is R^3, which relates to the ambient space through the
+// Plus and Minus operations defined as:
+//
+// Plus(x, delta) = [cos(|delta|); sin(|delta|) * delta / |delta|] * x
+// Minus(y, x) = to_delta(y * x^{-1})
+//
+// where "*" is the quaternion product and because q is a unit quaternion
+// (|q|=1), q^-1 = [q0; -q1; -q2; -q3]
+//
+// and to_delta( [q0; u_{3x1}] ) = u / |u| * atan2(|u|, q0)
+class CERES_EXPORT QuaternionManifold final : public Manifold {
+ public:
+ int AmbientSize() const override { return 4; }
+ int TangentSize() const override { return 3; }
+
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const override;
+ bool PlusJacobian(const double* x, double* jacobian) const override;
+ bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const override;
+ bool MinusJacobian(const double* x, double* jacobian) const override;
+};
+
+// Implements the quaternion manifold for Eigen's representation of the
+// Hamilton quaternion. Geometrically it is exactly the same as the
+// QuaternionManifold defined above. However, Eigen uses a different internal
+// memory layout for the elements of the quaternion than what is commonly
+// used. It stores the quaternion in memory as [q1, q2, q3, q0] or
+// [x, y, z, w] where the real (scalar) part is last.
+//
+// Since Ceres operates on parameter blocks which are raw double pointers this
+// difference is important and requires a different manifold.
+class CERES_EXPORT EigenQuaternionManifold final : public Manifold {
+ public:
+ int AmbientSize() const override { return 4; }
+ int TangentSize() const override { return 3; }
+
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const override;
+ bool PlusJacobian(const double* x, double* jacobian) const override;
+ bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const override;
+ bool MinusJacobian(const double* x, double* jacobian) const override;
+};
+
+} // namespace ceres
+
+// clang-format off
+#include "ceres/internal/reenable_warnings.h"
+// clang-format on
+
+#endif // CERES_PUBLIC_MANIFOLD_H_
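A minimal usage sketch for the Manifold API above (not part of the patch): a Plus/Minus round trip on EuclideanManifold<3>, and attaching a QuaternionManifold to a parameter block; it assumes the Problem overloads taking a Manifold* from problem.h.

    #include "ceres/manifold.h"
    #include "ceres/problem.h"

    void ManifoldExample() {
      // Plus/Minus are plain vector addition/subtraction on the Euclidean
      // manifold, so Minus(Plus(x, delta), x) recovers delta (invariant 3 above).
      ceres::EuclideanManifold<3> euclidean;
      double x[3] = {1.0, 2.0, 3.0};
      double delta[3] = {0.5, -1.0, 0.25};
      double y[3];
      double recovered_delta[3];
      euclidean.Plus(x, delta, y);
      euclidean.Minus(y, x, recovered_delta);  // == {0.5, -1.0, 0.25}

      // A Hamilton quaternion block (scalar first) with a 3-dimensional
      // tangent space.
      double q[4] = {1.0, 0.0, 0.0, 0.0};
      ceres::Problem problem;
      problem.AddParameterBlock(q, 4, new ceres::QuaternionManifold());
    }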
diff --git a/include/ceres/manifold_test_utils.h b/include/ceres/manifold_test_utils.h
new file mode 100644
index 0000000..3e61457
--- /dev/null
+++ b/include/ceres/manifold_test_utils.h
@@ -0,0 +1,345 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#include <cmath>
+#include <limits>
+#include <memory>
+
+#include "ceres/dynamic_numeric_diff_cost_function.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/manifold.h"
+#include "ceres/numeric_diff_options.h"
+#include "ceres/types.h"
+#include "gmock/gmock.h"
+#include "gtest/gtest.h"
+
+namespace ceres {
+
+// Matchers and macros to simplify testing of custom Manifold objects using the
+// gtest testing framework.
+//
+// Testing a Manifold has two parts.
+//
+// 1. Checking that Manifold::Plus() and Manifold::Minus() are correctly
+// defined. This requires per manifold tests.
+//
+// 2. The other methods of the manifold have mathematical properties that make
+// them compatible with Plus() and Minus(), as described in [1].
+//
+// To verify these general requirements for a custom Manifold, use the
+// EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD() macro from within a gtest test. Note
+// that additional domain-specific tests may also be prudent, e.g. to verify the
+// behaviour of a Quaternion Manifold about pi.
+//
+// [1] "Integrating Generic Sensor Fusion Algorithms with Sound State
+// Representations through Encapsulation of Manifolds", C. Hertzberg,
+// R. Wagner, U. Frese and L. Schroder, https://arxiv.org/pdf/1107.1119.pdf
+
+// Verifies the general requirements for a custom Manifold are satisfied to
+// within the specified (numerical) tolerance.
+//
+// Example usage for a custom Manifold: ExampleManifold:
+//
+// TEST(ExampleManifold, ManifoldInvariantsHold) {
+// constexpr double kTolerance = 1.0e-9;
+// ExampleManifold manifold;
+// ceres::Vector x = ceres::Vector::Zero(manifold.AmbientSize());
+// ceres::Vector y = ceres::Vector::Zero(manifold.AmbientSize());
+// ceres::Vector delta = ceres::Vector::Zero(manifold.TangentSize());
+// EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+// }
+#define EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, tolerance) \
+ ::ceres::Vector zero_tangent = \
+ ::ceres::Vector::Zero(manifold.TangentSize()); \
+ EXPECT_THAT(manifold, ::ceres::XPlusZeroIsXAt(x, tolerance)); \
+ EXPECT_THAT(manifold, ::ceres::XMinusXIsZeroAt(x, tolerance)); \
+ EXPECT_THAT(manifold, ::ceres::MinusPlusIsIdentityAt(x, delta, tolerance)); \
+ EXPECT_THAT(manifold, \
+ ::ceres::MinusPlusIsIdentityAt(x, zero_tangent, tolerance)); \
+ EXPECT_THAT(manifold, ::ceres::PlusMinusIsIdentityAt(x, x, tolerance)); \
+ EXPECT_THAT(manifold, ::ceres::PlusMinusIsIdentityAt(x, y, tolerance)); \
+ EXPECT_THAT(manifold, ::ceres::HasCorrectPlusJacobianAt(x, tolerance)); \
+ EXPECT_THAT(manifold, ::ceres::HasCorrectMinusJacobianAt(x, tolerance)); \
+ EXPECT_THAT(manifold, ::ceres::MinusPlusJacobianIsIdentityAt(x, tolerance)); \
+ EXPECT_THAT(manifold, \
+ ::ceres::HasCorrectRightMultiplyByPlusJacobianAt(x, tolerance));
+
+// Checks that the invariant Plus(x, 0) == x holds.
+MATCHER_P2(XPlusZeroIsXAt, x, tolerance, "") {
+ const int ambient_size = arg.AmbientSize();
+ const int tangent_size = arg.TangentSize();
+
+ Vector actual = Vector::Zero(ambient_size);
+ Vector zero = Vector::Zero(tangent_size);
+ EXPECT_TRUE(arg.Plus(x.data(), zero.data(), actual.data()));
+ const double n = (actual - Vector{x}).norm();
+ const double d = x.norm();
+ const double diffnorm = (d == 0.0) ? n : (n / d);
+ if (diffnorm > tolerance) {
+ *result_listener << "\nexpected (x): " << x.transpose()
+ << "\nactual: " << actual.transpose()
+ << "\ndiffnorm: " << diffnorm;
+ return false;
+ }
+ return true;
+}
+
+// Checks that the invariant Minus(x, x) == 0 holds.
+MATCHER_P2(XMinusXIsZeroAt, x, tolerance, "") {
+ const int tangent_size = arg.TangentSize();
+ Vector actual = Vector::Zero(tangent_size);
+ EXPECT_TRUE(arg.Minus(x.data(), x.data(), actual.data()));
+ const double diffnorm = actual.norm();
+ if (diffnorm > tolerance) {
+ *result_listener << "\nx: " << x.transpose() //
+ << "\nexpected: 0 0 0"
+ << "\nactual: " << actual.transpose()
+ << "\ndiffnorm: " << diffnorm;
+ return false;
+ }
+ return true;
+}
+
+// Helper struct to curry Plus(x, .) so that it can be numerically
+// differentiated.
+struct PlusFunctor {
+ PlusFunctor(const Manifold& manifold, const double* x)
+ : manifold(manifold), x(x) {}
+ bool operator()(double const* const* parameters, double* x_plus_delta) const {
+ return manifold.Plus(x, parameters[0], x_plus_delta);
+ }
+
+ const Manifold& manifold;
+ const double* x;
+};
+
+// Checks that the output of PlusJacobian matches the one obtained by
+// numerically evaluating D_2 Plus(x,0).
+MATCHER_P2(HasCorrectPlusJacobianAt, x, tolerance, "") {
+ const int ambient_size = arg.AmbientSize();
+ const int tangent_size = arg.TangentSize();
+
+ NumericDiffOptions options;
+ options.ridders_relative_initial_step_size = 1e-4;
+
+ DynamicNumericDiffCostFunction<PlusFunctor, RIDDERS> cost_function(
+ new PlusFunctor(arg, x.data()), TAKE_OWNERSHIP, options);
+ cost_function.AddParameterBlock(tangent_size);
+ cost_function.SetNumResiduals(ambient_size);
+
+ Vector zero = Vector::Zero(tangent_size);
+ double* parameters[1] = {zero.data()};
+
+ Vector x_plus_zero = Vector::Zero(ambient_size);
+ Matrix expected = Matrix::Zero(ambient_size, tangent_size);
+ double* jacobians[1] = {expected.data()};
+
+ EXPECT_TRUE(
+ cost_function.Evaluate(parameters, x_plus_zero.data(), jacobians));
+
+ Matrix actual = Matrix::Random(ambient_size, tangent_size);
+ EXPECT_TRUE(arg.PlusJacobian(x.data(), actual.data()));
+
+ const double n = (actual - expected).norm();
+ const double d = expected.norm();
+ const double diffnorm = (d == 0.0) ? n : n / d;
+ if (diffnorm > tolerance) {
+ *result_listener << "\nx: " << x.transpose() << "\nexpected: \n"
+ << expected << "\nactual:\n"
+ << actual << "\ndiff:\n"
+ << expected - actual << "\ndiffnorm : " << diffnorm;
+ return false;
+ }
+ return true;
+}
+
+// Checks that the invariant Minus(Plus(x, delta), x) == delta holds.
+MATCHER_P3(MinusPlusIsIdentityAt, x, delta, tolerance, "") {
+ const int ambient_size = arg.AmbientSize();
+ const int tangent_size = arg.TangentSize();
+ Vector x_plus_delta = Vector::Zero(ambient_size);
+ EXPECT_TRUE(arg.Plus(x.data(), delta.data(), x_plus_delta.data()));
+ Vector actual = Vector::Zero(tangent_size);
+ EXPECT_TRUE(arg.Minus(x_plus_delta.data(), x.data(), actual.data()));
+
+ const double n = (actual - Vector{delta}).norm();
+ const double d = delta.norm();
+ const double diffnorm = (d == 0.0) ? n : (n / d);
+ if (diffnorm > tolerance) {
+ *result_listener << "\nx: " << x.transpose()
+ << "\nexpected: " << delta.transpose()
+ << "\nactual:" << actual.transpose()
+ << "\ndiff:" << (delta - actual).transpose()
+ << "\ndiffnorm: " << diffnorm;
+ return false;
+ }
+ return true;
+}
+
+// Checks that the invariant Plus(Minus(y, x), x) == y holds.
+MATCHER_P3(PlusMinusIsIdentityAt, x, y, tolerance, "") {
+ const int ambient_size = arg.AmbientSize();
+ const int tangent_size = arg.TangentSize();
+
+ Vector y_minus_x = Vector::Zero(tangent_size);
+ EXPECT_TRUE(arg.Minus(y.data(), x.data(), y_minus_x.data()));
+
+ Vector actual = Vector::Zero(ambient_size);
+ EXPECT_TRUE(arg.Plus(x.data(), y_minus_x.data(), actual.data()));
+
+ const double n = (actual - Vector{y}).norm();
+ const double d = y.norm();
+ const double diffnorm = (d == 0.0) ? n : (n / d);
+ if (diffnorm > tolerance) {
+ *result_listener << "\nx: " << x.transpose()
+ << "\nexpected: " << y.transpose()
+ << "\nactual:" << actual.transpose()
+ << "\ndiff:" << (y - actual).transpose()
+ << "\ndiffnorm: " << diffnorm;
+ return false;
+ }
+ return true;
+}
+
+// Helper struct to curry Minus(., x) so that it can be numerically
+// differentiated.
+struct MinusFunctor {
+ MinusFunctor(const Manifold& manifold, const double* x)
+ : manifold(manifold), x(x) {}
+ bool operator()(double const* const* parameters, double* y_minus_x) const {
+ return manifold.Minus(parameters[0], x, y_minus_x);
+ }
+
+ const Manifold& manifold;
+ const double* x;
+};
+
+// Checks that the output of MinusJacobian matches the one obtained by
+// numerically evaluating D_1 Minus(x,x).
+MATCHER_P2(HasCorrectMinusJacobianAt, x, tolerance, "") {
+ const int ambient_size = arg.AmbientSize();
+ const int tangent_size = arg.TangentSize();
+
+ Vector y = x;
+ Vector y_minus_x = Vector::Zero(tangent_size);
+
+ NumericDiffOptions options;
+ options.ridders_relative_initial_step_size = 1e-4;
+ DynamicNumericDiffCostFunction<MinusFunctor, RIDDERS> cost_function(
+ new MinusFunctor(arg, x.data()), TAKE_OWNERSHIP, options);
+ cost_function.AddParameterBlock(ambient_size);
+ cost_function.SetNumResiduals(tangent_size);
+
+ double* parameters[1] = {y.data()};
+
+ Matrix expected = Matrix::Zero(tangent_size, ambient_size);
+ double* jacobians[1] = {expected.data()};
+
+ EXPECT_TRUE(cost_function.Evaluate(parameters, y_minus_x.data(), jacobians));
+
+ Matrix actual = Matrix::Random(tangent_size, ambient_size);
+ EXPECT_TRUE(arg.MinusJacobian(x.data(), actual.data()));
+
+ const double n = (actual - expected).norm();
+ const double d = expected.norm();
+ const double diffnorm = (d == 0.0) ? n : (n / d);
+ if (diffnorm > tolerance) {
+ *result_listener << "\nx: " << x.transpose() << "\nexpected: \n"
+ << expected << "\nactual:\n"
+ << actual << "\ndiff:\n"
+ << expected - actual << "\ndiffnorm: " << diffnorm;
+ return false;
+ }
+ return true;
+}
+
+// Checks that D_delta Minus(Plus(x, delta), x) at delta = 0 is an identity
+// matrix.
+MATCHER_P2(MinusPlusJacobianIsIdentityAt, x, tolerance, "") {
+ const int ambient_size = arg.AmbientSize();
+ const int tangent_size = arg.TangentSize();
+
+ Matrix plus_jacobian(ambient_size, tangent_size);
+ EXPECT_TRUE(arg.PlusJacobian(x.data(), plus_jacobian.data()));
+ Matrix minus_jacobian(tangent_size, ambient_size);
+ EXPECT_TRUE(arg.MinusJacobian(x.data(), minus_jacobian.data()));
+
+ const Matrix actual = minus_jacobian * plus_jacobian;
+ const Matrix expected = Matrix::Identity(tangent_size, tangent_size);
+
+ const double n = (actual - expected).norm();
+ const double d = expected.norm();
+ const double diffnorm = n / d;
+ if (diffnorm > tolerance) {
+ *result_listener << "\nx: " << x.transpose() << "\nexpected: \n"
+ << expected << "\nactual:\n"
+ << actual << "\ndiff:\n"
+ << expected - actual << "\ndiffnorm: " << diffnorm;
+
+ return false;
+ }
+ return true;
+}
+
+// Verify that the output of RightMultiplyByPlusJacobian is ambient_matrix *
+// plus_jacobian.
+MATCHER_P2(HasCorrectRightMultiplyByPlusJacobianAt, x, tolerance, "") {
+ const int ambient_size = arg.AmbientSize();
+ const int tangent_size = arg.TangentSize();
+
+ constexpr int kMinNumRows = 0;
+ constexpr int kMaxNumRows = 3;
+ for (int num_rows = kMinNumRows; num_rows <= kMaxNumRows; ++num_rows) {
+ Matrix plus_jacobian = Matrix::Random(ambient_size, tangent_size);
+ EXPECT_TRUE(arg.PlusJacobian(x.data(), plus_jacobian.data()));
+
+ Matrix ambient_matrix = Matrix::Random(num_rows, ambient_size);
+ Matrix expected = ambient_matrix * plus_jacobian;
+
+ Matrix actual = Matrix::Random(num_rows, tangent_size);
+ EXPECT_TRUE(arg.RightMultiplyByPlusJacobian(
+ x.data(), num_rows, ambient_matrix.data(), actual.data()));
+ const double n = (actual - expected).norm();
+ const double d = expected.norm();
+ const double diffnorm = (d == 0.0) ? n : (n / d);
+ if (diffnorm > tolerance) {
+ *result_listener << "\nx: " << x.transpose() << "\nambient_matrix : \n"
+ << ambient_matrix << "\nplus_jacobian : \n"
+ << plus_jacobian << "\nexpected: \n"
+ << expected << "\nactual:\n"
+ << actual << "\ndiff:\n"
+ << expected - actual << "\ndiffnorm : " << diffnorm;
+ return false;
+ }
+ }
+ return true;
+}
+
+} // namespace ceres
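An illustrative test sketch using the matchers above on a concrete manifold; it exercises the invariants at random (rather than zero) points, which tends to catch more Plus/Minus bugs. The test name is arbitrary, and for manifolds with constrained ambient representations (e.g. unit quaternions) x and y would instead have to be valid points on the manifold.

    #include "ceres/manifold.h"
    #include "ceres/manifold_test_utils.h"
    #include "gtest/gtest.h"

    TEST(EuclideanManifold3, InvariantsHoldAtRandomPoints) {
      constexpr double kTolerance = 1.0e-9;
      ceres::EuclideanManifold<3> manifold;
      // Random points are fine here because every point of R^3 lies on the
      // Euclidean manifold.
      ceres::Vector x = ceres::Vector::Random(manifold.AmbientSize());
      ceres::Vector y = ceres::Vector::Random(manifold.AmbientSize());
      ceres::Vector delta = ceres::Vector::Random(manifold.TangentSize());
      EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
    }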
diff --git a/include/ceres/normal_prior.h b/include/ceres/normal_prior.h
index 14ab379..5a26e01 100644
--- a/include/ceres/normal_prior.h
+++ b/include/ceres/normal_prior.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -57,11 +57,11 @@
// which would be the case if the covariance matrix S is rank
// deficient.
-class CERES_EXPORT NormalPrior : public CostFunction {
+class CERES_EXPORT NormalPrior final : public CostFunction {
public:
// Check that the number of rows in the vector b are the same as the
// number of columns in the matrix A, crash otherwise.
- NormalPrior(const Matrix& A, const Vector& b);
+ NormalPrior(const Matrix& A, Vector b);
bool Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const override;
diff --git a/include/ceres/numeric_diff_cost_function.h b/include/ceres/numeric_diff_cost_function.h
index cf7971c..f2a377b 100644
--- a/include/ceres/numeric_diff_cost_function.h
+++ b/include/ceres/numeric_diff_cost_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -149,9 +149,8 @@
// The numerically differentiated version of a cost function for a cost function
// can be constructed as follows:
//
-// CostFunction* cost_function
-// = new NumericDiffCostFunction<MyCostFunction, CENTRAL, 1, 4, 8>(
-// new MyCostFunction(...), TAKE_OWNERSHIP);
+// auto* cost_function
+// = new NumericDiffCostFunction<MyCostFunction, CENTRAL, 1, 4, 8>();
//
// where MyCostFunction has 1 residual and 2 parameter blocks with sizes 4 and 8
// respectively. Look at the tests for a more detailed example.
@@ -163,6 +162,7 @@
#include <array>
#include <memory>
+#include <type_traits>
#include "Eigen/Dense"
#include "ceres/cost_function.h"
@@ -171,31 +171,55 @@
#include "ceres/numeric_diff_options.h"
#include "ceres/sized_cost_function.h"
#include "ceres/types.h"
-#include "glog/logging.h"
namespace ceres {
template <typename CostFunctor,
- NumericDiffMethodType method = CENTRAL,
+ NumericDiffMethodType kMethod = CENTRAL,
int kNumResiduals = 0, // Number of residuals, or ceres::DYNAMIC
int... Ns> // Parameters dimensions for each block.
-class NumericDiffCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
+class NumericDiffCostFunction final
+ : public SizedCostFunction<kNumResiduals, Ns...> {
public:
- NumericDiffCostFunction(
+ explicit NumericDiffCostFunction(
CostFunctor* functor,
Ownership ownership = TAKE_OWNERSHIP,
int num_residuals = kNumResiduals,
const NumericDiffOptions& options = NumericDiffOptions())
- : functor_(functor), ownership_(ownership), options_(options) {
- if (kNumResiduals == DYNAMIC) {
- SizedCostFunction<kNumResiduals, Ns...>::set_num_residuals(num_residuals);
- }
- }
+ : NumericDiffCostFunction{std::unique_ptr<CostFunctor>{functor},
+ ownership,
+ num_residuals,
+ options} {}
- explicit NumericDiffCostFunction(NumericDiffCostFunction&& other)
- : functor_(std::move(other.functor_)), ownership_(other.ownership_) {}
+ explicit NumericDiffCostFunction(
+ std::unique_ptr<CostFunctor> functor,
+ int num_residuals = kNumResiduals,
+ const NumericDiffOptions& options = NumericDiffOptions())
+ : NumericDiffCostFunction{
+ std::move(functor), TAKE_OWNERSHIP, num_residuals, options} {}
- virtual ~NumericDiffCostFunction() {
+ // Constructs the CostFunctor on the heap and takes the ownership.
+ // Invocable only if the number of residuals is known at compile-time.
+ template <class... Args,
+ bool kIsDynamic = kNumResiduals == DYNAMIC,
+ std::enable_if_t<!kIsDynamic &&
+ std::is_constructible_v<CostFunctor, Args&&...>>* =
+ nullptr>
+ explicit NumericDiffCostFunction(Args&&... args)
+ // NOTE We explicitly use direct initialization using parentheses instead
+ // of uniform initialization using braces to avoid narrowing conversion
+ // warnings.
+ : NumericDiffCostFunction{
+ std::make_unique<CostFunctor>(std::forward<Args>(args)...),
+ TAKE_OWNERSHIP} {}
+
+ NumericDiffCostFunction(NumericDiffCostFunction&& other) noexcept = default;
+ NumericDiffCostFunction& operator=(NumericDiffCostFunction&& other) noexcept =
+ default;
+ NumericDiffCostFunction(const NumericDiffCostFunction&) = delete;
+ NumericDiffCostFunction& operator=(const NumericDiffCostFunction&) = delete;
+
+ ~NumericDiffCostFunction() override {
if (ownership_ != TAKE_OWNERSHIP) {
functor_.release();
}
@@ -219,7 +243,7 @@
return false;
}
- if (jacobians == NULL) {
+ if (jacobians == nullptr) {
return true;
}
@@ -235,7 +259,7 @@
}
internal::EvaluateJacobianForParameterBlocks<ParameterDims>::
- template Apply<method, kNumResiduals>(
+ template Apply<kMethod, kNumResiduals>(
functor_.get(),
residuals,
options_,
@@ -246,7 +270,19 @@
return true;
}
+ const CostFunctor& functor() const { return *functor_; }
+
private:
+ explicit NumericDiffCostFunction(std::unique_ptr<CostFunctor> functor,
+ Ownership ownership,
+ [[maybe_unused]] int num_residuals,
+ const NumericDiffOptions& options)
+ : functor_(std::move(functor)), ownership_(ownership), options_(options) {
+ if constexpr (kNumResiduals == DYNAMIC) {
+ SizedCostFunction<kNumResiduals, Ns...>::set_num_residuals(num_residuals);
+ }
+ }
+
std::unique_ptr<CostFunctor> functor_;
Ownership ownership_;
NumericDiffOptions options_;
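A small sketch of the two construction styles supported after this change: passing a functor pointer as before, or constructing the functor in place through the new forwarding constructor (available when the number of residuals is fixed at compile time). MyFunctor is a stand-in, not part of the patch.

    #include "ceres/numeric_diff_cost_function.h"

    struct MyFunctor {
      explicit MyFunctor(double a) : a_(a) {}
      // One residual, one parameter block of size 2.
      bool operator()(const double* x, double* residual) const {
        residual[0] = x[0] * x[1] - a_;
        return true;
      }
      double a_;
    };

    void CostFunctionExample() {
      using Cost = ceres::NumericDiffCostFunction<MyFunctor, ceres::CENTRAL, 1, 2>;
      auto* cost1 = new Cost(new MyFunctor(1.0));  // takes ownership of the functor
      auto* cost2 = new Cost(1.0);                 // constructs MyFunctor(1.0) in place
      delete cost1;
      delete cost2;
    }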
diff --git a/include/ceres/numeric_diff_first_order_function.h b/include/ceres/numeric_diff_first_order_function.h
new file mode 100644
index 0000000..525f197
--- /dev/null
+++ b/include/ceres/numeric_diff_first_order_function.h
@@ -0,0 +1,271 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#ifndef CERES_PUBLIC_NUMERIC_DIFF_FIRST_ORDER_FUNCTION_H_
+#define CERES_PUBLIC_NUMERIC_DIFF_FIRST_ORDER_FUNCTION_H_
+
+#include <algorithm>
+#include <memory>
+#include <type_traits>
+#include <utility>
+
+#include "ceres/first_order_function.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/fixed_array.h"
+#include "ceres/internal/numeric_diff.h"
+#include "ceres/internal/parameter_dims.h"
+#include "ceres/internal/variadic_evaluate.h"
+#include "ceres/numeric_diff_options.h"
+#include "ceres/types.h"
+#include "glog/logging.h"
+
+namespace ceres {
+
+// Creates FirstOrderFunctions as needed by the GradientProblem
+// framework, with gradients computed via numeric differentiation. For
+// more information on numeric differentiation, see the wikipedia
+// article at https://en.wikipedia.org/wiki/Numerical_differentiation
+//
+// To get a numerically differentiated cost function, you must define
+// a class with an operator() (a functor) that computes the cost.
+//
+// The function must write the computed value in the last argument
+// (the only non-const one) and return true to indicate success.
+//
+// For example, consider a scalar error e = x'y - a, where both x and y are
+// two-dimensional column vector parameters, the prime sign indicates
+// transposition, and a is a constant.
+//
+// To write a numerically-differentiable cost function for the above model,
+// first define the object
+//
+// class QuadraticCostFunctor {
+// public:
+// explicit QuadraticCostFunctor(double a) : a_(a) {}
+// bool operator()(const double* const xy, double* cost) const {
+// constexpr int kInputVectorLength = 2;
+// const double* const x = xy;
+// const double* const y = xy + kInputVectorLength;
+// *cost = x[0] * y[0] + x[1] * y[1] - a_;
+// return true;
+// }
+//
+// private:
+// double a_;
+// };
+//
+//
+// Note that in the declaration of operator() the input parameters xy
+// come first, and are passed as const pointers to array of
+// doubles. The output cost is the last parameter.
+//
+// Then given this class definition, the numerically differentiated
+// first order function with central differences used for computing the
+// derivative can be constructed as follows.
+//
+// FirstOrderFunction* function
+//       = new NumericDiffFirstOrderFunction<QuadraticCostFunctor, CENTRAL, 4>(
+//             new QuadraticCostFunctor(1.0));
+//
+//
+// In the instantiation above, the template parameters following
+// "QuadraticCostFunctor", "CENTRAL, 4", describe the finite
+// differencing scheme as "central differencing" and the functor as
+// computing its cost from a 4 dimensional input.
+//
+// If the size of the parameter vector is not known at compile time, then an
+// alternate construction syntax can be used:
+//
+// FirstOrderFunction* function
+//       = new NumericDiffFirstOrderFunction<QuadraticCostFunctor, CENTRAL>(
+// new QuadraticCostFunctor(1.0), 4);
+//
+// Note that instead of passing 4 as a template argument, it is now passed as
+// the second argument to the constructor.
+template <typename FirstOrderFunctor,
+ NumericDiffMethodType kMethod,
+ int kNumParameters = DYNAMIC>
+class NumericDiffFirstOrderFunction final : public FirstOrderFunction {
+ public:
+ template <class... Args,
+ bool kIsDynamic = kNumParameters == DYNAMIC,
+ std::enable_if_t<!kIsDynamic &&
+ std::is_constructible_v<FirstOrderFunctor,
+ Args&&...>>* = nullptr>
+ explicit NumericDiffFirstOrderFunction(Args&&... args)
+      : NumericDiffFirstOrderFunction{std::make_unique<FirstOrderFunctor>(
+ std::forward<Args>(args)...)} {}
+
+ NumericDiffFirstOrderFunction(const NumericDiffFirstOrderFunction&) = delete;
+ NumericDiffFirstOrderFunction& operator=(
+ const NumericDiffFirstOrderFunction&) = delete;
+ NumericDiffFirstOrderFunction(
+ NumericDiffFirstOrderFunction&& other) noexcept = default;
+ NumericDiffFirstOrderFunction& operator=(
+ NumericDiffFirstOrderFunction&& other) noexcept = default;
+
+ // Constructor for the case where the parameter size is known at compile time.
+ explicit NumericDiffFirstOrderFunction(
+ FirstOrderFunctor* functor,
+ Ownership ownership = TAKE_OWNERSHIP,
+ const NumericDiffOptions& options = NumericDiffOptions())
+ : NumericDiffFirstOrderFunction{
+ std::unique_ptr<FirstOrderFunctor>{functor},
+ kNumParameters,
+ ownership,
+ options,
+ FIXED_INIT} {}
+
+ // Constructor for the case where the parameter size is known at compile time.
+ explicit NumericDiffFirstOrderFunction(
+ std::unique_ptr<FirstOrderFunctor> functor,
+ const NumericDiffOptions& options = NumericDiffOptions())
+ : NumericDiffFirstOrderFunction{
+            std::move(functor), kNumParameters, TAKE_OWNERSHIP, options,
+            FIXED_INIT} {}
+
+ // Constructor for the case where the parameter size is specified at run time.
+ explicit NumericDiffFirstOrderFunction(
+ FirstOrderFunctor* functor,
+ int num_parameters,
+ Ownership ownership = TAKE_OWNERSHIP,
+ const NumericDiffOptions& options = NumericDiffOptions())
+ : NumericDiffFirstOrderFunction{
+ std::unique_ptr<FirstOrderFunctor>{functor},
+ num_parameters,
+ ownership,
+ options,
+ DYNAMIC_INIT} {}
+
+ // Constructor for the case where the parameter size is specified at run time.
+ explicit NumericDiffFirstOrderFunction(
+ std::unique_ptr<FirstOrderFunctor> functor,
+ int num_parameters,
+ Ownership ownership = TAKE_OWNERSHIP,
+ const NumericDiffOptions& options = NumericDiffOptions())
+ : NumericDiffFirstOrderFunction{std::move(functor),
+ num_parameters,
+ ownership,
+ options,
+ DYNAMIC_INIT} {}
+
+ ~NumericDiffFirstOrderFunction() override {
+ if (ownership_ != TAKE_OWNERSHIP) {
+ functor_.release();
+ }
+ }
+
+ bool Evaluate(const double* const parameters,
+ double* cost,
+ double* gradient) const override {
+    // Get the function value (cost) at the point to evaluate.
+ if (!(*functor_)(parameters, cost)) {
+ return false;
+ }
+
+ if (gradient == nullptr) {
+ return true;
+ }
+
+ // Create a copy of the parameters which will get mutated.
+ internal::FixedArray<double, 32> parameters_copy(num_parameters_);
+ std::copy_n(parameters, num_parameters_, parameters_copy.data());
+ double* parameters_ptr = parameters_copy.data();
+ constexpr int kNumResiduals = 1;
+ if constexpr (kNumParameters == DYNAMIC) {
+ internal::FirstOrderFunctorAdapter<FirstOrderFunctor> fofa(*functor_);
+ return internal::NumericDiff<
+ internal::FirstOrderFunctorAdapter<FirstOrderFunctor>,
+ kMethod,
+ kNumResiduals,
+ internal::DynamicParameterDims,
+ 0,
+ DYNAMIC>::EvaluateJacobianForParameterBlock(&fofa,
+ cost,
+ options_,
+ kNumResiduals,
+ 0,
+ num_parameters_,
+                                                       &parameters_ptr,
+ gradient);
+ } else {
+ return internal::EvaluateJacobianForParameterBlocks<
+ internal::StaticParameterDims<kNumParameters>>::
+ template Apply<kMethod, 1>(functor_.get(),
+ cost,
+ options_,
+ kNumResiduals,
+                                     &parameters_ptr,
+ &gradient);
+ }
+ }
+
+ int NumParameters() const override { return num_parameters_; }
+
+ const FirstOrderFunctor& functor() const { return *functor_; }
+
+ private:
+ // Tags used to differentiate between dynamic and fixed size constructor
+ // delegate invocations.
+ static constexpr std::integral_constant<int, DYNAMIC> DYNAMIC_INIT{};
+ static constexpr std::integral_constant<int, kNumParameters> FIXED_INIT{};
+
+ template <class InitTag>
+ explicit NumericDiffFirstOrderFunction(
+ std::unique_ptr<FirstOrderFunctor> functor,
+ int num_parameters,
+ Ownership ownership,
+ const NumericDiffOptions& options,
+ InitTag /*unused*/)
+ : functor_(std::move(functor)),
+ num_parameters_(num_parameters),
+ ownership_(ownership),
+ options_(options) {
+ static_assert(
+        kNumParameters == InitTag::value,
+ "Template parameter must be DYNAMIC when using this constructor. If "
+ "you want to provide the number of parameters statically use the other "
+ "constructor.");
+ if constexpr (InitTag::value == DYNAMIC_INIT) {
+ CHECK_GT(num_parameters, 0);
+ }
+ }
+
+ std::unique_ptr<FirstOrderFunctor> functor_;
+ int num_parameters_;
+ Ownership ownership_;
+ NumericDiffOptions options_;
+};
+
+} // namespace ceres
+
+#endif // CERES_PUBLIC_NUMERIC_DIFF_FIRST_ORDER_FUNCTION_H_
diff --git a/include/ceres/numeric_diff_options.h b/include/ceres/numeric_diff_options.h
index 64919ed..eefb7ad 100644
--- a/include/ceres/numeric_diff_options.h
+++ b/include/ceres/numeric_diff_options.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,7 +32,8 @@
#ifndef CERES_PUBLIC_NUMERIC_DIFF_OPTIONS_H_
#define CERES_PUBLIC_NUMERIC_DIFF_OPTIONS_H_
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
namespace ceres {
@@ -70,4 +71,6 @@
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_PUBLIC_NUMERIC_DIFF_OPTIONS_H_
diff --git a/include/ceres/ordered_groups.h b/include/ceres/ordered_groups.h
index 954663c..d15d22d 100644
--- a/include/ceres/ordered_groups.h
+++ b/include/ceres/ordered_groups.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,7 +36,7 @@
#include <unordered_map>
#include <vector>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "glog/logging.h"
namespace ceres {
@@ -190,7 +190,7 @@
};
// Typedef for the most commonly used version of OrderedGroups.
-typedef OrderedGroups<double*> ParameterBlockOrdering;
+using ParameterBlockOrdering = OrderedGroups<double*>;
} // namespace ceres
diff --git a/include/ceres/problem.h b/include/ceres/problem.h
index add12ea..4c6fd1b 100644
--- a/include/ceres/problem.h
+++ b/include/ceres/problem.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -43,6 +43,7 @@
#include "ceres/context.h"
#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/internal/port.h"
#include "ceres/types.h"
#include "glog/logging.h"
@@ -52,7 +53,7 @@
class CostFunction;
class EvaluationCallback;
class LossFunction;
-class LocalParameterization;
+class Manifold;
class Solver;
struct CRSMatrix;
@@ -65,7 +66,7 @@
// A ResidualBlockId is an opaque handle clients can use to remove residual
// blocks from a Problem after adding them.
-typedef internal::ResidualBlock* ResidualBlockId;
+using ResidualBlockId = internal::ResidualBlock*;
// A class to represent non-linear least squares problems. Such
// problems have a cost function that is a sum of error terms (known
@@ -78,31 +79,28 @@
//
// where
//
-// r_ij is residual number i, component j; the residual is a
-// function of some subset of the parameters x1...xk. For
-// example, in a structure from motion problem a residual
-// might be the difference between a measured point in an
-// image and the reprojected position for the matching
-// camera, point pair. The residual would have two
-// components, error in x and error in y.
+// r_ij is residual number i, component j; the residual is a function of some
+// subset of the parameters x1...xk. For example, in a structure from
+// motion problem a residual might be the difference between a measured
+// point in an image and the reprojected position for the matching
+// camera, point pair. The residual would have two components, error in x
+// and error in y.
//
-// loss(y) is the loss function; for example, squared error or
-// Huber L1 loss. If loss(y) = y, then the cost function is
-// non-robustified least squares.
+// loss(y) is the loss function; for example, squared error or Huber L1
+// loss. If loss(y) = y, then the cost function is non-robustified
+// least squares.
//
-// This class is specifically designed to address the important subset
-// of "sparse" least squares problems, where each component of the
-// residual depends only on a small number number of parameters, even
-// though the total number of residuals and parameters may be very
-// large. This property affords tremendous gains in scale, allowing
-// efficient solving of large problems that are otherwise
-// inaccessible.
+// This class is specifically designed to address the important subset of
+// "sparse" least squares problems, where each component of the residual depends
+// only on a small number of parameters, even though the total number of
+// residuals and parameters may be very large. This property affords tremendous
+// gains in scale, allowing efficient solving of large problems that are
+// otherwise inaccessible.
//
// The canonical example of a sparse least squares problem is
-// "structure-from-motion" (SFM), where the parameters are points and
-// cameras, and residuals are reprojection errors. Typically a single
-// residual will depend only on 9 parameters (3 for the point, 6 for
-// the camera).
+// "structure-from-motion" (SFM), where the parameters are points and cameras,
+// and residuals are reprojection errors. Typically a single residual will
+// depend only on 9 parameters (3 for the point, 6 for the camera).
//
// To create a least squares problem, use the AddResidualBlock() and
// AddParameterBlock() methods, documented below. Here is an example least
@@ -122,38 +120,37 @@
class CERES_EXPORT Problem {
public:
struct CERES_EXPORT Options {
- // These flags control whether the Problem object owns the cost
- // functions, loss functions, and parameterizations passed into
- // the Problem. If set to TAKE_OWNERSHIP, then the problem object
- // will delete the corresponding cost or loss functions on
- // destruction. The destructor is careful to delete the pointers
- // only once, since sharing cost/loss/parameterizations is
- // allowed.
+ // These flags control whether the Problem object owns the CostFunctions,
+ // LossFunctions, and Manifolds passed into the Problem.
+ //
+ // If set to TAKE_OWNERSHIP, then the problem object will delete the
+ // corresponding object on destruction. The destructor is careful to delete
+ // the pointers only once, since sharing objects is allowed.
Ownership cost_function_ownership = TAKE_OWNERSHIP;
Ownership loss_function_ownership = TAKE_OWNERSHIP;
- Ownership local_parameterization_ownership = TAKE_OWNERSHIP;
+ Ownership manifold_ownership = TAKE_OWNERSHIP;
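+
+    // For example, to keep ownership of the cost functions on the caller's
+    // side (a sketch; the caller must then keep the cost functions alive for
+    // the lifetime of the Problem):
+    //
+    //   Problem::Options options;
+    //   options.cost_function_ownership = DO_NOT_TAKE_OWNERSHIP;
+    //   Problem problem(options);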
// If true, trades memory for faster RemoveResidualBlock() and
// RemoveParameterBlock() operations.
//
// By default, RemoveParameterBlock() and RemoveResidualBlock() take time
- // proportional to the size of the entire problem. If you only ever remove
+ // proportional to the size of the entire problem. If you only ever remove
// parameters or residuals from the problem occasionally, this might be
- // acceptable. However, if you have memory to spare, enable this option to
+ // acceptable. However, if you have memory to spare, enable this option to
// make RemoveParameterBlock() take time proportional to the number of
// residual blocks that depend on it, and RemoveResidualBlock() take (on
// average) constant time.
//
- // The increase in memory usage is twofold: an additional hash set per
+ // The increase in memory usage is two-fold: an additional hash set per
// parameter block containing all the residuals that depend on the parameter
// block; and a hash set in the problem containing all residuals.
bool enable_fast_removal = false;
// By default, Ceres performs a variety of safety checks when constructing
- // the problem. There is a small but measurable performance penalty to
- // these checks, typically around 5% of construction time. If you are sure
- // your problem construction is correct, and 5% of the problem construction
- // time is truly an overhead you want to avoid, then you can set
+ // the problem. There is a small but measurable performance penalty to these
+ // checks, typically around 5% of construction time. If you are sure your
+ // problem construction is correct, and 5% of the problem construction time
+ // is truly an overhead you want to avoid, then you can set
// disable_all_safety_checks to true.
//
// WARNING: Do not set this to true, unless you are absolutely sure of what
@@ -167,26 +164,23 @@
// Ceres does NOT take ownership of the pointer.
Context* context = nullptr;
- // Using this callback interface, Ceres can notify you when it is
- // about to evaluate the residuals or jacobians. With the
- // callback, you can share computation between residual blocks by
- // doing the shared computation in
+ // Using this callback interface, Ceres can notify you when it is about to
+ // evaluate the residuals or jacobians. With the callback, you can share
+ // computation between residual blocks by doing the shared computation in
// EvaluationCallback::PrepareForEvaluation() before Ceres calls
- // CostFunction::Evaluate(). It also enables caching results
- // between a pure residual evaluation and a residual & jacobian
- // evaluation.
+ // CostFunction::Evaluate(). It also enables caching results between a pure
+ // residual evaluation and a residual & jacobian evaluation.
//
// Problem DOES NOT take ownership of the callback.
//
- // NOTE: Evaluation callbacks are incompatible with inner
- // iterations. So calling Solve with
- // Solver::Options::use_inner_iterations = true on a Problem with
- // a non-null evaluation callback is an error.
+ // NOTE: Evaluation callbacks are incompatible with inner iterations. So
+ // calling Solve with Solver::Options::use_inner_iterations = true on a
+ // Problem with a non-null evaluation callback is an error.
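+    //
+    // A minimal wiring sketch (MyCallback is a hypothetical subclass of
+    // EvaluationCallback owned by the caller):
+    //
+    //   MyCallback my_callback;
+    //   Problem::Options options;
+    //   options.evaluation_callback = &my_callback;
+    //   Problem problem(options);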
EvaluationCallback* evaluation_callback = nullptr;
};
- // The default constructor is equivalent to the
- // invocation Problem(Problem::Options()).
+ // The default constructor is equivalent to the invocation
+ // Problem(Problem::Options()).
Problem();
explicit Problem(const Options& options);
Problem(Problem&&);
@@ -197,31 +191,29 @@
~Problem();
- // Add a residual block to the overall cost function. The cost
- // function carries with its information about the sizes of the
- // parameter blocks it expects. The function checks that these match
- // the sizes of the parameter blocks listed in parameter_blocks. The
- // program aborts if a mismatch is detected. loss_function can be
- // nullptr, in which case the cost of the term is just the squared norm
- // of the residuals.
+ // Add a residual block to the overall cost function. The cost function
+  // carries with it information about the sizes of the parameter blocks it
+ // expects. The function checks that these match the sizes of the parameter
+ // blocks listed in parameter_blocks. The program aborts if a mismatch is
+ // detected. loss_function can be nullptr, in which case the cost of the term
+ // is just the squared norm of the residuals.
//
- // The user has the option of explicitly adding the parameter blocks
- // using AddParameterBlock. This causes additional correctness
- // checking; however, AddResidualBlock implicitly adds the parameter
- // blocks if they are not present, so calling AddParameterBlock
- // explicitly is not required.
+ // The user has the option of explicitly adding the parameter blocks using
+ // AddParameterBlock. This causes additional correctness checking; however,
+ // AddResidualBlock implicitly adds the parameter blocks if they are not
+ // present, so calling AddParameterBlock explicitly is not required.
//
- // The Problem object by default takes ownership of the
- // cost_function and loss_function pointers. These objects remain
- // live for the life of the Problem object. If the user wishes to
- // keep control over the destruction of these objects, then they can
+ // The Problem object by default takes ownership of the cost_function and
+ // loss_function pointers (See Problem::Options to override this behaviour).
+ // These objects remain live for the life of the Problem object. If the user
+ // wishes to keep control over the destruction of these objects, then they can
// do this by setting the corresponding enums in the Options struct.
//
- // Note: Even though the Problem takes ownership of cost_function
- // and loss_function, it does not preclude the user from re-using
- // them in another residual block. The destructor takes care to call
- // delete on each cost_function or loss_function pointer only once,
- // regardless of how many residual blocks refer to them.
+ // Note: Even though the Problem takes ownership of cost_function and
+ // loss_function, it does not preclude the user from re-using them in another
+ // residual block. The destructor takes care to call delete on each
+ // cost_function or loss_function pointer only once, regardless of how many
+ // residual blocks refer to them.
//
// Example usage:
//
@@ -234,8 +226,8 @@
// problem.AddResidualBlock(new MyUnaryCostFunction(...), nullptr, x1);
// problem.AddResidualBlock(new MyBinaryCostFunction(...), nullptr, x2, x1);
//
- // Add a residual block by listing the parameter block pointers
- // directly instead of wapping them in a container.
+ // Add a residual block by listing the parameter block pointers directly
+  // instead of wrapping them in a container.
template <typename... Ts>
ResidualBlockId AddResidualBlock(CostFunction* cost_function,
LossFunction* loss_function,
@@ -261,29 +253,32 @@
double* const* const parameter_blocks,
int num_parameter_blocks);
- // Add a parameter block with appropriate size to the problem.
- // Repeated calls with the same arguments are ignored. Repeated
- // calls with the same double pointer but a different size results
- // in undefined behaviour.
+ // Add a parameter block with appropriate size to the problem. Repeated calls
+ // with the same arguments are ignored. Repeated calls with the same double
+ // pointer but a different size will result in a crash.
void AddParameterBlock(double* values, int size);
- // Add a parameter block with appropriate size and parameterization
- // to the problem. Repeated calls with the same arguments are
- // ignored. Repeated calls with the same double pointer but a
- // different size results in undefined behaviour.
- void AddParameterBlock(double* values,
- int size,
- LocalParameterization* local_parameterization);
-
- // Remove a parameter block from the problem. The parameterization of the
- // parameter block, if it exists, will persist until the deletion of the
- // problem (similar to cost/loss functions in residual block removal). Any
- // residual blocks that depend on the parameter are also removed, as
- // described above in RemoveResidualBlock().
+ // Add a parameter block with appropriate size and Manifold to the
+ // problem. It is okay for manifold to be nullptr.
//
- // If Problem::Options::enable_fast_removal is true, then the
- // removal is fast (almost constant time). Otherwise, removing a parameter
- // block will incur a scan of the entire Problem object.
+ // Repeated calls with the same arguments are ignored. Repeated calls
+ // with the same double pointer but a different size results in a crash
+ // (unless Solver::Options::disable_all_safety_checks is set to true).
+ //
+  // Repeated calls with the same double pointer and size but a different
+  // Manifold are equivalent to calling SetManifold(manifold), i.e., any
+  // previously associated Manifold object will be replaced with the manifold.
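+  //
+  // A usage sketch (values is a hypothetical double[4] array holding a unit
+  // quaternion):
+  //
+  //   problem.AddParameterBlock(values, 4, new QuaternionManifold);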
+ void AddParameterBlock(double* values, int size, Manifold* manifold);
+
+ // Remove a parameter block from the problem. The Manifold of the parameter
+ // block, if it exists, will persist until the deletion of the problem
+ // (similar to cost/loss functions in residual block removal). Any residual
+ // blocks that depend on the parameter are also removed, as described above
+ // in RemoveResidualBlock().
+ //
+ // If Problem::Options::enable_fast_removal is true, then the removal is fast
+ // (almost constant time). Otherwise, removing a parameter block will incur a
+ // scan of the entire Problem object.
//
// WARNING: Removing a residual or parameter block will destroy the implicit
// ordering, rendering the jacobian or residuals returned from the solver
@@ -308,35 +303,41 @@
// Allow the indicated parameter block to vary during optimization.
void SetParameterBlockVariable(double* values);
- // Returns true if a parameter block is set constant, and false
- // otherwise. A parameter block may be set constant in two ways:
- // either by calling SetParameterBlockConstant or by associating a
- // LocalParameterization with a zero dimensional tangent space with
- // it.
+ // Returns true if a parameter block is set constant, and false otherwise. A
+ // parameter block may be set constant in two ways: either by calling
+ // SetParameterBlockConstant or by associating a Manifold with a zero
+ // dimensional tangent space with it.
bool IsParameterBlockConstant(const double* values) const;
- // Set the local parameterization for one of the parameter blocks.
- // The local_parameterization is owned by the Problem by default. It
- // is acceptable to set the same parameterization for multiple
- // parameters; the destructor is careful to delete local
- // parameterizations only once. Calling SetParameterization with
- // nullptr will clear any previously set parameterization.
- void SetParameterization(double* values,
- LocalParameterization* local_parameterization);
+ // Set the Manifold for the parameter block. Calling SetManifold with nullptr
+ // will clear any previously set Manifold for the parameter block.
+ //
+  // Repeated calls will cause any previously associated Manifold object to be
+  // replaced with the manifold.
+ //
+ // The manifold is owned by the Problem by default (See Problem::Options to
+ // override this behaviour).
+ //
+ // It is acceptable to set the same Manifold for multiple parameter blocks.
+ void SetManifold(double* values, Manifold* manifold);
- // Get the local parameterization object associated with this
- // parameter block. If there is no parameterization object
- // associated then nullptr is returned.
- const LocalParameterization* GetParameterization(const double* values) const;
+ // Get the Manifold object associated with this parameter block.
+ //
+ // If there is no Manifold object associated then nullptr is returned.
+ const Manifold* GetManifold(const double* values) const;
+
+ // Returns true if a Manifold is associated with this parameter block, false
+ // otherwise.
+ bool HasManifold(const double* values) const;
// Set the lower/upper bound for the parameter at position "index".
void SetParameterLowerBound(double* values, int index, double lower_bound);
void SetParameterUpperBound(double* values, int index, double upper_bound);
- // Get the lower/upper bound for the parameter at position
- // "index". If the parameter is not bounded by the user, then its
- // lower bound is -std::numeric_limits<double>::max() and upper
- // bound is std::numeric_limits<double>::max().
+ // Get the lower/upper bound for the parameter at position "index". If the
+ // parameter is not bounded by the user, then its lower bound is
+ // -std::numeric_limits<double>::max() and upper bound is
+ // std::numeric_limits<double>::max().
double GetParameterLowerBound(const double* values, int index) const;
double GetParameterUpperBound(const double* values, int index) const;
@@ -344,37 +345,37 @@
// parameter_blocks().size() and parameter_block_sizes().size().
int NumParameterBlocks() const;
- // The size of the parameter vector obtained by summing over the
- // sizes of all the parameter blocks.
+ // The size of the parameter vector obtained by summing over the sizes of all
+ // the parameter blocks.
int NumParameters() const;
// Number of residual blocks in the problem. Always equals
// residual_blocks().size().
int NumResidualBlocks() const;
- // The size of the residual vector obtained by summing over the
- // sizes of all of the residual blocks.
+ // The size of the residual vector obtained by summing over the sizes of all
+ // of the residual blocks.
int NumResiduals() const;
// The size of the parameter block.
int ParameterBlockSize(const double* values) const;
- // The size of local parameterization for the parameter block. If
- // there is no local parameterization associated with this parameter
- // block, then ParameterBlockLocalSize = ParameterBlockSize.
- int ParameterBlockLocalSize(const double* values) const;
+ // The dimension of the tangent space of the Manifold for the parameter block.
+ // If there is no Manifold associated with this parameter block, then
+ // ParameterBlockTangentSize = ParameterBlockSize.
+ int ParameterBlockTangentSize(const double* values) const;
// Is the given parameter block present in this problem or not?
bool HasParameterBlock(const double* values) const;
- // Fills the passed parameter_blocks vector with pointers to the
- // parameter blocks currently in the problem. After this call,
- // parameter_block.size() == NumParameterBlocks.
+ // Fills the passed parameter_blocks vector with pointers to the parameter
+  // blocks currently in the problem. After this call, parameter_blocks.size() ==
+ // NumParameterBlocks.
void GetParameterBlocks(std::vector<double*>* parameter_blocks) const;
- // Fills the passed residual_blocks vector with pointers to the
- // residual blocks currently in the problem. After this call,
- // residual_blocks.size() == NumResidualBlocks.
+ // Fills the passed residual_blocks vector with pointers to the residual
+ // blocks currently in the problem. After this call, residual_blocks.size() ==
+ // NumResidualBlocks.
void GetResidualBlocks(std::vector<ResidualBlockId>* residual_blocks) const;
// Get all the parameter blocks that depend on the given residual block.
@@ -393,10 +394,10 @@
// Get all the residual blocks that depend on the given parameter block.
//
- // If Problem::Options::enable_fast_removal is true, then
- // getting the residual blocks is fast and depends only on the number of
- // residual blocks. Otherwise, getting the residual blocks for a parameter
- // block will incur a scan of the entire Problem object.
+ // If Problem::Options::enable_fast_removal is true, then getting the residual
+ // blocks is fast and depends only on the number of residual
+ // blocks. Otherwise, getting the residual blocks for a parameter block will
+ // incur a scan of the entire Problem object.
void GetResidualBlocksForParameterBlock(
const double* values,
std::vector<ResidualBlockId>* residual_blocks) const;
@@ -404,49 +405,45 @@
// Options struct to control Problem::Evaluate.
struct EvaluateOptions {
// The set of parameter blocks for which evaluation should be
- // performed. This vector determines the order that parameter
- // blocks occur in the gradient vector and in the columns of the
- // jacobian matrix. If parameter_blocks is empty, then it is
- // assumed to be equal to vector containing ALL the parameter
- // blocks. Generally speaking the parameter blocks will occur in
- // the order in which they were added to the problem. But, this
- // may change if the user removes any parameter blocks from the
- // problem.
+ // performed. This vector determines the order that parameter blocks occur
+ // in the gradient vector and in the columns of the jacobian matrix. If
+    // parameter_blocks is empty, then it is assumed to be equal to the vector
+ // containing ALL the parameter blocks. Generally speaking the parameter
+ // blocks will occur in the order in which they were added to the
+ // problem. But, this may change if the user removes any parameter blocks
+ // from the problem.
//
- // NOTE: This vector should contain the same pointers as the ones
- // used to add parameter blocks to the Problem. These parameter
- // block should NOT point to new memory locations. Bad things will
- // happen otherwise.
+ // NOTE: This vector should contain the same pointers as the ones used to
+    // add parameter blocks to the Problem. These parameter blocks should NOT
+ // point to new memory locations. Bad things will happen otherwise.
std::vector<double*> parameter_blocks;
- // The set of residual blocks to evaluate. This vector determines
- // the order in which the residuals occur, and how the rows of the
- // jacobian are ordered. If residual_blocks is empty, then it is
- // assumed to be equal to the vector containing ALL the residual
- // blocks. Generally speaking the residual blocks will occur in
- // the order in which they were added to the problem. But, this
- // may change if the user removes any residual blocks from the
- // problem.
+ // The set of residual blocks to evaluate. This vector determines the order
+ // in which the residuals occur, and how the rows of the jacobian are
+ // ordered. If residual_blocks is empty, then it is assumed to be equal to
+ // the vector containing ALL the residual blocks. Generally speaking the
+ // residual blocks will occur in the order in which they were added to the
+ // problem. But, this may change if the user removes any residual blocks
+ // from the problem.
std::vector<ResidualBlockId> residual_blocks;
// Even though the residual blocks in the problem may contain loss
- // functions, setting apply_loss_function to false will turn off
- // the application of the loss function to the output of the cost
- // function. This is of use for example if the user wishes to
- // analyse the solution quality by studying the distribution of
- // residuals before and after the solve.
+ // functions, setting apply_loss_function to false will turn off the
+ // application of the loss function to the output of the cost function. This
+ // is of use for example if the user wishes to analyse the solution quality
+ // by studying the distribution of residuals before and after the solve.
bool apply_loss_function = true;
int num_threads = 1;
};
- // Evaluate Problem. Any of the output pointers can be nullptr. Which
- // residual blocks and parameter blocks are used is controlled by
- // the EvaluateOptions struct above.
+ // Evaluate Problem. Any of the output pointers can be nullptr. Which residual
+ // blocks and parameter blocks are used is controlled by the EvaluateOptions
+ // struct above.
//
- // Note 1: The evaluation will use the values stored in the memory
- // locations pointed to by the parameter block pointers used at the
- // time of the construction of the problem. i.e.,
+ // Note 1: The evaluation will use the values stored in the memory locations
+ // pointed to by the parameter block pointers used at the time of the
+ // construction of the problem. i.e.,
//
// Problem problem;
// double x = 1;
@@ -456,8 +453,8 @@
// problem.Evaluate(Problem::EvaluateOptions(), &cost,
// nullptr, nullptr, nullptr);
//
- // The cost is evaluated at x = 1. If you wish to evaluate the
- // problem at x = 2, then
+ // The cost is evaluated at x = 1. If you wish to evaluate the problem at x =
+ // 2, then
//
// x = 2;
// problem.Evaluate(Problem::EvaluateOptions(), &cost,
@@ -465,80 +462,74 @@
//
// is the way to do so.
//
- // Note 2: If no local parameterizations are used, then the size of
- // the gradient vector (and the number of columns in the jacobian)
- // is the sum of the sizes of all the parameter blocks. If a
- // parameter block has a local parameterization, then it contributes
- // "LocalSize" entries to the gradient vector (and the number of
- // columns in the jacobian).
+ // Note 2: If no Manifolds are used, then the size of the gradient vector (and
+ // the number of columns in the jacobian) is the sum of the sizes of all the
+ // parameter blocks. If a parameter block has a Manifold, then it contributes
+ // "TangentSize" entries to the gradient vector (and the number of columns in
+ // the jacobian).
//
- // Note 3: This function cannot be called while the problem is being
- // solved, for example it cannot be called from an IterationCallback
- // at the end of an iteration during a solve.
+ // Note 3: This function cannot be called while the problem is being solved,
+ // for example it cannot be called from an IterationCallback at the end of an
+ // iteration during a solve.
//
- // Note 4: If an EvaluationCallback is associated with the problem,
- // then its PrepareForEvaluation method will be called every time
- // this method is called with new_point = true.
+ // Note 4: If an EvaluationCallback is associated with the problem, then its
+ // PrepareForEvaluation method will be called every time this method is called
+ // with new_point = true.
bool Evaluate(const EvaluateOptions& options,
double* cost,
std::vector<double>* residuals,
std::vector<double>* gradient,
CRSMatrix* jacobian);
- // Evaluates the residual block, storing the scalar cost in *cost,
- // the residual components in *residuals, and the jacobians between
- // the parameters and residuals in jacobians[i], in row-major order.
+ // Evaluates the residual block, storing the scalar cost in *cost, the
+ // residual components in *residuals, and the jacobians between the parameters
+ // and residuals in jacobians[i], in row-major order.
//
// If residuals is nullptr, the residuals are not computed.
//
- // If jacobians is nullptr, no Jacobians are computed. If
- // jacobians[i] is nullptr, then the Jacobian for that parameter
- // block is not computed.
+ // If jacobians is nullptr, no Jacobians are computed. If jacobians[i] is
+ // nullptr, then the Jacobian for that parameter block is not computed.
//
- // It is not okay to request the Jacobian w.r.t a parameter block
- // that is constant.
+ // It is not okay to request the Jacobian w.r.t a parameter block that is
+ // constant.
//
- // The return value indicates the success or failure. Even if the
- // function returns false, the caller should expect the output
- // memory locations to have been modified.
+ // The return value indicates the success or failure. Even if the function
+ // returns false, the caller should expect the output memory locations to have
+ // been modified.
//
- // The returned cost and jacobians have had robustification and
- // local parameterizations applied already; for example, the
- // jacobian for a 4-dimensional quaternion parameter using the
- // "QuaternionParameterization" is num_residuals by 3 instead of
- // num_residuals by 4.
+  // The returned cost and jacobians have had robustification and the Manifold
+  // applied already; for example, the jacobian for a 4-dimensional quaternion
+  // parameter using the "QuaternionManifold" is num_residuals by 3 instead of
+  // num_residuals by 4.
//
- // apply_loss_function as the name implies allows the user to switch
- // the application of the loss function on and off.
+ // apply_loss_function as the name implies allows the user to switch the
+ // application of the loss function on and off.
//
// If an EvaluationCallback is associated with the problem, then its
- // PrepareForEvaluation method will be called every time this method
- // is called with new_point = true. This conservatively assumes that
- // the user may have changed the parameter values since the previous
- // call to evaluate / solve. For improved efficiency, and only if
- // you know that the parameter values have not changed between
- // calls, see EvaluateResidualBlockAssumingParametersUnchanged().
+ // PrepareForEvaluation method will be called every time this method is called
+ // with new_point = true. This conservatively assumes that the user may have
+ // changed the parameter values since the previous call to evaluate / solve.
+ // For improved efficiency, and only if you know that the parameter values
+ // have not changed between calls, see
+ // EvaluateResidualBlockAssumingParametersUnchanged().
bool EvaluateResidualBlock(ResidualBlockId residual_block_id,
bool apply_loss_function,
double* cost,
double* residuals,
double** jacobians) const;
- // Same as EvaluateResidualBlock except that if an
- // EvaluationCallback is associated with the problem, then its
- // PrepareForEvaluation method will be called every time this method
- // is called with new_point = false.
+ // Same as EvaluateResidualBlock except that if an EvaluationCallback is
+ // associated with the problem, then its PrepareForEvaluation method will be
+ // called every time this method is called with new_point = false.
//
- // This means, if an EvaluationCallback is associated with the
- // problem then it is the user's responsibility to call
- // PrepareForEvaluation before calling this method if necessary,
- // i.e. iff the parameter values have been changed since the last
- // call to evaluate / solve.'
+ // This means, if an EvaluationCallback is associated with the problem then it
+ // is the user's responsibility to call PrepareForEvaluation before calling
+ // this method if necessary, i.e. iff the parameter values have been changed
+  // since the last call to evaluate / solve.
//
- // This is because, as the name implies, we assume that the
- // parameter blocks did not change since the last time
- // PrepareForEvaluation was called (via Solve, Evaluate or
- // EvaluateResidualBlock).
+ // This is because, as the name implies, we assume that the parameter blocks
+ // did not change since the last time PrepareForEvaluation was called (via
+ // Solve, Evaluate or EvaluateResidualBlock).
bool EvaluateResidualBlockAssumingParametersUnchanged(
ResidualBlockId residual_block_id,
bool apply_loss_function,
@@ -546,9 +537,13 @@
double* residuals,
double** jacobians) const;
+ // Returns reference to the options with which the Problem was constructed.
+ const Options& options() const;
+
+  // Returns a pointer to the Problem implementation.
+ internal::ProblemImpl* mutable_impl();
+
private:
- friend class Solver;
- friend class Covariance;
std::unique_ptr<internal::ProblemImpl> impl_;
};
diff --git a/include/ceres/product_manifold.h b/include/ceres/product_manifold.h
new file mode 100644
index 0000000..ed2d1f4
--- /dev/null
+++ b/include/ceres/product_manifold.h
@@ -0,0 +1,319 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+// sergiu.deitsch@gmail.com (Sergiu Deitsch)
+//
+
+#ifndef CERES_PUBLIC_PRODUCT_MANIFOLD_H_
+#define CERES_PUBLIC_PRODUCT_MANIFOLD_H_
+
+#include <algorithm>
+#include <array>
+#include <cassert>
+#include <cstddef>
+#include <numeric>
+#include <tuple>
+#include <type_traits>
+#include <utility>
+
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/fixed_array.h"
+#include "ceres/internal/port.h"
+#include "ceres/manifold.h"
+
+namespace ceres {
+
+// Construct a manifold by taking the Cartesian product of a number of other
+// manifolds. This is useful, when a parameter block is the Cartesian product
+// of two or more manifolds. For example the parameters of a camera consist of
+// a rotation and a translation, i.e., SO(3) x R^3.
+//
+// Example usage:
+//
+// ProductManifold<QuaternionManifold, EuclideanManifold<3>> se3;
+//
+// is the manifold for a rigid transformation, where the rotation is
+// represented using a quaternion.
+//
+// Manifolds can be copied and moved to ProductManifold:
+//
+// SubsetManifold manifold1(5, {2});
+// SubsetManifold manifold2(3, {0, 1});
+// ProductManifold<SubsetManifold, SubsetManifold> manifold(manifold1,
+// manifold2);
+//
+// In advanced use cases, manifolds can be dynamically allocated and passed as
+// (smart) pointers:
+//
+// ProductManifold<std::unique_ptr<QuaternionManifold>, EuclideanManifold<3>>
+// se3{std::make_unique<QuaternionManifold>(), EuclideanManifold<3>{}};
+//
+// In C++17, the template parameters can be left out as they are automatically
+// deduced, making the initialization much simpler:
+//
+// ProductManifold se3{QuaternionManifold{}, EuclideanManifold<3>{}};
+//
+// The manifold implementations must be default constructible, copyable, or
+// moveable to be usable in a ProductManifold.
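+//
+// A ProductManifold is typically attached to a parameter block of a Problem;
+// as a sketch (camera is a hypothetical double[7] block storing a unit
+// quaternion followed by a translation):
+//
+//   Problem problem;
+//   problem.AddParameterBlock(
+//       camera, 7,
+//       new ProductManifold<QuaternionManifold, EuclideanManifold<3>>{});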
+template <typename Manifold0, typename Manifold1, typename... ManifoldN>
+class ProductManifold final : public Manifold {
+ public:
+ // ProductManifold constructor perfect forwards arguments to store manifolds.
+ //
+ // Either use default construction or if you need to copy or move-construct a
+ // manifold instance, you need to pass an instance as an argument for all
+ // types given as class template parameters.
+ template <typename... Args,
+ std::enable_if_t<std::is_constructible<
+ std::tuple<Manifold0, Manifold1, ManifoldN...>,
+ Args...>::value>* = nullptr>
+ explicit ProductManifold(Args&&... manifolds)
+ : ProductManifold{std::make_index_sequence<kNumManifolds>{},
+ std::forward<Args>(manifolds)...} {}
+
+ int AmbientSize() const override { return ambient_size_; }
+ int TangentSize() const override { return tangent_size_; }
+
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const override {
+ return PlusImpl(
+ x, delta, x_plus_delta, std::make_index_sequence<kNumManifolds>{});
+ }
+
+ bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const override {
+ return MinusImpl(
+ y, x, y_minus_x, std::make_index_sequence<kNumManifolds>{});
+ }
+
+ bool PlusJacobian(const double* x, double* jacobian_ptr) const override {
+ MatrixRef jacobian(jacobian_ptr, AmbientSize(), TangentSize());
+ jacobian.setZero();
+ internal::FixedArray<double> buffer(buffer_size_);
+
+ return PlusJacobianImpl(
+ x, jacobian, buffer, std::make_index_sequence<kNumManifolds>{});
+ }
+
+ bool MinusJacobian(const double* x, double* jacobian_ptr) const override {
+ MatrixRef jacobian(jacobian_ptr, TangentSize(), AmbientSize());
+ jacobian.setZero();
+ internal::FixedArray<double> buffer(buffer_size_);
+
+ return MinusJacobianImpl(
+ x, jacobian, buffer, std::make_index_sequence<kNumManifolds>{});
+ }
+
+ private:
+ static constexpr std::size_t kNumManifolds = 2 + sizeof...(ManifoldN);
+
+ template <std::size_t... Indices, typename... Args>
+ explicit ProductManifold(std::index_sequence<Indices...>, Args&&... manifolds)
+ : manifolds_{std::forward<Args>(manifolds)...},
+ buffer_size_{(std::max)(
+ {(Dereference(std::get<Indices>(manifolds_)).TangentSize() *
+ Dereference(std::get<Indices>(manifolds_)).AmbientSize())...})},
+ ambient_sizes_{
+ Dereference(std::get<Indices>(manifolds_)).AmbientSize()...},
+ tangent_sizes_{
+ Dereference(std::get<Indices>(manifolds_)).TangentSize()...},
+ ambient_offsets_{ExclusiveScan(ambient_sizes_)},
+ tangent_offsets_{ExclusiveScan(tangent_sizes_)},
+ ambient_size_{
+ std::accumulate(ambient_sizes_.begin(), ambient_sizes_.end(), 0)},
+ tangent_size_{
+ std::accumulate(tangent_sizes_.begin(), tangent_sizes_.end(), 0)} {}
+
+ template <std::size_t Index0, std::size_t... Indices>
+ bool PlusImpl(const double* x,
+ const double* delta,
+ double* x_plus_delta,
+ std::index_sequence<Index0, Indices...>) const {
+ if (!Dereference(std::get<Index0>(manifolds_))
+ .Plus(x + ambient_offsets_[Index0],
+ delta + tangent_offsets_[Index0],
+ x_plus_delta + ambient_offsets_[Index0])) {
+ return false;
+ }
+
+ return PlusImpl(x, delta, x_plus_delta, std::index_sequence<Indices...>{});
+ }
+
+ static constexpr bool PlusImpl(const double* /*x*/,
+ const double* /*delta*/,
+ double* /*x_plus_delta*/,
+ std::index_sequence<>) noexcept {
+ return true;
+ }
+
+ template <std::size_t Index0, std::size_t... Indices>
+ bool MinusImpl(const double* y,
+ const double* x,
+ double* y_minus_x,
+ std::index_sequence<Index0, Indices...>) const {
+ if (!Dereference(std::get<Index0>(manifolds_))
+ .Minus(y + ambient_offsets_[Index0],
+ x + ambient_offsets_[Index0],
+ y_minus_x + tangent_offsets_[Index0])) {
+ return false;
+ }
+
+ return MinusImpl(y, x, y_minus_x, std::index_sequence<Indices...>{});
+ }
+
+ static constexpr bool MinusImpl(const double* /*y*/,
+ const double* /*x*/,
+ double* /*y_minus_x*/,
+ std::index_sequence<>) noexcept {
+ return true;
+ }
+
+ template <std::size_t Index0, std::size_t... Indices>
+ bool PlusJacobianImpl(const double* x,
+ MatrixRef& jacobian,
+ internal::FixedArray<double>& buffer,
+ std::index_sequence<Index0, Indices...>) const {
+ if (!Dereference(std::get<Index0>(manifolds_))
+ .PlusJacobian(x + ambient_offsets_[Index0], buffer.data())) {
+ return false;
+ }
+
+ jacobian.block(ambient_offsets_[Index0],
+ tangent_offsets_[Index0],
+ ambient_sizes_[Index0],
+ tangent_sizes_[Index0]) =
+ MatrixRef(
+ buffer.data(), ambient_sizes_[Index0], tangent_sizes_[Index0]);
+
+ return PlusJacobianImpl(
+ x, jacobian, buffer, std::index_sequence<Indices...>{});
+ }
+
+ static constexpr bool PlusJacobianImpl(
+ const double* /*x*/,
+ MatrixRef& /*jacobian*/,
+ internal::FixedArray<double>& /*buffer*/,
+ std::index_sequence<>) noexcept {
+ return true;
+ }
+
+ template <std::size_t Index0, std::size_t... Indices>
+ bool MinusJacobianImpl(const double* x,
+ MatrixRef& jacobian,
+ internal::FixedArray<double>& buffer,
+ std::index_sequence<Index0, Indices...>) const {
+ if (!Dereference(std::get<Index0>(manifolds_))
+ .MinusJacobian(x + ambient_offsets_[Index0], buffer.data())) {
+ return false;
+ }
+
+ jacobian.block(tangent_offsets_[Index0],
+ ambient_offsets_[Index0],
+ tangent_sizes_[Index0],
+ ambient_sizes_[Index0]) =
+ MatrixRef(
+ buffer.data(), tangent_sizes_[Index0], ambient_sizes_[Index0]);
+
+ return MinusJacobianImpl(
+ x, jacobian, buffer, std::index_sequence<Indices...>{});
+ }
+
+ static constexpr bool MinusJacobianImpl(
+ const double* /*x*/,
+ MatrixRef& /*jacobian*/,
+ internal::FixedArray<double>& /*buffer*/,
+ std::index_sequence<>) noexcept {
+ return true;
+ }
+
+ template <typename T, std::size_t N>
+ static std::array<T, N> ExclusiveScan(const std::array<T, N>& values) {
+ std::array<T, N> result;
+ // TODO Replace with std::exclusive_scan once all platforms have full C++17
+ // STL support.
+ T init = 0;
+ for (std::size_t i = 0; i != N; ++i) {
+ result[i] = init;
+ init += values[i];
+ }
+ return result;
+ }
+
+ template <typename T, typename E = void>
+ struct IsDereferenceable : std::false_type {};
+
+ template <typename T>
+ struct IsDereferenceable<T, std::void_t<decltype(*std::declval<T>())>>
+ : std::true_type {};
+
+ template <typename T,
+ std::enable_if_t<!IsDereferenceable<T>::value>* = nullptr>
+ static constexpr decltype(auto) Dereference(T& value) {
+ return value;
+ }
+
+ // Support dereferenceable types such as std::unique_ptr, std::shared_ptr, raw
+ // pointers etc.
+ template <typename T,
+ std::enable_if_t<IsDereferenceable<T>::value>* = nullptr>
+ static constexpr decltype(auto) Dereference(T& value) {
+ return *value;
+ }
+
+ template <typename T>
+ static constexpr decltype(auto) Dereference(T* p) {
+ assert(p != nullptr);
+ return *p;
+ }
+
+ std::tuple<Manifold0, Manifold1, ManifoldN...> manifolds_;
+ int buffer_size_;
+ std::array<int, kNumManifolds> ambient_sizes_;
+ std::array<int, kNumManifolds> tangent_sizes_;
+ std::array<int, kNumManifolds> ambient_offsets_;
+ std::array<int, kNumManifolds> tangent_offsets_;
+ int ambient_size_;
+ int tangent_size_;
+};
+
+// C++17 deduction guide that allows the user to avoid explicitly specifying
+// the template parameters of ProductManifold. The class can instead be
+// instantiated as follows:
+//
+// ProductManifold manifold{QuaternionManifold{}, EuclideanManifold<3>{}};
+//
+template <typename Manifold0, typename Manifold1, typename... Manifolds>
+ProductManifold(Manifold0&&, Manifold1&&, Manifolds&&...)
+ -> ProductManifold<Manifold0, Manifold1, Manifolds...>;
+
+} // namespace ceres
+
+#endif // CERES_PUBLIC_PRODUCT_MANIFOLD_H_
diff --git a/include/ceres/rotation.h b/include/ceres/rotation.h
index 0c82a41..0cccfa7 100644
--- a/include/ceres/rotation.h
+++ b/include/ceres/rotation.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -47,8 +47,9 @@
#include <algorithm>
#include <cmath>
-#include <limits>
+#include "ceres/constants.h"
+#include "ceres/internal/euler_angles.h"
#include "glog/logging.h"
namespace ceres {
@@ -60,7 +61,7 @@
//
// the expression M(i, j) is equivalent to
//
-// arrary[i * row_stride + j * col_stride]
+// array[i * row_stride + j * col_stride]
//
// Conversion functions to and from rotation matrices accept
// MatrixAdapters to permit using row-major and column-major layouts,
@@ -136,6 +137,71 @@
void EulerAnglesToRotationMatrix(
const T* euler, const MatrixAdapter<T, row_stride, col_stride>& R);
+// Convert a generic Euler Angle sequence (in radians) to a 3x3 rotation matrix.
+//
+// Euler Angles define a sequence of 3 rotations about a sequence of axes,
+// typically taken to be the X, Y, or Z axes. The last axis may be the same as
+// the first axis (e.g. ZYZ) per Euler's original definition of his angles
+// (proper Euler angles) or not (e.g. ZYX / yaw-pitch-roll), per common usage in
+// the nautical and aerospace fields (Tait-Bryan angles). The three rotations
+// may be in a global frame of reference (Extrinsic) or in a body fixed frame of
+// reference (Intrinsic) that moves with the rotating object.
+//
+// Internally, Euler Axis sequences are classified by Ken Shoemake's scheme from
+// "Euler angle conversion", Graphics Gems IV, where a choice of axis for the
+// first rotation and 3 binary choices:
+// 1. Parity of the axis permutation. The axis sequence has Even parity if the
+// second axis of rotation is 'greater-than' the first axis of rotation
+// according to the order X<Y<Z<X, otherwise it has Odd parity.
+// 2. Proper Euler Angles vs. Tait-Bryan Angles
+// 3. Extrinsic Rotations vs. Intrinsic Rotations
+// compactly represent all 24 possible Euler Angle Conventions.
+//
+// One template parameter: EulerSystem must be explicitly given. This parameter
+// is a tag named by 'Extrinsic' or 'Intrinsic' followed by three characters in
+// the set '[XYZ]', specifying the axis sequence, e.g. ceres::ExtrinsicYZY
+// (robotic arms), ceres::IntrinsicZYX (for aerospace), etc.
+//
+// The order of elements in the input array 'euler' follows the axis sequence.
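+//
+// For example, a sketch of an intrinsic yaw-pitch-roll (ZYX) conversion, with
+// angles in radians and R stored row-major (yaw, pitch and roll are
+// hypothetical inputs):
+//
+//   double euler[3] = {yaw, pitch, roll};
+//   double R[9];
+//   EulerAnglesToRotation<IntrinsicZYX>(euler, R);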
+template <typename EulerSystem, typename T>
+inline void EulerAnglesToRotation(const T* euler, T* R);
+
+template <typename EulerSystem, typename T, int row_stride, int col_stride>
+void EulerAnglesToRotation(const T* euler,
+ const MatrixAdapter<T, row_stride, col_stride>& R);
+
+// Convert a 3x3 rotation matrix to a generic Euler Angle sequence (in radians)
+//
+// Euler Angles define a sequence of 3 rotations about a sequence of axes,
+// typically taken to be the X, Y, or Z axes. The last axis may be the same as
+// the first axis (e.g. ZYZ) per Euler's original definition of his angles
+// (proper Euler angles) or not (e.g. ZYX / yaw-pitch-roll), per common usage in
+// the nautical and aerospace fields (Tait-Bryan angles). The three rotations
+// may be in a global frame of reference (Extrinsic) or in a body fixed frame of
+// reference (Intrinsic) that moves with the rotating object.
+//
+// Internally, Euler Axis sequences are classified by Ken Shoemake's scheme from
+// "Euler angle conversion", Graphics Gems IV, where a choice of axis for the
+// first rotation and 3 binary choices:
+// 1. Parity of the axis permutation. The axis sequence has Even parity if the
+//    second axis of rotation is 'greater-than' the first axis of rotation
+//    according to the order X<Y<Z<X, otherwise it has Odd parity.
+// 2. Proper Euler Angles vs. Tait-Bryan Angles
+// 3. Extrinsic Rotations vs. Intrinsic Rotations
+// compactly represent all 24 possible Euler Angle Conventions.
+//
+// One template parameter: EulerSystem must be explicitly given. This parameter
+// is a tag named by 'Extrinsic' or 'Intrinsic' followed by three characters in
+// the set '[XYZ]', specifying the axis sequence, e.g. ceres::ExtrinsicYZY
+// (robotic arms), ceres::IntrinsicZYX (for aerospace), etc.
+//
+// The order of elements in the output array 'euler' follows the axis sequence.
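+//
+// For example, a sketch recovering intrinsic yaw-pitch-roll (ZYX) angles from
+// a row-major 3x3 matrix R:
+//
+//   double ypr[3];
+//   RotationMatrixToEulerAngles<IntrinsicZYX>(R, ypr);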
+template <typename EulerSystem, typename T>
+inline void RotationMatrixToEulerAngles(const T* R, T* euler);
+
+template <typename EulerSystem, typename T, int row_stride, int col_stride>
+void RotationMatrixToEulerAngles(
+ const MatrixAdapter<const T, row_stride, col_stride>& R, T* euler);
+
// Convert a 4-vector to a 3x3 scaled rotation matrix.
//
// The choice of rotation is such that the quaternion [1 0 0 0] goes to an
@@ -247,14 +313,15 @@
template <typename T>
inline void AngleAxisToQuaternion(const T* angle_axis, T* quaternion) {
+ using std::fpclassify;
+ using std::hypot;
const T& a0 = angle_axis[0];
const T& a1 = angle_axis[1];
const T& a2 = angle_axis[2];
- const T theta_squared = a0 * a0 + a1 * a1 + a2 * a2;
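+  // hypot computes the norm of (a0, a1, a2) without undue overflow or
+  // underflow in the intermediate squares.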
+ const T theta = hypot(a0, a1, a2);
// For points not at the origin, the full conversion is numerically stable.
- if (theta_squared > T(0.0)) {
- const T theta = sqrt(theta_squared);
+ if (fpclassify(theta) != FP_ZERO) {
const T half_theta = theta * T(0.5);
const T k = sin(half_theta) / theta;
quaternion[0] = cos(half_theta);
@@ -276,15 +343,16 @@
template <typename T>
inline void QuaternionToAngleAxis(const T* quaternion, T* angle_axis) {
+ using std::fpclassify;
+ using std::hypot;
const T& q1 = quaternion[1];
const T& q2 = quaternion[2];
const T& q3 = quaternion[3];
- const T sin_squared_theta = q1 * q1 + q2 * q2 + q3 * q3;
+ const T sin_theta = hypot(q1, q2, q3);
// For quaternions representing non-zero rotation, the conversion
// is numerically stable.
- if (sin_squared_theta > T(0.0)) {
- const T sin_theta = sqrt(sin_squared_theta);
+ if (fpclassify(sin_theta) != FP_ZERO) {
const T& cos_theta = quaternion[0];
// If cos_theta is negative, theta is greater than pi/2, which
@@ -385,13 +453,14 @@
template <typename T, int row_stride, int col_stride>
void AngleAxisToRotationMatrix(
const T* angle_axis, const MatrixAdapter<T, row_stride, col_stride>& R) {
+ using std::fpclassify;
+ using std::hypot;
static const T kOne = T(1.0);
- const T theta2 = DotProduct(angle_axis, angle_axis);
- if (theta2 > T(std::numeric_limits<double>::epsilon())) {
+ const T theta = hypot(angle_axis[0], angle_axis[1], angle_axis[2]);
+ if (fpclassify(theta) != FP_ZERO) {
// We want to be careful to only evaluate the square root if the
// norm of the angle_axis vector is greater than zero. Otherwise
// we get a division by zero.
- const T theta = sqrt(theta2);
const T wx = angle_axis[0] / theta;
const T wy = angle_axis[1] / theta;
const T wz = angle_axis[2] / theta;
@@ -411,7 +480,7 @@
R(2, 2) = costheta + wz*wz*(kOne - costheta);
// clang-format on
} else {
- // Near zero, we switch to using the first order Taylor expansion.
+ // At zero, we switch to using the first order Taylor expansion.
R(0, 0) = kOne;
R(1, 0) = angle_axis[2];
R(2, 0) = -angle_axis[1];
@@ -424,6 +493,141 @@
}
}
+template <typename EulerSystem, typename T>
+inline void EulerAnglesToRotation(const T* euler, T* R) {
+ EulerAnglesToRotation<EulerSystem>(euler, RowMajorAdapter3x3(R));
+}
+
+template <typename EulerSystem, typename T, int row_stride, int col_stride>
+void EulerAnglesToRotation(const T* euler,
+ const MatrixAdapter<T, row_stride, col_stride>& R) {
+ using std::cos;
+ using std::sin;
+
+ const auto [i, j, k] = EulerSystem::kAxes;
+
+ T ea[3];
+ ea[1] = euler[1];
+ if constexpr (EulerSystem::kIsIntrinsic) {
+ ea[0] = euler[2];
+ ea[2] = euler[0];
+ } else {
+ ea[0] = euler[0];
+ ea[2] = euler[2];
+ }
+ if constexpr (EulerSystem::kIsParityOdd) {
+ ea[0] = -ea[0];
+ ea[1] = -ea[1];
+ ea[2] = -ea[2];
+ }
+
+ const T ci = cos(ea[0]);
+ const T cj = cos(ea[1]);
+ const T ch = cos(ea[2]);
+ const T si = sin(ea[0]);
+ const T sj = sin(ea[1]);
+ const T sh = sin(ea[2]);
+ const T cc = ci * ch;
+ const T cs = ci * sh;
+ const T sc = si * ch;
+ const T ss = si * sh;
+ if constexpr (EulerSystem::kIsProperEuler) {
+ R(i, i) = cj;
+ R(i, j) = sj * si;
+ R(i, k) = sj * ci;
+ R(j, i) = sj * sh;
+ R(j, j) = -cj * ss + cc;
+ R(j, k) = -cj * cs - sc;
+ R(k, i) = -sj * ch;
+ R(k, j) = cj * sc + cs;
+ R(k, k) = cj * cc - ss;
+ } else {
+ R(i, i) = cj * ch;
+ R(i, j) = sj * sc - cs;
+ R(i, k) = sj * cc + ss;
+ R(j, i) = cj * sh;
+ R(j, j) = sj * ss + cc;
+ R(j, k) = sj * cs - sc;
+ R(k, i) = -sj;
+ R(k, j) = cj * si;
+ R(k, k) = cj * ci;
+ }
+}
+
+template <typename EulerSystem, typename T>
+inline void RotationMatrixToEulerAngles(const T* R, T* euler) {
+ RotationMatrixToEulerAngles<EulerSystem>(RowMajorAdapter3x3(R), euler);
+}
+
+template <typename EulerSystem, typename T, int row_stride, int col_stride>
+void RotationMatrixToEulerAngles(
+ const MatrixAdapter<const T, row_stride, col_stride>& R, T* euler) {
+ using std::atan2;
+ using std::fpclassify;
+ using std::hypot;
+
+ const auto [i, j, k] = EulerSystem::kAxes;
+
+ T ea[3];
+ if constexpr (EulerSystem::kIsProperEuler) {
+ const T sy = hypot(R(i, j), R(i, k));
+ if (fpclassify(sy) != FP_ZERO) {
+ ea[0] = atan2(R(i, j), R(i, k));
+ ea[1] = atan2(sy, R(i, i));
+ ea[2] = atan2(R(j, i), -R(k, i));
+ } else {
+ ea[0] = atan2(-R(j, k), R(j, j));
+ ea[1] = atan2(sy, R(i, i));
+ ea[2] = T(0.0);
+ }
+ } else {
+ const T cy = hypot(R(i, i), R(j, i));
+ if (fpclassify(cy) != FP_ZERO) {
+ ea[0] = atan2(R(k, j), R(k, k));
+ ea[1] = atan2(-R(k, i), cy);
+ ea[2] = atan2(R(j, i), R(i, i));
+ } else {
+ ea[0] = atan2(-R(j, k), R(j, j));
+ ea[1] = atan2(-R(k, i), cy);
+ ea[2] = T(0.0);
+ }
+ }
+ if constexpr (EulerSystem::kIsParityOdd) {
+ ea[0] = -ea[0];
+ ea[1] = -ea[1];
+ ea[2] = -ea[2];
+ }
+ euler[1] = ea[1];
+ if constexpr (EulerSystem::kIsIntrinsic) {
+ euler[0] = ea[2];
+ euler[2] = ea[0];
+ } else {
+ euler[0] = ea[0];
+ euler[2] = ea[2];
+ }
+
+ // Proper euler angles are defined for angles in
+ // [-pi, pi) x [0, pi / 2) x [-pi, pi)
+ // which is enforced here
+ if constexpr (EulerSystem::kIsProperEuler) {
+ const T kPi(constants::pi);
+ const T kTwoPi(2.0 * kPi);
+ if (euler[1] < T(0.0) || ea[1] > kPi) {
+ euler[0] += kPi;
+ euler[1] = -euler[1];
+ euler[2] -= kPi;
+ }
+
+ for (int i = 0; i < 3; ++i) {
+ if (euler[i] < -kPi) {
+ euler[i] += kTwoPi;
+ } else if (euler[i] > kPi) {
+ euler[i] -= kTwoPi;
+ }
+ }
+ }
+}
+
template <typename T>
inline void EulerAnglesToRotationMatrix(const T* euler,
const int row_stride_parameter,
@@ -521,18 +725,18 @@
DCHECK_NE(pt, result) << "Inplace rotation is not supported.";
// clang-format off
- const T t2 = q[0] * q[1];
- const T t3 = q[0] * q[2];
- const T t4 = q[0] * q[3];
- const T t5 = -q[1] * q[1];
- const T t6 = q[1] * q[2];
- const T t7 = q[1] * q[3];
- const T t8 = -q[2] * q[2];
- const T t9 = q[2] * q[3];
- const T t1 = -q[3] * q[3];
- result[0] = T(2) * ((t8 + t1) * pt[0] + (t6 - t4) * pt[1] + (t3 + t7) * pt[2]) + pt[0]; // NOLINT
- result[1] = T(2) * ((t4 + t6) * pt[0] + (t5 + t1) * pt[1] + (t9 - t2) * pt[2]) + pt[1]; // NOLINT
- result[2] = T(2) * ((t7 - t3) * pt[0] + (t2 + t9) * pt[1] + (t5 + t8) * pt[2]) + pt[2]; // NOLINT
+ T uv0 = q[2] * pt[2] - q[3] * pt[1];
+ T uv1 = q[3] * pt[0] - q[1] * pt[2];
+ T uv2 = q[1] * pt[1] - q[2] * pt[0];
+ uv0 += uv0;
+ uv1 += uv1;
+ uv2 += uv2;
+ result[0] = pt[0] + q[0] * uv0;
+ result[1] = pt[1] + q[0] * uv1;
+ result[2] = pt[2] + q[0] * uv2;
+ result[0] += q[2] * uv2 - q[3] * uv1;
+ result[1] += q[3] * uv0 - q[1] * uv2;
+ result[2] += q[1] * uv1 - q[2] * uv0;
// clang-format on
}
@@ -589,9 +793,12 @@
const T pt[3],
T result[3]) {
DCHECK_NE(pt, result) << "Inplace rotation is not supported.";
+ using std::fpclassify;
+ using std::hypot;
- const T theta2 = DotProduct(angle_axis, angle_axis);
- if (theta2 > T(std::numeric_limits<double>::epsilon())) {
+ const T theta = hypot(angle_axis[0], angle_axis[1], angle_axis[2]);
+
+ if (fpclassify(theta) != FP_ZERO) {
// Away from zero, use the rodriguez formula
//
// result = pt costheta +
@@ -602,7 +809,6 @@
// norm of the angle_axis vector is greater than zero. Otherwise
// we get a division by zero.
//
- const T theta = sqrt(theta2);
const T costheta = cos(theta);
const T sintheta = sin(theta);
const T theta_inverse = T(1.0) / theta;
@@ -623,19 +829,19 @@
result[1] = pt[1] * costheta + w_cross_pt[1] * sintheta + w[1] * tmp;
result[2] = pt[2] * costheta + w_cross_pt[2] * sintheta + w[2] * tmp;
} else {
- // Near zero, the first order Taylor approximation of the rotation
- // matrix R corresponding to a vector w and angle w is
+ // At zero, the first order Taylor approximation of the rotation
+ // matrix R corresponding to a vector w and angle theta is
//
// R = I + hat(w) * sin(theta)
//
// But sintheta ~ theta and theta * w = angle_axis, which gives us
//
- // R = I + hat(w)
+ // R = I + hat(angle_axis)
//
// and actually performing multiplication with the point pt, gives us
- // R * pt = pt + w x pt.
+ // R * pt = pt + angle_axis x pt.
//
- // Switching to the Taylor expansion near zero provides meaningful
+ // Switching to the Taylor expansion at zero provides meaningful
// derivatives when evaluated using Jets.
//
// Explicitly inlined evaluation of the cross product for
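As a usage sketch of the conversions touched in these hunks (outside the patch itself), the snippet below round-trips an angle-axis rotation through a quaternion and rotates a point with both representations, which should agree up to floating point noise. The standalone main() and the chosen rotation are illustrative assumptions.

#include <cstdio>

#include "ceres/rotation.h"

int main() {
  // A 90 degree rotation about the z axis, in angle-axis form.
  const double kPi = 3.14159265358979323846;
  const double angle_axis[3] = {0.0, 0.0, kPi / 2};

  // Ceres stores quaternions as [w, x, y, z], with the scalar part first.
  double quaternion[4];
  ceres::AngleAxisToQuaternion(angle_axis, quaternion);

  const double point[3] = {1.0, 0.0, 0.0};
  double rotated_aa[3];
  double rotated_q[3];
  ceres::AngleAxisRotatePoint(angle_axis, point, rotated_aa);
  ceres::UnitQuaternionRotatePoint(quaternion, point, rotated_q);

  // Both outputs should be approximately (0, 1, 0).
  std::printf("angle-axis: %f %f %f\n", rotated_aa[0], rotated_aa[1], rotated_aa[2]);
  std::printf("quaternion: %f %f %f\n", rotated_q[0], rotated_q[1], rotated_q[2]);
  return 0;
}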
diff --git a/include/ceres/sized_cost_function.h b/include/ceres/sized_cost_function.h
index 8e92f1b..8928c19 100644
--- a/include/ceres/sized_cost_function.h
+++ b/include/ceres/sized_cost_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,9 +38,10 @@
#ifndef CERES_PUBLIC_SIZED_COST_FUNCTION_H_
#define CERES_PUBLIC_SIZED_COST_FUNCTION_H_
+#include <initializer_list>
+
#include "ceres/cost_function.h"
#include "ceres/types.h"
-#include "glog/logging.h"
#include "internal/parameter_dims.h"
namespace ceres {
@@ -58,11 +59,9 @@
SizedCostFunction() {
set_num_residuals(kNumResiduals);
- *mutable_parameter_block_sizes() = std::vector<int32_t>{Ns...};
+ *mutable_parameter_block_sizes() = std::initializer_list<int32_t>{Ns...};
}
- virtual ~SizedCostFunction() {}
-
// Subclasses must implement Evaluate().
};
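A minimal sketch of the subclassing pattern this header supports, using a made-up residual (distance of a 2D point from the unit circle); the class name and cost are illustrative only, not part of the patch.

#include "ceres/sized_cost_function.h"

// One residual, one parameter block of size 2.
class UnitCircleCost : public ceres::SizedCostFunction<1, 2> {
 public:
  bool Evaluate(double const* const* parameters,
                double* residuals,
                double** jacobians) const override {
    const double x = parameters[0][0];
    const double y = parameters[0][1];
    residuals[0] = x * x + y * y - 1.0;
    if (jacobians != nullptr && jacobians[0] != nullptr) {
      jacobians[0][0] = 2.0 * x;  // d(residual)/dx
      jacobians[0][1] = 2.0 * y;  // d(residual)/dy
    }
    return true;
  }
};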
diff --git a/include/ceres/solver.h b/include/ceres/solver.h
index 61b8dd5..68438a1 100644
--- a/include/ceres/solver.h
+++ b/include/ceres/solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,8 +38,9 @@
#include <vector>
#include "ceres/crs_matrix.h"
+#include "ceres/internal/config.h"
#include "ceres/internal/disable_warnings.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/iteration_callback.h"
#include "ceres/ordered_groups.h"
#include "ceres/problem.h"
@@ -63,8 +64,6 @@
// with a message describing the problem.
bool IsValid(std::string* error) const;
- // Minimizer options ----------------------------------------
-
// Ceres supports the two major families of optimization strategies -
// Trust Region and Line Search.
//
@@ -363,102 +362,158 @@
std::unordered_set<ResidualBlockId>
residual_blocks_for_subset_preconditioner;
- // Ceres supports using multiple dense linear algebra libraries
- // for dense matrix factorizations. Currently EIGEN and LAPACK are
- // the valid choices. EIGEN is always available, LAPACK refers to
- // the system BLAS + LAPACK library which may or may not be
+ // Ceres supports using multiple dense linear algebra libraries for dense
+ // matrix factorizations. Currently EIGEN, LAPACK and CUDA are the valid
+ // choices. EIGEN is always available, LAPACK refers to the system BLAS +
+ // LAPACK library which may or may not be available. CUDA refers to Nvidia's
+ // GPU based dense linear algebra library, which may or may not be
// available.
//
- // This setting affects the DENSE_QR, DENSE_NORMAL_CHOLESKY and
- // DENSE_SCHUR solvers. For small to moderate sized problem EIGEN
- // is a fine choice but for large problems, an optimized LAPACK +
- // BLAS implementation can make a substantial difference in
- // performance.
+ // This setting affects the DENSE_QR, DENSE_NORMAL_CHOLESKY and DENSE_SCHUR
+ // solvers. For small to moderate sized problems EIGEN is a fine choice but
+ // for large problems, an optimized LAPACK + BLAS or CUDA implementation can
+ // make a substantial difference in performance.
DenseLinearAlgebraLibraryType dense_linear_algebra_library_type = EIGEN;
- // Ceres supports using multiple sparse linear algebra libraries
- // for sparse matrix ordering and factorizations. Currently,
- // SUITE_SPARSE and CX_SPARSE are the valid choices, depending on
- // whether they are linked into Ceres at build time.
+ // Ceres supports using multiple sparse linear algebra libraries for sparse
+ // matrix ordering and factorizations.
SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type =
#if !defined(CERES_NO_SUITESPARSE)
SUITE_SPARSE;
-#elif defined(CERES_USE_EIGEN_SPARSE)
- EIGEN_SPARSE;
-#elif !defined(CERES_NO_CXSPARSE)
- CX_SPARSE;
#elif !defined(CERES_NO_ACCELERATE_SPARSE)
ACCELERATE_SPARSE;
+#elif defined(CERES_USE_EIGEN_SPARSE)
+ EIGEN_SPARSE;
#else
NO_SPARSE;
#endif
// The order in which variables are eliminated in a linear solver
- // can have a significant of impact on the efficiency and accuracy
- // of the method. e.g., when doing sparse Cholesky factorization,
+ // can have a significant impact on the efficiency and accuracy of
+ // the method. e.g., when doing sparse Cholesky factorization,
// there are matrices for which a good ordering will give a
// Cholesky factor with O(n) storage, where as a bad ordering will
// result in an completely dense factor.
//
- // Ceres allows the user to provide varying amounts of hints to
- // the solver about the variable elimination ordering to use. This
- // can range from no hints, where the solver is free to decide the
- // best possible ordering based on the user's choices like the
- // linear solver being used, to an exact order in which the
- // variables should be eliminated, and a variety of possibilities
- // in between.
+ // Sparse direct solvers like SPARSE_NORMAL_CHOLESKY and
+ // SPARSE_SCHUR use a fill reducing ordering of the columns and
+ // rows of the matrix being factorized before computing the
+ // numeric factorization.
//
- // Instances of the ParameterBlockOrdering class are used to
- // communicate this information to Ceres.
+ // This enum controls the type of algorithm used to compute
+ // this fill reducing ordering. There is no single algorithm
+ // that works on all matrices, so determining which algorithm
+ // works better is a matter of empirical experimentation.
//
- // Formally an ordering is an ordered partitioning of the
- // parameter blocks, i.e, each parameter block belongs to exactly
- // one group, and each group has a unique non-negative integer
- // associated with it, that determines its order in the set of
- // groups.
+ // The exact behaviour of this setting is affected by the value of
+ // linear_solver_ordering as described below.
+ LinearSolverOrderingType linear_solver_ordering_type = AMD;
+
+ // Besides specifying the fill reducing ordering via
+ // linear_solver_ordering_type, Ceres allows the user to provide varying
+ // amounts of hints to the linear solver about the variable elimination
+ // ordering to use. This can range from no hints, where the solver is free
+ // to decide the best possible ordering based on the user's choices like the
+ // linear solver being used, to an exact order in which the variables should
+ // be eliminated, and a variety of possibilities in between.
//
- // Given such an ordering, Ceres ensures that the parameter blocks in
- // the lowest numbered group are eliminated first, and then the
- // parameter blocks in the next lowest numbered group and so on. Within
- // each group, Ceres is free to order the parameter blocks as it
- // chooses.
+ // Instances of the ParameterBlockOrdering class are used to communicate
+ // this information to Ceres.
//
- // If NULL, then all parameter blocks are assumed to be in the
- // same group and the solver is free to decide the best
- // ordering.
+ // Formally an ordering is an ordered partitioning of the parameter blocks,
+ // i.e, each parameter block belongs to exactly one group, and each group
+ // has a unique non-negative integer associated with it, that determines its
+ // order in the set of groups.
//
// e.g. Consider the linear system
//
// x + y = 3
// 2x + 3y = 7
//
- // There are two ways in which it can be solved. First eliminating x
- // from the two equations, solving for y and then back substituting
- // for x, or first eliminating y, solving for x and back substituting
- // for y. The user can construct three orderings here.
+ // There are two ways in which it can be solved. First eliminating x from
+ // the two equations, solving for y and then back substituting for x, or
+ // first eliminating y, solving for x and back substituting for y. The user
+ // can construct three orderings here.
//
// {0: x}, {1: y} - eliminate x first.
// {0: y}, {1: x} - eliminate y first.
// {0: x, y} - Solver gets to decide the elimination order.
//
- // Thus, to have Ceres determine the ordering automatically using
- // heuristics, put all the variables in group 0 and to control the
- // ordering for every variable, create groups 0..N-1, one per
- // variable, in the desired order.
+ // Thus, to have Ceres determine the ordering automatically, put all the
+ // variables in group 0 and to control the ordering for every variable
+ // create groups 0 ... N-1, one per variable, in the desired
+ // order.
+ //
+ // linear_solver_ordering == nullptr and an ordering where all the parameter
+ // blocks are in one elimination group mean the same thing - the solver is
+ // free to choose what it thinks is the best elimination ordering. Therefore
+ // in the following we will only consider the case where
+ // linear_solver_ordering is not nullptr.
+ //
+ // The exact interpretation of this information depends on the values of
+ // linear_solver_ordering_type and linear_solver_type/preconditioner_type
+ // and sparse_linear_algebra_type.
//
// Bundle Adjustment
- // -----------------
+ // =================
//
- // A particular case of interest is bundle adjustment, where the user
- // has two options. The default is to not specify an ordering at all,
- // the solver will see that the user wants to use a Schur type solver
- // and figure out the right elimination ordering.
+ // If the user is using one of the Schur solvers (DENSE_SCHUR,
+ // SPARSE_SCHUR, ITERATIVE_SCHUR) and chooses to specify an
+ // ordering, it must have one important property. The lowest
+ // numbered elimination group must form an independent set in the
+ // graph corresponding to the Hessian, or in other words, no two
+ // parameter blocks in the first elimination group should
+ // co-occur in the same residual block. For the best performance,
+ // this elimination group should be as large as possible. For
+ // standard bundle adjustment problems, this corresponds to the
+ // first elimination group containing all the 3d points, and the
+ // second containing all the camera parameter blocks.
//
- // But if the user already knows what parameter blocks are points and
- // what are cameras, they can save preprocessing time by partitioning
- // the parameter blocks into two groups, one for the points and one
- // for the cameras, where the group containing the points has an id
- // smaller than the group containing cameras.
+ // If the user leaves the choice to Ceres, then the solver uses an
+ // approximate maximum independent set algorithm to identify the first
+ // elimination group.
+ //
+ // sparse_linear_algebra_library_type = SUITE_SPARSE
+ // =================================================
+ //
+ // linear_solver_ordering_type = AMD
+ // ---------------------------------
+ //
+ // A Constrained Approximate Minimum Degree (CAMD) ordering is used where the
+ // parameter blocks in the lowest numbered group are eliminated first, and
+ // then the parameter blocks in the next lowest numbered group and so
+ // on. Within each group, CAMD is free to order the parameter blocks as it
+ // chooses.
+ //
+ // linear_solver_ordering_type = NESDIS
+ // -------------------------------------
+ //
+ // a. linear_solver_type = SPARSE_NORMAL_CHOLESKY or
+ // linear_solver_type = CGNR and preconditioner_type = SUBSET
+ //
+ // The value of linear_solver_ordering is ignored and a Nested Dissection
+ // algorithm is used to compute a fill reducing ordering.
+ //
+ // b. linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR
+ //
+ // ONLY the lowest group is used to compute the Schur complement, and
+ // Nested Dissection is used to compute a fill reducing ordering for the
+ // Schur Complement (or its preconditioner).
+ //
+ // sparse_linear_algebra_library_type = EIGEN_SPARSE or ACCELERATE_SPARSE
+ // ======================================================================
+ //
+ // a. linear_solver_type = SPARSE_NORMAL_CHOLESKY or
+ // linear_solver_type = CGNR and preconditioner_type = SUBSET
+ //
+ // then the value of linear_solver_ordering is ignored and AMD or NESDIS is
+ // used to compute a fill reducing ordering as requested by the user.
+ //
+ // b. linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR
+ //
+ // ONLY the lowest group is used to compute the Schur complement, and AMD
+ // or NESDIS is used to compute a fill reducing ordering for the Schur
+ // Complement (or its preconditioner).
std::shared_ptr<ParameterBlockOrdering> linear_solver_ordering;
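A hedged sketch (outside the patch) of the bundle adjustment case described above: points go into group 0 and are eliminated first, cameras into group 1. The point_blocks/camera_blocks containers and the helper name are assumptions; they stand in for parameter blocks already added to the problem elsewhere.

#include <memory>
#include <vector>

#include "ceres/solver.h"

void ConfigureSchurOrdering(const std::vector<double*>& point_blocks,
                            const std::vector<double*>& camera_blocks,
                            ceres::Solver::Options* options) {
  options->linear_solver_type = ceres::SPARSE_SCHUR;
  options->linear_solver_ordering =
      std::make_shared<ceres::ParameterBlockOrdering>();
  // Group 0 is eliminated first and must form an independent set: the points.
  for (double* point : point_blocks) {
    options->linear_solver_ordering->AddElementToGroup(point, 0);
  }
  // The cameras go into the next group.
  for (double* camera : camera_blocks) {
    options->linear_solver_ordering->AddElementToGroup(camera, 1);
  }
}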
// Use an explicitly computed Schur complement matrix with
@@ -499,12 +554,6 @@
// Jacobian matrix and generally speaking, there is no performance
// penalty for doing so.
- // In some rare cases, it is worth using a more complicated
- // reordering algorithm which has slightly better runtime
- // performance at the expense of an extra copy of the Jacobian
- // matrix. Setting use_postordering to true enables this tradeoff.
- bool use_postordering = false;
-
// Some non-linear least squares problems are symbolically dense but
// numerically sparse. i.e. at any given state only a small number
// of jacobian entries are non-zero, but the position and number of
@@ -520,11 +569,6 @@
// This settings only affects the SPARSE_NORMAL_CHOLESKY solver.
bool dynamic_sparsity = false;
- // TODO(sameeragarwal): Further expand the documentation for the
- // following two options.
-
- // NOTE1: EXPERIMENTAL FEATURE, UNDER DEVELOPMENT, USE AT YOUR OWN RISK.
- //
// If use_mixed_precision_solves is true, the Gauss-Newton matrix
// is computed in double precision, but its factorization is
// computed in single precision. This can result in significant
@@ -535,15 +579,57 @@
// If use_mixed_precision_solves is true, we recommend setting
// max_num_refinement_iterations to 2-3.
//
- // NOTE2: The following two options are currently only applicable
- // if sparse_linear_algebra_library_type is EIGEN_SPARSE and
- // linear_solver_type is SPARSE_NORMAL_CHOLESKY, or SPARSE_SCHUR.
+ // This option is available when the linear solver uses sparse or dense
+ // Cholesky factorization, except when sparse_linear_algebra_library_type =
+ // SUITE_SPARSE.
bool use_mixed_precision_solves = false;
// Number steps of the iterative refinement process to run when
// computing the Gauss-Newton step.
int max_num_refinement_iterations = 0;
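A small sketch (outside the patch) of the mixed precision setup recommended above, assuming a sparse Cholesky based solver backed by EIGEN_SPARSE, since the option is not available with SUITE_SPARSE; the helper name is illustrative.

#include "ceres/solver.h"

void EnableMixedPrecision(ceres::Solver::Options* options) {
  options->linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
  options->sparse_linear_algebra_library_type = ceres::EIGEN_SPARSE;
  options->use_mixed_precision_solves = true;
  // 2-3 refinement iterations are recommended when mixed precision is on.
  options->max_num_refinement_iterations = 3;
}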
+ // Minimum number of iterations for which the linear solver should
+ // run, even if the convergence criterion is satisfied.
+ int min_linear_solver_iterations = 0;
+
+ // Maximum number of iterations for which the linear solver should
+ // run. If the solver does not converge in less than
+ // max_linear_solver_iterations, then it returns MAX_ITERATIONS,
+ // as its termination type.
+ int max_linear_solver_iterations = 500;
+
+ // Maximum number of iterations performed by SCHUR_POWER_SERIES_EXPANSION.
+ // Each iteration corresponds to one more term in the power series expansion
+ // of the inverse of the Schur complement. This value controls the maximum
+ // number of iterations whether it is used as a preconditioner or just to
+ // initialize the solution for ITERATIVE_SCHUR.
+ int max_num_spse_iterations = 5;
+
+ // Use SCHUR_POWER_SERIES_EXPANSION to initialize the solution for
+ // ITERATIVE_SCHUR. This option can be set true regardless of what
+ // preconditioner is being used.
+ bool use_spse_initialization = false;
+
+ // When use_spse_initialization is true, this parameter along with
+ // max_num_spse_iterations controls the number of
+ // SCHUR_POWER_SERIES_EXPANSION iterations performed for initialization. It
+ // is not used to control the preconditioner.
+ double spse_tolerance = 0.1;
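A sketch (outside the patch) of wiring up the power series expansion options added above, assuming ITERATIVE_SCHUR, which is the only solver the SCHUR_POWER_SERIES_EXPANSION preconditioner works with; the helper name and values are illustrative.

#include "ceres/solver.h"

void ConfigureSchurPowerSeries(ceres::Solver::Options* options) {
  options->linear_solver_type = ceres::ITERATIVE_SCHUR;
  options->preconditioner_type = ceres::SCHUR_POWER_SERIES_EXPANSION;
  options->max_num_spse_iterations = 5;
  // Optionally also warm start the iterative solve itself.
  options->use_spse_initialization = true;
  options->spse_tolerance = 0.1;
}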
+
+ // Forcing sequence parameter. The truncated Newton solver uses
+ // this number to control the relative accuracy with which the
+ // Newton step is computed.
+ //
+ // This constant is passed to ConjugateGradientsSolver which uses
+ // it to terminate the iterations when
+ //
+ // (Q_i - Q_{i-1})/Q_i < eta/i
+ double eta = 1e-1;
+
+ // Normalize the jacobian using Jacobi scaling before calling
+ // the linear least squares solver.
+ bool jacobi_scaling = true;
+
// Some non-linear least squares problems have additional
// structure in the way the parameter blocks interact that it is
// beneficial to modify the way the trust region step is computed.
@@ -627,32 +713,6 @@
// iterations is disabled.
double inner_iteration_tolerance = 1e-3;
- // Minimum number of iterations for which the linear solver should
- // run, even if the convergence criterion is satisfied.
- int min_linear_solver_iterations = 0;
-
- // Maximum number of iterations for which the linear solver should
- // run. If the solver does not converge in less than
- // max_linear_solver_iterations, then it returns MAX_ITERATIONS,
- // as its termination type.
- int max_linear_solver_iterations = 500;
-
- // Forcing sequence parameter. The truncated Newton solver uses
- // this number to control the relative accuracy with which the
- // Newton step is computed.
- //
- // This constant is passed to ConjugateGradientsSolver which uses
- // it to terminate the iterations when
- //
- // (Q_i - Q_{i-1})/Q_i < eta/i
- double eta = 1e-1;
-
- // Normalize the jacobian using Jacobi scaling before calling
- // the linear least squares solver.
- bool jacobi_scaling = true;
-
- // Logging options ---------------------------------------------------------
-
LoggingType logging_type = PER_MINIMIZER_ITERATION;
// By default the Minimizer progress is logged to VLOG(1), which
@@ -789,10 +849,9 @@
// IterationSummary for each minimizer iteration in order.
std::vector<IterationSummary> iterations;
- // Number of minimizer iterations in which the step was
- // accepted. Unless use_non_monotonic_steps is true this is also
- // the number of steps in which the objective function value/cost
- // went down.
+ // Number of minimizer iterations in which the step was accepted. Unless
+ // use_nonmonotonic_steps is true this is also the number of steps in which
+ // the objective function value/cost went down.
int num_successful_steps = -1;
// Number of minimizer iterations in which the step was rejected
@@ -882,7 +941,7 @@
// Dimension of the tangent space of the problem (or the number of
// columns in the Jacobian for the problem). This is different
// from num_parameters if a parameter block is associated with a
- // LocalParameterization
+ // Manifold.
int num_effective_parameters = -1;
// Number of residual blocks in the problem.
@@ -903,7 +962,7 @@
// number of columns in the Jacobian for the reduced
// problem). This is different from num_parameters_reduced if a
// parameter block in the reduced problem is associated with a
- // LocalParameterization.
+ // Manifold.
int num_effective_parameters_reduced = -1;
// Number of residual blocks in the reduced problem.
@@ -920,8 +979,7 @@
int num_threads_given = -1;
// Number of threads actually used by the solver for Jacobian and
- // residual evaluation. This number is not equal to
- // num_threads_given if OpenMP is not available.
+ // residual evaluation.
int num_threads_used = -1;
// Type of the linear solver requested by the user.
@@ -944,6 +1002,10 @@
SPARSE_NORMAL_CHOLESKY;
#endif
+ bool mixed_precision_solves_used = false;
+
+ LinearSolverOrderingType linear_solver_ordering_type = AMD;
+
// Size of the elimination groups given by the user as hints to
// the linear solver.
std::vector<int> linear_solver_ordering_given;
@@ -1003,7 +1065,7 @@
PreconditionerType preconditioner_type_used = IDENTITY;
// Type of clustering algorithm used for visibility based
- // preconditioning. Only meaningful when the preconditioner_type
+ // preconditioning. Only meaningful when the preconditioner_type_used
// is CLUSTER_JACOBI or CLUSTER_TRIDIAGONAL.
VisibilityClusteringType visibility_clustering_type = CANONICAL_VIEWS;
diff --git a/include/ceres/sphere_manifold.h b/include/ceres/sphere_manifold.h
new file mode 100644
index 0000000..1c7458b
--- /dev/null
+++ b/include/ceres/sphere_manifold.h
@@ -0,0 +1,236 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: vitus@google.com (Mike Vitus)
+// jodebo_beck@gmx.de (Johannes Beck)
+
+#ifndef CERES_PUBLIC_SPHERE_MANIFOLD_H_
+#define CERES_PUBLIC_SPHERE_MANIFOLD_H_
+
+#include <Eigen/Core>
+#include <algorithm>
+#include <array>
+#include <memory>
+#include <vector>
+
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
+#include "ceres/internal/householder_vector.h"
+#include "ceres/internal/sphere_manifold_functions.h"
+#include "ceres/manifold.h"
+#include "ceres/types.h"
+#include "glog/logging.h"
+
+namespace ceres {
+
+// This provides a manifold on a sphere meaning that the norm of the vector
+// stays the same. Such cases often arise in Structure from Motion
+// problems. One example where they are used is in representing points whose
+// triangulation is ill-conditioned. Here it is advantageous to use an
+// over-parameterization since homogeneous vectors can represent points at
+// infinity.
+//
+// The plus operator is defined as
+// Plus(x, delta) =
+// [sin(0.5 * |delta|) * delta / |delta|, cos(0.5 * |delta|)] * x
+//
+// The minus operator is defined as
+// Minus(x, y) = 2 atan2(nhy, y[-1]) / nhy * hy[0 : size_ - 1]
+// with nhy = norm(hy[0 : size_ - 1])
+//
+// with * defined as an operator which applies the update orthogonal to x to
+// remain on the sphere. The ambient space dimension is required to be greater
+// than 1.
+//
+// The class works with dynamic and static ambient space dimensions. If the
+// ambient space dimension is known at compile time use
+//
+// SphereManifold<3> manifold;
+//
+// If the ambient space dimension is not known at compile time the template
+// parameter needs to be set to ceres::DYNAMIC and the actual dimension needs
+// to be provided as a constructor argument:
+//
+// SphereManifold<ceres::DYNAMIC> manifold(ambient_dim);
+//
+// See section B.2 (p.25) in "Integrating Generic Sensor Fusion Algorithms
+// with Sound State Representations through Encapsulation of Manifolds" by C.
+// Hertzberg, R. Wagner, U. Frese and L. Schroder for more details
+// (https://arxiv.org/pdf/1107.1119.pdf)
+template <int AmbientSpaceDimension>
+class SphereManifold final : public Manifold {
+ public:
+ static_assert(
+ AmbientSpaceDimension == ceres::DYNAMIC || AmbientSpaceDimension > 1,
+ "The size of the homogeneous vector needs to be greater than 1.");
+ static_assert(ceres::DYNAMIC == Eigen::Dynamic,
+ "ceres::DYNAMIC needs to be the same as Eigen::Dynamic.");
+
+ SphereManifold();
+ explicit SphereManifold(int size);
+
+ int AmbientSize() const override {
+ return AmbientSpaceDimension == ceres::DYNAMIC ? size_
+ : AmbientSpaceDimension;
+ }
+ int TangentSize() const override { return AmbientSize() - 1; }
+
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const override;
+ bool PlusJacobian(const double* x, double* jacobian) const override;
+
+ bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const override;
+ bool MinusJacobian(const double* x, double* jacobian) const override;
+
+ private:
+ static constexpr int TangentSpaceDimension =
+ AmbientSpaceDimension > 0 ? AmbientSpaceDimension - 1 : Eigen::Dynamic;
+
+ // NOTE: Eigen does not allow a RowMajor column vector.
+ // In that case, change the storage order
+ static constexpr int SafeRowMajor =
+ TangentSpaceDimension == 1 ? Eigen::ColMajor : Eigen::RowMajor;
+
+ using AmbientVector = Eigen::Matrix<double, AmbientSpaceDimension, 1>;
+ using TangentVector = Eigen::Matrix<double, TangentSpaceDimension, 1>;
+ using MatrixPlusJacobian = Eigen::Matrix<double,
+ AmbientSpaceDimension,
+ TangentSpaceDimension,
+ SafeRowMajor>;
+ using MatrixMinusJacobian = Eigen::Matrix<double,
+ TangentSpaceDimension,
+ AmbientSpaceDimension,
+ Eigen::RowMajor>;
+
+ const int size_{};
+};
+
+template <int AmbientSpaceDimension>
+SphereManifold<AmbientSpaceDimension>::SphereManifold()
+ : size_{AmbientSpaceDimension} {
+ static_assert(
+ AmbientSpaceDimension != Eigen::Dynamic,
+ "The size is set to dynamic. Please call the constructor with a size.");
+}
+
+template <int AmbientSpaceDimension>
+SphereManifold<AmbientSpaceDimension>::SphereManifold(int size) : size_{size} {
+ if (AmbientSpaceDimension != Eigen::Dynamic) {
+ CHECK_EQ(AmbientSpaceDimension, size)
+ << "Specified size by template parameter differs from the supplied "
+ "one.";
+ } else {
+ CHECK_GT(size_, 1)
+ << "The size of the manifold needs to be greater than 1.";
+ }
+}
+
+template <int AmbientSpaceDimension>
+bool SphereManifold<AmbientSpaceDimension>::Plus(
+ const double* x_ptr,
+ const double* delta_ptr,
+ double* x_plus_delta_ptr) const {
+ Eigen::Map<const AmbientVector> x(x_ptr, size_);
+ Eigen::Map<const TangentVector> delta(delta_ptr, size_ - 1);
+ Eigen::Map<AmbientVector> x_plus_delta(x_plus_delta_ptr, size_);
+
+ const double norm_delta = delta.norm();
+
+ if (norm_delta == 0.0) {
+ x_plus_delta = x;
+ return true;
+ }
+
+ AmbientVector v(size_);
+ double beta;
+
+ // NOTE: The explicit template arguments are needed here because
+ // ComputeHouseholderVector is templated and some versions of MSVC
+ // have trouble deducing the type of v automatically.
+ internal::ComputeHouseholderVector<Eigen::Map<const AmbientVector>,
+ double,
+ AmbientSpaceDimension>(x, &v, &beta);
+
+ internal::ComputeSphereManifoldPlus(
+ v, beta, x, delta, norm_delta, &x_plus_delta);
+
+ return true;
+}
+
+template <int AmbientSpaceDimension>
+bool SphereManifold<AmbientSpaceDimension>::PlusJacobian(
+ const double* x_ptr, double* jacobian_ptr) const {
+ Eigen::Map<const AmbientVector> x(x_ptr, size_);
+ Eigen::Map<MatrixPlusJacobian> jacobian(jacobian_ptr, size_, size_ - 1);
+ internal::ComputeSphereManifoldPlusJacobian(x, &jacobian);
+
+ return true;
+}
+
+template <int AmbientSpaceDimension>
+bool SphereManifold<AmbientSpaceDimension>::Minus(const double* y_ptr,
+ const double* x_ptr,
+ double* y_minus_x_ptr) const {
+ AmbientVector y = Eigen::Map<const AmbientVector>(y_ptr, size_);
+ Eigen::Map<const AmbientVector> x(x_ptr, size_);
+ Eigen::Map<TangentVector> y_minus_x(y_minus_x_ptr, size_ - 1);
+
+ // Apply Householder transformation.
+ AmbientVector v(size_);
+ double beta;
+
+ // NOTE: The explicit template arguments are needed here because
+ // ComputeHouseholderVector is templated and some versions of MSVC
+ // have trouble deducing the type of v automatically.
+ internal::ComputeHouseholderVector<Eigen::Map<const AmbientVector>,
+ double,
+ AmbientSpaceDimension>(x, &v, &beta);
+ internal::ComputeSphereManifoldMinus(v, beta, x, y, &y_minus_x);
+ return true;
+}
+
+template <int AmbientSpaceDimension>
+bool SphereManifold<AmbientSpaceDimension>::MinusJacobian(
+ const double* x_ptr, double* jacobian_ptr) const {
+ Eigen::Map<const AmbientVector> x(x_ptr, size_);
+ Eigen::Map<MatrixMinusJacobian> jacobian(jacobian_ptr, size_ - 1, size_);
+
+ internal::ComputeSphereManifoldMinusJacobian(x, &jacobian);
+ return true;
+}
+
+} // namespace ceres
+
+// clang-format off
+#include "ceres/internal/reenable_warnings.h"
+// clang-format on
+
+#endif // CERES_PUBLIC_SPHERE_MANIFOLD_H_
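A usage sketch (outside the patch) for the class introduced in this new file, assuming a problem that owns a vector whose norm must stay fixed; the function names are illustrative and the Problem takes ownership of the manifold by default.

#include "ceres/problem.h"
#include "ceres/sphere_manifold.h"

void AddUnitVectorBlock(ceres::Problem* problem, double* unit_vector3) {
  // Static ambient dimension 3: the tangent space has dimension 2.
  problem->AddParameterBlock(unit_vector3, 3, new ceres::SphereManifold<3>());
}

void AddDynamicUnitVectorBlock(ceres::Problem* problem,
                               double* values,
                               int ambient_size) {
  // Dynamic ambient dimension, supplied at construction time.
  problem->AddParameterBlock(
      values, ambient_size,
      new ceres::SphereManifold<ceres::DYNAMIC>(ambient_size));
}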
diff --git a/include/ceres/tiny_solver.h b/include/ceres/tiny_solver.h
index 47db582..9242cd0 100644
--- a/include/ceres/tiny_solver.h
+++ b/include/ceres/tiny_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -84,7 +84,8 @@
// double* parameters -- NUM_PARAMETERS or NumParameters()
// double* residuals -- NUM_RESIDUALS or NumResiduals()
// double* jacobian -- NUM_RESIDUALS * NUM_PARAMETERS in column-major format
-// (Eigen's default); or NULL if no jacobian requested.
+// (Eigen's default); or nullptr if no jacobian
+// requested.
//
// An example (fully statically sized):
//
@@ -126,8 +127,8 @@
//
template <typename Function,
typename LinearSolver =
- Eigen::LDLT<Eigen::Matrix<typename Function::Scalar,
- Function::NUM_PARAMETERS,
+ Eigen::LDLT<Eigen::Matrix<typename Function::Scalar, //
+ Function::NUM_PARAMETERS, //
Function::NUM_PARAMETERS>>>
class TinySolver {
public:
@@ -139,41 +140,59 @@
NUM_RESIDUALS = Function::NUM_RESIDUALS,
NUM_PARAMETERS = Function::NUM_PARAMETERS
};
- typedef typename Function::Scalar Scalar;
- typedef typename Eigen::Matrix<Scalar, NUM_PARAMETERS, 1> Parameters;
+ using Scalar = typename Function::Scalar;
+ using Parameters = typename Eigen::Matrix<Scalar, NUM_PARAMETERS, 1>;
enum Status {
- GRADIENT_TOO_SMALL, // eps > max(J'*f(x))
- RELATIVE_STEP_SIZE_TOO_SMALL, // eps > ||dx|| / (||x|| + eps)
- COST_TOO_SMALL, // eps > ||f(x)||^2 / 2
+ // max_norm |J'(x) * f(x)| < gradient_tolerance
+ GRADIENT_TOO_SMALL,
+ // ||dx|| <= parameter_tolerance * (||x|| + parameter_tolerance)
+ RELATIVE_STEP_SIZE_TOO_SMALL,
+ // cost_threshold > ||f(x)||^2 / 2
+ COST_TOO_SMALL,
+ // num_iterations >= max_num_iterations
HIT_MAX_ITERATIONS,
+ // (new_cost - old_cost) < function_tolerance * old_cost
+ COST_CHANGE_TOO_SMALL,
// TODO(sameeragarwal): Deal with numerical failures.
};
struct Options {
- Scalar gradient_tolerance = 1e-10; // eps > max(J'*f(x))
- Scalar parameter_tolerance = 1e-8; // eps > ||dx|| / ||x||
- Scalar cost_threshold = // eps > ||f(x)||
- std::numeric_limits<Scalar>::epsilon();
- Scalar initial_trust_region_radius = 1e4;
int max_num_iterations = 50;
+
+ // max_norm |J'(x) * f(x)| < gradient_tolerance
+ Scalar gradient_tolerance = 1e-10;
+
+ // ||dx|| <= parameter_tolerance * (||x|| + parameter_tolerance)
+ Scalar parameter_tolerance = 1e-8;
+
+ // (new_cost - old_cost) < function_tolerance * old_cost
+ Scalar function_tolerance = 1e-6;
+
+ // cost_threshold > ||f(x)||^2 / 2
+ Scalar cost_threshold = std::numeric_limits<Scalar>::epsilon();
+
+ Scalar initial_trust_region_radius = 1e4;
};
struct Summary {
- Scalar initial_cost = -1; // 1/2 ||f(x)||^2
- Scalar final_cost = -1; // 1/2 ||f(x)||^2
- Scalar gradient_max_norm = -1; // max(J'f(x))
+ // 1/2 ||f(x_0)||^2
+ Scalar initial_cost = -1;
+ // 1/2 ||f(x)||^2
+ Scalar final_cost = -1;
+ // max_norm(J'f(x))
+ Scalar gradient_max_norm = -1;
int iterations = -1;
Status status = HIT_MAX_ITERATIONS;
};
bool Update(const Function& function, const Parameters& x) {
- if (!function(x.data(), error_.data(), jacobian_.data())) {
+ if (!function(x.data(), residuals_.data(), jacobian_.data())) {
return false;
}
- error_ = -error_;
+ residuals_ = -residuals_;
// On the first iteration, compute a diagonal (Jacobi) scaling
// matrix, which we store as a vector.
@@ -192,9 +211,9 @@
// factorization.
jacobian_ = jacobian_ * jacobi_scaling_.asDiagonal();
jtj_ = jacobian_.transpose() * jacobian_;
- g_ = jacobian_.transpose() * error_;
+ g_ = jacobian_.transpose() * residuals_;
summary.gradient_max_norm = g_.array().abs().maxCoeff();
- cost_ = error_.squaredNorm() / 2;
+ cost_ = residuals_.squaredNorm() / 2;
return true;
}
@@ -229,10 +248,9 @@
jtj_regularized_ = jtj_;
const Scalar min_diagonal = 1e-6;
const Scalar max_diagonal = 1e32;
- for (int i = 0; i < lm_diagonal_.rows(); ++i) {
- lm_diagonal_[i] = std::sqrt(
- u * std::min(std::max(jtj_(i, i), min_diagonal), max_diagonal));
- jtj_regularized_(i, i) += lm_diagonal_[i] * lm_diagonal_[i];
+ for (int i = 0; i < dx_.rows(); ++i) {
+ jtj_regularized_(i, i) +=
+ u * (std::min)((std::max)(jtj_(i, i), min_diagonal), max_diagonal);
}
// TODO(sameeragarwal): Check for failure and deal with it.
@@ -253,10 +271,9 @@
// TODO(keir): Add proper handling of errors from user eval of cost
// functions.
- function(&x_new_[0], &f_x_new_[0], NULL);
+ function(&x_new_[0], &f_x_new_[0], nullptr);
const Scalar cost_change = (2 * cost_ - f_x_new_.squaredNorm());
-
// TODO(sameeragarwal): Better more numerically stable evaluation.
const Scalar model_cost_change = lm_step_.dot(2 * g_ - jtj_ * lm_step_);
@@ -269,6 +286,12 @@
// model fits well.
x = x_new_;
+ if (std::abs(cost_change) < options.function_tolerance) {
+ cost_ = f_x_new_.squaredNorm() / 2;
+ summary.status = COST_CHANGE_TOO_SMALL;
+ break;
+ }
+
// TODO(sameeragarwal): Deal with failure.
Update(function, x);
if (summary.gradient_max_norm < options.gradient_tolerance) {
@@ -282,16 +305,24 @@
}
Scalar tmp = Scalar(2 * rho - 1);
- u = u * std::max(1 / 3., 1 - tmp * tmp * tmp);
+ u = u * (std::max)(Scalar(1 / 3.), Scalar(1) - tmp * tmp * tmp);
v = 2;
- continue;
- }
- // Reject the update because either the normal equations failed to solve
- // or the local linear model was not good (rho < 0). Instead, increase u
- // to move closer to gradient descent.
- u *= v;
- v *= 2;
+ } else {
+ // Reject the update because either the normal equations failed to solve
+ // or the local linear model was not good (rho < 0).
+
+ // Additionally if the cost change is too small, then terminate.
+ if (std::abs(cost_change) < options.function_tolerance) {
+ // Terminate
+ summary.status = COST_CHANGE_TOO_SMALL;
+ break;
+ }
+
+ // Reduce the size of the trust region.
+ u *= v;
+ v *= 2;
+ }
}
summary.final_cost = cost_;
@@ -306,8 +337,8 @@
// linear system. This allows reusing the intermediate storage across solves.
LinearSolver linear_solver_;
Scalar cost_;
- Parameters dx_, x_new_, g_, jacobi_scaling_, lm_diagonal_, lm_step_;
- Eigen::Matrix<Scalar, NUM_RESIDUALS, 1> error_, f_x_new_;
+ Parameters dx_, x_new_, g_, jacobi_scaling_, lm_step_;
+ Eigen::Matrix<Scalar, NUM_RESIDUALS, 1> residuals_, f_x_new_;
Eigen::Matrix<Scalar, NUM_RESIDUALS, NUM_PARAMETERS> jacobian_;
Eigen::Matrix<Scalar, NUM_PARAMETERS, NUM_PARAMETERS> jtj_, jtj_regularized_;
@@ -317,7 +348,7 @@
template <typename T>
struct enable_if<true, T> {
- typedef T type;
+ using type = T;
};
// The number of parameters and residuals are dynamically sized.
@@ -353,9 +384,8 @@
x_new_.resize(num_parameters);
g_.resize(num_parameters);
jacobi_scaling_.resize(num_parameters);
- lm_diagonal_.resize(num_parameters);
lm_step_.resize(num_parameters);
- error_.resize(num_residuals);
+ residuals_.resize(num_residuals);
f_x_new_.resize(num_residuals);
jacobian_.resize(num_residuals, num_parameters);
jtj_.resize(num_parameters, num_parameters);
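A compact usage sketch (outside the patch) of the Function interface documented above, with a made-up one-parameter residual f(x) = x - 4 and its analytic Jacobian; the functor and the starting value are assumptions for illustration.

#include <Eigen/Core>

#include "ceres/tiny_solver.h"

struct ExampleResidual {
  using Scalar = double;
  enum {
    NUM_RESIDUALS = 1,
    NUM_PARAMETERS = 1,
  };
  bool operator()(const double* parameters,
                  double* residuals,
                  double* jacobian) const {
    residuals[0] = parameters[0] - 4.0;
    if (jacobian != nullptr) {
      jacobian[0] = 1.0;  // d(residual)/d(parameter), column-major.
    }
    return true;
  }
};

int main() {
  Eigen::Matrix<double, 1, 1> x;
  x(0) = 10.0;
  ExampleResidual residual;
  ceres::TinySolver<ExampleResidual> solver;
  const auto& summary = solver.Solve(residual, &x);
  // x should now be close to 4; summary.status reports why iteration stopped.
  return summary.final_cost < 1e-10 ? 0 : 1;
}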
diff --git a/include/ceres/tiny_solver_autodiff_function.h b/include/ceres/tiny_solver_autodiff_function.h
index b782f54..1b9bd96 100644
--- a/include/ceres/tiny_solver_autodiff_function.h
+++ b/include/ceres/tiny_solver_autodiff_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -113,12 +113,12 @@
// as a member a Jet type, which itself has a fixed-size Eigen type as member.
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
- TinySolverAutoDiffFunction(const CostFunctor& cost_functor)
+ explicit TinySolverAutoDiffFunction(const CostFunctor& cost_functor)
: cost_functor_(cost_functor) {
Initialize<kNumResiduals>(cost_functor);
}
- typedef T Scalar;
+ using Scalar = T;
enum {
NUM_PARAMETERS = kNumParameters,
NUM_RESIDUALS = kNumResiduals,
@@ -127,7 +127,7 @@
// This is similar to AutoDifferentiate(), but since there is only one
// parameter block it is easier to inline to avoid overhead.
bool operator()(const T* parameters, T* residuals, T* jacobian) const {
- if (jacobian == NULL) {
+ if (jacobian == nullptr) {
// No jacobian requested, so just directly call the cost function with
// doubles, skipping jets and derivatives.
return cost_functor_(parameters, residuals);
@@ -171,7 +171,7 @@
const CostFunctor& cost_functor_;
// The number of residuals at runtime.
- // This will be overriden if NUM_RESIDUALS == Eigen::Dynamic.
+ // This will be overridden if NUM_RESIDUALS == Eigen::Dynamic.
int num_residuals_ = kNumResiduals;
// To evaluate the cost function with jets, temporary storage is needed. These
diff --git a/include/ceres/tiny_solver_cost_function_adapter.h b/include/ceres/tiny_solver_cost_function_adapter.h
index 18ccb39..166f03f 100644
--- a/include/ceres/tiny_solver_cost_function_adapter.h
+++ b/include/ceres/tiny_solver_cost_function_adapter.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -75,7 +75,7 @@
int kNumParameters = Eigen::Dynamic>
class TinySolverCostFunctionAdapter {
public:
- typedef double Scalar;
+ using Scalar = double;
enum ComponentSizeType {
NUM_PARAMETERS = kNumParameters,
NUM_RESIDUALS = kNumResiduals
@@ -85,7 +85,7 @@
// fixed-size Eigen types.
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
- TinySolverCostFunctionAdapter(const CostFunction& cost_function)
+ explicit TinySolverCostFunctionAdapter(const CostFunction& cost_function)
: cost_function_(cost_function) {
CHECK_EQ(cost_function_.parameter_block_sizes().size(), 1)
<< "Only CostFunctions with exactly one parameter blocks are allowed.";
@@ -108,7 +108,7 @@
double* residuals,
double* jacobian) const {
if (!jacobian) {
- return cost_function_.Evaluate(¶meters, residuals, NULL);
+ return cost_function_.Evaluate(¶meters, residuals, nullptr);
}
double* jacobians[1] = {row_major_jacobian_.data()};
diff --git a/include/ceres/types.h b/include/ceres/types.h
index 5ee6fdc..6e19c51 100644
--- a/include/ceres/types.h
+++ b/include/ceres/types.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,7 +40,7 @@
#include <string>
#include "ceres/internal/disable_warnings.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
namespace ceres {
@@ -67,8 +67,7 @@
// Eigen.
DENSE_QR,
- // Solve the normal equations using a sparse cholesky solver; requires
- // SuiteSparse or CXSparse.
+ // Solve the normal equations using a sparse cholesky solver;
SPARSE_NORMAL_CHOLESKY,
// Specialized solvers, specific to problems with a generalized
@@ -98,7 +97,7 @@
// Block diagonal of the Gauss-Newton Hessian.
JACOBI,
- // Note: The following three preconditioners can only be used with
+ // Note: The following four preconditioners can only be used with
// the ITERATIVE_SCHUR solver. They are well suited for Structure
// from Motion problems.
@@ -106,6 +105,10 @@
// only be used with the ITERATIVE_SCHUR solver.
SCHUR_JACOBI,
+ // Use power series expansion to approximate the inverse of the Schur
+ // complement as a preconditioner.
+ SCHUR_POWER_SERIES_EXPANSION,
+
// Visibility clustering based preconditioners.
//
// The following two preconditioners use the visibility structure of
@@ -134,7 +137,7 @@
// well the matrix Q approximates J'J, or how well the chosen
// residual blocks approximate the non-linear least squares
// problem.
- SUBSET,
+ SUBSET
};
enum VisibilityClusteringType {
@@ -165,11 +168,6 @@
// minimum degree ordering.
SUITE_SPARSE,
- // A lightweight replacement for SuiteSparse, which does not require
- // a LAPACK/BLAS implementation. Consequently, its performance is
- // also a bit lower than SuiteSparse.
- CX_SPARSE,
-
// Eigen's sparse linear algebra routines. In particular Ceres uses
// the Simplicial LDLT routines.
EIGEN_SPARSE,
@@ -177,15 +175,43 @@
// Apple's Accelerate framework sparse linear algebra routines.
ACCELERATE_SPARSE,
+ // Nvidia's cuSPARSE library.
+ CUDA_SPARSE,
+
// No sparse linear solver should be used. This does not necessarily
// imply that Ceres was built without any sparse library, although that
// is the likely use case, merely that one should not be used.
NO_SPARSE
};
+// The order in which variables are eliminated in a linear solver
+// can have a significant impact on the efficiency and accuracy
+// of the method. e.g., when doing sparse Cholesky factorization,
+// there are matrices for which a good ordering will give a
+// Cholesky factor with O(n) storage, whereas a bad ordering will
+// result in a completely dense factor.
+//
+// So sparse direct solvers like SPARSE_NORMAL_CHOLESKY and
+// SPARSE_SCHUR and preconditioners like SUBSET, CLUSTER_JACOBI &
+// CLUSTER_TRIDIAGONAL use a fill reducing ordering of the columns and
+// rows of the matrix being factorized before actually the numeric
+// factorization.
+//
+// This enum controls the class of algorithm used to compute this
+// fill reducing ordering. There is no single algorithm that works
+// on all matrices, so determining which algorithm works better is a
+// matter of empirical experimentation.
+enum LinearSolverOrderingType {
+ // Approximate Minimum Degree.
+ AMD,
+ // Nested Dissection.
+ NESDIS
+};
+
enum DenseLinearAlgebraLibraryType {
EIGEN,
LAPACK,
+ CUDA,
};
// Logging options
@@ -466,6 +492,11 @@
CERES_EXPORT bool StringToSparseLinearAlgebraLibraryType(
std::string value, SparseLinearAlgebraLibraryType* type);
+CERES_EXPORT const char* LinearSolverOrderingTypeToString(
+ LinearSolverOrderingType type);
+CERES_EXPORT bool StringToLinearSolverOrderingType(
+ std::string value, LinearSolverOrderingType* type);
+
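A small sketch (outside the patch) of using the new string helpers declared above, for example when mapping a command line flag onto the enum; the function name and fallback choice are illustrative.

#include <string>

#include "ceres/types.h"

ceres::LinearSolverOrderingType ParseOrderingType(const std::string& value) {
  ceres::LinearSolverOrderingType type;
  if (!ceres::StringToLinearSolverOrderingType(value, &type)) {
    return ceres::AMD;  // Fall back to the default on unrecognized input.
  }
  return type;
}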
CERES_EXPORT const char* DenseLinearAlgebraLibraryTypeToString(
DenseLinearAlgebraLibraryType type);
CERES_EXPORT bool StringToDenseLinearAlgebraLibraryType(
diff --git a/include/ceres/version.h b/include/ceres/version.h
index a76cc10..fe6c288 100644
--- a/include/ceres/version.h
+++ b/include/ceres/version.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,7 +32,7 @@
#define CERES_PUBLIC_VERSION_H_
#define CERES_VERSION_MAJOR 2
-#define CERES_VERSION_MINOR 0
+#define CERES_VERSION_MINOR 2
#define CERES_VERSION_REVISION 0
// Classic CPP stringifcation; the extra level of indirection allows the
@@ -40,10 +40,16 @@
#define CERES_TO_STRING_HELPER(x) #x
#define CERES_TO_STRING(x) CERES_TO_STRING_HELPER(x)
+// clang-format off
+#define CERES_SEMVER_VERSION(MAJOR, MINOR, PATCH) \
+ CERES_TO_STRING(MAJOR) "." \
+ CERES_TO_STRING(MINOR) "." \
+ CERES_TO_STRING(PATCH)
+// clang-format on
+
// The Ceres version as a string; for example "1.9.0".
-#define CERES_VERSION_STRING \
- CERES_TO_STRING(CERES_VERSION_MAJOR) \
- "." CERES_TO_STRING(CERES_VERSION_MINOR) "." CERES_TO_STRING( \
- CERES_VERSION_REVISION)
+#define CERES_VERSION_STRING \
+ CERES_SEMVER_VERSION( \
+ CERES_VERSION_MAJOR, CERES_VERSION_MINOR, CERES_VERSION_REVISION)
#endif // CERES_PUBLIC_VERSION_H_
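A quick check of the reworked macro (an illustrative standalone program, not part of the patch): with the values above, CERES_VERSION_STRING should expand to "2.2.0".

#include <cstdio>

#include "ceres/version.h"

int main() {
  std::printf("Ceres %s\n", CERES_VERSION_STRING);  // Prints "Ceres 2.2.0".
  return 0;
}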
diff --git a/internal/ceres/CMakeLists.txt b/internal/ceres/CMakeLists.txt
index 6dc7262..583757e 100644
--- a/internal/ceres/CMakeLists.txt
+++ b/internal/ceres/CMakeLists.txt
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2022 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -7,7 +7,7 @@
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright notice,
+# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of Google Inc. nor the names of its contributors may be
@@ -28,124 +28,28 @@
#
# Author: keir@google.com (Keir Mierle)
+# Build the list of dependencies for Ceres based on the current configuration.
+
# Avoid 'xxx.cc has no symbols' warnings from source files which are 'empty'
# when their enclosing #ifdefs are disabled.
-if (CERES_THREADING_MODEL STREQUAL "CXX_THREADS")
- set(CERES_PARALLEL_FOR_SRC parallel_for_cxx.cc thread_pool.cc)
-elseif (CERES_THREADING_MODEL STREQUAL "OPENMP")
- set(CERES_PARALLEL_FOR_SRC parallel_for_openmp.cc)
- if (CMAKE_COMPILER_IS_GNUCXX)
- # OpenMP in GCC requires the GNU OpenMP library.
- list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES gomp)
- endif()
-elseif (CERES_THREADING_MODEL STREQUAL "NO_THREADS")
- set(CERES_PARALLEL_FOR_SRC parallel_for_nothreads.cc)
-endif()
+find_package(Threads REQUIRED)
+list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES Threads::Threads)
+# Make dependency visible to the parent CMakeLists.txt
+set(Threads_DEPENDENCY "find_dependency (Threads)" PARENT_SCOPE)
-set(CERES_INTERNAL_SRC
- ${CERES_PARALLEL_FOR_SRC}
- accelerate_sparse.cc
- array_utils.cc
- blas.cc
- block_evaluate_preparer.cc
- block_jacobi_preconditioner.cc
- block_jacobian_writer.cc
- block_random_access_dense_matrix.cc
- block_random_access_diagonal_matrix.cc
- block_random_access_matrix.cc
- block_random_access_sparse_matrix.cc
- block_sparse_matrix.cc
- block_structure.cc
+# Source files that contain public symbols and live in the ceres namespaces.
+# Such symbols are expected to be marked with CERES_EXPORT and the files below
+# sorted in lexicographical order.
+set(CERES_EXPORTED_SRCS
c_api.cc
- canonical_views_clustering.cc
- cgnr_solver.cc
- callbacks.cc
- compressed_col_sparse_matrix_utils.cc
- compressed_row_jacobian_writer.cc
- compressed_row_sparse_matrix.cc
- conditioned_cost_function.cc
- conjugate_gradients_solver.cc
- context.cc
- context_impl.cc
- coordinate_descent_minimizer.cc
- corrector.cc
- covariance.cc
- covariance_impl.cc
- cxsparse.cc
- dense_normal_cholesky_solver.cc
- dense_qr_solver.cc
- dense_sparse_matrix.cc
- detect_structure.cc
- dogleg_strategy.cc
- dynamic_compressed_row_jacobian_writer.cc
- dynamic_compressed_row_sparse_matrix.cc
- dynamic_sparse_normal_cholesky_solver.cc
- evaluator.cc
- eigensparse.cc
- file.cc
- float_suitesparse.cc
- float_cxsparse.cc
- function_sample.cc
gradient_checker.cc
- gradient_checking_cost_function.cc
gradient_problem.cc
- gradient_problem_solver.cc
- implicit_schur_complement.cc
- inner_product_computer.cc
- is_close.cc
- iterative_refiner.cc
- iterative_schur_complement_solver.cc
- levenberg_marquardt_strategy.cc
- lapack.cc
- line_search.cc
- line_search_direction.cc
- line_search_minimizer.cc
- line_search_preprocessor.cc
- linear_least_squares_problems.cc
- linear_operator.cc
- linear_solver.cc
- local_parameterization.cc
loss_function.cc
- low_rank_inverse_hessian.cc
- minimizer.cc
+ manifold.cc
normal_prior.cc
- parallel_utils.cc
- parameter_block_ordering.cc
- partitioned_matrix_view.cc
- polynomial.cc
- preconditioner.cc
- preprocessor.cc
problem.cc
- problem_impl.cc
- program.cc
- reorder_program.cc
- residual_block.cc
- residual_block_utils.cc
- schur_complement_solver.cc
- schur_eliminator.cc
- schur_jacobi_preconditioner.cc
- schur_templates.cc
- scratch_evaluate_preparer.cc
- single_linkage_clustering.cc
solver.cc
- solver_utils.cc
- sparse_matrix.cc
- sparse_cholesky.cc
- sparse_normal_cholesky_solver.cc
- subset_preconditioner.cc
- split.cc
- stringprintf.cc
- suitesparse.cc
- thread_token_provider.cc
- triplet_sparse_matrix.cc
- trust_region_preprocessor.cc
- trust_region_minimizer.cc
- trust_region_step_evaluator.cc
- trust_region_strategy.cc
types.cc
- visibility.cc
- visibility_based_preconditioner.cc
- wall_time.cc
)
# Also depend on the header files so that they appear in IDEs.
@@ -161,6 +65,8 @@
# Depend also on public headers so they appear in IDEs.
file(GLOB CERES_PUBLIC_HDRS ${Ceres_SOURCE_DIR}/include/ceres/*.h)
file(GLOB CERES_PUBLIC_INTERNAL_HDRS ${Ceres_SOURCE_DIR}/include/ceres/internal/*.h)
+file(GLOB CERES_PUBLIC_INTERNAL_HDRS
+ ${Ceres_BINARY_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/ceres/internal/*.h)
# Include the specialized schur solvers.
if (SCHUR_SPECIALIZATIONS)
@@ -170,9 +76,14 @@
file(GLOB CERES_INTERNAL_SCHUR_FILES generated/*_d_d_d.cc)
endif (SCHUR_SPECIALIZATIONS)
-# Build the list of dependencies for Ceres based on the current configuration.
-find_package(Threads QUIET)
-list(APPEND CERES_LIBRARY_PUBLIC_DEPENDENCIES Threads::Threads)
+# The generated specializations of the Schur eliminator include
+# schur_eliminator_impl.h which defines EIGEN_CACHEFRIENDLY_PRODUCT_THRESHOLD
+# to a different value than Eigen's default. Depending on the order of files
+# in the unity build this can lead to clashes. Additionally, these files are
+# already generated in a way which leads to fairly large compilation units,
+# so the gains from a unity build would be marginal.
+set_source_files_properties(${CERES_INTERNAL_SCHUR_FILES} PROPERTIES
+ SKIP_UNITY_BUILD_INCLUSION ON)
if (NOT MINIGLOG AND GLOG_FOUND)
list(APPEND CERES_LIBRARY_PUBLIC_DEPENDENCIES ${GLOG_LIBRARIES})
@@ -186,32 +97,160 @@
endif()
endif (NOT MINIGLOG AND GLOG_FOUND)
-if (SUITESPARSE AND SUITESPARSE_FOUND)
+if (SUITESPARSE AND SuiteSparse_FOUND)
# Define version information for use in Solver::FullReport.
- add_definitions(-DCERES_SUITESPARSE_VERSION="${SUITESPARSE_VERSION}")
- list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES ${SUITESPARSE_LIBRARIES})
-endif (SUITESPARSE AND SUITESPARSE_FOUND)
+ add_definitions(-DCERES_SUITESPARSE_VERSION="${SuiteSparse_VERSION}")
+ list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES SuiteSparse::CHOLMOD
+ SuiteSparse::SPQR)
-if (CXSPARSE AND CXSPARSE_FOUND)
+ if (SuiteSparse_Partition_FOUND)
+ list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES SuiteSparse::Partition)
+ endif (SuiteSparse_Partition_FOUND)
+endif (SUITESPARSE AND SuiteSparse_FOUND)
+
+if (SuiteSparse_Partition_FOUND OR EIGENMETIS)
# Define version information for use in Solver::FullReport.
- add_definitions(-DCERES_CXSPARSE_VERSION="${CXSPARSE_VERSION}")
- list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES ${CXSPARSE_LIBRARIES})
-endif (CXSPARSE AND CXSPARSE_FOUND)
+ add_definitions(-DCERES_METIS_VERSION="${METIS_VERSION}")
+ list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES METIS::METIS)
+endif (SuiteSparse_Partition_FOUND OR EIGENMETIS)
if (ACCELERATESPARSE AND AccelerateSparse_FOUND)
list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES ${AccelerateSparse_LIBRARIES})
endif()
+if (USE_CUDA)
+ list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES ${CERES_CUDA_LIBRARIES})
+ set_source_files_properties(cuda_kernels_vector_ops.cu.cc PROPERTIES LANGUAGE CUDA)
+ set_source_files_properties(cuda_kernels_bsm_to_crs.cu.cc PROPERTIES LANGUAGE CUDA)
+ add_library(ceres_cuda_kernels STATIC cuda_kernels_vector_ops.cu.cc cuda_kernels_bsm_to_crs.cu.cc)
+ target_compile_features(ceres_cuda_kernels PRIVATE cxx_std_14)
+ target_compile_definitions(ceres_cuda_kernels PRIVATE CERES_STATIC_DEFINE)
+ # Enable __host__ / __device__ annotations in lambda declarations
+ target_compile_options(ceres_cuda_kernels PRIVATE --extended-lambda)
+ target_include_directories(ceres_cuda_kernels PRIVATE ${Ceres_SOURCE_DIR}/include ${Ceres_SOURCE_DIR}/internal ${Ceres_BINARY_DIR}/${CMAKE_INSTALL_INCLUDEDIR})
+ list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES ceres_cuda_kernels)
+endif (USE_CUDA)
+
if (LAPACK_FOUND)
list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES ${LAPACK_LIBRARIES})
endif ()
+# Source files that contain private symbols and live in the ceres::internal
+# namespace. The corresponding symbols (classes, functions, etc.) are expected
+# to be marked with CERES_NO_EXPORT and the files below are kept sorted in
+# lexicographical order.
+add_library(ceres_internal OBJECT
+ ${CERES_INTERNAL_SCHUR_FILES}
+ accelerate_sparse.cc
+ array_utils.cc
+ block_evaluate_preparer.cc
+ block_jacobi_preconditioner.cc
+ block_jacobian_writer.cc
+ block_random_access_dense_matrix.cc
+ block_random_access_diagonal_matrix.cc
+ block_random_access_matrix.cc
+ block_random_access_sparse_matrix.cc
+ block_sparse_matrix.cc
+ block_structure.cc
+ callbacks.cc
+ canonical_views_clustering.cc
+ cgnr_solver.cc
+ compressed_col_sparse_matrix_utils.cc
+ compressed_row_jacobian_writer.cc
+ compressed_row_sparse_matrix.cc
+ conditioned_cost_function.cc
+ context.cc
+ context_impl.cc
+ coordinate_descent_minimizer.cc
+ corrector.cc
+ cost_function.cc
+ covariance.cc
+ covariance_impl.cc
+ cuda_block_sparse_crs_view.cc
+ cuda_partitioned_block_sparse_crs_view.cc
+ cuda_block_structure.cc
+ cuda_sparse_matrix.cc
+ cuda_vector.cc
+ dense_cholesky.cc
+ dense_normal_cholesky_solver.cc
+ dense_qr.cc
+ dense_qr_solver.cc
+ dense_sparse_matrix.cc
+ detect_structure.cc
+ dogleg_strategy.cc
+ dynamic_compressed_row_jacobian_writer.cc
+ dynamic_compressed_row_sparse_matrix.cc
+ dynamic_sparse_normal_cholesky_solver.cc
+ eigensparse.cc
+ evaluation_callback.cc
+ evaluator.cc
+ fake_bundle_adjustment_jacobian.cc
+ file.cc
+ first_order_function.cc
+ float_suitesparse.cc
+ function_sample.cc
+ gradient_checking_cost_function.cc
+ gradient_problem_solver.cc
+ implicit_schur_complement.cc
+ inner_product_computer.cc
+ is_close.cc
+ iteration_callback.cc
+ iterative_refiner.cc
+ iterative_schur_complement_solver.cc
+ levenberg_marquardt_strategy.cc
+ line_search.cc
+ line_search_direction.cc
+ line_search_minimizer.cc
+ line_search_preprocessor.cc
+ linear_least_squares_problems.cc
+ linear_operator.cc
+ linear_solver.cc
+ low_rank_inverse_hessian.cc
+ minimizer.cc
+ parallel_invoke.cc
+ parallel_utils.cc
+ parallel_vector_ops.cc
+ parameter_block_ordering.cc
+ partitioned_matrix_view.cc
+ polynomial.cc
+ power_series_expansion_preconditioner.cc
+ preconditioner.cc
+ preprocessor.cc
+ problem_impl.cc
+ program.cc
+ reorder_program.cc
+ residual_block.cc
+ residual_block_utils.cc
+ schur_complement_solver.cc
+ schur_eliminator.cc
+ schur_jacobi_preconditioner.cc
+ schur_templates.cc
+ scratch_evaluate_preparer.cc
+ single_linkage_clustering.cc
+ solver_utils.cc
+ sparse_cholesky.cc
+ sparse_matrix.cc
+ sparse_normal_cholesky_solver.cc
+ stringprintf.cc
+ subset_preconditioner.cc
+ suitesparse.cc
+ thread_pool.cc
+ thread_token_provider.cc
+ triplet_sparse_matrix.cc
+ trust_region_minimizer.cc
+ trust_region_preprocessor.cc
+ trust_region_step_evaluator.cc
+ trust_region_strategy.cc
+ visibility.cc
+ visibility_based_preconditioner.cc
+ wall_time.cc
+)
+
set(CERES_LIBRARY_SOURCE
- ${CERES_INTERNAL_SRC}
+ ${CERES_EXPORTED_SRCS}
${CERES_INTERNAL_HDRS}
${CERES_PUBLIC_HDRS}
- ${CERES_PUBLIC_INTERNAL_HDRS}
- ${CERES_INTERNAL_SCHUR_FILES})
+ ${CERES_PUBLIC_INTERNAL_HDRS})
# Primarily for Android, but optionally for others, compile the minimal
# glog implementation into Ceres.
@@ -229,65 +268,55 @@
APPEND_STRING PROPERTY COMPILE_FLAGS "-Wno-missing-declarations")
endif()
-add_library(ceres ${CERES_LIBRARY_SOURCE})
+add_library(ceres $<TARGET_OBJECTS:ceres_internal> ${CERES_LIBRARY_SOURCE})
+
+if(BUILD_SHARED_LIBS)
+ # While building shared libraries, we additionally require a static variant to
+ # be able to access internal symbols which are not intended for general use.
+ # Therefore, create a static library from object files and apply all the
+ # compiler options from the main library to the static one.
+ add_library(ceres_static STATIC $<TARGET_OBJECTS:ceres_internal> ${CERES_LIBRARY_SOURCE})
+ target_include_directories(ceres_static PUBLIC $<TARGET_PROPERTY:ceres,INCLUDE_DIRECTORIES>)
+ target_compile_definitions(ceres_static PUBLIC $<TARGET_PROPERTY:ceres,COMPILE_DEFINITIONS>)
+ target_compile_features(ceres_static PUBLIC $<TARGET_PROPERTY:ceres,COMPILE_FEATURES>)
+ target_compile_options(ceres_static PUBLIC $<TARGET_PROPERTY:ceres,COMPILE_OPTIONS>)
+ target_link_libraries(ceres_static
+ INTERFACE $<TARGET_PROPERTY:ceres,INTERFACE_LINK_LIBRARIES>
+ PRIVATE ${CERES_LIBRARY_PRIVATE_DEPENDENCIES})
+ # CERES_STATIC_DEFINE is generated by the GenerateExportHeader CMake module
+ # used to autogerate export.h. The macro should not be renamed without
+ # updating the corresponding generate_export_header invocation.
+ target_compile_definitions(ceres_static PUBLIC CERES_STATIC_DEFINE)
+else()
+  # In a static library build, no additional access layer is necessary as all
+ # symbols are visible.
+ add_library(ceres_static ALIAS ceres)
+endif()
+
+# Create a local alias target that matches the expected installed target.
+add_library(Ceres::ceres ALIAS ceres)
+
+# Apply all compiler options from the main Ceres target. Compiler options
+# should generally be defined on the main target referenced by the ceres_target
+# CMake variable.
+target_include_directories(ceres_internal PUBLIC $<TARGET_PROPERTY:ceres,INCLUDE_DIRECTORIES>)
+target_compile_definitions(ceres_internal PUBLIC $<TARGET_PROPERTY:ceres,COMPILE_DEFINITIONS>)
+target_compile_options(ceres_internal PUBLIC $<TARGET_PROPERTY:ceres,COMPILE_OPTIONS>)
+target_compile_definitions(ceres_internal PRIVATE ceres_EXPORTS)
+target_compile_features(ceres_internal PRIVATE $<TARGET_PROPERTY:ceres,COMPILE_FEATURES>)
+
+# Declare the minimum required C++ language version as a usage requirement so
+# that downstream clients fulfil it as well. Consumers can choose the same or a
+# newer language standard revision.
+target_compile_features(ceres PUBLIC cxx_std_17)
+
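A minimal downstream sketch, assuming a hypothetical consumer project and target name, showing that the cxx_std_17 usage requirement above propagates through Ceres::ceres:

  # Hypothetical consumer CMakeLists.txt; project and target names are illustrative.
  find_package(Ceres REQUIRED)
  add_executable(my_app main.cc)
  target_link_libraries(my_app PRIVATE Ceres::ceres)  # inherits cxx_std_17 (or newer)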
set_target_properties(ceres PROPERTIES
VERSION ${CERES_VERSION}
- SOVERSION ${CERES_VERSION_MAJOR})
-if (BUILD_SHARED_LIBS)
- set_target_properties(ceres PROPERTIES
- # Set the default symbol visibility to hidden to unify the behavior among
- # the various compilers and to get smaller binaries
- C_VISIBILITY_PRESET hidden
- CXX_VISIBILITY_PRESET hidden)
-endif()
+ SOVERSION 4)
-# When building as a shared libarary with testing enabled, we need to export
-# internal symbols needed by the unit tests
-if (BUILD_TESTING)
- target_compile_definitions(ceres
- PUBLIC
- CERES_EXPORT_INTERNAL_SYMBOLS
- )
-endif()
-
-
-# The ability to specify a minimum language version via cxx_std_[11,14,17]
-# requires CMake >= 3.8. Prior to that we have to specify the compiler features
-# we require.
-if (CMAKE_VERSION VERSION_LESS 3.8)
- set(REQUIRED_PUBLIC_CXX_FEATURES cxx_alignas cxx_alignof cxx_constexpr)
-else()
- # Forward whatever C++ version Ceres was compiled with as our requirement
- # for downstream clients.
- set(REQUIRED_PUBLIC_CXX_FEATURES cxx_std_${CMAKE_CXX_STANDARD})
-endif()
-target_compile_features(ceres PUBLIC ${REQUIRED_PUBLIC_CXX_FEATURES})
-
-include(AppendTargetProperty)
-# Always build position-independent code (PIC), even when building Ceres as a
-# static library so that shared libraries can link against it, not just
-# executables (PIC does not apply on Windows).
-if (NOT WIN32 AND NOT BUILD_SHARED_LIBS)
- # Use set_target_properties() not append_target_property() here as
- # POSITION_INDEPENDENT_CODE is a binary ON/OFF switch.
- set_target_properties(ceres PROPERTIES POSITION_INDEPENDENT_CODE ON)
-endif()
-
-if (BUILD_SHARED_LIBS)
- # When building a shared library, mark all external libraries as
- # PRIVATE so they don't show up as a dependency.
- target_link_libraries(ceres
- PUBLIC ${CERES_LIBRARY_PUBLIC_DEPENDENCIES}
- PRIVATE ${CERES_LIBRARY_PRIVATE_DEPENDENCIES})
-else (BUILD_SHARED_LIBS)
- # When building a static library, all external libraries are
- # PUBLIC(default) since the user needs to link to them.
- # They will be listed in CeresTargets.cmake.
- set(CERES_LIBRARY_DEPENDENCIES
- ${CERES_LIBRARY_PUBLIC_DEPENDENCIES}
- ${CERES_LIBRARY_PRIVATE_DEPENDENCIES})
- target_link_libraries(ceres PUBLIC ${CERES_LIBRARY_DEPENDENCIES})
-endif (BUILD_SHARED_LIBS)
+target_link_libraries(ceres
+ PUBLIC ${CERES_LIBRARY_PUBLIC_DEPENDENCIES}
+ PRIVATE ${CERES_LIBRARY_PRIVATE_DEPENDENCIES})
# Add the Ceres headers to its target.
#
@@ -296,16 +325,16 @@
# that if the user has an installed version of Ceres in the same location as one
# of the dependencies (e.g. /usr/local) that we find the config.h we just
# configured, not the (older) installed config.h.
-target_include_directories(ceres BEFORE PUBLIC
- $<BUILD_INTERFACE:${Ceres_BINARY_DIR}/config>)
-target_include_directories(ceres PRIVATE ${Ceres_SOURCE_DIR}/internal)
-target_include_directories(ceres PUBLIC
- $<BUILD_INTERFACE:${Ceres_SOURCE_DIR}/include>
- $<INSTALL_INTERFACE:include>)
+target_include_directories(ceres
+ BEFORE PUBLIC
+ $<BUILD_INTERFACE:${Ceres_BINARY_DIR}/${CMAKE_INSTALL_INCLUDEDIR}>
+ PRIVATE ${Ceres_SOURCE_DIR}/internal
+ PUBLIC $<BUILD_INTERFACE:${Ceres_SOURCE_DIR}/include>
+ $<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>)
# Eigen SparseQR generates various compiler warnings related to unused and
# uninitialised local variables. To avoid having to individually suppress these
-# warnings around the #include statments for Eigen headers across all GCC/Clang
+# warnings around the #include statements for Eigen headers across all GCC/Clang
# versions, we tell CMake to treat Eigen headers as system headers. This
# results in all compiler warnings from them being suppressed.
target_link_libraries(ceres PUBLIC Eigen3::Eigen)
@@ -327,21 +356,13 @@
# themselves (intentionally or otherwise) and so break their build.
target_include_directories(ceres BEFORE PUBLIC
$<BUILD_INTERFACE:${Ceres_SOURCE_DIR}/internal/ceres/miniglog>
- $<INSTALL_INTERFACE:include/ceres/internal/miniglog>)
+ $<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/ceres/internal/miniglog>)
elseif (NOT FOUND_INSTALLED_GLOG_CMAKE_CONFIGURATION)
# Only append glog include directories if the glog found was not a CMake
# exported target that already includes them.
list(APPEND CERES_LIBRARY_PUBLIC_DEPENDENCIES_INCLUDE_DIRS
${GLOG_INCLUDE_DIRS})
endif()
-if (SUITESPARSE)
- list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES_INCLUDE_DIRS
- ${SUITESPARSE_INCLUDE_DIRS})
-endif()
-if (CXSPARSE)
- list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES_INCLUDE_DIRS
- ${CXSPARSE_INCLUDE_DIRS})
-endif()
if (ACCELERATESPARSE)
list(APPEND CERES_LIBRARY_PRIVATE_DEPENDENCIES_INCLUDE_DIRS
${AccelerateSparse_INCLUDE_DIRS})
@@ -349,47 +370,45 @@
# Add include locations for optional dependencies to the Ceres target without
# duplication.
list(REMOVE_DUPLICATES CERES_LIBRARY_PRIVATE_DEPENDENCIES_INCLUDE_DIRS)
-foreach(INC_DIR ${CERES_LIBRARY_PRIVATE_DEPENDENCIES_INCLUDE_DIRS})
- target_include_directories(ceres PRIVATE ${INC_DIR})
-endforeach()
+target_include_directories(ceres PRIVATE ${CERES_LIBRARY_PRIVATE_DEPENDENCIES_INCLUDE_DIRS})
list(REMOVE_DUPLICATES CERES_LIBRARY_PUBLIC_DEPENDENCIES_INCLUDE_DIRS)
-foreach(INC_DIR ${CERES_LIBRARY_PUBLIC_DEPENDENCIES_INCLUDE_DIRS})
- target_include_directories(ceres PUBLIC ${INC_DIR})
-endforeach()
+target_include_directories(ceres PUBLIC ${CERES_LIBRARY_PUBLIC_DEPENDENCIES_INCLUDE_DIRS})
+
+# Generate an export header for annotating symbol visibility
+include(GenerateExportHeader)
+generate_export_header(ceres EXPORT_FILE_NAME
+ ${Ceres_BINARY_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/ceres/internal/export.h)
+
+if (USE_CUDA)
+ install(TARGETS ceres_cuda_kernels
+ EXPORT CeresExport
+ RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}
+ LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
+ ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR})
+endif(USE_CUDA)
install(TARGETS ceres
EXPORT CeresExport
- RUNTIME DESTINATION bin
- LIBRARY DESTINATION lib${LIB_SUFFIX}
- ARCHIVE DESTINATION lib${LIB_SUFFIX})
-
-# Create a local alias target that matches the expected installed target.
-add_library(Ceres::ceres ALIAS ceres)
+ RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}
+ LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
+ ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR})
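The CMAKE_INSTALL_BINDIR/LIBDIR/INCLUDEDIR destinations above presume that the top-level project pulls in the stock GNUInstallDirs module; a one-line sketch of that assumption:

  # Assumed to appear earlier in the top-level CMakeLists.txt (not shown in this hunk).
  include(GNUInstallDirs)  # defines CMAKE_INSTALL_BINDIR, _LIBDIR, _INCLUDEDIR, ...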
if (BUILD_TESTING AND GFLAGS)
- add_library(gtest gmock_gtest_all.cc gmock_main.cc)
- target_include_directories(gtest PUBLIC ${Ceres_SOURCE_DIR}/internal/ceres)
- if (BUILD_SHARED_LIBS)
- # Define gtest-specific shared library flags for compilation.
- append_target_property(gtest COMPILE_DEFINITIONS
- GTEST_CREATE_SHARED_LIBRARY)
- endif()
+ add_library(gtest STATIC gmock_gtest_all.cc gmock_main.cc)
- add_library(test_util
+ target_include_directories(gtest PRIVATE ${Ceres_SOURCE_DIR}/internal/ceres)
+ if (CMAKE_SYSTEM_NAME MATCHES "QNX")
+ target_link_libraries(gtest PUBLIC regex)
+ endif()
+ target_link_libraries(gtest PRIVATE Ceres::ceres gflags)
+
+ add_library(test_util STATIC
evaluator_test_utils.cc
numeric_diff_test_utils.cc
test_util.cc)
- target_include_directories(test_util PUBLIC ${Ceres_SOURCE_DIR}/internal)
- if (MINIGLOG)
- # When using miniglog, it is compiled into Ceres, thus Ceres becomes
- # the library against which other libraries should link for logging.
- target_link_libraries(gtest PUBLIC gflags Ceres::ceres)
- target_link_libraries(test_util PUBLIC Ceres::ceres gtest)
- else (MINIGLOG)
- target_link_libraries(gtest PUBLIC gflags ${GLOG_LIBRARIES})
- target_link_libraries(test_util PUBLIC Ceres::ceres gtest ${GLOG_LIBRARIES})
- endif (MINIGLOG)
+ target_include_directories(test_util PUBLIC ${Ceres_SOURCE_DIR}/internal)
+ target_link_libraries (test_util PUBLIC ceres_static gflags gtest)
macro (CERES_TEST NAME)
add_executable(${NAME}_test ${NAME}_test.cc)
@@ -398,16 +417,21 @@
# may be referenced without the 'ceres' path prefix and all private
# dependencies that may be directly referenced.
target_include_directories(${NAME}_test
- PUBLIC ${CMAKE_CURRENT_LIST_DIR}
- ${Ceres_SOURCE_DIR}/internal/ceres
- ${CERES_LIBRARY_PRIVATE_DEPENDENCIES_INCLUDE_DIRS})
+ PRIVATE ${Ceres_SOURCE_DIR}/internal/ceres
+ ${CERES_LIBRARY_PRIVATE_DEPENDENCIES_INCLUDE_DIRS})
+ # Some tests include direct references/includes of private dependency
+ # headers which are not propagated via the ceres targets, so link them
+ # explicitly.
+ target_link_libraries(${NAME}_test PRIVATE gtest test_util ceres_static
+ ${CERES_LIBRARY_PRIVATE_DEPENDENCIES})
- target_link_libraries(${NAME}_test PUBLIC test_util Ceres::ceres gtest)
- if (BUILD_SHARED_LIBS)
- # Define gtest-specific shared library flags for linking.
- append_target_property(${NAME}_test COMPILE_DEFINITIONS
- GTEST_LINKED_AS_SHARED_LIBRARY)
- endif()
+ # covariance_test uses SuiteSparseQR.hpp. However, since SuiteSparse import
+ # targets are private (link only) dependencies not propagated to consumers,
+ # we need to link against the target explicitly here.
+ if (TARGET SuiteSparse::SPQR)
+ target_link_libraries (${NAME}_test PRIVATE SuiteSparse::SPQR)
+ endif (TARGET SuiteSparse::SPQR)
+
add_test(NAME ${NAME}_test
COMMAND ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/${NAME}_test
--test_srcdir
@@ -419,7 +443,7 @@
ceres_test(autodiff)
ceres_test(autodiff_first_order_function)
ceres_test(autodiff_cost_function)
- ceres_test(autodiff_local_parameterization)
+ ceres_test(autodiff_manifold)
ceres_test(block_jacobi_preconditioner)
ceres_test(block_random_access_dense_matrix)
ceres_test(block_random_access_diagonal_matrix)
@@ -436,7 +460,18 @@
ceres_test(cost_function_to_functor)
ceres_test(covariance)
ceres_test(cubic_interpolation)
+ ceres_test(cuda_partitioned_block_sparse_crs_view)
+ ceres_test(cuda_block_sparse_crs_view)
+ ceres_test(cuda_block_structure)
+ ceres_test(cuda_dense_cholesky)
+ ceres_test(cuda_dense_qr)
+ ceres_test(cuda_kernels_vector_ops)
+ ceres_test(cuda_sparse_matrix)
+ ceres_test(cuda_streamed_buffer)
+ ceres_test(cuda_vector)
ceres_test(dense_linear_solver)
+ ceres_test(dense_cholesky)
+ ceres_test(dense_qr)
ceres_test(dense_sparse_matrix)
ceres_test(detect_structure)
ceres_test(dogleg_strategy)
@@ -463,14 +498,16 @@
ceres_test(iterative_refiner)
ceres_test(iterative_schur_complement_solver)
ceres_test(jet)
+ ceres_test(jet_traits)
ceres_test(levenberg_marquardt_strategy)
ceres_test(line_search_minimizer)
ceres_test(line_search_preprocessor)
- ceres_test(local_parameterization)
ceres_test(loss_function)
+ ceres_test(manifold)
ceres_test(minimizer)
ceres_test(normal_prior)
ceres_test(numeric_diff_cost_function)
+ ceres_test(numeric_diff_first_order_function)
ceres_test(ordered_groups)
ceres_test(parallel_for)
ceres_test(parallel_utils)
@@ -479,6 +516,7 @@
ceres_test(parameter_dims)
ceres_test(partitioned_matrix_view)
ceres_test(polynomial)
+ ceres_test(power_series_expansion_preconditioner)
ceres_test(problem)
ceres_test(program)
ceres_test(reorder_program)
@@ -509,14 +547,20 @@
endif (BUILD_TESTING AND GFLAGS)
macro(add_dependencies_to_benchmark BENCHMARK_TARGET)
- target_link_libraries(${BENCHMARK_TARGET} PUBLIC Ceres::ceres benchmark::benchmark)
- target_include_directories(${BENCHMARK_TARGET} PUBLIC
- ${Ceres_SOURCE_DIR}/internal
- ${Ceres_SOURCE_DIR}/internal/ceres
- ${CERES_LIBRARY_PRIVATE_DEPENDENCIES_INCLUDE_DIRS})
+ target_include_directories(${BENCHMARK_TARGET}
+ PRIVATE ${Ceres_SOURCE_DIR}/internal
+ ${CERES_LIBRARY_PRIVATE_DEPENDENCIES_INCLUDE_DIRS})
+ # Benchmarks include direct references/includes of private dependency headers
+ # which are not propagated via the ceres targets, so link them explicitly.
+ target_link_libraries(${BENCHMARK_TARGET}
+ PRIVATE benchmark::benchmark ceres_static
+ ${CERES_LIBRARY_PRIVATE_DEPENDENCIES})
endmacro()
if (BUILD_BENCHMARKS)
+ add_executable(evaluation_benchmark evaluation_benchmark.cc)
+ add_dependencies_to_benchmark(evaluation_benchmark)
+
add_executable(small_blas_gemv_benchmark small_blas_gemv_benchmark.cc)
add_dependencies_to_benchmark(small_blas_gemv_benchmark)
@@ -529,6 +573,24 @@
add_executable(schur_eliminator_benchmark schur_eliminator_benchmark.cc)
add_dependencies_to_benchmark(schur_eliminator_benchmark)
+ add_executable(jet_operator_benchmark jet_operator_benchmark.cc)
+ add_dependencies_to_benchmark(jet_operator_benchmark)
+
+ add_executable(dense_linear_solver_benchmark dense_linear_solver_benchmark.cc)
+ add_dependencies_to_benchmark(dense_linear_solver_benchmark)
+
+ add_executable(spmv_benchmark spmv_benchmark.cc)
+ add_dependencies_to_benchmark(spmv_benchmark)
+
+ add_executable(parallel_vector_operations_benchmark parallel_vector_operations_benchmark.cc)
+ add_dependencies_to_benchmark(parallel_vector_operations_benchmark)
+
+ add_executable(parallel_for_benchmark parallel_for_benchmark.cc)
+ add_dependencies_to_benchmark(parallel_for_benchmark)
+
+ add_executable(block_jacobi_preconditioner_benchmark
+ block_jacobi_preconditioner_benchmark.cc)
+ add_dependencies_to_benchmark(block_jacobi_preconditioner_benchmark)
+
add_subdirectory(autodiff_benchmarks)
endif (BUILD_BENCHMARKS)
-
diff --git a/internal/ceres/accelerate_sparse.cc b/internal/ceres/accelerate_sparse.cc
index d2b642b..0baadc0 100644
--- a/internal/ceres/accelerate_sparse.cc
+++ b/internal/ceres/accelerate_sparse.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -29,11 +29,12 @@
// Author: alexs.mac@gmail.com (Alex Stewart)
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
#include <algorithm>
+#include <memory>
#include <string>
#include <vector>
@@ -60,7 +61,7 @@
CASESTR(SparseParameterError);
CASESTR(SparseStatusReleased);
default:
- return "UKNOWN";
+ return "UNKNOWN";
}
}
} // namespace.
@@ -113,12 +114,12 @@
// Accelerate's columnStarts is a long*, not an int*. These types might be
// different (e.g. ARM on iOS) so always make a copy.
column_starts_.resize(A->num_rows() + 1); // +1 for final column length.
- std::copy_n(A->rows(), column_starts_.size(), &column_starts_[0]);
+ std::copy_n(A->rows(), column_starts_.size(), column_starts_.data());
ASSparseMatrix At;
At.structure.rowCount = A->num_cols();
At.structure.columnCount = A->num_rows();
- At.structure.columnStarts = &column_starts_[0];
+ At.structure.columnStarts = column_starts_.data();
At.structure.rowIndices = A->mutable_cols();
At.structure.attributes.transpose = false;
At.structure.attributes.triangle = SparseUpperTriangle;
@@ -126,8 +127,8 @@
At.structure.attributes._reserved = 0;
At.structure.attributes._allocatedBySparse = 0;
At.structure.blockSize = 1;
- if (std::is_same<Scalar, double>::value) {
- At.data = reinterpret_cast<Scalar*>(A->mutable_values());
+ if constexpr (std::is_same_v<Scalar, double>) {
+ At.data = A->mutable_values();
} else {
values_ =
ConstVectorRef(A->values(), A->num_nonzeros()).template cast<Scalar>();
@@ -138,8 +139,23 @@
template <typename Scalar>
typename AccelerateSparse<Scalar>::SymbolicFactorization
-AccelerateSparse<Scalar>::AnalyzeCholesky(ASSparseMatrix* A) {
- return SparseFactor(SparseFactorizationCholesky, A->structure);
+AccelerateSparse<Scalar>::AnalyzeCholesky(OrderingType ordering_type,
+ ASSparseMatrix* A) {
+ SparseSymbolicFactorOptions sfoption;
+ sfoption.control = SparseDefaultControl;
+ sfoption.orderMethod = SparseOrderDefault;
+ sfoption.order = nullptr;
+ sfoption.ignoreRowsAndColumns = nullptr;
+ sfoption.malloc = malloc;
+ sfoption.free = free;
+ sfoption.reportError = nullptr;
+
+ if (ordering_type == OrderingType::AMD) {
+ sfoption.orderMethod = SparseOrderAMD;
+ } else if (ordering_type == OrderingType::NESDIS) {
+ sfoption.orderMethod = SparseOrderMetis;
+ }
+ return SparseFactor(SparseFactorizationCholesky, A->structure, sfoption);
}
template <typename Scalar>
@@ -189,37 +205,38 @@
template <typename Scalar>
CompressedRowSparseMatrix::StorageType
AppleAccelerateCholesky<Scalar>::StorageType() const {
- return CompressedRowSparseMatrix::LOWER_TRIANGULAR;
+ return CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR;
}
template <typename Scalar>
LinearSolverTerminationType AppleAccelerateCholesky<Scalar>::Factorize(
CompressedRowSparseMatrix* lhs, std::string* message) {
CHECK_EQ(lhs->storage_type(), StorageType());
- if (lhs == NULL) {
- *message = "Failure: Input lhs is NULL.";
- return LINEAR_SOLVER_FATAL_ERROR;
+ if (lhs == nullptr) {
+ *message = "Failure: Input lhs is nullptr.";
+ return LinearSolverTerminationType::FATAL_ERROR;
}
typename SparseTypesTrait<Scalar>::SparseMatrix as_lhs =
as_.CreateSparseMatrixTransposeView(lhs);
if (!symbolic_factor_) {
- symbolic_factor_.reset(
- new typename SparseTypesTrait<Scalar>::SymbolicFactorization(
- as_.AnalyzeCholesky(&as_lhs)));
+ symbolic_factor_ = std::make_unique<
+ typename SparseTypesTrait<Scalar>::SymbolicFactorization>(
+ as_.AnalyzeCholesky(ordering_type_, &as_lhs));
+
if (symbolic_factor_->status != SparseStatusOK) {
*message = StringPrintf(
"Apple Accelerate Failure : Symbolic factorisation failed: %s",
SparseStatusToString(symbolic_factor_->status));
FreeSymbolicFactorization();
- return LINEAR_SOLVER_FATAL_ERROR;
+ return LinearSolverTerminationType::FATAL_ERROR;
}
}
if (!numeric_factor_) {
- numeric_factor_.reset(
- new typename SparseTypesTrait<Scalar>::NumericFactorization(
- as_.Cholesky(&as_lhs, symbolic_factor_.get())));
+ numeric_factor_ = std::make_unique<
+ typename SparseTypesTrait<Scalar>::NumericFactorization>(
+ as_.Cholesky(&as_lhs, symbolic_factor_.get()));
} else {
// Recycle memory from previous numeric factorization.
as_.Cholesky(&as_lhs, numeric_factor_.get());
@@ -229,10 +246,10 @@
"Apple Accelerate Failure : Numeric factorisation failed: %s",
SparseStatusToString(numeric_factor_->status));
FreeNumericFactorization();
- return LINEAR_SOLVER_FAILURE;
+ return LinearSolverTerminationType::FAILURE;
}
- return LINEAR_SOLVER_SUCCESS;
+ return LinearSolverTerminationType::SUCCESS;
}
template <typename Scalar>
@@ -245,8 +262,8 @@
typename SparseTypesTrait<Scalar>::DenseVector as_rhs_and_solution;
as_rhs_and_solution.count = num_cols;
- if (std::is_same<Scalar, double>::value) {
- as_rhs_and_solution.data = reinterpret_cast<Scalar*>(solution);
+ if constexpr (std::is_same_v<Scalar, double>) {
+ as_rhs_and_solution.data = solution;
std::copy_n(rhs, num_cols, solution);
} else {
scalar_rhs_and_solution_ =
@@ -258,14 +275,14 @@
VectorRef(solution, num_cols) =
scalar_rhs_and_solution_.template cast<double>();
}
- return LINEAR_SOLVER_SUCCESS;
+ return LinearSolverTerminationType::SUCCESS;
}
template <typename Scalar>
void AppleAccelerateCholesky<Scalar>::FreeSymbolicFactorization() {
if (symbolic_factor_) {
SparseCleanup(*symbolic_factor_);
- symbolic_factor_.reset();
+ symbolic_factor_ = nullptr;
}
}
@@ -273,7 +290,7 @@
void AppleAccelerateCholesky<Scalar>::FreeNumericFactorization() {
if (numeric_factor_) {
SparseCleanup(*numeric_factor_);
- numeric_factor_.reset();
+ numeric_factor_ = nullptr;
}
}
diff --git a/internal/ceres/accelerate_sparse.h b/internal/ceres/accelerate_sparse.h
index e53758d..ef819b8 100644
--- a/internal/ceres/accelerate_sparse.h
+++ b/internal/ceres/accelerate_sparse.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,7 +32,7 @@
#define CERES_INTERNAL_ACCELERATE_SPARSE_H_
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
@@ -55,18 +55,18 @@
template <>
struct SparseTypesTrait<double> {
- typedef DenseVector_Double DenseVector;
- typedef SparseMatrix_Double SparseMatrix;
- typedef SparseOpaqueSymbolicFactorization SymbolicFactorization;
- typedef SparseOpaqueFactorization_Double NumericFactorization;
+ using DenseVector = DenseVector_Double;
+ using SparseMatrix = SparseMatrix_Double;
+ using SymbolicFactorization = SparseOpaqueSymbolicFactorization;
+ using NumericFactorization = SparseOpaqueFactorization_Double;
};
template <>
struct SparseTypesTrait<float> {
- typedef DenseVector_Float DenseVector;
- typedef SparseMatrix_Float SparseMatrix;
- typedef SparseOpaqueSymbolicFactorization SymbolicFactorization;
- typedef SparseOpaqueFactorization_Float NumericFactorization;
+ using DenseVector = DenseVector_Float;
+ using SparseMatrix = SparseMatrix_Float;
+ using SymbolicFactorization = SparseOpaqueSymbolicFactorization;
+ using NumericFactorization = SparseOpaqueFactorization_Float;
};
template <typename Scalar>
@@ -91,7 +91,8 @@
// objects internally).
ASSparseMatrix CreateSparseMatrixTransposeView(CompressedRowSparseMatrix* A);
// Computes a symbolic factorisation of A that can be used in Solve().
- SymbolicFactorization AnalyzeCholesky(ASSparseMatrix* A);
+ SymbolicFactorization AnalyzeCholesky(OrderingType ordering_type,
+ ASSparseMatrix* A);
// Compute the numeric Cholesky factorization of A, given its
// symbolic factorization.
NumericFactorization Cholesky(ASSparseMatrix* A,
@@ -111,7 +112,7 @@
// An implementation of SparseCholesky interface using Apple's Accelerate
// framework.
template <typename Scalar>
-class AppleAccelerateCholesky : public SparseCholesky {
+class AppleAccelerateCholesky final : public SparseCholesky {
public:
// Factory
static std::unique_ptr<SparseCholesky> Create(OrderingType ordering_type);
diff --git a/internal/ceres/array_selector_test.cc b/internal/ceres/array_selector_test.cc
index f7fef3c..ff1bcd7 100644
--- a/internal/ceres/array_selector_test.cc
+++ b/internal/ceres/array_selector_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2020 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://code.google.com/p/ceres-solver/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,8 +33,7 @@
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// This test only checks, if the correct array implementations are selected. The
// test for FixedArray is in fixed_array_test.cc. Tests for std::array and
@@ -42,38 +41,33 @@
TEST(ArraySelector, FixedArray) {
ArraySelector<int, DYNAMIC, 20> array1(10);
static_assert(
- std::is_base_of<internal::FixedArray<int, 20>, decltype(array1)>::value,
- "");
+ std::is_base_of<internal::FixedArray<int, 20>, decltype(array1)>::value);
EXPECT_EQ(array1.size(), 10);
ArraySelector<int, DYNAMIC, 10> array2(20);
static_assert(
- std::is_base_of<internal::FixedArray<int, 10>, decltype(array2)>::value,
- "");
+ std::is_base_of<internal::FixedArray<int, 10>, decltype(array2)>::value);
EXPECT_EQ(array2.size(), 20);
}
TEST(ArraySelector, Array) {
ArraySelector<int, 10, 20> array1(10);
- static_assert(std::is_base_of<std::array<int, 10>, decltype(array1)>::value,
- "");
+ static_assert(std::is_base_of<std::array<int, 10>, decltype(array1)>::value);
EXPECT_EQ(array1.size(), 10);
ArraySelector<int, 20, 20> array2(20);
- static_assert(std::is_base_of<std::array<int, 20>, decltype(array2)>::value,
- "");
+ static_assert(std::is_base_of<std::array<int, 20>, decltype(array2)>::value);
EXPECT_EQ(array2.size(), 20);
}
TEST(ArraySelector, Vector) {
ArraySelector<int, 20, 10> array1(20);
- static_assert(std::is_base_of<std::vector<int>, decltype(array1)>::value, "");
+ static_assert(std::is_base_of<std::vector<int>, decltype(array1)>::value);
EXPECT_EQ(array1.size(), 20);
ArraySelector<int, 1, 0> array2(1);
- static_assert(std::is_base_of<std::vector<int>, decltype(array2)>::value, "");
+ static_assert(std::is_base_of<std::vector<int>, decltype(array2)>::value);
EXPECT_EQ(array2.size(), 1);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/array_utils.cc b/internal/ceres/array_utils.cc
index 6bffd84..a962f7f 100644
--- a/internal/ceres/array_utils.cc
+++ b/internal/ceres/array_utils.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,14 +38,12 @@
#include "ceres/stringprintf.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
-using std::string;
+namespace ceres::internal {
-bool IsArrayValid(const int size, const double* x) {
- if (x != NULL) {
- for (int i = 0; i < size; ++i) {
+bool IsArrayValid(const int64_t size, const double* x) {
+ if (x != nullptr) {
+ for (int64_t i = 0; i < size; ++i) {
if (!std::isfinite(x[i]) || (x[i] == kImpossibleValue)) {
return false;
}
@@ -54,12 +52,12 @@
return true;
}
-int FindInvalidValue(const int size, const double* x) {
- if (x == NULL) {
+int64_t FindInvalidValue(const int64_t size, const double* x) {
+ if (x == nullptr) {
return size;
}
- for (int i = 0; i < size; ++i) {
+ for (int64_t i = 0; i < size; ++i) {
if (!std::isfinite(x[i]) || (x[i] == kImpossibleValue)) {
return i;
}
@@ -68,17 +66,19 @@
return size;
}
-void InvalidateArray(const int size, double* x) {
- if (x != NULL) {
- for (int i = 0; i < size; ++i) {
+void InvalidateArray(const int64_t size, double* x) {
+ if (x != nullptr) {
+ for (int64_t i = 0; i < size; ++i) {
x[i] = kImpossibleValue;
}
}
}
-void AppendArrayToString(const int size, const double* x, string* result) {
- for (int i = 0; i < size; ++i) {
- if (x == NULL) {
+void AppendArrayToString(const int64_t size,
+ const double* x,
+ std::string* result) {
+ for (int64_t i = 0; i < size; ++i) {
+ if (x == nullptr) {
StringAppendF(result, "Not Computed ");
} else {
if (x[i] == kImpossibleValue) {
@@ -90,18 +90,17 @@
}
}
-void MapValuesToContiguousRange(const int size, int* array) {
+void MapValuesToContiguousRange(const int64_t size, int* array) {
std::vector<int> unique_values(array, array + size);
std::sort(unique_values.begin(), unique_values.end());
unique_values.erase(std::unique(unique_values.begin(), unique_values.end()),
unique_values.end());
- for (int i = 0; i < size; ++i) {
+ for (int64_t i = 0; i < size; ++i) {
array[i] =
std::lower_bound(unique_values.begin(), unique_values.end(), array[i]) -
unique_values.begin();
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/array_utils.h b/internal/ceres/array_utils.h
index 68feca5..bd51aa5 100644
--- a/internal/ceres/array_utils.h
+++ b/internal/ceres/array_utils.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -43,31 +43,32 @@
#ifndef CERES_INTERNAL_ARRAY_UTILS_H_
#define CERES_INTERNAL_ARRAY_UTILS_H_
+#include <cstdint>
#include <string>
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Fill the array x with an impossible value that the user code is
// never expected to compute.
-CERES_EXPORT_INTERNAL void InvalidateArray(int size, double* x);
+CERES_NO_EXPORT void InvalidateArray(const int64_t size, double* x);
// Check if all the entries of the array x are valid, i.e. all the
// values in the array should be finite and none of them should be
// equal to the "impossible" value used by InvalidateArray.
-CERES_EXPORT_INTERNAL bool IsArrayValid(int size, const double* x);
+CERES_NO_EXPORT bool IsArrayValid(const int64_t size, const double* x);
// If the array contains an invalid value, return the index for it,
// otherwise return size.
-CERES_EXPORT_INTERNAL int FindInvalidValue(const int size, const double* x);
+CERES_NO_EXPORT int64_t FindInvalidValue(const int64_t size, const double* x);
// Utility routine to print an array of doubles to a string. If the
-// array pointer is NULL, it is treated as an array of zeros.
-CERES_EXPORT_INTERNAL void AppendArrayToString(const int size,
- const double* x,
- std::string* result);
+// array pointer is nullptr, it is treated as an array of zeros.
+CERES_NO_EXPORT void AppendArrayToString(const int64_t size,
+ const double* x,
+ std::string* result);
// This routine takes an array of integer values, sorts and uniques
// them and then maps each value in the array to its position in the
@@ -82,9 +83,10 @@
// gets mapped to
//
// [1 0 2 3 0 1 3]
-CERES_EXPORT_INTERNAL void MapValuesToContiguousRange(int size, int* array);
+CERES_NO_EXPORT void MapValuesToContiguousRange(const int64_t size, int* array);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_ARRAY_UTILS_H_
diff --git a/internal/ceres/array_utils_test.cc b/internal/ceres/array_utils_test.cc
index 6c0ea84..2661f57 100644
--- a/internal/ceres/array_utils_test.cc
+++ b/internal/ceres/array_utils_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,10 +36,7 @@
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
TEST(ArrayUtils, IsArrayValid) {
double x[3];
@@ -53,7 +50,7 @@
EXPECT_FALSE(IsArrayValid(3, x));
x[1] = std::numeric_limits<double>::signaling_NaN();
EXPECT_FALSE(IsArrayValid(3, x));
- EXPECT_TRUE(IsArrayValid(1, NULL));
+ EXPECT_TRUE(IsArrayValid(1, nullptr));
InvalidateArray(3, x);
EXPECT_FALSE(IsArrayValid(3, x));
}
@@ -70,16 +67,16 @@
EXPECT_EQ(FindInvalidValue(3, x), 1);
x[1] = std::numeric_limits<double>::signaling_NaN();
EXPECT_EQ(FindInvalidValue(3, x), 1);
- EXPECT_EQ(FindInvalidValue(1, NULL), 1);
+ EXPECT_EQ(FindInvalidValue(1, nullptr), 1);
InvalidateArray(3, x);
EXPECT_EQ(FindInvalidValue(3, x), 0);
}
TEST(MapValuesToContiguousRange, ContiguousEntries) {
- vector<int> array;
+ std::vector<int> array;
array.push_back(0);
array.push_back(1);
- vector<int> expected = array;
+ std::vector<int> expected = array;
MapValuesToContiguousRange(array.size(), &array[0]);
EXPECT_EQ(array, expected);
array.clear();
@@ -92,10 +89,10 @@
}
TEST(MapValuesToContiguousRange, NonContiguousEntries) {
- vector<int> array;
+ std::vector<int> array;
array.push_back(0);
array.push_back(2);
- vector<int> expected;
+ std::vector<int> expected;
expected.push_back(0);
expected.push_back(1);
MapValuesToContiguousRange(array.size(), &array[0]);
@@ -103,14 +100,14 @@
}
TEST(MapValuesToContiguousRange, NonContiguousRepeatingEntries) {
- vector<int> array;
+ std::vector<int> array;
array.push_back(3);
array.push_back(1);
array.push_back(0);
array.push_back(0);
array.push_back(0);
array.push_back(5);
- vector<int> expected;
+ std::vector<int> expected;
expected.push_back(2);
expected.push_back(1);
expected.push_back(0);
@@ -121,5 +118,4 @@
EXPECT_EQ(array, expected);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/autodiff_benchmarks/CMakeLists.txt b/internal/ceres/autodiff_benchmarks/CMakeLists.txt
index 610ebc3..99af152 100644
--- a/internal/ceres/autodiff_benchmarks/CMakeLists.txt
+++ b/internal/ceres/autodiff_benchmarks/CMakeLists.txt
@@ -9,9 +9,3 @@
add_executable(autodiff_benchmarks autodiff_benchmarks.cc)
add_dependencies_to_benchmark(autodiff_benchmarks)
target_compile_options(autodiff_benchmarks PRIVATE ${CERES_BENCHMARK_FLAGS})
-
-# All other flags + fast-math
-list(APPEND CERES_BENCHMARK_FAST_MATH_FLAGS ${CERES_BENCHMARK_FLAGS} "-ffast-math")
-add_executable(autodiff_benchmarks_fast_math autodiff_benchmarks.cc)
-add_dependencies_to_benchmark(autodiff_benchmarks_fast_math)
-target_compile_options(autodiff_benchmarks_fast_math PRIVATE ${CERES_BENCHMARK_FAST_MATH_FLAGS})
diff --git a/internal/ceres/autodiff_benchmarks/brdf_cost_function.h b/internal/ceres/autodiff_benchmarks/brdf_cost_function.h
index 9d7c0cc..41cfcfa 100644
--- a/internal/ceres/autodiff_benchmarks/brdf_cost_function.h
+++ b/internal/ceres/autodiff_benchmarks/brdf_cost_function.h
@@ -35,6 +35,8 @@
#include <Eigen/Core>
#include <cmath>
+#include "ceres/constants.h"
+
namespace ceres {
// The brdf is based on:
@@ -45,8 +47,6 @@
// https://github.com/wdas/brdf/blob/master/src/brdfs/disney.brdf
struct Brdf {
public:
- Brdf() {}
-
template <typename T>
inline bool operator()(const T* const material,
const T* const c_ptr,
@@ -141,7 +141,7 @@
const T gr = cggxn_dot_l * cggxn_dot_v;
const Vec3 result_no_cosine =
- (T(1.0 / M_PI) * Lerp(fd, ss, subsurface) * c + f_sheen) *
+ (T(1.0 / constants::pi) * Lerp(fd, ss, subsurface) * c + f_sheen) *
(T(1) - metallic) +
gs * fs * ds +
Vec3(T(0.25), T(0.25), T(0.25)) * clearcoat * gr * fr * dr;
@@ -179,11 +179,11 @@
T result = T(0);
if (a >= T(1)) {
- result = T(1 / M_PI);
+ result = T(1 / constants::pi);
} else {
const T a2 = a * a;
const T t = T(1) + (a2 - T(1)) * n_dot_h * n_dot_h;
- result = (a2 - T(1)) / (T(M_PI) * T(log(a2) * t));
+ result = (a2 - T(1)) / (T(constants::pi) * T(log(a2) * t));
}
return result;
}
@@ -194,7 +194,7 @@
const T& h_dot_y,
const T& ax,
const T& ay) const {
- return T(1) / (T(M_PI) * ax * ay *
+ return T(1) / (T(constants::pi) * ax * ay *
Square(Square(h_dot_x / ax) + Square(h_dot_y / ay) +
n_dot_h * n_dot_h));
}
diff --git a/internal/ceres/autodiff_benchmarks/relative_pose_error.h b/internal/ceres/autodiff_benchmarks/relative_pose_error.h
index b5c1a93..a54a92f 100644
--- a/internal/ceres/autodiff_benchmarks/relative_pose_error.h
+++ b/internal/ceres/autodiff_benchmarks/relative_pose_error.h
@@ -33,6 +33,7 @@
#define CERES_INTERNAL_AUTODIFF_BENCHMARK_RELATIVE_POSE_ERROR_H_
#include <Eigen/Dense>
+#include <utility>
#include "ceres/rotation.h"
@@ -43,9 +44,8 @@
// poses T_w_i and T_w_j. For the residual we use the log of the the residual
// pose, in split representation SO(3) x R^3.
struct RelativePoseError {
- RelativePoseError(const Eigen::Quaterniond& q_i_j,
- const Eigen::Vector3d& t_i_j)
- : meas_q_i_j_(q_i_j), meas_t_i_j_(t_i_j) {}
+ RelativePoseError(Eigen::Quaterniond q_i_j, Eigen::Vector3d t_i_j)
+ : meas_q_i_j_(std::move(q_i_j)), meas_t_i_j_(std::move(t_i_j)) {}
template <typename T>
inline bool operator()(const T* const pose_i_ptr,
diff --git a/internal/ceres/autodiff_benchmarks/snavely_reprojection_error.h b/internal/ceres/autodiff_benchmarks/snavely_reprojection_error.h
index 795342f..439ee9b 100644
--- a/internal/ceres/autodiff_benchmarks/snavely_reprojection_error.h
+++ b/internal/ceres/autodiff_benchmarks/snavely_reprojection_error.h
@@ -40,7 +40,6 @@
SnavelyReprojectionError(double observed_x, double observed_y)
: observed_x(observed_x), observed_y(observed_y) {}
- SnavelyReprojectionError() = default;
template <typename T>
inline bool operator()(const T* const camera,
const T* const point,
diff --git a/internal/ceres/autodiff_cost_function_test.cc b/internal/ceres/autodiff_cost_function_test.cc
index cc340f6..dca67ba 100644
--- a/internal/ceres/autodiff_cost_function_test.cc
+++ b/internal/ceres/autodiff_cost_function_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,8 +36,7 @@
#include "ceres/cost_function.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class BinaryScalarCost {
public:
@@ -57,7 +56,7 @@
new AutoDiffCostFunction<BinaryScalarCost, 1, 2, 2>(
new BinaryScalarCost(1.0));
- double** parameters = new double*[2];
+ auto** parameters = new double*[2];
parameters[0] = new double[2];
parameters[1] = new double[2];
@@ -67,7 +66,7 @@
parameters[1][0] = 3;
parameters[1][1] = 4;
- double** jacobians = new double*[2];
+ auto** jacobians = new double*[2];
jacobians[0] = new double[2];
jacobians[1] = new double[2];
@@ -126,8 +125,8 @@
1,
1>(new TenParameterCost);
- double** parameters = new double*[10];
- double** jacobians = new double*[10];
+ auto** parameters = new double*[10];
+ auto** jacobians = new double*[10];
for (int i = 0; i < 10; ++i) {
parameters[i] = new double[1];
parameters[i][0] = i;
@@ -179,5 +178,24 @@
EXPECT_FALSE(IsArrayValid(2, residuals));
}
-} // namespace internal
-} // namespace ceres
+TEST(AutodiffCostFunction, ArgumentForwarding) {
+ // No narrowing conversion warning should be emitted
+ auto cost_function1 =
+ std::make_unique<AutoDiffCostFunction<BinaryScalarCost, 1, 2, 2>>(1);
+ auto cost_function2 =
+ std::make_unique<AutoDiffCostFunction<BinaryScalarCost, 1, 2, 2>>(2.0);
+ // Default constructible functor
+ auto cost_function3 =
+ std::make_unique<AutoDiffCostFunction<OnlyFillsOneOutputFunctor, 1, 1>>();
+}
+
+TEST(AutodiffCostFunction, UniquePtrCtor) {
+ auto cost_function1 =
+ std::make_unique<AutoDiffCostFunction<BinaryScalarCost, 1, 2, 2>>(
+ std::make_unique<BinaryScalarCost>(1));
+ auto cost_function2 =
+ std::make_unique<AutoDiffCostFunction<BinaryScalarCost, 1, 2, 2>>(
+ std::make_unique<BinaryScalarCost>(2.0));
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/autodiff_first_order_function_test.cc b/internal/ceres/autodiff_first_order_function_test.cc
index 7db7835..e663f13 100644
--- a/internal/ceres/autodiff_first_order_function_test.cc
+++ b/internal/ceres/autodiff_first_order_function_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -44,7 +44,7 @@
explicit QuadraticCostFunctor(double a) : a_(a) {}
template <typename T>
bool operator()(const T* const x, T* cost) const {
- cost[0] = x[0] * x[1] + x[2] * x[3] - T(a_);
+ cost[0] = x[0] * x[1] + x[2] * x[3] - a_;
return true;
}
diff --git a/internal/ceres/autodiff_local_parameterization_test.cc b/internal/ceres/autodiff_local_parameterization_test.cc
deleted file mode 100644
index 36fd3c9..0000000
--- a/internal/ceres/autodiff_local_parameterization_test.cc
+++ /dev/null
@@ -1,227 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
-
-#include "ceres/autodiff_local_parameterization.h"
-
-#include <cmath>
-
-#include "ceres/local_parameterization.h"
-#include "ceres/rotation.h"
-#include "gtest/gtest.h"
-
-namespace ceres {
-namespace internal {
-
-struct IdentityPlus {
- template <typename T>
- bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
- for (int i = 0; i < 3; ++i) {
- x_plus_delta[i] = x[i] + delta[i];
- }
- return true;
- }
-};
-
-TEST(AutoDiffLocalParameterizationTest, IdentityParameterization) {
- AutoDiffLocalParameterization<IdentityPlus, 3, 3> parameterization;
-
- double x[3] = {1.0, 2.0, 3.0};
- double delta[3] = {0.0, 1.0, 2.0};
- double x_plus_delta[3] = {0.0, 0.0, 0.0};
- parameterization.Plus(x, delta, x_plus_delta);
-
- EXPECT_EQ(x_plus_delta[0], 1.0);
- EXPECT_EQ(x_plus_delta[1], 3.0);
- EXPECT_EQ(x_plus_delta[2], 5.0);
-
- double jacobian[9];
- parameterization.ComputeJacobian(x, jacobian);
- int k = 0;
- for (int i = 0; i < 3; ++i) {
- for (int j = 0; j < 3; ++j, ++k) {
- EXPECT_EQ(jacobian[k], (i == j) ? 1.0 : 0.0);
- }
- }
-}
-
-struct ScaledPlus {
- explicit ScaledPlus(const double& scale_factor)
- : scale_factor_(scale_factor) {}
-
- template <typename T>
- bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
- for (int i = 0; i < 3; ++i) {
- x_plus_delta[i] = x[i] + T(scale_factor_) * delta[i];
- }
- return true;
- }
-
- const double scale_factor_;
-};
-
-TEST(AutoDiffLocalParameterizationTest, ScaledParameterization) {
- const double kTolerance = 1e-14;
-
- AutoDiffLocalParameterization<ScaledPlus, 3, 3> parameterization(
- new ScaledPlus(1.2345));
-
- double x[3] = {1.0, 2.0, 3.0};
- double delta[3] = {0.0, 1.0, 2.0};
- double x_plus_delta[3] = {0.0, 0.0, 0.0};
- parameterization.Plus(x, delta, x_plus_delta);
-
- EXPECT_NEAR(x_plus_delta[0], 1.0, kTolerance);
- EXPECT_NEAR(x_plus_delta[1], 3.2345, kTolerance);
- EXPECT_NEAR(x_plus_delta[2], 5.469, kTolerance);
-
- double jacobian[9];
- parameterization.ComputeJacobian(x, jacobian);
- int k = 0;
- for (int i = 0; i < 3; ++i) {
- for (int j = 0; j < 3; ++j, ++k) {
- EXPECT_NEAR(jacobian[k], (i == j) ? 1.2345 : 0.0, kTolerance);
- }
- }
-}
-
-struct QuaternionPlus {
- template <typename T>
- bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
- const T squared_norm_delta =
- delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
-
- T q_delta[4];
- if (squared_norm_delta > T(0.0)) {
- T norm_delta = sqrt(squared_norm_delta);
- const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
- q_delta[0] = cos(norm_delta);
- q_delta[1] = sin_delta_by_delta * delta[0];
- q_delta[2] = sin_delta_by_delta * delta[1];
- q_delta[3] = sin_delta_by_delta * delta[2];
- } else {
- // We do not just use q_delta = [1,0,0,0] here because that is a
- // constant and when used for automatic differentiation will
- // lead to a zero derivative. Instead we take a first order
- // approximation and evaluate it at zero.
- q_delta[0] = T(1.0);
- q_delta[1] = delta[0];
- q_delta[2] = delta[1];
- q_delta[3] = delta[2];
- }
-
- QuaternionProduct(q_delta, x, x_plus_delta);
- return true;
- }
-};
-
-static void QuaternionParameterizationTestHelper(const double* x,
- const double* delta) {
- const double kTolerance = 1e-14;
- double x_plus_delta_ref[4] = {0.0, 0.0, 0.0, 0.0};
- double jacobian_ref[12];
-
- QuaternionParameterization ref_parameterization;
- ref_parameterization.Plus(x, delta, x_plus_delta_ref);
- ref_parameterization.ComputeJacobian(x, jacobian_ref);
-
- double x_plus_delta[4] = {0.0, 0.0, 0.0, 0.0};
- double jacobian[12];
- AutoDiffLocalParameterization<QuaternionPlus, 4, 3> parameterization;
- parameterization.Plus(x, delta, x_plus_delta);
- parameterization.ComputeJacobian(x, jacobian);
-
- for (int i = 0; i < 4; ++i) {
- EXPECT_NEAR(x_plus_delta[i], x_plus_delta_ref[i], kTolerance);
- }
-
- // clang-format off
- const double x_plus_delta_norm =
- sqrt(x_plus_delta[0] * x_plus_delta[0] +
- x_plus_delta[1] * x_plus_delta[1] +
- x_plus_delta[2] * x_plus_delta[2] +
- x_plus_delta[3] * x_plus_delta[3]);
- // clang-format on
-
- EXPECT_NEAR(x_plus_delta_norm, 1.0, kTolerance);
-
- for (int i = 0; i < 12; ++i) {
- EXPECT_TRUE(std::isfinite(jacobian[i]));
- EXPECT_NEAR(jacobian[i], jacobian_ref[i], kTolerance)
- << "Jacobian mismatch: i = " << i << "\n Expected \n"
- << ConstMatrixRef(jacobian_ref, 4, 3) << "\n Actual \n"
- << ConstMatrixRef(jacobian, 4, 3);
- }
-}
-
-TEST(AutoDiffLocalParameterization, QuaternionParameterizationZeroTest) {
- double x[4] = {0.5, 0.5, 0.5, 0.5};
- double delta[3] = {0.0, 0.0, 0.0};
- QuaternionParameterizationTestHelper(x, delta);
-}
-
-TEST(AutoDiffLocalParameterization, QuaternionParameterizationNearZeroTest) {
- double x[4] = {0.52, 0.25, 0.15, 0.45};
- // clang-format off
- double norm_x = sqrt(x[0] * x[0] +
- x[1] * x[1] +
- x[2] * x[2] +
- x[3] * x[3]);
- // clang-format on
- for (int i = 0; i < 4; ++i) {
- x[i] = x[i] / norm_x;
- }
-
- double delta[3] = {0.24, 0.15, 0.10};
- for (int i = 0; i < 3; ++i) {
- delta[i] = delta[i] * 1e-14;
- }
-
- QuaternionParameterizationTestHelper(x, delta);
-}
-
-TEST(AutoDiffLocalParameterization, QuaternionParameterizationNonZeroTest) {
- double x[4] = {0.52, 0.25, 0.15, 0.45};
- // clang-format off
- double norm_x = sqrt(x[0] * x[0] +
- x[1] * x[1] +
- x[2] * x[2] +
- x[3] * x[3]);
- // clang-format on
-
- for (int i = 0; i < 4; ++i) {
- x[i] = x[i] / norm_x;
- }
-
- double delta[3] = {0.24, 0.15, 0.10};
- QuaternionParameterizationTestHelper(x, delta);
-}
-
-} // namespace internal
-} // namespace ceres
diff --git a/internal/ceres/autodiff_manifold_test.cc b/internal/ceres/autodiff_manifold_test.cc
new file mode 100644
index 0000000..c687719
--- /dev/null
+++ b/internal/ceres/autodiff_manifold_test.cc
@@ -0,0 +1,295 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#include "ceres/autodiff_manifold.h"
+
+#include <cmath>
+
+#include "ceres/constants.h"
+#include "ceres/manifold.h"
+#include "ceres/manifold_test_utils.h"
+#include "ceres/rotation.h"
+#include "gtest/gtest.h"
+
+namespace ceres::internal {
+
+namespace {
+
+constexpr int kNumTrials = 1000;
+constexpr double kTolerance = 1e-9;
+
+Vector RandomQuaternion() {
+ Vector x = Vector::Random(4);
+ x.normalize();
+ return x;
+}
+
+} // namespace
+
+struct EuclideanFunctor {
+ template <typename T>
+ bool Plus(const T* x, const T* delta, T* x_plus_delta) const {
+ for (int i = 0; i < 3; ++i) {
+ x_plus_delta[i] = x[i] + delta[i];
+ }
+ return true;
+ }
+
+ template <typename T>
+ bool Minus(const T* y, const T* x, T* y_minus_x) const {
+ for (int i = 0; i < 3; ++i) {
+ y_minus_x[i] = y[i] - x[i];
+ }
+ return true;
+ }
+};
+
+TEST(AutoDiffLManifoldTest, EuclideanManifold) {
+ AutoDiffManifold<EuclideanFunctor, 3, 3> manifold;
+ EXPECT_EQ(manifold.AmbientSize(), 3);
+ EXPECT_EQ(manifold.TangentSize(), 3);
+
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(manifold.AmbientSize());
+ const Vector y = Vector::Random(manifold.AmbientSize());
+ Vector delta = Vector::Random(manifold.TangentSize());
+ Vector x_plus_delta = Vector::Zero(manifold.AmbientSize());
+
+ manifold.Plus(x.data(), delta.data(), x_plus_delta.data());
+ EXPECT_NEAR((x_plus_delta - x - delta).norm() / (x + delta).norm(),
+ 0.0,
+ kTolerance);
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+struct ScaledFunctor {
+ explicit ScaledFunctor(const double s) : s(s) {}
+
+ template <typename T>
+ bool Plus(const T* x, const T* delta, T* x_plus_delta) const {
+ for (int i = 0; i < 3; ++i) {
+ x_plus_delta[i] = x[i] + s * delta[i];
+ }
+ return true;
+ }
+
+ template <typename T>
+ bool Minus(const T* y, const T* x, T* y_minus_x) const {
+ for (int i = 0; i < 3; ++i) {
+ y_minus_x[i] = (y[i] - x[i]) / s;
+ }
+ return true;
+ }
+
+ const double s;
+};
+
+TEST(AutoDiffManifoldTest, ScaledManifold) {
+ constexpr double kScale = 1.2342;
+ AutoDiffManifold<ScaledFunctor, 3, 3> manifold(new ScaledFunctor(kScale));
+ EXPECT_EQ(manifold.AmbientSize(), 3);
+ EXPECT_EQ(manifold.TangentSize(), 3);
+
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(manifold.AmbientSize());
+ const Vector y = Vector::Random(manifold.AmbientSize());
+ Vector delta = Vector::Random(manifold.TangentSize());
+ Vector x_plus_delta = Vector::Zero(manifold.AmbientSize());
+
+ manifold.Plus(x.data(), delta.data(), x_plus_delta.data());
+ EXPECT_NEAR((x_plus_delta - x - delta * kScale).norm() /
+ (x + delta * kScale).norm(),
+ 0.0,
+ kTolerance);
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+// Templated functor that implements the Plus and Minus operations on the
+// Quaternion manifold.
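+//
+// Writing w = delta and |w| = sqrt(w_0^2 + w_1^2 + w_2^2), the operations
+// below are (up to the angle convention) the quaternion exponential and
+// logarithm maps:
+//
+//   Plus(x, delta) = [cos(|w|), sin(|w|) / |w| * w] * x   (quaternion product)
+//   Minus(y, x)    = theta * u / |u|, where [q_0, u] = y * conj(x) and
+//                    theta = atan2(|u|, q_0).
+//
+// The |w| -> 0 and |u| -> 0 branches below use first order approximations so
+// that automatic differentiation still produces non-zero derivatives.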
+struct QuaternionFunctor {
+ template <typename T>
+ bool Plus(const T* x, const T* delta, T* x_plus_delta) const {
+ const T squared_norm_delta =
+ delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
+
+ T q_delta[4];
+ if (squared_norm_delta > T(0.0)) {
+ T norm_delta = sqrt(squared_norm_delta);
+ const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
+ q_delta[0] = cos(norm_delta);
+ q_delta[1] = sin_delta_by_delta * delta[0];
+ q_delta[2] = sin_delta_by_delta * delta[1];
+ q_delta[3] = sin_delta_by_delta * delta[2];
+ } else {
+ // We do not just use q_delta = [1,0,0,0] here because that is a
+ // constant and when used for automatic differentiation will
+ // lead to a zero derivative. Instead we take a first order
+ // approximation and evaluate it at zero.
+ q_delta[0] = T(1.0);
+ q_delta[1] = delta[0];
+ q_delta[2] = delta[1];
+ q_delta[3] = delta[2];
+ }
+
+ QuaternionProduct(q_delta, x, x_plus_delta);
+ return true;
+ }
+
+ template <typename T>
+ bool Minus(const T* y, const T* x, T* y_minus_x) const {
+ T minus_x[4] = {x[0], -x[1], -x[2], -x[3]};
+ T ambient_y_minus_x[4];
+ QuaternionProduct(y, minus_x, ambient_y_minus_x);
+ T u_norm = sqrt(ambient_y_minus_x[1] * ambient_y_minus_x[1] +
+ ambient_y_minus_x[2] * ambient_y_minus_x[2] +
+ ambient_y_minus_x[3] * ambient_y_minus_x[3]);
+ if (u_norm > 0.0) {
+ T theta = atan2(u_norm, ambient_y_minus_x[0]);
+ y_minus_x[0] = theta * ambient_y_minus_x[1] / u_norm;
+ y_minus_x[1] = theta * ambient_y_minus_x[2] / u_norm;
+ y_minus_x[2] = theta * ambient_y_minus_x[3] / u_norm;
+ } else {
+ // We do not use [0,0,0] here because even though the value part is
+ // a constant, the derivative part is not.
+ y_minus_x[0] = ambient_y_minus_x[1];
+ y_minus_x[1] = ambient_y_minus_x[2];
+ y_minus_x[2] = ambient_y_minus_x[3];
+ }
+ return true;
+ }
+};
+
+TEST(AutoDiffManifoldTest, QuaternionPlusPiBy2) {
+ AutoDiffManifold<QuaternionFunctor, 4, 3> manifold;
+
+ Vector x = Vector::Zero(4);
+ x[0] = 1.0;
+
+ for (int i = 0; i < 3; ++i) {
+ Vector delta = Vector::Zero(3);
+ delta[i] = constants::pi / 2;
+ Vector x_plus_delta = Vector::Zero(4);
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), x_plus_delta.data()));
+
+ // Expect that the element corresponding to pi/2 is +/- 1. All other
+ // elements should be zero.
+ for (int j = 0; j < 4; ++j) {
+ if (i == (j - 1)) {
+ EXPECT_LT(std::abs(x_plus_delta[j]) - 1,
+ std::numeric_limits<double>::epsilon())
+ << "\ndelta = " << delta.transpose()
+ << "\nx_plus_delta = " << x_plus_delta.transpose()
+ << "\n expected the " << j
+ << "th element of x_plus_delta to be +/- 1.";
+ } else {
+ EXPECT_LT(std::abs(x_plus_delta[j]),
+ std::numeric_limits<double>::epsilon())
+ << "\ndelta = " << delta.transpose()
+ << "\nx_plus_delta = " << x_plus_delta.transpose()
+ << "\n expected the " << j << "th element of x_plus_delta to be 0.";
+ }
+ }
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(
+ manifold, x, delta, x_plus_delta, kTolerance);
+ }
+}
+
+// Computes the expected value of Quaternion::Plus via functions in rotation.h
+// and compares it to the one computed by QuaternionFunctor::Plus.
+MATCHER_P2(QuaternionPlusIsCorrectAt, x, delta, "") {
+ // This multiplication by 2 is needed because AngleAxisToQuaternion uses
+  // |delta|/2 as the angle of rotation, whereas in the implementation of
+ // Quaternion for historical reasons we use |delta|.
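+  //
+  // Concretely, AngleAxisToQuaternion(2 * delta) yields
+  //   [cos(|delta|), sin(|delta|) / |delta| * delta],
+  // which matches the q_delta constructed in QuaternionFunctor::Plus, so the
+  // two code paths should agree up to floating point error.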
+ const Vector two_delta = delta * 2;
+ Vector delta_q(4);
+ AngleAxisToQuaternion(two_delta.data(), delta_q.data());
+
+ Vector expected(4);
+ QuaternionProduct(delta_q.data(), x.data(), expected.data());
+ Vector actual(4);
+ EXPECT_TRUE(arg.Plus(x.data(), delta.data(), actual.data()));
+
+ const double n = (actual - expected).norm();
+ const double d = expected.norm();
+ const double diffnorm = n / d;
+ if (diffnorm > kTolerance) {
+ *result_listener << "\nx: " << x.transpose()
+ << "\ndelta: " << delta.transpose()
+ << "\nexpected: " << expected.transpose()
+ << "\nactual: " << actual.transpose()
+ << "\ndiff: " << (expected - actual).transpose()
+ << "\ndiffnorm : " << diffnorm;
+ return false;
+ }
+ return true;
+}
+
+TEST(AutoDiffManifoldTest, QuaternionGenericDelta) {
+ AutoDiffManifold<QuaternionFunctor, 4, 3> manifold;
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = RandomQuaternion();
+ const Vector y = RandomQuaternion();
+ Vector delta = Vector::Random(3);
+ EXPECT_THAT(manifold, QuaternionPlusIsCorrectAt(x, delta));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(AutoDiffManifoldTest, QuaternionSmallDelta) {
+ AutoDiffManifold<QuaternionFunctor, 4, 3> manifold;
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = RandomQuaternion();
+ const Vector y = RandomQuaternion();
+ Vector delta = Vector::Random(3);
+ delta.normalize();
+ delta *= 1e-6;
+ EXPECT_THAT(manifold, QuaternionPlusIsCorrectAt(x, delta));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(AutoDiffManifoldTest, QuaternionDeltaJustBelowPi) {
+ AutoDiffManifold<QuaternionFunctor, 4, 3> manifold;
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = RandomQuaternion();
+ const Vector y = RandomQuaternion();
+ Vector delta = Vector::Random(3);
+ delta.normalize();
+ delta *= (constants::pi - 1e-6);
+ EXPECT_THAT(manifold, QuaternionPlusIsCorrectAt(x, delta));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/autodiff_test.cc b/internal/ceres/autodiff_test.cc
index 2d56400..b50327c 100644
--- a/internal/ceres/autodiff_test.cc
+++ b/internal/ceres/autodiff_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,11 +30,13 @@
#include "ceres/internal/autodiff.h"
-#include "ceres/random.h"
+#include <algorithm>
+#include <iterator>
+#include <random>
+
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template <typename T>
inline T& RowMajorAccess(T* base, int rows, int cols, int i, int j) {
@@ -163,18 +165,19 @@
// Test projective camera model projector.
TEST(AutoDiff, ProjectiveCameraModel) {
- srand(5);
double const tol = 1e-10; // floating-point tolerance.
double const del = 1e-4; // finite-difference step.
double const err = 1e-6; // finite-difference tolerance.
Projective b;
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform01(0.0, 1.0);
// Make random P and X, in a single vector.
double PX[12 + 4];
- for (int i = 0; i < 12 + 4; ++i) {
- PX[i] = RandDouble();
- }
+ std::generate(std::begin(PX), std::end(PX), [&prng, &uniform01] {
+ return uniform01(prng);
+ });
// Handy names for the P and X parts.
double* P = PX + 0;
@@ -283,16 +286,20 @@
// This test is similar in structure to the previous one.
TEST(AutoDiff, Metric) {
- srand(5);
double const tol = 1e-10; // floating-point tolerance.
double const del = 1e-4; // finite-difference step.
- double const err = 1e-5; // finite-difference tolerance.
+ double const err = 2e-5; // finite-difference tolerance.
Metric b;
// Make random parameter vector.
double qcX[4 + 3 + 3];
- for (int i = 0; i < 4 + 3 + 3; ++i) qcX[i] = RandDouble();
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform01(0.0, 1.0);
+
+ std::generate(std::begin(qcX), std::end(qcX), [&prng, &uniform01] {
+ return uniform01(prng);
+ });
// Handy names.
double* q = qcX;
@@ -658,12 +665,11 @@
// this function.
y += 1;
- typedef Jet<double, 2> JetT;
+ using JetT = Jet<double, 2>;
FixedArray<JetT, (256 * 7) / sizeof(JetT)> x(3);
// Need this to makes sure that x does not get optimized out.
x[0] = x[0] + JetT(1.0);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/blas.cc b/internal/ceres/blas.cc
deleted file mode 100644
index f8d006e..0000000
--- a/internal/ceres/blas.cc
+++ /dev/null
@@ -1,82 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
-
-#include "ceres/blas.h"
-
-#include "ceres/internal/port.h"
-#include "glog/logging.h"
-
-#ifndef CERES_NO_LAPACK
-extern "C" void dsyrk_(char* uplo,
- char* trans,
- int* n,
- int* k,
- double* alpha,
- double* a,
- int* lda,
- double* beta,
- double* c,
- int* ldc);
-#endif
-
-namespace ceres {
-namespace internal {
-
-void BLAS::SymmetricRankKUpdate(int num_rows,
- int num_cols,
- const double* a,
- bool transpose,
- double alpha,
- double beta,
- double* c) {
-#ifdef CERES_NO_LAPACK
- LOG(FATAL) << "Ceres was built without a BLAS library.";
-#else
- char uplo = 'L';
- char trans = transpose ? 'T' : 'N';
- int n = transpose ? num_cols : num_rows;
- int k = transpose ? num_rows : num_cols;
- int lda = k;
- int ldc = n;
- dsyrk_(&uplo,
- &trans,
- &n,
- &k,
- &alpha,
- const_cast<double*>(a),
- &lda,
- &beta,
- c,
- &ldc);
-#endif
-}
-
-} // namespace internal
-} // namespace ceres
diff --git a/internal/ceres/blas.h b/internal/ceres/blas.h
deleted file mode 100644
index a43301c..0000000
--- a/internal/ceres/blas.h
+++ /dev/null
@@ -1,57 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
-//
-// Wrapper functions around BLAS functions.
-
-#ifndef CERES_INTERNAL_BLAS_H_
-#define CERES_INTERNAL_BLAS_H_
-
-namespace ceres {
-namespace internal {
-
-class BLAS {
- public:
- // transpose = true : c = alpha * a'a + beta * c;
- // transpose = false : c = alpha * aa' + beta * c;
- //
- // Assumes column major matrices.
- static void SymmetricRankKUpdate(int num_rows,
- int num_cols,
- const double* a,
- bool transpose,
- double alpha,
- double beta,
- double* c);
-};
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_INTERNAL_BLAS_H_
diff --git a/internal/ceres/block_evaluate_preparer.cc b/internal/ceres/block_evaluate_preparer.cc
index 7db96d9..c8b8177 100644
--- a/internal/ceres/block_evaluate_preparer.cc
+++ b/internal/ceres/block_evaluate_preparer.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,8 +38,7 @@
#include "ceres/residual_block.h"
#include "ceres/sparse_matrix.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
void BlockEvaluatePreparer::Init(int const* const* jacobian_layout,
int max_derivatives_per_residual_block) {
@@ -53,7 +52,7 @@
SparseMatrix* jacobian,
double** jacobians) {
// If the overall jacobian is not available, use the scratch space.
- if (jacobian == NULL) {
+ if (jacobian == nullptr) {
scratch_evaluate_preparer_.Prepare(
residual_block, residual_block_index, jacobian, jacobians);
return;
@@ -73,10 +72,9 @@
// parameters. Instead, bump the pointer for active parameters only.
jacobian_block_offset++;
} else {
- jacobians[j] = NULL;
+ jacobians[j] = nullptr;
}
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/block_evaluate_preparer.h b/internal/ceres/block_evaluate_preparer.h
index 4378689..8febfac 100644
--- a/internal/ceres/block_evaluate_preparer.h
+++ b/internal/ceres/block_evaluate_preparer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,15 +36,15 @@
#ifndef CERES_INTERNAL_BLOCK_EVALUATE_PREPARER_H_
#define CERES_INTERNAL_BLOCK_EVALUATE_PREPARER_H_
+#include "ceres/internal/export.h"
#include "ceres/scratch_evaluate_preparer.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class ResidualBlock;
class SparseMatrix;
-class BlockEvaluatePreparer {
+class CERES_NO_EXPORT BlockEvaluatePreparer {
public:
// Using Init() instead of a constructor allows for allocating this structure
// with new[]. This is because C++ doesn't allow passing arguments to objects
@@ -71,7 +71,6 @@
ScratchEvaluatePreparer scratch_evaluate_preparer_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_BLOCK_EVALUATE_PREPARER_H_
diff --git a/internal/ceres/block_jacobi_preconditioner.cc b/internal/ceres/block_jacobi_preconditioner.cc
index 6f37aca..8f8893f 100644
--- a/internal/ceres/block_jacobi_preconditioner.cc
+++ b/internal/ceres/block_jacobi_preconditioner.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,72 +30,197 @@
#include "ceres/block_jacobi_preconditioner.h"
+#include <memory>
+#include <mutex>
+#include <utility>
+#include <vector>
+
+#include "Eigen/Dense"
#include "ceres/block_random_access_diagonal_matrix.h"
#include "ceres/block_sparse_matrix.h"
#include "ceres/block_structure.h"
#include "ceres/casts.h"
#include "ceres/internal/eigen.h"
+#include "ceres/parallel_for.h"
+#include "ceres/small_blas.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-BlockJacobiPreconditioner::BlockJacobiPreconditioner(
- const BlockSparseMatrix& A) {
- const CompressedRowBlockStructure* bs = A.block_structure();
- std::vector<int> blocks(bs->cols.size());
- for (int i = 0; i < blocks.size(); ++i) {
- blocks[i] = bs->cols[i].size;
- }
-
- m_.reset(new BlockRandomAccessDiagonalMatrix(blocks));
+BlockSparseJacobiPreconditioner::BlockSparseJacobiPreconditioner(
+ Preconditioner::Options options, const BlockSparseMatrix& A)
+ : options_(std::move(options)) {
+ m_ = std::make_unique<BlockRandomAccessDiagonalMatrix>(
+ A.block_structure()->cols, options_.context, options_.num_threads);
}
-BlockJacobiPreconditioner::~BlockJacobiPreconditioner() {}
+BlockSparseJacobiPreconditioner::~BlockSparseJacobiPreconditioner() = default;
-bool BlockJacobiPreconditioner::UpdateImpl(const BlockSparseMatrix& A,
- const double* D) {
+bool BlockSparseJacobiPreconditioner::UpdateImpl(const BlockSparseMatrix& A,
+ const double* D) {
const CompressedRowBlockStructure* bs = A.block_structure();
const double* values = A.values();
m_->SetZero();
- for (int i = 0; i < bs->rows.size(); ++i) {
- const int row_block_size = bs->rows[i].block.size;
- const std::vector<Cell>& cells = bs->rows[i].cells;
- for (int j = 0; j < cells.size(); ++j) {
- const int block_id = cells[j].block_id;
- const int col_block_size = bs->cols[block_id].size;
- int r, c, row_stride, col_stride;
- CellInfo* cell_info =
- m_->GetCell(block_id, block_id, &r, &c, &row_stride, &col_stride);
- MatrixRef m(cell_info->values, row_stride, col_stride);
- ConstMatrixRef b(
- values + cells[j].position, row_block_size, col_block_size);
- m.block(r, c, col_block_size, col_block_size) += b.transpose() * b;
- }
- }
+ ParallelFor(options_.context,
+ 0,
+ bs->rows.size(),
+ options_.num_threads,
+ [this, bs, values](int i) {
+ const int row_block_size = bs->rows[i].block.size;
+ const std::vector<Cell>& cells = bs->rows[i].cells;
+ for (const auto& cell : cells) {
+ const int block_id = cell.block_id;
+ const int col_block_size = bs->cols[block_id].size;
+ int r, c, row_stride, col_stride;
+ CellInfo* cell_info = m_->GetCell(
+ block_id, block_id, &r, &c, &row_stride, &col_stride);
+ MatrixRef m(cell_info->values, row_stride, col_stride);
+ ConstMatrixRef b(
+ values + cell.position, row_block_size, col_block_size);
+ auto lock =
+ MakeConditionalLock(options_.num_threads, cell_info->m);
+ // clang-format off
+ MatrixTransposeMatrixMultiply<Eigen::Dynamic, Eigen::Dynamic,
+ Eigen::Dynamic,Eigen::Dynamic, 1>(
+ values + cell.position, row_block_size,col_block_size,
+ values + cell.position, row_block_size,col_block_size,
+ cell_info->values,r, c,row_stride,col_stride);
+ // clang-format on
+ }
+ });
- if (D != NULL) {
+ if (D != nullptr) {
// Add the diagonal.
- int position = 0;
- for (int i = 0; i < bs->cols.size(); ++i) {
- const int block_size = bs->cols[i].size;
- int r, c, row_stride, col_stride;
- CellInfo* cell_info = m_->GetCell(i, i, &r, &c, &row_stride, &col_stride);
- MatrixRef m(cell_info->values, row_stride, col_stride);
- m.block(r, c, block_size, block_size).diagonal() +=
- ConstVectorRef(D + position, block_size).array().square().matrix();
- position += block_size;
- }
+ ParallelFor(options_.context,
+ 0,
+ bs->cols.size(),
+ options_.num_threads,
+ [this, bs, D](int i) {
+ const int block_size = bs->cols[i].size;
+ int r, c, row_stride, col_stride;
+ CellInfo* cell_info =
+ m_->GetCell(i, i, &r, &c, &row_stride, &col_stride);
+ MatrixRef m(cell_info->values, row_stride, col_stride);
+ m.block(r, c, block_size, block_size).diagonal() +=
+ ConstVectorRef(D + bs->cols[i].position, block_size)
+ .array()
+ .square()
+ .matrix();
+ });
}
m_->Invert();
return true;
}
-void BlockJacobiPreconditioner::RightMultiply(const double* x,
- double* y) const {
- m_->RightMultiply(x, y);
+BlockCRSJacobiPreconditioner::BlockCRSJacobiPreconditioner(
+ Preconditioner::Options options, const CompressedRowSparseMatrix& A)
+ : options_(std::move(options)), locks_(A.col_blocks().size()) {
+ auto& col_blocks = A.col_blocks();
+
+ // Compute the number of non-zeros in the preconditioner. This is needed so
+ // that we can construct the CompressedRowSparseMatrix.
+ const int m_nnz = SumSquaredSizes(col_blocks);
+ m_ = std::make_unique<CompressedRowSparseMatrix>(
+ A.num_cols(), A.num_cols(), m_nnz);
+
+ const int num_col_blocks = col_blocks.size();
+
+ // Populate the sparsity structure of the preconditioner matrix.
+ int* m_cols = m_->mutable_cols();
+ int* m_rows = m_->mutable_rows();
+ m_rows[0] = 0;
+ for (int i = 0, idx = 0; i < num_col_blocks; ++i) {
+ // For each column block populate a diagonal block in the preconditioner.
+    // Note that because of the way the CompressedRowSparseMatrix format
+ // works, the entire diagonal block is laid out contiguously in memory as a
+ // row-major matrix. We will use this when updating the block.
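+    //
+    // For example, with column blocks of sizes {2, 3} this loop produces
+    //   rows = [0, 2, 4, 7, 10, 13]
+    //   cols = [0, 1, 0, 1, 2, 3, 4, 2, 3, 4, 2, 3, 4]
+    // i.e. a dense 2x2 block followed by a dense 3x3 block on the diagonal.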
+ auto& block = col_blocks[i];
+ for (int j = 0; j < block.size; ++j) {
+ for (int k = 0; k < block.size; ++k, ++idx) {
+ m_cols[idx] = block.position + k;
+ }
+ m_rows[block.position + j + 1] = idx;
+ }
+ }
+
+  // In reality we only need num_col_blocks locks; however, that would require
+  // being able to look up the column block from its first column in
+  // UpdateImpl. To avoid maintaining that map we instead spend a few extra
+  // lock objects.
+ std::vector<std::mutex> locks(A.num_cols());
+ locks_.swap(locks);
+ CHECK_EQ(m_rows[A.num_cols()], m_nnz);
}
-} // namespace internal
-} // namespace ceres
+BlockCRSJacobiPreconditioner::~BlockCRSJacobiPreconditioner() = default;
+
+bool BlockCRSJacobiPreconditioner::UpdateImpl(
+ const CompressedRowSparseMatrix& A, const double* D) {
+ const auto& col_blocks = A.col_blocks();
+ const auto& row_blocks = A.row_blocks();
+ const int num_col_blocks = col_blocks.size();
+ const int num_row_blocks = row_blocks.size();
+
+ const int* a_rows = A.rows();
+ const int* a_cols = A.cols();
+ const double* a_values = A.values();
+ double* m_values = m_->mutable_values();
+ const int* m_rows = m_->rows();
+
+ m_->SetZero();
+
+ ParallelFor(
+ options_.context,
+ 0,
+ num_row_blocks,
+ options_.num_threads,
+ [this, row_blocks, a_rows, a_cols, a_values, m_values, m_rows](int i) {
+ const int row = row_blocks[i].position;
+ const int row_block_size = row_blocks[i].size;
+ const int row_nnz = a_rows[row + 1] - a_rows[row];
+ ConstMatrixRef row_block(
+ a_values + a_rows[row], row_block_size, row_nnz);
+ int c = 0;
+ while (c < row_nnz) {
+ const int idx = a_rows[row] + c;
+ const int col = a_cols[idx];
+ const int col_block_size = m_rows[col + 1] - m_rows[col];
+
+ // We make use of the fact that the entire diagonal block is
+ // stored contiguously in memory as a row-major matrix.
+ MatrixRef m(m_values + m_rows[col], col_block_size, col_block_size);
+ // We do not have a row_stride version of
+ // MatrixTransposeMatrixMultiply, otherwise we could use it
+ // here to further speed up the following expression.
+ auto b = row_block.middleCols(c, col_block_size);
+ auto lock = MakeConditionalLock(options_.num_threads, locks_[col]);
+ m.noalias() += b.transpose() * b;
+ c += col_block_size;
+ }
+ });
+
+ ParallelFor(
+ options_.context,
+ 0,
+ num_col_blocks,
+ options_.num_threads,
+ [col_blocks, m_rows, m_values, D](int i) {
+ const int col = col_blocks[i].position;
+ const int col_block_size = col_blocks[i].size;
+ MatrixRef m(m_values + m_rows[col], col_block_size, col_block_size);
+
+ if (D != nullptr) {
+ m.diagonal() +=
+ ConstVectorRef(D + col, col_block_size).array().square().matrix();
+ }
+
+ // TODO(sameeragarwal): Deal with Cholesky inversion failure here and
+ // elsewhere.
+ m = m.llt().solve(Matrix::Identity(col_block_size, col_block_size));
+ });
+
+ return true;
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/block_jacobi_preconditioner.h b/internal/ceres/block_jacobi_preconditioner.h
index 18f7495..d175802 100644
--- a/internal/ceres/block_jacobi_preconditioner.h
+++ b/internal/ceres/block_jacobi_preconditioner.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,37 +34,34 @@
#include <memory>
#include "ceres/block_random_access_diagonal_matrix.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/preconditioner.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class BlockSparseMatrix;
-struct CompressedRowBlockStructure;
+class CompressedRowSparseMatrix;
// A block Jacobi preconditioner. This is intended for use with
-// conjugate gradients, or other iterative symmetric solvers. To use
-// the preconditioner, create one by passing a BlockSparseMatrix "A"
-// to the constructor. This fixes the sparsity pattern to the pattern
-// of the matrix A^TA.
+// conjugate gradients, or other iterative symmetric solvers.
+
+// This version of the preconditioner is for use with BlockSparseMatrix
+// Jacobians.
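+//
+// Roughly, Update(A, D) forms the block diagonal of A' * A + D' * D (one
+// dense block per column block of A), inverts each block in place, and
+// RightMultiplyAndAccumulate then applies y += M^{-1} * x.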
//
-// Before each use of the preconditioner in a solve with conjugate gradients,
-// update the matrix by running Update(A, D). The values of the matrix A are
-// inspected to construct the preconditioner. The vector D is applied as the
-// D^TD diagonal term.
-class CERES_EXPORT_INTERNAL BlockJacobiPreconditioner
+// TODO(https://github.com/ceres-solver/ceres-solver/issues/936):
+// BlockSparseJacobiPreconditioner::RightMultiply will benefit from
+// multithreading
+class CERES_NO_EXPORT BlockSparseJacobiPreconditioner
: public BlockSparseMatrixPreconditioner {
public:
// A must remain valid while the BlockJacobiPreconditioner is.
- explicit BlockJacobiPreconditioner(const BlockSparseMatrix& A);
- BlockJacobiPreconditioner(const BlockJacobiPreconditioner&) = delete;
- void operator=(const BlockJacobiPreconditioner&) = delete;
-
- virtual ~BlockJacobiPreconditioner();
-
- // Preconditioner interface
- void RightMultiply(const double* x, double* y) const final;
+ explicit BlockSparseJacobiPreconditioner(Preconditioner::Options,
+ const BlockSparseMatrix& A);
+ ~BlockSparseJacobiPreconditioner() override;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final {
+ return m_->RightMultiplyAndAccumulate(x, y);
+ }
int num_rows() const final { return m_->num_rows(); }
int num_cols() const final { return m_->num_rows(); }
const BlockRandomAccessDiagonalMatrix& matrix() const { return *m_; }
@@ -72,10 +69,36 @@
private:
bool UpdateImpl(const BlockSparseMatrix& A, const double* D) final;
+ Preconditioner::Options options_;
std::unique_ptr<BlockRandomAccessDiagonalMatrix> m_;
};
-} // namespace internal
-} // namespace ceres
+// This version of the preconditioner is for use with CompressedRowSparseMatrix
+// Jacobians.
+class CERES_NO_EXPORT BlockCRSJacobiPreconditioner
+ : public CompressedRowSparseMatrixPreconditioner {
+ public:
+  // A must remain valid while the BlockCRSJacobiPreconditioner is.
+ explicit BlockCRSJacobiPreconditioner(Preconditioner::Options options,
+ const CompressedRowSparseMatrix& A);
+ ~BlockCRSJacobiPreconditioner() override;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final {
+ m_->RightMultiplyAndAccumulate(x, y);
+ }
+ int num_rows() const final { return m_->num_rows(); }
+ int num_cols() const final { return m_->num_rows(); }
+ const CompressedRowSparseMatrix& matrix() const { return *m_; }
+
+ private:
+ bool UpdateImpl(const CompressedRowSparseMatrix& A, const double* D) final;
+
+ Preconditioner::Options options_;
+ std::vector<std::mutex> locks_;
+ std::unique_ptr<CompressedRowSparseMatrix> m_;
+};
+
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_BLOCK_JACOBI_PRECONDITIONER_H_
diff --git a/internal/ceres/block_jacobi_preconditioner_benchmark.cc b/internal/ceres/block_jacobi_preconditioner_benchmark.cc
new file mode 100644
index 0000000..e6571f3
--- /dev/null
+++ b/internal/ceres/block_jacobi_preconditioner_benchmark.cc
@@ -0,0 +1,177 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: sameeragarwal@google.com (Sameer Agarwal)
+
+#include <memory>
+#include <random>
+
+#include "Eigen/Dense"
+#include "benchmark/benchmark.h"
+#include "ceres/block_jacobi_preconditioner.h"
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/fake_bundle_adjustment_jacobian.h"
+#include "ceres/internal/config.h"
+#include "ceres/internal/eigen.h"
+
+namespace ceres::internal {
+
+constexpr int kNumCameras = 1000;
+constexpr int kNumPoints = 10000;
+constexpr int kCameraSize = 6;
+constexpr int kPointSize = 3;
+constexpr double kVisibility = 0.1;
+
+constexpr int kNumRowBlocks = 100000;
+constexpr int kNumColBlocks = 10000;
+constexpr int kMinRowBlockSize = 1;
+constexpr int kMaxRowBlockSize = 5;
+constexpr int kMinColBlockSize = 1;
+constexpr int kMaxColBlockSize = 15;
+constexpr double kBlockDensity = 5.0 / kNumColBlocks;
+
+static void BM_BlockSparseJacobiPreconditionerBA(benchmark::State& state) {
+ std::mt19937 prng;
+ auto jacobian = CreateFakeBundleAdjustmentJacobian(
+ kNumCameras, kNumPoints, kCameraSize, kPointSize, kVisibility, prng);
+
+ Preconditioner::Options preconditioner_options;
+ ContextImpl context;
+ preconditioner_options.context = &context;
+ preconditioner_options.num_threads = static_cast<int>(state.range(0));
+ context.EnsureMinimumThreads(preconditioner_options.num_threads);
+ BlockSparseJacobiPreconditioner p(preconditioner_options, *jacobian);
+
+ Vector d = Vector::Ones(jacobian->num_cols());
+ for (auto _ : state) {
+ p.Update(*jacobian, d.data());
+ }
+}
+
+BENCHMARK(BM_BlockSparseJacobiPreconditionerBA)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+static void BM_BlockCRSJacobiPreconditionerBA(benchmark::State& state) {
+ std::mt19937 prng;
+ auto jacobian = CreateFakeBundleAdjustmentJacobian(
+ kNumCameras, kNumPoints, kCameraSize, kPointSize, kVisibility, prng);
+
+ auto jacobian_crs = jacobian->ToCompressedRowSparseMatrix();
+ Preconditioner::Options preconditioner_options;
+ ContextImpl context;
+ preconditioner_options.context = &context;
+ preconditioner_options.num_threads = static_cast<int>(state.range(0));
+ context.EnsureMinimumThreads(preconditioner_options.num_threads);
+ BlockCRSJacobiPreconditioner p(preconditioner_options, *jacobian_crs);
+
+ Vector d = Vector::Ones(jacobian_crs->num_cols());
+ for (auto _ : state) {
+ p.Update(*jacobian_crs, d.data());
+ }
+}
+
+BENCHMARK(BM_BlockCRSJacobiPreconditionerBA)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+static void BM_BlockSparseJacobiPreconditionerUnstructured(
+ benchmark::State& state) {
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = kNumRowBlocks;
+ options.num_col_blocks = kNumColBlocks;
+ options.min_row_block_size = kMinRowBlockSize;
+ options.min_col_block_size = kMinColBlockSize;
+ options.max_row_block_size = kMaxRowBlockSize;
+ options.max_col_block_size = kMaxColBlockSize;
+ options.block_density = kBlockDensity;
+ std::mt19937 prng;
+
+ auto jacobian = BlockSparseMatrix::CreateRandomMatrix(options, prng);
+ Preconditioner::Options preconditioner_options;
+ ContextImpl context;
+ preconditioner_options.context = &context;
+ preconditioner_options.num_threads = static_cast<int>(state.range(0));
+ context.EnsureMinimumThreads(preconditioner_options.num_threads);
+ BlockSparseJacobiPreconditioner p(preconditioner_options, *jacobian);
+
+ Vector d = Vector::Ones(jacobian->num_cols());
+ for (auto _ : state) {
+ p.Update(*jacobian, d.data());
+ }
+}
+
+BENCHMARK(BM_BlockSparseJacobiPreconditionerUnstructured)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+static void BM_BlockCRSJacobiPreconditionerUnstructured(
+ benchmark::State& state) {
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = kNumRowBlocks;
+ options.num_col_blocks = kNumColBlocks;
+ options.min_row_block_size = kMinRowBlockSize;
+ options.min_col_block_size = kMinColBlockSize;
+ options.max_row_block_size = kMaxRowBlockSize;
+ options.max_col_block_size = kMaxColBlockSize;
+ options.block_density = kBlockDensity;
+ std::mt19937 prng;
+
+ auto jacobian = BlockSparseMatrix::CreateRandomMatrix(options, prng);
+ auto jacobian_crs = jacobian->ToCompressedRowSparseMatrix();
+ Preconditioner::Options preconditioner_options;
+ ContextImpl context;
+ preconditioner_options.context = &context;
+ preconditioner_options.num_threads = static_cast<int>(state.range(0));
+ context.EnsureMinimumThreads(preconditioner_options.num_threads);
+ BlockCRSJacobiPreconditioner p(preconditioner_options, *jacobian_crs);
+
+ Vector d = Vector::Ones(jacobian_crs->num_cols());
+ for (auto _ : state) {
+ p.Update(*jacobian_crs, d.data());
+ }
+}
+BENCHMARK(BM_BlockCRSJacobiPreconditionerUnstructured)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+} // namespace ceres::internal
+
+BENCHMARK_MAIN();
diff --git a/internal/ceres/block_jacobi_preconditioner_test.cc b/internal/ceres/block_jacobi_preconditioner_test.cc
index cc582c6..ca19a0d 100644
--- a/internal/ceres/block_jacobi_preconditioner_test.cc
+++ b/internal/ceres/block_jacobi_preconditioner_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,6 +31,7 @@
#include "ceres/block_jacobi_preconditioner.h"
#include <memory>
+#include <random>
#include <vector>
#include "Eigen/Dense"
@@ -39,63 +40,118 @@
#include "ceres/linear_least_squares_problems.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-class BlockJacobiPreconditionerTest : public ::testing::Test {
- protected:
- void SetUpFromProblemId(int problem_id) {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(problem_id));
+TEST(BlockSparseJacobiPreconditioner, _) {
+ constexpr int kNumtrials = 10;
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_col_blocks = 3;
+ options.min_col_block_size = 1;
+ options.max_col_block_size = 3;
- CHECK(problem != nullptr);
- A.reset(down_cast<BlockSparseMatrix*>(problem->A.release()));
- D.reset(problem->D.release());
+ options.num_row_blocks = 5;
+ options.min_row_block_size = 1;
+ options.max_row_block_size = 4;
+ options.block_density = 0.25;
+ std::mt19937 prng;
- Matrix dense_a;
- A->ToDenseMatrix(&dense_a);
- dense_ata = dense_a.transpose() * dense_a;
- dense_ata += VectorRef(D.get(), A->num_cols())
- .array()
- .square()
- .matrix()
- .asDiagonal();
- }
+ Preconditioner::Options preconditioner_options;
+ ContextImpl context;
+ preconditioner_options.context = &context;
- void VerifyDiagonalBlocks(const int problem_id) {
- SetUpFromProblemId(problem_id);
+ for (int trial = 0; trial < kNumtrials; ++trial) {
+ auto jacobian = BlockSparseMatrix::CreateRandomMatrix(options, prng);
+ Vector diagonal = Vector::Ones(jacobian->num_cols());
+ Matrix dense_jacobian;
+ jacobian->ToDenseMatrix(&dense_jacobian);
+ Matrix hessian = dense_jacobian.transpose() * dense_jacobian;
+ hessian.diagonal() += diagonal.array().square().matrix();
- BlockJacobiPreconditioner pre(*A);
- pre.Update(*A, D.get());
- BlockRandomAccessDiagonalMatrix* m =
- const_cast<BlockRandomAccessDiagonalMatrix*>(&pre.matrix());
- EXPECT_EQ(m->num_rows(), A->num_cols());
- EXPECT_EQ(m->num_cols(), A->num_cols());
+ BlockSparseJacobiPreconditioner pre(preconditioner_options, *jacobian);
+ pre.Update(*jacobian, diagonal.data());
- const CompressedRowBlockStructure* bs = A->block_structure();
+ // The const_cast is needed to be able to call GetCell.
+ auto* m = const_cast<BlockRandomAccessDiagonalMatrix*>(&pre.matrix());
+ EXPECT_EQ(m->num_rows(), jacobian->num_cols());
+ EXPECT_EQ(m->num_cols(), jacobian->num_cols());
+
+ const CompressedRowBlockStructure* bs = jacobian->block_structure();
for (int i = 0; i < bs->cols.size(); ++i) {
const int block_size = bs->cols[i].size;
int r, c, row_stride, col_stride;
CellInfo* cell_info = m->GetCell(i, i, &r, &c, &row_stride, &col_stride);
- MatrixRef m(cell_info->values, row_stride, col_stride);
- Matrix actual_block_inverse = m.block(r, c, block_size, block_size);
- Matrix expected_block = dense_ata.block(
+ Matrix actual_block_inverse =
+ MatrixRef(cell_info->values, row_stride, col_stride)
+ .block(r, c, block_size, block_size);
+ Matrix expected_block = hessian.block(
bs->cols[i].position, bs->cols[i].position, block_size, block_size);
const double residual = (actual_block_inverse * expected_block -
Matrix::Identity(block_size, block_size))
.norm();
EXPECT_NEAR(residual, 0.0, 1e-12) << "Block: " << i;
}
+ options.num_col_blocks++;
+ options.num_row_blocks++;
}
+}
- std::unique_ptr<BlockSparseMatrix> A;
- std::unique_ptr<double[]> D;
- Matrix dense_ata;
-};
+TEST(CompressedRowSparseJacobiPreconditioner, _) {
+ constexpr int kNumtrials = 10;
+ CompressedRowSparseMatrix::RandomMatrixOptions options;
+ options.num_col_blocks = 3;
+ options.min_col_block_size = 1;
+ options.max_col_block_size = 3;
-TEST_F(BlockJacobiPreconditionerTest, SmallProblem) { VerifyDiagonalBlocks(2); }
+ options.num_row_blocks = 5;
+ options.min_row_block_size = 1;
+ options.max_row_block_size = 4;
+ options.block_density = 0.25;
+ std::mt19937 prng;
-TEST_F(BlockJacobiPreconditionerTest, LargeProblem) { VerifyDiagonalBlocks(3); }
+ Preconditioner::Options preconditioner_options;
+ ContextImpl context;
+ preconditioner_options.context = &context;
-} // namespace internal
-} // namespace ceres
+ for (int trial = 0; trial < kNumtrials; ++trial) {
+ auto jacobian =
+ CompressedRowSparseMatrix::CreateRandomMatrix(options, prng);
+ Vector diagonal = Vector::Ones(jacobian->num_cols());
+
+ Matrix dense_jacobian;
+ jacobian->ToDenseMatrix(&dense_jacobian);
+ Matrix hessian = dense_jacobian.transpose() * dense_jacobian;
+ hessian.diagonal() += diagonal.array().square().matrix();
+
+ BlockCRSJacobiPreconditioner pre(preconditioner_options, *jacobian);
+ pre.Update(*jacobian, diagonal.data());
+ auto& m = pre.matrix();
+
+ EXPECT_EQ(m.num_rows(), jacobian->num_cols());
+ EXPECT_EQ(m.num_cols(), jacobian->num_cols());
+
+ const auto& col_blocks = jacobian->col_blocks();
+ for (int i = 0, col = 0; i < col_blocks.size(); ++i) {
+ const int block_size = col_blocks[i].size;
+ int idx = m.rows()[col];
+ for (int j = 0; j < block_size; ++j) {
+ EXPECT_EQ(m.rows()[col + j + 1] - m.rows()[col + j], block_size);
+ for (int k = 0; k < block_size; ++k, ++idx) {
+ EXPECT_EQ(m.cols()[idx], col + k);
+ }
+ }
+
+ ConstMatrixRef actual_block_inverse(
+ m.values() + m.rows()[col], block_size, block_size);
+ Matrix expected_block = hessian.block(col, col, block_size, block_size);
+ const double residual = (actual_block_inverse * expected_block -
+ Matrix::Identity(block_size, block_size))
+ .norm();
+ EXPECT_NEAR(residual, 0.0, 1e-12) << "Block: " << i;
+ col += block_size;
+ }
+ options.num_col_blocks++;
+ options.num_row_blocks++;
+ }
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/block_jacobian_writer.cc b/internal/ceres/block_jacobian_writer.cc
index 17c157b..5a769cb 100644
--- a/internal/ceres/block_jacobian_writer.cc
+++ b/internal/ceres/block_jacobian_writer.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,18 +30,19 @@
#include "ceres/block_jacobian_writer.h"
+#include <algorithm>
+#include <memory>
+#include <vector>
+
#include "ceres/block_evaluate_preparer.h"
#include "ceres/block_sparse_matrix.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/parameter_block.h"
#include "ceres/program.h"
#include "ceres/residual_block.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
namespace {
@@ -53,21 +54,28 @@
// the first num_eliminate_blocks parameter blocks as indicated by the parameter
// block ordering. The remaining parameter blocks are the F blocks.
//
+// In order to simplify handling block-sparse to CRS conversion, cells within
+// the row-block of a non-partitioned matrix are stored in memory sequentially in
+// the order of increasing column-block id. In the case of partitioned matrices,
+// cells corresponding to F sub-matrix are stored sequentially in the order of
+// increasing column-block id (with cells corresponding to E sub-matrix stored
+// separately).
+//
// TODO(keir): Consider if we should use a boolean for each parameter block
// instead of num_eliminate_blocks.
-void BuildJacobianLayout(const Program& program,
+bool BuildJacobianLayout(const Program& program,
int num_eliminate_blocks,
- vector<int*>* jacobian_layout,
- vector<int>* jacobian_layout_storage) {
- const vector<ResidualBlock*>& residual_blocks = program.residual_blocks();
+ std::vector<int*>* jacobian_layout,
+ std::vector<int>* jacobian_layout_storage) {
+ const std::vector<ResidualBlock*>& residual_blocks =
+ program.residual_blocks();
// Iterate over all the active residual blocks and determine how many E blocks
// are there. This will determine where the F blocks start in the jacobian
// matrix. Also compute the number of jacobian blocks.
- int f_block_pos = 0;
- int num_jacobian_blocks = 0;
- for (int i = 0; i < residual_blocks.size(); ++i) {
- ResidualBlock* residual_block = residual_blocks[i];
+ unsigned int f_block_pos = 0;
+ unsigned int num_jacobian_blocks = 0;
+ for (auto* residual_block : residual_blocks) {
const int num_residuals = residual_block->NumResiduals();
const int num_parameter_blocks = residual_block->NumParameterBlocks();
@@ -78,10 +86,15 @@
// Only count blocks for active parameters.
num_jacobian_blocks++;
if (parameter_block->index() < num_eliminate_blocks) {
- f_block_pos += num_residuals * parameter_block->LocalSize();
+ f_block_pos += num_residuals * parameter_block->TangentSize();
}
}
}
+ if (num_jacobian_blocks > std::numeric_limits<int>::max()) {
+    LOG(ERROR) << "Overflow error. Too many blocks in the Jacobian matrix: "
+ << num_jacobian_blocks;
+ return false;
+ }
}
// We now know that the E blocks are laid out starting at zero, and the F
@@ -93,65 +106,103 @@
jacobian_layout_storage->resize(num_jacobian_blocks);
int e_block_pos = 0;
- int* jacobian_pos = &(*jacobian_layout_storage)[0];
+ int* jacobian_pos = jacobian_layout_storage->data();
+ std::vector<std::pair<int, int>> active_parameter_blocks;
for (int i = 0; i < residual_blocks.size(); ++i) {
const ResidualBlock* residual_block = residual_blocks[i];
const int num_residuals = residual_block->NumResiduals();
const int num_parameter_blocks = residual_block->NumParameterBlocks();
(*jacobian_layout)[i] = jacobian_pos;
+ // Cells from F sub-matrix are to be stored sequentially with increasing
+ // column block id. For each non-constant parameter block, a pair of indices
+ // (index in the list of active parameter blocks and index in the list of
+ // all parameter blocks) is computed, and index pairs are sorted by the
+    // corresponding column block id.
+ active_parameter_blocks.clear();
+ active_parameter_blocks.reserve(num_parameter_blocks);
for (int j = 0; j < num_parameter_blocks; ++j) {
ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
- const int parameter_block_index = parameter_block->index();
if (parameter_block->IsConstant()) {
continue;
}
+ const int k = active_parameter_blocks.size();
+ active_parameter_blocks.emplace_back(k, j);
+ }
+ std::sort(active_parameter_blocks.begin(),
+ active_parameter_blocks.end(),
+ [&residual_block](const std::pair<int, int>& a,
+ const std::pair<int, int>& b) {
+ return residual_block->parameter_blocks()[a.second]->index() <
+ residual_block->parameter_blocks()[b.second]->index();
+ });
+ // Cell positions for each active parameter block are filled in the order of
+    // active parameter block indices sorted by column block index. This
+ // guarantees that cells are laid out sequentially with increasing column
+ // block indices.
+ for (const auto& indices : active_parameter_blocks) {
+ const auto [k, j] = indices;
+ ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
+ const int parameter_block_index = parameter_block->index();
const int jacobian_block_size =
- num_residuals * parameter_block->LocalSize();
+ num_residuals * parameter_block->TangentSize();
if (parameter_block_index < num_eliminate_blocks) {
- *jacobian_pos = e_block_pos;
+ jacobian_pos[k] = e_block_pos;
e_block_pos += jacobian_block_size;
} else {
- *jacobian_pos = f_block_pos;
+ jacobian_pos[k] = static_cast<int>(f_block_pos);
f_block_pos += jacobian_block_size;
+ if (f_block_pos > std::numeric_limits<int>::max()) {
+ LOG(ERROR)
+              << "Overflow error. Too many entries in the Jacobian matrix.";
+ return false;
+ }
}
- jacobian_pos++;
}
+ jacobian_pos += active_parameter_blocks.size();
}
+ return true;
}
} // namespace
BlockJacobianWriter::BlockJacobianWriter(const Evaluator::Options& options,
Program* program)
- : program_(program) {
+ : options_(options), program_(program) {
CHECK_GE(options.num_eliminate_blocks, 0)
<< "num_eliminate_blocks must be greater than 0.";
- BuildJacobianLayout(*program,
- options.num_eliminate_blocks,
- &jacobian_layout_,
- &jacobian_layout_storage_);
+ jacobian_layout_is_valid_ = BuildJacobianLayout(*program,
+ options.num_eliminate_blocks,
+ &jacobian_layout_,
+ &jacobian_layout_storage_);
}
-// Create evaluate prepareres that point directly into the final jacobian. This
+// Create evaluate preparers that point directly into the final jacobian. This
// makes the final Write() a nop.
-BlockEvaluatePreparer* BlockJacobianWriter::CreateEvaluatePreparers(
- int num_threads) {
- int max_derivatives_per_residual_block =
+std::unique_ptr<BlockEvaluatePreparer[]>
+BlockJacobianWriter::CreateEvaluatePreparers(unsigned num_threads) {
+ const int max_derivatives_per_residual_block =
program_->MaxDerivativesPerResidualBlock();
- BlockEvaluatePreparer* preparers = new BlockEvaluatePreparer[num_threads];
- for (int i = 0; i < num_threads; i++) {
- preparers[i].Init(&jacobian_layout_[0], max_derivatives_per_residual_block);
+ auto preparers = std::make_unique<BlockEvaluatePreparer[]>(num_threads);
+ for (unsigned i = 0; i < num_threads; i++) {
+ preparers[i].Init(jacobian_layout_.data(),
+ max_derivatives_per_residual_block);
}
return preparers;
}
-SparseMatrix* BlockJacobianWriter::CreateJacobian() const {
- CompressedRowBlockStructure* bs = new CompressedRowBlockStructure;
+std::unique_ptr<SparseMatrix> BlockJacobianWriter::CreateJacobian() const {
+ if (!jacobian_layout_is_valid_) {
+ LOG(ERROR) << "Unable to create Jacobian matrix. Too many entries in the "
+ "Jacobian matrix.";
+ return nullptr;
+ }
- const vector<ParameterBlock*>& parameter_blocks =
+ auto* bs = new CompressedRowBlockStructure;
+
+ const std::vector<ParameterBlock*>& parameter_blocks =
program_->parameter_blocks();
// Construct the column blocks.
@@ -159,13 +210,14 @@
for (int i = 0, cursor = 0; i < parameter_blocks.size(); ++i) {
CHECK_NE(parameter_blocks[i]->index(), -1);
CHECK(!parameter_blocks[i]->IsConstant());
- bs->cols[i].size = parameter_blocks[i]->LocalSize();
+ bs->cols[i].size = parameter_blocks[i]->TangentSize();
bs->cols[i].position = cursor;
cursor += bs->cols[i].size;
}
// Construct the cells in each row.
- const vector<ResidualBlock*>& residual_blocks = program_->residual_blocks();
+ const std::vector<ResidualBlock*>& residual_blocks =
+ program_->residual_blocks();
int row_block_position = 0;
bs->rows.resize(residual_blocks.size());
for (int i = 0; i < residual_blocks.size(); ++i) {
@@ -201,13 +253,11 @@
}
}
- sort(row->cells.begin(), row->cells.end(), CellLessThan);
+ std::sort(row->cells.begin(), row->cells.end(), CellLessThan);
}
- BlockSparseMatrix* jacobian = new BlockSparseMatrix(bs);
- CHECK(jacobian != nullptr);
- return jacobian;
+ return std::make_unique<BlockSparseMatrix>(
+ bs, options_.sparse_linear_algebra_library_type == CUDA_SPARSE);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/block_jacobian_writer.h b/internal/ceres/block_jacobian_writer.h
index 8054d7b..a6d02e3 100644
--- a/internal/ceres/block_jacobian_writer.h
+++ b/internal/ceres/block_jacobian_writer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,30 +38,42 @@
#ifndef CERES_INTERNAL_BLOCK_JACOBIAN_WRITER_H_
#define CERES_INTERNAL_BLOCK_JACOBIAN_WRITER_H_
+#include <memory>
#include <vector>
#include "ceres/evaluator.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class BlockEvaluatePreparer;
class Program;
class SparseMatrix;
-// TODO(sameeragarwal): This class needs documemtation.
-class BlockJacobianWriter {
+// TODO(sameeragarwal): This class needs documentation.
+class CERES_NO_EXPORT BlockJacobianWriter {
public:
+ // Pre-computes positions of cells in block-sparse jacobian.
+ // Two possible memory layouts are implemented:
+ // - Non-partitioned case
+ // - Partitioned case (for Schur type linear solver)
+ //
+  // In the non-partitioned case, cells are stored sequentially in the
+ // lexicographic order of (row block id, column block id).
+ //
+  // In the case of a partitioned matrix, cells of each sub-matrix (E and F) are
+ // stored sequentially in the lexicographic order of (row block id, column
+ // block id) and cells from E sub-matrix precede cells from F sub-matrix.
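+  //
+  // For example, with num_eliminate_blocks = 1, parameter blocks {p0, p1, p2}
+  // (p0 forming the E sub-matrix) and residual blocks r0(p0, p1) and
+  // r1(p0, p2), the cells are laid out as
+  //   [r0/p0][r1/p0] [r0/p1][r1/p2]
+  // i.e. all E cells first, followed by the F cells, each group ordered by
+  // (row block id, column block id).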
BlockJacobianWriter(const Evaluator::Options& options, Program* program);
// JacobianWriter interface.
- // Create evaluate prepareres that point directly into the final jacobian.
+ // Create evaluate preparers that point directly into the final jacobian.
// This makes the final Write() a nop.
- BlockEvaluatePreparer* CreateEvaluatePreparers(int num_threads);
+ std::unique_ptr<BlockEvaluatePreparer[]> CreateEvaluatePreparers(
+ unsigned num_threads);
- SparseMatrix* CreateJacobian() const;
+ std::unique_ptr<SparseMatrix> CreateJacobian() const;
void Write(int /* residual_id */,
int /* residual_offset */,
@@ -73,12 +85,13 @@
}
private:
+ Evaluator::Options options_;
Program* program_;
// Stores the position of each residual / parameter jacobian.
//
// The block sparse matrix that this writer writes to is stored as a set of
- // contiguos dense blocks, one after each other; see BlockSparseMatrix. The
+ // contiguous dense blocks, one after each other; see BlockSparseMatrix. The
// "double* values_" member of the block sparse matrix contains all of these
// blocks. Given a pointer to the first element of a block and the size of
// that block, it's possible to write to it.
@@ -120,9 +133,14 @@
// The pointers in jacobian_layout_ point directly into this vector.
std::vector<int> jacobian_layout_storage_;
+
+ // The constructor computes the layout of the Jacobian, and this bool keeps
+  // track of whether the computation of the layout completed successfully or
+  // not. If it is false, then jacobian_layout_ and jacobian_layout_storage_
+  // are both in an invalid state.
+ bool jacobian_layout_is_valid_ = false;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_BLOCK_JACOBIAN_WRITER_H_
diff --git a/internal/ceres/block_random_access_dense_matrix.cc b/internal/ceres/block_random_access_dense_matrix.cc
index 386f81e..b8be51b 100644
--- a/internal/ceres/block_random_access_dense_matrix.cc
+++ b/internal/ceres/block_random_access_dense_matrix.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,27 +30,22 @@
#include "ceres/block_random_access_dense_matrix.h"
+#include <utility>
#include <vector>
#include "ceres/internal/eigen.h"
+#include "ceres/parallel_vector_ops.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
BlockRandomAccessDenseMatrix::BlockRandomAccessDenseMatrix(
- const std::vector<int>& blocks) {
- const int num_blocks = blocks.size();
- block_layout_.resize(num_blocks, 0);
- num_rows_ = 0;
- for (int i = 0; i < num_blocks; ++i) {
- block_layout_[i] = num_rows_;
- num_rows_ += blocks[i];
- }
-
- values_.reset(new double[num_rows_ * num_rows_]);
-
- cell_infos_.reset(new CellInfo[num_blocks * num_blocks]);
+ std::vector<Block> blocks, ContextImpl* context, int num_threads)
+ : blocks_(std::move(blocks)), context_(context), num_threads_(num_threads) {
+ const int num_blocks = blocks_.size();
+ num_rows_ = NumScalarEntries(blocks_);
+ values_ = std::make_unique<double[]>(num_rows_ * num_rows_);
+ cell_infos_ = std::make_unique<CellInfo[]>(num_blocks * num_blocks);
for (int i = 0; i < num_blocks * num_blocks; ++i) {
cell_infos_[i].values = values_.get();
}
@@ -58,30 +53,23 @@
SetZero();
}
-// Assume that the user does not hold any locks on any cell blocks
-// when they are calling SetZero.
-BlockRandomAccessDenseMatrix::~BlockRandomAccessDenseMatrix() {}
-
CellInfo* BlockRandomAccessDenseMatrix::GetCell(const int row_block_id,
const int col_block_id,
int* row,
int* col,
int* row_stride,
int* col_stride) {
- *row = block_layout_[row_block_id];
- *col = block_layout_[col_block_id];
+ *row = blocks_[row_block_id].position;
+ *col = blocks_[col_block_id].position;
*row_stride = num_rows_;
*col_stride = num_rows_;
- return &cell_infos_[row_block_id * block_layout_.size() + col_block_id];
+ return &cell_infos_[row_block_id * blocks_.size() + col_block_id];
}
// Assume that the user does not hold any locks on any cell blocks
// when they are calling SetZero.
void BlockRandomAccessDenseMatrix::SetZero() {
- if (num_rows_) {
- VectorRef(values_.get(), num_rows_ * num_rows_).setZero();
- }
+ ParallelSetZero(context_, num_threads_, values_.get(), num_rows_ * num_rows_);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/block_random_access_dense_matrix.h b/internal/ceres/block_random_access_dense_matrix.h
index 9e55524..9468249 100644
--- a/internal/ceres/block_random_access_dense_matrix.h
+++ b/internal/ceres/block_random_access_dense_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,10 +35,12 @@
#include <vector>
#include "ceres/block_random_access_matrix.h"
-#include "ceres/internal/port.h"
+#include "ceres/block_structure.h"
+#include "ceres/context_impl.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// A square block random accessible matrix with the same row and
// column block structure. All cells are stored in the same single
@@ -46,22 +48,20 @@
// num_rows x num_cols.
//
// This class is NOT thread safe. Since all n^2 cells are stored,
-// GetCell never returns NULL for any (row_block_id, col_block_id)
+// GetCell never returns nullptr for any (row_block_id, col_block_id)
// pair.
//
// ReturnCell is a nop.
-class CERES_EXPORT_INTERNAL BlockRandomAccessDenseMatrix
+class CERES_NO_EXPORT BlockRandomAccessDenseMatrix
: public BlockRandomAccessMatrix {
public:
// blocks is a vector of block sizes. The resulting matrix has
// blocks.size() * blocks.size() cells.
- explicit BlockRandomAccessDenseMatrix(const std::vector<int>& blocks);
- BlockRandomAccessDenseMatrix(const BlockRandomAccessDenseMatrix&) = delete;
- void operator=(const BlockRandomAccessDenseMatrix&) = delete;
+ explicit BlockRandomAccessDenseMatrix(std::vector<Block> blocks,
+ ContextImpl* context,
+ int num_threads);
- // The destructor is not thread safe. It assumes that no one is
- // modifying any cells when the matrix is being destroyed.
- virtual ~BlockRandomAccessDenseMatrix();
+ ~BlockRandomAccessDenseMatrix() override = default;
// BlockRandomAccessMatrix interface.
CellInfo* GetCell(int row_block_id,
@@ -71,8 +71,6 @@
int* row_stride,
int* col_stride) final;
- // This is not a thread safe method, it assumes that no cell is
- // locked.
void SetZero() final;
// Since the matrix is square with the same row and column block
@@ -85,13 +83,16 @@
double* mutable_values() { return values_.get(); }
private:
- int num_rows_;
- std::vector<int> block_layout_;
+ std::vector<Block> blocks_;
+ ContextImpl* context_ = nullptr;
+ int num_threads_ = -1;
+ int num_rows_ = -1;
std::unique_ptr<double[]> values_;
std::unique_ptr<CellInfo[]> cell_infos_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_BLOCK_RANDOM_ACCESS_DENSE_MATRIX_H_
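
The constructor now takes std::vector<Block> entries that carry an explicit
(size, position) pair rather than bare sizes. A minimal sketch of deriving the
positions as the running sum of the sizes, matching the (3,0), (4,3), (5,7)
pattern used in the tests below (the helper name is hypothetical):

#include <vector>

#include "ceres/block_structure.h"

// Build a Block vector whose positions are the exclusive prefix sums of the
// sizes, e.g. sizes {3, 4, 5} -> blocks (3,0), (4,3), (5,7).
std::vector<ceres::internal::Block> MakeBlocks(const std::vector<int>& sizes) {
  std::vector<ceres::internal::Block> blocks;
  blocks.reserve(sizes.size());
  int position = 0;
  for (const int size : sizes) {
    blocks.emplace_back(size, position);
    position += size;
  }
  return blocks;
}
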
diff --git a/internal/ceres/block_random_access_dense_matrix_test.cc b/internal/ceres/block_random_access_dense_matrix_test.cc
index 0736d56..ba9c75d 100644
--- a/internal/ceres/block_random_access_dense_matrix_test.cc
+++ b/internal/ceres/block_random_access_dense_matrix_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,48 +35,51 @@
#include "ceres/internal/eigen.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST(BlockRandomAccessDenseMatrix, GetCell) {
- std::vector<int> blocks;
- blocks.push_back(3);
- blocks.push_back(4);
- blocks.push_back(5);
- const int num_rows = 3 + 4 + 5;
- BlockRandomAccessDenseMatrix m(blocks);
+ ContextImpl context;
+ constexpr int num_threads = 1;
+
+ std::vector<Block> blocks;
+ blocks.emplace_back(3, 0);
+ blocks.emplace_back(4, 3);
+ blocks.emplace_back(5, 7);
+ constexpr int num_rows = 3 + 4 + 5;
+ BlockRandomAccessDenseMatrix m(blocks, &context, num_threads);
EXPECT_EQ(m.num_rows(), num_rows);
EXPECT_EQ(m.num_cols(), num_rows);
- int row_idx = 0;
for (int i = 0; i < blocks.size(); ++i) {
- int col_idx = 0;
+ const int row_idx = blocks[i].position;
for (int j = 0; j < blocks.size(); ++j) {
+ const int col_idx = blocks[j].position;
int row;
int col;
int row_stride;
int col_stride;
CellInfo* cell = m.GetCell(i, j, &row, &col, &row_stride, &col_stride);
- EXPECT_TRUE(cell != NULL);
+ EXPECT_TRUE(cell != nullptr);
EXPECT_EQ(row, row_idx);
EXPECT_EQ(col, col_idx);
EXPECT_EQ(row_stride, 3 + 4 + 5);
EXPECT_EQ(col_stride, 3 + 4 + 5);
- col_idx += blocks[j];
}
- row_idx += blocks[i];
}
}
TEST(BlockRandomAccessDenseMatrix, WriteCell) {
- std::vector<int> blocks;
- blocks.push_back(3);
- blocks.push_back(4);
- blocks.push_back(5);
- const int num_rows = 3 + 4 + 5;
+ ContextImpl context;
+ constexpr int num_threads = 1;
- BlockRandomAccessDenseMatrix m(blocks);
+ std::vector<Block> blocks;
+ blocks.emplace_back(3, 0);
+ blocks.emplace_back(4, 3);
+ blocks.emplace_back(5, 7);
+ constexpr int num_rows = 3 + 4 + 5;
+
+ BlockRandomAccessDenseMatrix m(blocks, &context, num_threads);
// Fill the cell (i,j) with (i + 1) * (j + 1)
for (int i = 0; i < blocks.size(); ++i) {
@@ -87,29 +90,26 @@
int col_stride;
CellInfo* cell = m.GetCell(i, j, &row, &col, &row_stride, &col_stride);
MatrixRef(cell->values, row_stride, col_stride)
- .block(row, col, blocks[i], blocks[j]) =
- (i + 1) * (j + 1) * Matrix::Ones(blocks[i], blocks[j]);
+ .block(row, col, blocks[i].size, blocks[j].size) =
+ (i + 1) * (j + 1) * Matrix::Ones(blocks[i].size, blocks[j].size);
}
}
// Check the values in the array are correct by going over the
// entries of each block manually.
- int row_idx = 0;
for (int i = 0; i < blocks.size(); ++i) {
- int col_idx = 0;
+ const int row_idx = blocks[i].position;
for (int j = 0; j < blocks.size(); ++j) {
+ const int col_idx = blocks[j].position;
// Check the values of this block.
- for (int r = 0; r < blocks[i]; ++r) {
- for (int c = 0; c < blocks[j]; ++c) {
+ for (int r = 0; r < blocks[i].size; ++r) {
+ for (int c = 0; c < blocks[j].size; ++c) {
int pos = row_idx * num_rows + col_idx;
EXPECT_EQ(m.values()[pos], (i + 1) * (j + 1));
}
}
- col_idx += blocks[j];
}
- row_idx += blocks[i];
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/block_random_access_diagonal_matrix.cc b/internal/ceres/block_random_access_diagonal_matrix.cc
index 08f6d7f..643dbf1 100644
--- a/internal/ceres/block_random_access_diagonal_matrix.cc
+++ b/internal/ceres/block_random_access_diagonal_matrix.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,65 +31,32 @@
#include "ceres/block_random_access_diagonal_matrix.h"
#include <algorithm>
+#include <memory>
#include <set>
#include <utility>
#include <vector>
#include "Eigen/Dense"
-#include "ceres/internal/port.h"
+#include "ceres/compressed_row_sparse_matrix.h"
+#include "ceres/internal/export.h"
+#include "ceres/parallel_for.h"
+#include "ceres/parallel_vector_ops.h"
#include "ceres/stl_util.h"
-#include "ceres/triplet_sparse_matrix.h"
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
-
-// TODO(sameeragarwal): Drop the dependence on TripletSparseMatrix.
+namespace ceres::internal {
BlockRandomAccessDiagonalMatrix::BlockRandomAccessDiagonalMatrix(
- const vector<int>& blocks)
- : blocks_(blocks) {
- // Build the row/column layout vector and count the number of scalar
- // rows/columns.
- int num_cols = 0;
- int num_nonzeros = 0;
- vector<int> block_positions;
- for (int i = 0; i < blocks_.size(); ++i) {
- block_positions.push_back(num_cols);
- num_cols += blocks_[i];
- num_nonzeros += blocks_[i] * blocks_[i];
+ const std::vector<Block>& blocks, ContextImpl* context, int num_threads)
+ : context_(context), num_threads_(num_threads) {
+ m_ = CompressedRowSparseMatrix::CreateBlockDiagonalMatrix(nullptr, blocks);
+ double* values = m_->mutable_values();
+ layout_ = std::make_unique<CellInfo[]>(blocks.size());
+ for (int i = 0; i < blocks.size(); ++i) {
+ layout_[i].values = values;
+ values += blocks[i].size * blocks[i].size;
}
-
- VLOG(1) << "Matrix Size [" << num_cols << "," << num_cols << "] "
- << num_nonzeros;
-
- tsm_.reset(new TripletSparseMatrix(num_cols, num_cols, num_nonzeros));
- tsm_->set_num_nonzeros(num_nonzeros);
- int* rows = tsm_->mutable_rows();
- int* cols = tsm_->mutable_cols();
- double* values = tsm_->mutable_values();
-
- int pos = 0;
- for (int i = 0; i < blocks_.size(); ++i) {
- const int block_size = blocks_[i];
- layout_.push_back(new CellInfo(values + pos));
- const int block_begin = block_positions[i];
- for (int r = 0; r < block_size; ++r) {
- for (int c = 0; c < block_size; ++c, ++pos) {
- rows[pos] = block_begin + r;
- cols[pos] = block_begin + c;
- }
- }
- }
-}
-
-// Assume that the user does not hold any locks on any cell blocks
-// when they are calling SetZero.
-BlockRandomAccessDiagonalMatrix::~BlockRandomAccessDiagonalMatrix() {
- STLDeleteContainerPointers(layout_.begin(), layout_.end());
}
CellInfo* BlockRandomAccessDiagonalMatrix::GetCell(int row_block_id,
@@ -99,51 +66,53 @@
int* row_stride,
int* col_stride) {
if (row_block_id != col_block_id) {
- return NULL;
+ return nullptr;
}
- const int stride = blocks_[row_block_id];
+
+ auto& blocks = m_->row_blocks();
+ const int stride = blocks[row_block_id].size;
// Each cell is stored contiguously as its own little dense matrix.
*row = 0;
*col = 0;
*row_stride = stride;
*col_stride = stride;
- return layout_[row_block_id];
+ return &layout_[row_block_id];
}
// Assume that the user does not hold any locks on any cell blocks
// when they are calling SetZero.
void BlockRandomAccessDiagonalMatrix::SetZero() {
- if (tsm_->num_nonzeros()) {
- VectorRef(tsm_->mutable_values(), tsm_->num_nonzeros()).setZero();
- }
+ ParallelSetZero(
+ context_, num_threads_, m_->mutable_values(), m_->num_nonzeros());
}
void BlockRandomAccessDiagonalMatrix::Invert() {
- double* values = tsm_->mutable_values();
- for (int i = 0; i < blocks_.size(); ++i) {
- const int block_size = blocks_[i];
- MatrixRef block(values, block_size, block_size);
- block = block.selfadjointView<Eigen::Upper>().llt().solve(
- Matrix::Identity(block_size, block_size));
- values += block_size * block_size;
- }
+ auto& blocks = m_->row_blocks();
+ const int num_blocks = blocks.size();
+ ParallelFor(context_, 0, num_blocks, num_threads_, [this, blocks](int i) {
+ auto& cell_info = layout_[i];
+ auto& block = blocks[i];
+ MatrixRef b(cell_info.values, block.size, block.size);
+ b = b.selfadjointView<Eigen::Upper>().llt().solve(
+ Matrix::Identity(block.size, block.size));
+ });
}
-void BlockRandomAccessDiagonalMatrix::RightMultiply(const double* x,
- double* y) const {
+void BlockRandomAccessDiagonalMatrix::RightMultiplyAndAccumulate(
+ const double* x, double* y) const {
CHECK(x != nullptr);
CHECK(y != nullptr);
- const double* values = tsm_->values();
- for (int i = 0; i < blocks_.size(); ++i) {
- const int block_size = blocks_[i];
- ConstMatrixRef block(values, block_size, block_size);
- VectorRef(y, block_size).noalias() += block * ConstVectorRef(x, block_size);
- x += block_size;
- y += block_size;
- values += block_size * block_size;
- }
+ auto& blocks = m_->row_blocks();
+ const int num_blocks = blocks.size();
+ ParallelFor(
+ context_, 0, num_blocks, num_threads_, [this, blocks, x, y](int i) {
+ auto& cell_info = layout_[i];
+ auto& block = blocks[i];
+ ConstMatrixRef b(cell_info.values, block.size, block.size);
+ VectorRef(y + block.position, block.size).noalias() +=
+ b * ConstVectorRef(x + block.position, block.size);
+ });
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
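
Invert() above factors each diagonal block with an LLT decomposition, and
RightMultiplyAndAccumulate() applies each block to the matching segment of x.
A rough per-block sketch using plain Eigen types (assuming every block is
symmetric positive definite; the function name is illustrative):

#include <vector>

#include "Eigen/Dense"

// For each dense diagonal block: replace it with its inverse via LLT, then
// accumulate y += block * x over the scalar rows/cols the block covers.
void InvertBlocksAndMultiply(std::vector<Eigen::MatrixXd>& diagonal_blocks,
                             const Eigen::VectorXd& x,
                             Eigen::VectorXd& y) {
  int position = 0;
  for (Eigen::MatrixXd& block : diagonal_blocks) {
    const int size = static_cast<int>(block.rows());
    block = block.selfadjointView<Eigen::Upper>().llt().solve(
        Eigen::MatrixXd::Identity(size, size));
    y.segment(position, size).noalias() += block * x.segment(position, size);
    position += size;
  }
}
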
diff --git a/internal/ceres/block_random_access_diagonal_matrix.h b/internal/ceres/block_random_access_diagonal_matrix.h
index 3fe7c1e..9671f3e 100644
--- a/internal/ceres/block_random_access_diagonal_matrix.h
+++ b/internal/ceres/block_random_access_diagonal_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,32 +32,29 @@
#define CERES_INTERNAL_BLOCK_RANDOM_ACCESS_DIAGONAL_MATRIX_H_
#include <memory>
-#include <set>
#include <utility>
-#include <vector>
#include "ceres/block_random_access_matrix.h"
-#include "ceres/internal/port.h"
-#include "ceres/triplet_sparse_matrix.h"
+#include "ceres/block_structure.h"
+#include "ceres/compressed_row_sparse_matrix.h"
+#include "ceres/context_impl.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-// A thread safe block diagonal matrix implementation of
-// BlockRandomAccessMatrix.
-class CERES_EXPORT_INTERNAL BlockRandomAccessDiagonalMatrix
+// A BlockRandomAccessMatrix which only stores the block diagonal.
+// BlockRandomAccessSparseMatrix can also be used to do this, but this class is
+// more efficient in both time and space.
+class CERES_NO_EXPORT BlockRandomAccessDiagonalMatrix
: public BlockRandomAccessMatrix {
public:
// blocks is an array of block sizes.
- explicit BlockRandomAccessDiagonalMatrix(const std::vector<int>& blocks);
- BlockRandomAccessDiagonalMatrix(const BlockRandomAccessDiagonalMatrix&) =
- delete;
- void operator=(const BlockRandomAccessDiagonalMatrix&) = delete;
-
- // The destructor is not thread safe. It assumes that no one is
- // modifying any cells when the matrix is being destroyed.
- virtual ~BlockRandomAccessDiagonalMatrix();
+ BlockRandomAccessDiagonalMatrix(const std::vector<Block>& blocks,
+ ContextImpl* context,
+ int num_threads);
+ ~BlockRandomAccessDiagonalMatrix() override = default;
// BlockRandomAccessMatrix Interface.
CellInfo* GetCell(int row_block_id,
@@ -67,35 +64,31 @@
int* row_stride,
int* col_stride) final;
- // This is not a thread safe method, it assumes that no cell is
- // locked.
+ // m = 0
void SetZero() final;
- // Invert the matrix assuming that each block is positive definite.
+ // m = m^{-1}
void Invert();
- // y += S * x
- void RightMultiply(const double* x, double* y) const;
+ // y += m * x
+ void RightMultiplyAndAccumulate(const double* x, double* y) const;
// Since the matrix is square, num_rows() == num_cols().
- int num_rows() const final { return tsm_->num_rows(); }
- int num_cols() const final { return tsm_->num_cols(); }
+ int num_rows() const final { return m_->num_rows(); }
+ int num_cols() const final { return m_->num_cols(); }
- const TripletSparseMatrix* matrix() const { return tsm_.get(); }
- TripletSparseMatrix* mutable_matrix() { return tsm_.get(); }
+ const CompressedRowSparseMatrix* matrix() const { return m_.get(); }
+ CompressedRowSparseMatrix* mutable_matrix() { return m_.get(); }
private:
- // row/column block sizes.
- const std::vector<int> blocks_;
- std::vector<CellInfo*> layout_;
-
- // The underlying matrix object which actually stores the cells.
- std::unique_ptr<TripletSparseMatrix> tsm_;
-
- friend class BlockRandomAccessDiagonalMatrixTest;
+ ContextImpl* context_ = nullptr;
+ const int num_threads_ = 1;
+ std::unique_ptr<CompressedRowSparseMatrix> m_;
+ std::unique_ptr<CellInfo[]> layout_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_BLOCK_RANDOM_ACCESS_DIAGONAL_MATRIX_H_
diff --git a/internal/ceres/block_random_access_diagonal_matrix_test.cc b/internal/ceres/block_random_access_diagonal_matrix_test.cc
index e384dac..f665bd8 100644
--- a/internal/ceres/block_random_access_diagonal_matrix_test.cc
+++ b/internal/ceres/block_random_access_diagonal_matrix_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,20 +39,21 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class BlockRandomAccessDiagonalMatrixTest : public ::testing::Test {
public:
- void SetUp() {
- std::vector<int> blocks;
- blocks.push_back(3);
- blocks.push_back(4);
- blocks.push_back(5);
+ void SetUp() override {
+ std::vector<Block> blocks;
+ blocks.emplace_back(3, 0);
+ blocks.emplace_back(4, 3);
+ blocks.emplace_back(5, 7);
+
const int num_rows = 3 + 4 + 5;
num_nonzeros_ = 3 * 3 + 4 * 4 + 5 * 5;
- m_.reset(new BlockRandomAccessDiagonalMatrix(blocks));
+ m_ =
+ std::make_unique<BlockRandomAccessDiagonalMatrix>(blocks, &context_, 1);
EXPECT_EQ(m_->num_rows(), num_rows);
EXPECT_EQ(m_->num_cols(), num_rows);
@@ -71,38 +72,43 @@
row_block_id, col_block_id, &row, &col, &row_stride, &col_stride);
// Off diagonal entries are not present.
if (i != j) {
- EXPECT_TRUE(cell == NULL);
+ EXPECT_TRUE(cell == nullptr);
continue;
}
- EXPECT_TRUE(cell != NULL);
+ EXPECT_TRUE(cell != nullptr);
EXPECT_EQ(row, 0);
EXPECT_EQ(col, 0);
- EXPECT_EQ(row_stride, blocks[row_block_id]);
- EXPECT_EQ(col_stride, blocks[col_block_id]);
+ EXPECT_EQ(row_stride, blocks[row_block_id].size);
+ EXPECT_EQ(col_stride, blocks[col_block_id].size);
// Write into the block
MatrixRef(cell->values, row_stride, col_stride)
- .block(row, col, blocks[row_block_id], blocks[col_block_id]) =
+ .block(row,
+ col,
+ blocks[row_block_id].size,
+ blocks[col_block_id].size) =
(row_block_id + 1) * (col_block_id + 1) *
- Matrix::Ones(blocks[row_block_id], blocks[col_block_id]) +
- Matrix::Identity(blocks[row_block_id], blocks[row_block_id]);
+ Matrix::Ones(blocks[row_block_id].size,
+ blocks[col_block_id].size) +
+ Matrix::Identity(blocks[row_block_id].size,
+ blocks[row_block_id].size);
}
}
}
protected:
+ ContextImpl context_;
int num_nonzeros_;
std::unique_ptr<BlockRandomAccessDiagonalMatrix> m_;
};
TEST_F(BlockRandomAccessDiagonalMatrixTest, MatrixContents) {
- const TripletSparseMatrix* tsm = m_->matrix();
- EXPECT_EQ(tsm->num_nonzeros(), num_nonzeros_);
- EXPECT_EQ(tsm->max_num_nonzeros(), num_nonzeros_);
+ auto* crsm = m_->matrix();
+ EXPECT_EQ(crsm->num_nonzeros(), num_nonzeros_);
Matrix dense;
- tsm->ToDenseMatrix(&dense);
+ crsm->ToDenseMatrix(&dense);
double kTolerance = 1e-14;
@@ -134,31 +140,30 @@
kTolerance);
}
-TEST_F(BlockRandomAccessDiagonalMatrixTest, RightMultiply) {
+TEST_F(BlockRandomAccessDiagonalMatrixTest, RightMultiplyAndAccumulate) {
double kTolerance = 1e-14;
- const TripletSparseMatrix* tsm = m_->matrix();
+ auto* crsm = m_->matrix();
Matrix dense;
- tsm->ToDenseMatrix(&dense);
+ crsm->ToDenseMatrix(&dense);
Vector x = Vector::Random(dense.rows());
Vector expected_y = dense * x;
Vector actual_y = Vector::Zero(dense.rows());
- m_->RightMultiply(x.data(), actual_y.data());
+ m_->RightMultiplyAndAccumulate(x.data(), actual_y.data());
EXPECT_NEAR((expected_y - actual_y).norm(), 0, kTolerance);
}
TEST_F(BlockRandomAccessDiagonalMatrixTest, Invert) {
double kTolerance = 1e-14;
- const TripletSparseMatrix* tsm = m_->matrix();
+ auto* crsm = m_->matrix();
Matrix dense;
- tsm->ToDenseMatrix(&dense);
+ crsm->ToDenseMatrix(&dense);
Matrix expected_inverse =
dense.llt().solve(Matrix::Identity(dense.rows(), dense.rows()));
m_->Invert();
- tsm->ToDenseMatrix(&dense);
+ crsm->ToDenseMatrix(&dense);
EXPECT_NEAR((expected_inverse - dense).norm(), 0.0, kTolerance);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/block_random_access_matrix.cc b/internal/ceres/block_random_access_matrix.cc
index ea88855..cb3d9dc 100644
--- a/internal/ceres/block_random_access_matrix.cc
+++ b/internal/ceres/block_random_access_matrix.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,10 +30,8 @@
#include "ceres/block_random_access_matrix.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-BlockRandomAccessMatrix::~BlockRandomAccessMatrix() {}
+BlockRandomAccessMatrix::~BlockRandomAccessMatrix() = default;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/block_random_access_matrix.h b/internal/ceres/block_random_access_matrix.h
index f190622..66390d7 100644
--- a/internal/ceres/block_random_access_matrix.h
+++ b/internal/ceres/block_random_access_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,10 +35,9 @@
#include <mutex>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// A matrix implementing the BlockRandomAccessMatrix interface is a
// matrix whose rows and columns are divided into blocks. For example
@@ -62,7 +61,7 @@
//
// There is no requirement that all cells be present, i.e. the matrix
// itself can be block sparse. When a cell is not present, the GetCell
-// method will return a NULL pointer.
+// method will return nullptr.
//
// There is no requirement about how the cells are stored beyond that
// form a dense submatrix of a larger dense matrix. Like everywhere
@@ -77,7 +76,7 @@
// &row, &col,
// &row_stride, &col_stride);
//
-// if (cell != NULL) {
+// if (cell != nullptr) {
// MatrixRef m(cell->values, row_stride, col_stride);
// std::lock_guard<std::mutex> l(&cell->m);
// m.block(row, col, row_block_size, col_block_size) = ...
@@ -85,21 +84,21 @@
// Structure to carry a pointer to the array containing a cell and the
// mutex guarding it.
-struct CellInfo {
- CellInfo() : values(nullptr) {}
+struct CERES_NO_EXPORT CellInfo {
+ CellInfo() = default;
explicit CellInfo(double* values) : values(values) {}
- double* values;
+ double* values{nullptr};
std::mutex m;
};
-class CERES_EXPORT_INTERNAL BlockRandomAccessMatrix {
+class CERES_NO_EXPORT BlockRandomAccessMatrix {
public:
virtual ~BlockRandomAccessMatrix();
// If the cell (row_block_id, col_block_id) is present, then return
// a CellInfo with a pointer to the dense matrix containing it,
- // otherwise return NULL. The dense matrix containing this cell has
+ // otherwise return nullptr. The dense matrix containing this cell has
// size row_stride, col_stride and the cell is located at position
// (row, col) within this matrix.
//
@@ -123,7 +122,6 @@
virtual int num_cols() const = 0;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_BLOCK_RANDOM_ACCESS_MATRIX_H_
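
The usage pattern documented in the comment block above, written out as a
small compilable sketch against the interface. The helper itself is
hypothetical, it assumes it is compiled inside the Ceres internals, and it
takes the cell mutex by reference as std::lock_guard requires:

#include <mutex>

#include "ceres/block_random_access_matrix.h"
#include "ceres/internal/eigen.h"

namespace ceres::internal {

// Add `delta` to every entry of cell (row_block_id, col_block_id) of any
// BlockRandomAccessMatrix implementation, if that cell is present.
void AddToCell(BlockRandomAccessMatrix& matrix,
               int row_block_id,
               int col_block_id,
               int row_block_size,
               int col_block_size,
               double delta) {
  int row, col, row_stride, col_stride;
  CellInfo* cell = matrix.GetCell(
      row_block_id, col_block_id, &row, &col, &row_stride, &col_stride);
  if (cell == nullptr) {
    return;  // The cell is not present; the matrix may be block sparse.
  }
  std::lock_guard<std::mutex> lock(cell->m);
  MatrixRef m(cell->values, row_stride, col_stride);
  m.block(row, col, row_block_size, col_block_size).array() += delta;
}

}  // namespace ceres::internal
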
diff --git a/internal/ceres/block_random_access_sparse_matrix.cc b/internal/ceres/block_random_access_sparse_matrix.cc
index c28b7ce..b9f8b36 100644
--- a/internal/ceres/block_random_access_sparse_matrix.cc
+++ b/internal/ceres/block_random_access_sparse_matrix.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,87 +36,63 @@
#include <utility>
#include <vector>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
+#include "ceres/parallel_vector_ops.h"
#include "ceres/triplet_sparse_matrix.h"
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::make_pair;
-using std::pair;
-using std::set;
-using std::vector;
+namespace ceres::internal {
BlockRandomAccessSparseMatrix::BlockRandomAccessSparseMatrix(
- const vector<int>& blocks, const set<pair<int, int>>& block_pairs)
- : kMaxRowBlocks(10 * 1000 * 1000), blocks_(blocks) {
- CHECK_LT(blocks.size(), kMaxRowBlocks);
+ const std::vector<Block>& blocks,
+ const std::set<std::pair<int, int>>& block_pairs,
+ ContextImpl* context,
+ int num_threads)
+ : blocks_(blocks), context_(context), num_threads_(num_threads) {
+ CHECK_LE(blocks.size(), std::numeric_limits<std::int32_t>::max());
- // Build the row/column layout vector and count the number of scalar
- // rows/columns.
- int num_cols = 0;
- block_positions_.reserve(blocks_.size());
- for (int i = 0; i < blocks_.size(); ++i) {
- block_positions_.push_back(num_cols);
- num_cols += blocks_[i];
+ const int num_cols = NumScalarEntries(blocks);
+ const int num_blocks = blocks.size();
+
+ std::vector<int> num_cells_at_row(num_blocks);
+ for (auto& p : block_pairs) {
+ ++num_cells_at_row[p.first];
}
-
- // Count the number of scalar non-zero entries and build the layout
- // object for looking into the values array of the
- // TripletSparseMatrix.
+ auto block_structure_ = new CompressedRowBlockStructure;
+ block_structure_->cols = blocks;
+ block_structure_->rows.resize(num_blocks);
+ auto p = block_pairs.begin();
int num_nonzeros = 0;
- for (const auto& block_pair : block_pairs) {
- const int row_block_size = blocks_[block_pair.first];
- const int col_block_size = blocks_[block_pair.second];
- num_nonzeros += row_block_size * col_block_size;
- }
-
- VLOG(1) << "Matrix Size [" << num_cols << "," << num_cols << "] "
- << num_nonzeros;
-
- tsm_.reset(new TripletSparseMatrix(num_cols, num_cols, num_nonzeros));
- tsm_->set_num_nonzeros(num_nonzeros);
- int* rows = tsm_->mutable_rows();
- int* cols = tsm_->mutable_cols();
- double* values = tsm_->mutable_values();
-
- int pos = 0;
- for (const auto& block_pair : block_pairs) {
- const int row_block_size = blocks_[block_pair.first];
- const int col_block_size = blocks_[block_pair.second];
- cell_values_.push_back(make_pair(block_pair, values + pos));
- layout_[IntPairToLong(block_pair.first, block_pair.second)] =
- new CellInfo(values + pos);
- pos += row_block_size * col_block_size;
- }
-
- // Fill the sparsity pattern of the underlying matrix.
- for (const auto& block_pair : block_pairs) {
- const int row_block_id = block_pair.first;
- const int col_block_id = block_pair.second;
- const int row_block_size = blocks_[row_block_id];
- const int col_block_size = blocks_[col_block_id];
- int pos =
- layout_[IntPairToLong(row_block_id, col_block_id)]->values - values;
- for (int r = 0; r < row_block_size; ++r) {
- for (int c = 0; c < col_block_size; ++c, ++pos) {
- rows[pos] = block_positions_[row_block_id] + r;
- cols[pos] = block_positions_[col_block_id] + c;
- values[pos] = 1.0;
- DCHECK_LT(rows[pos], tsm_->num_rows());
- DCHECK_LT(cols[pos], tsm_->num_rows());
- }
+ // Pairs of block indices are sorted lexicographically; thus, pairs
+ // corresponding to a single row-block form a contiguous segment of index
+ // pairs with a constant row-block index and increasing column-block index.
+ // The CompressedRowBlockStructure is created by traversing the block_pairs
+ // set.
+ for (int row_block_id = 0; row_block_id < num_blocks; ++row_block_id) {
+ auto& row = block_structure_->rows[row_block_id];
+ row.block = blocks[row_block_id];
+ row.cells.reserve(num_cells_at_row[row_block_id]);
+ const int row_block_size = blocks[row_block_id].size;
+ // Process all index pairs corresponding to the current row block. Because
+ // index pairs are sorted lexicographically, cells are appended to the
+ // current row-block until the first change in the row-block index.
+ for (; p != block_pairs.end() && row_block_id == p->first; ++p) {
+ const int col_block_id = p->second;
+ row.cells.emplace_back(col_block_id, num_nonzeros);
+ num_nonzeros += row_block_size * blocks[col_block_id].size;
}
}
-}
-
-// Assume that the user does not hold any locks on any cell blocks
-// when they are calling SetZero.
-BlockRandomAccessSparseMatrix::~BlockRandomAccessSparseMatrix() {
- for (const auto& entry : layout_) {
- delete entry.second;
+ bsm_ = std::make_unique<BlockSparseMatrix>(block_structure_);
+ VLOG(1) << "Matrix Size [" << num_cols << "," << num_cols << "] "
+ << num_nonzeros;
+ double* values = bsm_->mutable_values();
+ for (int row_block_id = 0; row_block_id < num_blocks; ++row_block_id) {
+ const auto& cells = block_structure_->rows[row_block_id].cells;
+ for (auto& c : cells) {
+ const int col_block_id = c.block_id;
+ double* const data = values + c.position;
+ layout_.emplace(IntPairToInt64(row_block_id, col_block_id), data);
+ }
}
}
@@ -126,53 +102,57 @@
int* col,
int* row_stride,
int* col_stride) {
- const LayoutType::iterator it =
- layout_.find(IntPairToLong(row_block_id, col_block_id));
+ const auto it = layout_.find(IntPairToInt64(row_block_id, col_block_id));
if (it == layout_.end()) {
- return NULL;
+ return nullptr;
}
// Each cell is stored contiguously as its own little dense matrix.
*row = 0;
*col = 0;
- *row_stride = blocks_[row_block_id];
- *col_stride = blocks_[col_block_id];
- return it->second;
+ *row_stride = blocks_[row_block_id].size;
+ *col_stride = blocks_[col_block_id].size;
+ return &it->second;
}
// Assume that the user does not hold any locks on any cell blocks
// when they are calling SetZero.
void BlockRandomAccessSparseMatrix::SetZero() {
- if (tsm_->num_nonzeros()) {
- VectorRef(tsm_->mutable_values(), tsm_->num_nonzeros()).setZero();
- }
+ bsm_->SetZero(context_, num_threads_);
}
-void BlockRandomAccessSparseMatrix::SymmetricRightMultiply(const double* x,
- double* y) const {
- for (const auto& cell_position_and_data : cell_values_) {
- const int row = cell_position_and_data.first.first;
- const int row_block_size = blocks_[row];
- const int row_block_pos = block_positions_[row];
+void BlockRandomAccessSparseMatrix::SymmetricRightMultiplyAndAccumulate(
+ const double* x, double* y) const {
+ const auto bs = bsm_->block_structure();
+ const auto values = bsm_->values();
+ const int num_blocks = blocks_.size();
- const int col = cell_position_and_data.first.second;
- const int col_block_size = blocks_[col];
- const int col_block_pos = block_positions_[col];
+ for (int row_block_id = 0; row_block_id < num_blocks; ++row_block_id) {
+ const auto& row_block = bs->rows[row_block_id];
+ const int row_block_size = row_block.block.size;
+ const int row_block_pos = row_block.block.position;
- MatrixVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
- cell_position_and_data.second,
- row_block_size,
- col_block_size,
- x + col_block_pos,
- y + row_block_pos);
+ for (auto& c : row_block.cells) {
+ const int col_block_id = c.block_id;
+ const int col_block_size = blocks_[col_block_id].size;
+ const int col_block_pos = blocks_[col_block_id].position;
- // Since the matrix is symmetric, but only the upper triangular
- // part is stored, if the block being accessed is not a diagonal
- // block, then use the same block to do the corresponding lower
- // triangular multiply also.
- if (row != col) {
+ MatrixVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
+ values + c.position,
+ row_block_size,
+ col_block_size,
+ x + col_block_pos,
+ y + row_block_pos);
+ if (col_block_id == row_block_id) {
+ continue;
+ }
+
+ // Since the matrix is symmetric, but only the upper triangular
+ // part is stored, if the block being accessed is not a diagonal
+ // block, then use the same block to do the corresponding lower
+ // triangular multiply also
MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
- cell_position_and_data.second,
+ values + c.position,
row_block_size,
col_block_size,
x + row_block_pos,
@@ -181,5 +161,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
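
SymmetricRightMultiplyAndAccumulate() above exploits the fact that only the
upper-triangular cells are stored: each stored off-diagonal block contributes
to the product once directly and once transposed. The same idea at scalar
level, as a plain Eigen sketch (illustrative only, not the actual
implementation):

#include "Eigen/Dense"

// y += S * x, where S is symmetric and only its upper triangle is valid.
void SymmetricUpperMultiplyAndAccumulate(const Eigen::MatrixXd& upper,
                                         const Eigen::VectorXd& x,
                                         Eigen::VectorXd& y) {
  for (int r = 0; r < upper.rows(); ++r) {
    for (int c = r; c < upper.cols(); ++c) {
      y(r) += upper(r, c) * x(c);
      if (c != r) {
        // Mirror the stored upper-triangular entry into the lower triangle.
        y(c) += upper(r, c) * x(r);
      }
    }
  }
}
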
diff --git a/internal/ceres/block_random_access_sparse_matrix.h b/internal/ceres/block_random_access_sparse_matrix.h
index 0e58bbb..5121832 100644
--- a/internal/ceres/block_random_access_sparse_matrix.h
+++ b/internal/ceres/block_random_access_sparse_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,33 +39,35 @@
#include <vector>
#include "ceres/block_random_access_matrix.h"
-#include "ceres/internal/port.h"
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/block_structure.h"
+#include "ceres/context_impl.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/small_blas.h"
-#include "ceres/triplet_sparse_matrix.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// A thread safe square block sparse implementation of
-// BlockRandomAccessMatrix. Internally a TripletSparseMatrix is used
+// BlockRandomAccessMatrix. Internally a BlockSparseMatrix is used
// for doing the actual storage. This class augments this matrix with
// an unordered_map that allows random read/write access.
-class CERES_EXPORT_INTERNAL BlockRandomAccessSparseMatrix
+class CERES_NO_EXPORT BlockRandomAccessSparseMatrix
: public BlockRandomAccessMatrix {
public:
// blocks is an array of block sizes. block_pairs is a set of
// <row_block_id, col_block_id> pairs to identify the non-zero cells
// of this matrix.
BlockRandomAccessSparseMatrix(
- const std::vector<int>& blocks,
- const std::set<std::pair<int, int>>& block_pairs);
- BlockRandomAccessSparseMatrix(const BlockRandomAccessSparseMatrix&) = delete;
- void operator=(const BlockRandomAccessSparseMatrix&) = delete;
+ const std::vector<Block>& blocks,
+ const std::set<std::pair<int, int>>& block_pairs,
+ ContextImpl* context,
+ int num_threads);
// The destructor is not thread safe. It assumes that no one is
// modifying any cells when the matrix is being destroyed.
- virtual ~BlockRandomAccessSparseMatrix();
+ ~BlockRandomAccessSparseMatrix() override = default;
// BlockRandomAccessMatrix Interface.
CellInfo* GetCell(int row_block_id,
@@ -79,52 +81,50 @@
// locked.
void SetZero() final;
- // Assume that the matrix is symmetric and only one half of the
- // matrix is stored.
+ // Assume that the matrix is symmetric and only one half of the matrix is
+ // stored.
//
// y += S * x
- void SymmetricRightMultiply(const double* x, double* y) const;
+ void SymmetricRightMultiplyAndAccumulate(const double* x, double* y) const;
// Since the matrix is square, num_rows() == num_cols().
- int num_rows() const final { return tsm_->num_rows(); }
- int num_cols() const final { return tsm_->num_cols(); }
+ int num_rows() const final { return bsm_->num_rows(); }
+ int num_cols() const final { return bsm_->num_cols(); }
// Access to the underlying matrix object.
- const TripletSparseMatrix* matrix() const { return tsm_.get(); }
- TripletSparseMatrix* mutable_matrix() { return tsm_.get(); }
+ const BlockSparseMatrix* matrix() const { return bsm_.get(); }
+ BlockSparseMatrix* mutable_matrix() { return bsm_.get(); }
private:
- int64_t IntPairToLong(int row, int col) const {
- return row * kMaxRowBlocks + col;
+ int64_t IntPairToInt64(int row, int col) const {
+ return row * kRowShift + col;
}
- void LongToIntPair(int64_t index, int* row, int* col) const {
- *row = index / kMaxRowBlocks;
- *col = index % kMaxRowBlocks;
+ void Int64ToIntPair(int64_t index, int* row, int* col) const {
+ *row = index / kRowShift;
+ *col = index % kRowShift;
}
- const int64_t kMaxRowBlocks;
+ constexpr static int64_t kRowShift{1ll << 32};
// row/column block sizes.
- const std::vector<int> blocks_;
- std::vector<int> block_positions_;
+ const std::vector<Block> blocks_;
+ ContextImpl* context_ = nullptr;
+ const int num_threads_ = 1;
// A mapping from <row_block_id, col_block_id> to the position in
- // the values array of tsm_ where the block is stored.
- typedef std::unordered_map<long int, CellInfo*> LayoutType;
+ // the values array of bsm_ where the block is stored.
+ using LayoutType = std::unordered_map<std::int64_t, CellInfo>;
LayoutType layout_;
- // In order traversal of contents of the matrix. This allows us to
- // implement a matrix-vector which is 20% faster than using the
- // iterator in the Layout object instead.
- std::vector<std::pair<std::pair<int, int>, double*>> cell_values_;
// The underlying matrix object which actually stores the cells.
- std::unique_ptr<TripletSparseMatrix> tsm_;
+ std::unique_ptr<BlockSparseMatrix> bsm_;
friend class BlockRandomAccessSparseMatrixTest;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_BLOCK_RANDOM_ACCESS_SPARSE_MATRIX_H_
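
The layout_ map above keys cells by packing the two 32-bit block ids into a
single 64-bit integer: the row block id goes into the upper 32 bits (via
kRowShift = 2^32) and the column block id into the lower 32 bits. A standalone
round-trip sketch, with function names that are illustrative stand-ins for the
private IntPairToInt64 / Int64ToIntPair members:

#include <cstdint>

constexpr std::int64_t kRowShift = std::int64_t{1} << 32;

std::int64_t PackRowCol(int row, int col) {
  return static_cast<std::int64_t>(row) * kRowShift + col;
}

void UnpackRowCol(std::int64_t key, int* row, int* col) {
  *row = static_cast<int>(key / kRowShift);
  *col = static_cast<int>(key % kRowShift);
}
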
diff --git a/internal/ceres/block_random_access_sparse_matrix_test.cc b/internal/ceres/block_random_access_sparse_matrix_test.cc
index 557b678..0bb39f1 100644
--- a/internal/ceres/block_random_access_sparse_matrix_test.cc
+++ b/internal/ceres/block_random_access_sparse_matrix_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,42 +32,40 @@
#include <limits>
#include <memory>
+#include <set>
+#include <utility>
#include <vector>
#include "ceres/internal/eigen.h"
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::make_pair;
-using std::pair;
-using std::set;
-using std::vector;
+namespace ceres::internal {
TEST(BlockRandomAccessSparseMatrix, GetCell) {
- vector<int> blocks;
- blocks.push_back(3);
- blocks.push_back(4);
- blocks.push_back(5);
- const int num_rows = 3 + 4 + 5;
+ ContextImpl context;
+ constexpr int num_threads = 1;
+ std::vector<Block> blocks;
+ blocks.emplace_back(3, 0);
+ blocks.emplace_back(4, 3);
+ blocks.emplace_back(5, 7);
+ constexpr int num_rows = 3 + 4 + 5;
- set<pair<int, int>> block_pairs;
+ std::set<std::pair<int, int>> block_pairs;
int num_nonzeros = 0;
- block_pairs.insert(make_pair(0, 0));
- num_nonzeros += blocks[0] * blocks[0];
+ block_pairs.emplace(0, 0);
+ num_nonzeros += blocks[0].size * blocks[0].size;
- block_pairs.insert(make_pair(1, 1));
- num_nonzeros += blocks[1] * blocks[1];
+ block_pairs.emplace(1, 1);
+ num_nonzeros += blocks[1].size * blocks[1].size;
- block_pairs.insert(make_pair(1, 2));
- num_nonzeros += blocks[1] * blocks[2];
+ block_pairs.emplace(1, 2);
+ num_nonzeros += blocks[1].size * blocks[2].size;
- block_pairs.insert(make_pair(0, 2));
- num_nonzeros += blocks[2] * blocks[0];
+ block_pairs.emplace(0, 2);
+ num_nonzeros += blocks[2].size * blocks[0].size;
- BlockRandomAccessSparseMatrix m(blocks, block_pairs);
+ BlockRandomAccessSparseMatrix m(blocks, block_pairs, &context, num_threads);
EXPECT_EQ(m.num_rows(), num_rows);
EXPECT_EQ(m.num_cols(), num_rows);
@@ -80,25 +78,24 @@
int col_stride;
CellInfo* cell = m.GetCell(
row_block_id, col_block_id, &row, &col, &row_stride, &col_stride);
- EXPECT_TRUE(cell != NULL);
+ EXPECT_TRUE(cell != nullptr);
EXPECT_EQ(row, 0);
EXPECT_EQ(col, 0);
- EXPECT_EQ(row_stride, blocks[row_block_id]);
- EXPECT_EQ(col_stride, blocks[col_block_id]);
+ EXPECT_EQ(row_stride, blocks[row_block_id].size);
+ EXPECT_EQ(col_stride, blocks[col_block_id].size);
// Write into the block
MatrixRef(cell->values, row_stride, col_stride)
- .block(row, col, blocks[row_block_id], blocks[col_block_id]) =
+ .block(row, col, blocks[row_block_id].size, blocks[col_block_id].size) =
(row_block_id + 1) * (col_block_id + 1) *
- Matrix::Ones(blocks[row_block_id], blocks[col_block_id]);
+ Matrix::Ones(blocks[row_block_id].size, blocks[col_block_id].size);
}
- const TripletSparseMatrix* tsm = m.matrix();
- EXPECT_EQ(tsm->num_nonzeros(), num_nonzeros);
- EXPECT_EQ(tsm->max_num_nonzeros(), num_nonzeros);
+ const BlockSparseMatrix* bsm = m.matrix();
+ EXPECT_EQ(bsm->num_nonzeros(), num_nonzeros);
Matrix dense;
- tsm->ToDenseMatrix(&dense);
+ bsm->ToDenseMatrix(&dense);
double kTolerance = 1e-14;
@@ -127,39 +124,40 @@
Vector expected_y = Vector::Zero(dense.rows());
expected_y += dense.selfadjointView<Eigen::Upper>() * x;
- m.SymmetricRightMultiply(x.data(), actual_y.data());
+ m.SymmetricRightMultiplyAndAccumulate(x.data(), actual_y.data());
EXPECT_NEAR((expected_y - actual_y).norm(), 0.0, kTolerance)
<< "actual: " << actual_y.transpose() << "\n"
<< "expected: " << expected_y.transpose() << "matrix: \n " << dense;
}
-// IntPairToLong is private, thus this fixture is needed to access and
+// IntPairToInt64 is private, thus this fixture is needed to access and
// test it.
class BlockRandomAccessSparseMatrixTest : public ::testing::Test {
public:
void SetUp() final {
- vector<int> blocks;
- blocks.push_back(1);
- set<pair<int, int>> block_pairs;
- block_pairs.insert(make_pair(0, 0));
- m_.reset(new BlockRandomAccessSparseMatrix(blocks, block_pairs));
+ std::vector<Block> blocks;
+ blocks.emplace_back(1, 0);
+ std::set<std::pair<int, int>> block_pairs;
+ block_pairs.emplace(0, 0);
+ m_ = std::make_unique<BlockRandomAccessSparseMatrix>(
+ blocks, block_pairs, &context_, 1);
}
- void CheckIntPairToLong(int a, int b) {
- int64_t value = m_->IntPairToLong(a, b);
+ void CheckIntPairToInt64(int a, int b) {
+ int64_t value = m_->IntPairToInt64(a, b);
EXPECT_GT(value, 0) << "Overflow a = " << a << " b = " << b;
EXPECT_GT(value, a) << "Overflow a = " << a << " b = " << b;
EXPECT_GT(value, b) << "Overflow a = " << a << " b = " << b;
}
- void CheckLongToIntPair() {
- uint64_t max_rows = m_->kMaxRowBlocks;
+ void CheckInt64ToIntPair() {
+ uint64_t max_rows = m_->kRowShift;
for (int row = max_rows - 10; row < max_rows; ++row) {
for (int col = 0; col < 10; ++col) {
int row_computed;
int col_computed;
- m_->LongToIntPair(
- m_->IntPairToLong(row, col), &row_computed, &col_computed);
+ m_->Int64ToIntPair(
+ m_->IntPairToInt64(row, col), &row_computed, &col_computed);
EXPECT_EQ(row, row_computed);
EXPECT_EQ(col, col_computed);
}
@@ -167,17 +165,17 @@
}
private:
+ ContextImpl context_;
std::unique_ptr<BlockRandomAccessSparseMatrix> m_;
};
-TEST_F(BlockRandomAccessSparseMatrixTest, IntPairToLongOverflow) {
- CheckIntPairToLong(std::numeric_limits<int>::max(),
- std::numeric_limits<int>::max());
+TEST_F(BlockRandomAccessSparseMatrixTest, IntPairToInt64Overflow) {
+ CheckIntPairToInt64(std::numeric_limits<int32_t>::max(),
+ std::numeric_limits<int32_t>::max());
}
-TEST_F(BlockRandomAccessSparseMatrixTest, LongToIntPair) {
- CheckLongToIntPair();
+TEST_F(BlockRandomAccessSparseMatrixTest, Int64ToIntPair) {
+ CheckInt64ToIntPair();
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/block_sparse_matrix.cc b/internal/ceres/block_sparse_matrix.cc
index 5efd2e1..2efee39 100644
--- a/internal/ceres/block_sparse_matrix.cc
+++ b/internal/ceres/block_sparse_matrix.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,33 +32,159 @@
#include <algorithm>
#include <cstddef>
+#include <memory>
+#include <numeric>
+#include <random>
#include <vector>
#include "ceres/block_structure.h"
+#include "ceres/crs_matrix.h"
#include "ceres/internal/eigen.h"
-#include "ceres/random.h"
+#include "ceres/parallel_for.h"
+#include "ceres/parallel_vector_ops.h"
#include "ceres/small_blas.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+#ifndef CERES_NO_CUDA
+#include "cuda_runtime.h"
+#endif
-using std::vector;
+namespace ceres::internal {
-BlockSparseMatrix::~BlockSparseMatrix() {}
+namespace {
+void ComputeCumulativeNumberOfNonZeros(std::vector<CompressedList>& rows) {
+ if (rows.empty()) {
+ return;
+ }
+ rows[0].cumulative_nnz = rows[0].nnz;
+ for (int c = 1; c < rows.size(); ++c) {
+ const int curr_nnz = rows[c].nnz;
+ rows[c].cumulative_nnz = curr_nnz + rows[c - 1].cumulative_nnz;
+ }
+}
+
+template <bool transpose>
+std::unique_ptr<CompressedRowSparseMatrix>
+CreateStructureOfCompressedRowSparseMatrix(
+ int num_rows,
+ int num_cols,
+ int num_nonzeros,
+ const CompressedRowBlockStructure* block_structure) {
+ auto crs_matrix = std::make_unique<CompressedRowSparseMatrix>(
+ num_rows, num_cols, num_nonzeros);
+ auto crs_cols = crs_matrix->mutable_cols();
+ auto crs_rows = crs_matrix->mutable_rows();
+ int value_offset = 0;
+ const int num_row_blocks = block_structure->rows.size();
+ const auto& cols = block_structure->cols;
+ *crs_rows++ = 0;
+ for (int row_block_id = 0; row_block_id < num_row_blocks; ++row_block_id) {
+ const auto& row_block = block_structure->rows[row_block_id];
+ // Empty row block: only requires setting row offsets
+ if (row_block.cells.empty()) {
+ std::fill(crs_rows, crs_rows + row_block.block.size, value_offset);
+ crs_rows += row_block.block.size;
+ continue;
+ }
+
+ int row_nnz = 0;
+ if constexpr (transpose) {
+ // The transposed block structure comes with the nnz of each row-block
+ // filled in.
+ row_nnz = row_block.nnz / row_block.block.size;
+ } else {
+ // The nnz field of the non-transposed block structure is not filled in,
+ // and it can have a non-sequential structure (consider the case of the
+ // Jacobian for the Schur-complement solver: E and F blocks are stored
+ // separately).
+ for (auto& c : row_block.cells) {
+ row_nnz += cols[c.block_id].size;
+ }
+ }
+
+ // Row-wise setup of matrix structure
+ for (int row = 0; row < row_block.block.size; ++row) {
+ value_offset += row_nnz;
+ *crs_rows++ = value_offset;
+ for (auto& c : row_block.cells) {
+ const int col_block_size = cols[c.block_id].size;
+ const int col_position = cols[c.block_id].position;
+ std::iota(crs_cols, crs_cols + col_block_size, col_position);
+ crs_cols += col_block_size;
+ }
+ }
+ }
+ return crs_matrix;
+}
+
+template <bool transpose>
+void UpdateCompressedRowSparseMatrixImpl(
+ CompressedRowSparseMatrix* crs_matrix,
+ const double* values,
+ const CompressedRowBlockStructure* block_structure) {
+ auto crs_values = crs_matrix->mutable_values();
+ auto crs_rows = crs_matrix->mutable_rows();
+ const int num_row_blocks = block_structure->rows.size();
+ const auto& cols = block_structure->cols;
+ for (int row_block_id = 0; row_block_id < num_row_blocks; ++row_block_id) {
+ const auto& row_block = block_structure->rows[row_block_id];
+ const int row_block_size = row_block.block.size;
+ const int row_nnz = crs_rows[1] - crs_rows[0];
+ crs_rows += row_block_size;
+
+ if (row_nnz == 0) {
+ continue;
+ }
+
+ MatrixRef crs_row_block(crs_values, row_block_size, row_nnz);
+ int col_offset = 0;
+ for (auto& c : row_block.cells) {
+ const int col_block_size = cols[c.block_id].size;
+ auto crs_cell =
+ crs_row_block.block(0, col_offset, row_block_size, col_block_size);
+ if constexpr (transpose) {
+ // The transposed matrix is filled using the transposed block structure.
+ ConstMatrixRef cell(
+ values + c.position, col_block_size, row_block_size);
+ crs_cell = cell.transpose();
+ } else {
+ ConstMatrixRef cell(
+ values + c.position, row_block_size, col_block_size);
+ crs_cell = cell;
+ }
+ col_offset += col_block_size;
+ }
+ crs_values += row_nnz * row_block_size;
+ }
+}
+
+void SetBlockStructureOfCompressedRowSparseMatrix(
+ CompressedRowSparseMatrix* crs_matrix,
+ CompressedRowBlockStructure* block_structure) {
+ const int num_row_blocks = block_structure->rows.size();
+ auto& row_blocks = *crs_matrix->mutable_row_blocks();
+ row_blocks.resize(num_row_blocks);
+ for (int i = 0; i < num_row_blocks; ++i) {
+ row_blocks[i] = block_structure->rows[i].block;
+ }
+
+ auto& col_blocks = *crs_matrix->mutable_col_blocks();
+ col_blocks = block_structure->cols;
+}
+
+} // namespace
BlockSparseMatrix::BlockSparseMatrix(
- CompressedRowBlockStructure* block_structure)
- : num_rows_(0),
+ CompressedRowBlockStructure* block_structure, bool use_page_locked_memory)
+ : use_page_locked_memory_(use_page_locked_memory),
+ num_rows_(0),
num_cols_(0),
num_nonzeros_(0),
block_structure_(block_structure) {
CHECK(block_structure_ != nullptr);
// Count the number of columns in the matrix.
- for (int i = 0; i < block_structure_->cols.size(); ++i) {
- num_cols_ += block_structure_->cols[i].size;
+ for (auto& col : block_structure_->cols) {
+ num_cols_ += col.size;
}
// Count the number of non-zero entries and the number of rows in
@@ -67,9 +193,9 @@
int row_block_size = block_structure_->rows[i].block.size;
num_rows_ += row_block_size;
- const vector<Cell>& cells = block_structure_->rows[i].cells;
- for (int j = 0; j < cells.size(); ++j) {
- int col_block_id = cells[j].block_id;
+ const std::vector<Cell>& cells = block_structure_->rows[i].cells;
+ for (const auto& cell : cells) {
+ int col_block_id = cell.block_id;
int col_block_size = block_structure_->cols[col_block_id].size;
num_nonzeros_ += col_block_size * row_block_size;
}
@@ -80,51 +206,138 @@
CHECK_GE(num_nonzeros_, 0);
VLOG(2) << "Allocating values array with " << num_nonzeros_ * sizeof(double)
<< " bytes."; // NOLINT
- values_.reset(new double[num_nonzeros_]);
+
+ values_ = AllocateValues(num_nonzeros_);
max_num_nonzeros_ = num_nonzeros_;
CHECK(values_ != nullptr);
+ AddTransposeBlockStructure();
}
-void BlockSparseMatrix::SetZero() {
- std::fill(values_.get(), values_.get() + num_nonzeros_, 0.0);
-}
+BlockSparseMatrix::~BlockSparseMatrix() { FreeValues(values_); }
-void BlockSparseMatrix::RightMultiply(const double* x, double* y) const {
- CHECK(x != nullptr);
- CHECK(y != nullptr);
-
- for (int i = 0; i < block_structure_->rows.size(); ++i) {
- int row_block_pos = block_structure_->rows[i].block.position;
- int row_block_size = block_structure_->rows[i].block.size;
- const vector<Cell>& cells = block_structure_->rows[i].cells;
- for (int j = 0; j < cells.size(); ++j) {
- int col_block_id = cells[j].block_id;
- int col_block_size = block_structure_->cols[col_block_id].size;
- int col_block_pos = block_structure_->cols[col_block_id].position;
- MatrixVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
- values_.get() + cells[j].position,
- row_block_size,
- col_block_size,
- x + col_block_pos,
- y + row_block_pos);
- }
+void BlockSparseMatrix::AddTransposeBlockStructure() {
+ if (transpose_block_structure_ == nullptr) {
+ transpose_block_structure_ = CreateTranspose(*block_structure_);
}
}
-void BlockSparseMatrix::LeftMultiply(const double* x, double* y) const {
+void BlockSparseMatrix::SetZero() {
+ std::fill(values_, values_ + num_nonzeros_, 0.0);
+}
+
+void BlockSparseMatrix::SetZero(ContextImpl* context, int num_threads) {
+ ParallelSetZero(context, num_threads, values_, num_nonzeros_);
+}
+
+void BlockSparseMatrix::RightMultiplyAndAccumulate(const double* x,
+ double* y) const {
+ RightMultiplyAndAccumulate(x, y, nullptr, 1);
+}
+
+void BlockSparseMatrix::RightMultiplyAndAccumulate(const double* x,
+ double* y,
+ ContextImpl* context,
+ int num_threads) const {
CHECK(x != nullptr);
CHECK(y != nullptr);
+ const auto values = values_;
+ const auto block_structure = block_structure_.get();
+ const auto num_row_blocks = block_structure->rows.size();
+
+ ParallelFor(context,
+ 0,
+ num_row_blocks,
+ num_threads,
+ [values, block_structure, x, y](int row_block_id) {
+ const int row_block_pos =
+ block_structure->rows[row_block_id].block.position;
+ const int row_block_size =
+ block_structure->rows[row_block_id].block.size;
+ const auto& cells = block_structure->rows[row_block_id].cells;
+ for (const auto& cell : cells) {
+ const int col_block_id = cell.block_id;
+ const int col_block_size =
+ block_structure->cols[col_block_id].size;
+ const int col_block_pos =
+ block_structure->cols[col_block_id].position;
+ MatrixVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
+ values + cell.position,
+ row_block_size,
+ col_block_size,
+ x + col_block_pos,
+ y + row_block_pos);
+ }
+ });
+}
+
+// TODO(https://github.com/ceres-solver/ceres-solver/issues/933): This method
+// might benefit from caching column-block partition
+void BlockSparseMatrix::LeftMultiplyAndAccumulate(const double* x,
+ double* y,
+ ContextImpl* context,
+ int num_threads) const {
+ // While using the transposed structure makes parallel left-multiplication
+ // by a dense vector possible, it scatters the access pattern to matrix
+ // elements. Thus, multiplication using the transposed structure is only
+ // useful for parallel execution.
+ CHECK(x != nullptr);
+ CHECK(y != nullptr);
+ if (transpose_block_structure_ == nullptr || num_threads == 1) {
+ LeftMultiplyAndAccumulate(x, y);
+ return;
+ }
+
+ auto transpose_bs = transpose_block_structure_.get();
+ const auto values = values_;
+ const int num_col_blocks = transpose_bs->rows.size();
+ if (!num_col_blocks) {
+ return;
+ }
+
+ // Use non-zero count as iteration cost for guided parallel-for loop
+ ParallelFor(
+ context,
+ 0,
+ num_col_blocks,
+ num_threads,
+ [values, transpose_bs, x, y](int row_block_id) {
+ int row_block_pos = transpose_bs->rows[row_block_id].block.position;
+ int row_block_size = transpose_bs->rows[row_block_id].block.size;
+ auto& cells = transpose_bs->rows[row_block_id].cells;
+
+ for (auto& cell : cells) {
+ const int col_block_id = cell.block_id;
+ const int col_block_size = transpose_bs->cols[col_block_id].size;
+ const int col_block_pos = transpose_bs->cols[col_block_id].position;
+ MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
+ values + cell.position,
+ col_block_size,
+ row_block_size,
+ x + col_block_pos,
+ y + row_block_pos);
+ }
+ },
+ transpose_bs->rows.data(),
+ [](const CompressedRow& row) { return row.cumulative_nnz; });
+}
+
+void BlockSparseMatrix::LeftMultiplyAndAccumulate(const double* x,
+ double* y) const {
+ CHECK(x != nullptr);
+ CHECK(y != nullptr);
+ // Single-threaded left products are always computed using the
+ // non-transposed block structure, because it has a linear access pattern to
+ // matrix elements.
for (int i = 0; i < block_structure_->rows.size(); ++i) {
int row_block_pos = block_structure_->rows[i].block.position;
int row_block_size = block_structure_->rows[i].block.size;
- const vector<Cell>& cells = block_structure_->rows[i].cells;
- for (int j = 0; j < cells.size(); ++j) {
- int col_block_id = cells[j].block_id;
+ const auto& cells = block_structure_->rows[i].cells;
+ for (const auto& cell : cells) {
+ int col_block_id = cell.block_id;
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
- values_.get() + cells[j].position,
+ values_ + cell.position,
row_block_size,
col_block_size,
x + row_block_pos,
@@ -138,35 +351,144 @@
VectorRef(x, num_cols_).setZero();
for (int i = 0; i < block_structure_->rows.size(); ++i) {
int row_block_size = block_structure_->rows[i].block.size;
- const vector<Cell>& cells = block_structure_->rows[i].cells;
- for (int j = 0; j < cells.size(); ++j) {
- int col_block_id = cells[j].block_id;
+ auto& cells = block_structure_->rows[i].cells;
+ for (const auto& cell : cells) {
+ int col_block_id = cell.block_id;
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
const MatrixRef m(
- values_.get() + cells[j].position, row_block_size, col_block_size);
+ values_ + cell.position, row_block_size, col_block_size);
VectorRef(x + col_block_pos, col_block_size) += m.colwise().squaredNorm();
}
}
}
+// TODO(https://github.com/ceres-solver/ceres-solver/issues/933): This method
+// might benefit from caching the column-block partition
+void BlockSparseMatrix::SquaredColumnNorm(double* x,
+ ContextImpl* context,
+ int num_threads) const {
+ if (transpose_block_structure_ == nullptr || num_threads == 1) {
+ SquaredColumnNorm(x);
+ return;
+ }
+
+ CHECK(x != nullptr);
+ ParallelSetZero(context, num_threads, x, num_cols_);
+
+ auto transpose_bs = transpose_block_structure_.get();
+ const auto values = values_;
+ const int num_col_blocks = transpose_bs->rows.size();
+ ParallelFor(
+ context,
+ 0,
+ num_col_blocks,
+ num_threads,
+ [values, transpose_bs, x](int row_block_id) {
+ const auto& row = transpose_bs->rows[row_block_id];
+
+ for (auto& cell : row.cells) {
+ const auto& col = transpose_bs->cols[cell.block_id];
+ const MatrixRef m(values + cell.position, col.size, row.block.size);
+ VectorRef(x + row.block.position, row.block.size) +=
+ m.colwise().squaredNorm();
+ }
+ },
+ transpose_bs->rows.data(),
+ [](const CompressedRow& row) { return row.cumulative_nnz; });
+}
+
void BlockSparseMatrix::ScaleColumns(const double* scale) {
CHECK(scale != nullptr);
for (int i = 0; i < block_structure_->rows.size(); ++i) {
int row_block_size = block_structure_->rows[i].block.size;
- const vector<Cell>& cells = block_structure_->rows[i].cells;
- for (int j = 0; j < cells.size(); ++j) {
- int col_block_id = cells[j].block_id;
+ auto& cells = block_structure_->rows[i].cells;
+ for (const auto& cell : cells) {
+ int col_block_id = cell.block_id;
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
- MatrixRef m(
- values_.get() + cells[j].position, row_block_size, col_block_size);
+ MatrixRef m(values_ + cell.position, row_block_size, col_block_size);
m *= ConstVectorRef(scale + col_block_pos, col_block_size).asDiagonal();
}
}
}
+// TODO(https://github.com/ceres-solver/ceres-solver/issues/933): This method
+// might benefit from caching the column-block partition
+void BlockSparseMatrix::ScaleColumns(const double* scale,
+ ContextImpl* context,
+ int num_threads) {
+ if (transpose_block_structure_ == nullptr || num_threads == 1) {
+ ScaleColumns(scale);
+ return;
+ }
+
+ CHECK(scale != nullptr);
+ auto transpose_bs = transpose_block_structure_.get();
+ auto values = values_;
+ const int num_col_blocks = transpose_bs->rows.size();
+ ParallelFor(
+ context,
+ 0,
+ num_col_blocks,
+ num_threads,
+ [values, transpose_bs, scale](int row_block_id) {
+ const auto& row = transpose_bs->rows[row_block_id];
+
+ for (auto& cell : row.cells) {
+ const auto& col = transpose_bs->cols[cell.block_id];
+ MatrixRef m(values + cell.position, col.size, row.block.size);
+ m *= ConstVectorRef(scale + row.block.position, row.block.size)
+ .asDiagonal();
+ }
+ },
+ transpose_bs->rows.data(),
+ [](const CompressedRow& row) { return row.cumulative_nnz; });
+}
+std::unique_ptr<CompressedRowSparseMatrix>
+BlockSparseMatrix::ToCompressedRowSparseMatrixTranspose() const {
+ auto bs = transpose_block_structure_.get();
+ auto crs_matrix = CreateStructureOfCompressedRowSparseMatrix<true>(
+ num_cols_, num_rows_, num_nonzeros_, bs);
+
+ SetBlockStructureOfCompressedRowSparseMatrix(crs_matrix.get(), bs);
+
+ UpdateCompressedRowSparseMatrixTranspose(crs_matrix.get());
+ return crs_matrix;
+}
+
+std::unique_ptr<CompressedRowSparseMatrix>
+BlockSparseMatrix::ToCompressedRowSparseMatrix() const {
+ auto crs_matrix = CreateStructureOfCompressedRowSparseMatrix<false>(
+ num_rows_, num_cols_, num_nonzeros_, block_structure_.get());
+
+ SetBlockStructureOfCompressedRowSparseMatrix(crs_matrix.get(),
+ block_structure_.get());
+
+ UpdateCompressedRowSparseMatrix(crs_matrix.get());
+ return crs_matrix;
+}
+
+void BlockSparseMatrix::UpdateCompressedRowSparseMatrixTranspose(
+ CompressedRowSparseMatrix* crs_matrix) const {
+ CHECK(crs_matrix != nullptr);
+ CHECK_EQ(crs_matrix->num_rows(), num_cols_);
+ CHECK_EQ(crs_matrix->num_cols(), num_rows_);
+ CHECK_EQ(crs_matrix->num_nonzeros(), num_nonzeros_);
+ UpdateCompressedRowSparseMatrixImpl<true>(
+ crs_matrix, values(), transpose_block_structure_.get());
+}
+void BlockSparseMatrix::UpdateCompressedRowSparseMatrix(
+ CompressedRowSparseMatrix* crs_matrix) const {
+ CHECK(crs_matrix != nullptr);
+ CHECK_EQ(crs_matrix->num_rows(), num_rows_);
+ CHECK_EQ(crs_matrix->num_cols(), num_cols_);
+ CHECK_EQ(crs_matrix->num_nonzeros(), num_nonzeros_);
+ UpdateCompressedRowSparseMatrixImpl<false>(
+ crs_matrix, values(), block_structure_.get());
+}
+
void BlockSparseMatrix::ToDenseMatrix(Matrix* dense_matrix) const {
CHECK(dense_matrix != nullptr);
@@ -177,14 +499,14 @@
for (int i = 0; i < block_structure_->rows.size(); ++i) {
int row_block_pos = block_structure_->rows[i].block.position;
int row_block_size = block_structure_->rows[i].block.size;
- const vector<Cell>& cells = block_structure_->rows[i].cells;
- for (int j = 0; j < cells.size(); ++j) {
- int col_block_id = cells[j].block_id;
+ auto& cells = block_structure_->rows[i].cells;
+ for (const auto& cell : cells) {
+ int col_block_id = cell.block_id;
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
- int jac_pos = cells[j].position;
+ int jac_pos = cell.position;
m.block(row_block_pos, col_block_pos, row_block_size, col_block_size) +=
- MatrixRef(values_.get() + jac_pos, row_block_size, col_block_size);
+ MatrixRef(values_ + jac_pos, row_block_size, col_block_size);
}
}
}
@@ -200,12 +522,12 @@
for (int i = 0; i < block_structure_->rows.size(); ++i) {
int row_block_pos = block_structure_->rows[i].block.position;
int row_block_size = block_structure_->rows[i].block.size;
- const vector<Cell>& cells = block_structure_->rows[i].cells;
- for (int j = 0; j < cells.size(); ++j) {
- int col_block_id = cells[j].block_id;
+ const auto& cells = block_structure_->rows[i].cells;
+ for (const auto& cell : cells) {
+ int col_block_id = cell.block_id;
int col_block_size = block_structure_->cols[col_block_id].size;
int col_block_pos = block_structure_->cols[col_block_id].position;
- int jac_pos = cells[j].position;
+ int jac_pos = cell.position;
for (int r = 0; r < row_block_size; ++r) {
for (int c = 0; c < col_block_size; ++c, ++jac_pos) {
matrix->mutable_rows()[jac_pos] = row_block_pos + r;
@@ -224,17 +546,24 @@
return block_structure_.get();
}
+// Return a pointer to the block structure of the matrix transpose. We continue
+// to hold ownership of the object.
+const CompressedRowBlockStructure*
+BlockSparseMatrix::transpose_block_structure() const {
+ return transpose_block_structure_.get();
+}
+
void BlockSparseMatrix::ToTextFile(FILE* file) const {
CHECK(file != nullptr);
for (int i = 0; i < block_structure_->rows.size(); ++i) {
const int row_block_pos = block_structure_->rows[i].block.position;
const int row_block_size = block_structure_->rows[i].block.size;
- const vector<Cell>& cells = block_structure_->rows[i].cells;
- for (int j = 0; j < cells.size(); ++j) {
- const int col_block_id = cells[j].block_id;
+ const auto& cells = block_structure_->rows[i].cells;
+ for (const auto& cell : cells) {
+ const int col_block_id = cell.block_id;
const int col_block_size = block_structure_->cols[col_block_id].size;
const int col_block_pos = block_structure_->cols[col_block_id].position;
- int jac_pos = cells[j].position;
+ int jac_pos = cell.position;
for (int r = 0; r < row_block_size; ++r) {
for (int c = 0; c < col_block_size; ++c) {
fprintf(file,
@@ -248,10 +577,10 @@
}
}
-BlockSparseMatrix* BlockSparseMatrix::CreateDiagonalMatrix(
+std::unique_ptr<BlockSparseMatrix> BlockSparseMatrix::CreateDiagonalMatrix(
const double* diagonal, const std::vector<Block>& column_blocks) {
// Create the block structure for the diagonal matrix.
- CompressedRowBlockStructure* bs = new CompressedRowBlockStructure();
+ auto* bs = new CompressedRowBlockStructure();
bs->cols = column_blocks;
int position = 0;
bs->rows.resize(column_blocks.size(), CompressedRow(1));
@@ -265,13 +594,13 @@
}
// Create the BlockSparseMatrix with the given block structure.
- BlockSparseMatrix* matrix = new BlockSparseMatrix(bs);
+ auto matrix = std::make_unique<BlockSparseMatrix>(bs);
matrix->SetZero();
// Fill the values array of the block sparse matrix.
double* values = matrix->mutable_values();
- for (int i = 0; i < column_blocks.size(); ++i) {
- const int size = column_blocks[i].size;
+ for (const auto& column_block : column_blocks) {
+ const int size = column_block.size;
for (int j = 0; j < size; ++j) {
      // j * (size + 1) is a compact way of accessing the (j, j) entry: in a
      // row-major size x size block the diagonal entries sit at offsets
      // 0, size + 1, 2 * (size + 1), and so on.
values[j * (size + 1)] = diagonal[j];
@@ -294,33 +623,51 @@
for (int i = 0; i < m_bs->rows.size(); ++i) {
const CompressedRow& m_row = m_bs->rows[i];
- CompressedRow& row = block_structure_->rows[old_num_row_blocks + i];
+ const int row_block_id = old_num_row_blocks + i;
+ CompressedRow& row = block_structure_->rows[row_block_id];
row.block.size = m_row.block.size;
row.block.position = num_rows_;
num_rows_ += m_row.block.size;
row.cells.resize(m_row.cells.size());
+ if (transpose_block_structure_) {
+ transpose_block_structure_->cols.emplace_back(row.block);
+ }
for (int c = 0; c < m_row.cells.size(); ++c) {
const int block_id = m_row.cells[c].block_id;
row.cells[c].block_id = block_id;
row.cells[c].position = num_nonzeros_;
- num_nonzeros_ += m_row.block.size * m_bs->cols[block_id].size;
+
+ const int cell_nnz = m_row.block.size * m_bs->cols[block_id].size;
+ if (transpose_block_structure_) {
+ transpose_block_structure_->rows[block_id].cells.emplace_back(
+ row_block_id, num_nonzeros_);
+ transpose_block_structure_->rows[block_id].nnz += cell_nnz;
+ }
+
+ num_nonzeros_ += cell_nnz;
}
}
if (num_nonzeros_ > max_num_nonzeros_) {
- double* new_values = new double[num_nonzeros_];
- std::copy(values_.get(), values_.get() + old_num_nonzeros, new_values);
- values_.reset(new_values);
+ double* old_values = values_;
+ values_ = AllocateValues(num_nonzeros_);
+ std::copy_n(old_values, old_num_nonzeros, values_);
max_num_nonzeros_ = num_nonzeros_;
+ FreeValues(old_values);
}
- std::copy(m.values(),
- m.values() + m.num_nonzeros(),
- values_.get() + old_num_nonzeros);
+ std::copy(
+ m.values(), m.values() + m.num_nonzeros(), values_ + old_num_nonzeros);
+
+ if (transpose_block_structure_ == nullptr) {
+ return;
+ }
+ ComputeCumulativeNumberOfNonZeros(transpose_block_structure_->rows);
}
void BlockSparseMatrix::DeleteRowBlocks(const int delta_row_blocks) {
const int num_row_blocks = block_structure_->rows.size();
+ const int new_num_row_blocks = num_row_blocks - delta_row_blocks;
int delta_num_nonzeros = 0;
int delta_num_rows = 0;
const std::vector<Block>& column_blocks = block_structure_->cols;
@@ -330,15 +677,40 @@
for (int c = 0; c < row.cells.size(); ++c) {
const Cell& cell = row.cells[c];
delta_num_nonzeros += row.block.size * column_blocks[cell.block_id].size;
+
+ if (transpose_block_structure_) {
+ auto& col_cells = transpose_block_structure_->rows[cell.block_id].cells;
+ while (!col_cells.empty() &&
+ col_cells.back().block_id >= new_num_row_blocks) {
+ const int del_block_id = col_cells.back().block_id;
+ const int del_block_rows =
+ block_structure_->rows[del_block_id].block.size;
+ const int del_block_cols = column_blocks[cell.block_id].size;
+ const int del_cell_nnz = del_block_rows * del_block_cols;
+ transpose_block_structure_->rows[cell.block_id].nnz -= del_cell_nnz;
+ col_cells.pop_back();
+ }
+ }
}
}
num_nonzeros_ -= delta_num_nonzeros;
num_rows_ -= delta_num_rows;
- block_structure_->rows.resize(num_row_blocks - delta_row_blocks);
+ block_structure_->rows.resize(new_num_row_blocks);
+
+ if (transpose_block_structure_ == nullptr) {
+ return;
+ }
+ for (int i = 0; i < delta_row_blocks; ++i) {
+ transpose_block_structure_->cols.pop_back();
+ }
+
+ ComputeCumulativeNumberOfNonZeros(transpose_block_structure_->rows);
}
-BlockSparseMatrix* BlockSparseMatrix::CreateRandomMatrix(
- const BlockSparseMatrix::RandomMatrixOptions& options) {
+std::unique_ptr<BlockSparseMatrix> BlockSparseMatrix::CreateRandomMatrix(
+ const BlockSparseMatrix::RandomMatrixOptions& options,
+ std::mt19937& prng,
+ bool use_page_locked_memory) {
CHECK_GT(options.num_row_blocks, 0);
CHECK_GT(options.min_row_block_size, 0);
CHECK_GT(options.max_row_block_size, 0);
@@ -346,7 +718,11 @@
CHECK_GT(options.block_density, 0.0);
CHECK_LE(options.block_density, 1.0);
- CompressedRowBlockStructure* bs = new CompressedRowBlockStructure();
+ std::uniform_int_distribution<int> col_distribution(
+ options.min_col_block_size, options.max_col_block_size);
+ std::uniform_int_distribution<int> row_distribution(
+ options.min_row_block_size, options.max_row_block_size);
+ auto bs = std::make_unique<CompressedRowBlockStructure>();
if (options.col_blocks.empty()) {
CHECK_GT(options.num_col_blocks, 0);
CHECK_GT(options.min_col_block_size, 0);
@@ -356,11 +732,8 @@
// Generate the col block structure.
int col_block_position = 0;
for (int i = 0; i < options.num_col_blocks; ++i) {
- // Generate a random integer in [min_col_block_size, max_col_block_size]
- const int delta_block_size =
- Uniform(options.max_col_block_size - options.min_col_block_size);
- const int col_block_size = options.min_col_block_size + delta_block_size;
- bs->cols.push_back(Block(col_block_size, col_block_position));
+ const int col_block_size = col_distribution(prng);
+ bs->cols.emplace_back(col_block_size, col_block_position);
col_block_position += col_block_size;
}
} else {
@@ -368,24 +741,23 @@
}
bool matrix_has_blocks = false;
+ std::uniform_real_distribution<double> uniform01(0.0, 1.0);
while (!matrix_has_blocks) {
VLOG(1) << "Clearing";
bs->rows.clear();
int row_block_position = 0;
int value_position = 0;
for (int r = 0; r < options.num_row_blocks; ++r) {
- const int delta_block_size =
- Uniform(options.max_row_block_size - options.min_row_block_size);
- const int row_block_size = options.min_row_block_size + delta_block_size;
- bs->rows.push_back(CompressedRow());
+ const int row_block_size = row_distribution(prng);
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = row_block_size;
row.block.position = row_block_position;
row_block_position += row_block_size;
for (int c = 0; c < bs->cols.size(); ++c) {
- if (RandDouble() > options.block_density) continue;
+ if (uniform01(prng) > options.block_density) continue;
- row.cells.push_back(Cell());
+ row.cells.emplace_back();
Cell& cell = row.cells.back();
cell.block_id = c;
cell.position = value_position;
@@ -395,14 +767,76 @@
}
}
- BlockSparseMatrix* matrix = new BlockSparseMatrix(bs);
+ auto matrix =
+ std::make_unique<BlockSparseMatrix>(bs.release(), use_page_locked_memory);
double* values = matrix->mutable_values();
- for (int i = 0; i < matrix->num_nonzeros(); ++i) {
- values[i] = RandNormal();
- }
+ std::normal_distribution<double> standard_normal_distribution;
+ std::generate_n(
+ values, matrix->num_nonzeros(), [&standard_normal_distribution, &prng] {
+ return standard_normal_distribution(prng);
+ });
return matrix;
}
-} // namespace internal
-} // namespace ceres
+std::unique_ptr<CompressedRowBlockStructure> CreateTranspose(
+ const CompressedRowBlockStructure& bs) {
+ auto transpose = std::make_unique<CompressedRowBlockStructure>();
+
+ transpose->rows.resize(bs.cols.size());
+ for (int i = 0; i < bs.cols.size(); ++i) {
+ transpose->rows[i].block = bs.cols[i];
+ transpose->rows[i].nnz = 0;
+ }
+
+ transpose->cols.resize(bs.rows.size());
+ for (int i = 0; i < bs.rows.size(); ++i) {
+ auto& row = bs.rows[i];
+ transpose->cols[i] = row.block;
+
+ const int nrows = row.block.size;
+ for (auto& cell : row.cells) {
+ transpose->rows[cell.block_id].cells.emplace_back(i, cell.position);
+ const int ncols = transpose->rows[cell.block_id].block.size;
+ transpose->rows[cell.block_id].nnz += nrows * ncols;
+ }
+ }
+ ComputeCumulativeNumberOfNonZeros(transpose->rows);
+ return transpose;
+}
+
+double* BlockSparseMatrix::AllocateValues(int size) {
+ if (!use_page_locked_memory_) {
+ return new double[size];
+ }
+
+#ifndef CERES_NO_CUDA
+
+ double* values = nullptr;
+ CHECK_EQ(cudaSuccess,
+ cudaHostAlloc(&values, sizeof(double) * size, cudaHostAllocDefault));
+ return values;
+#else
+ LOG(FATAL) << "Page locked memory requested when CUDA is not available. "
+ << "This is a Ceres bug; please contact the developers!";
+ return nullptr;
+#endif
+};
+
+void BlockSparseMatrix::FreeValues(double*& values) {
+ if (!use_page_locked_memory_) {
+ delete[] values;
+ values = nullptr;
+ return;
+ }
+
+#ifndef CERES_NO_CUDA
+ CHECK_EQ(cudaSuccess, cudaFreeHost(values));
+ values = nullptr;
+#else
+ LOG(FATAL) << "Page locked memory requested when CUDA is not available. "
+ << "This is a Ceres bug; please contact the developers!";
+#endif
+};
+
+} // namespace ceres::internal
diff --git a/internal/ceres/block_sparse_matrix.h b/internal/ceres/block_sparse_matrix.h
index e5b3634..2e45488 100644
--- a/internal/ceres/block_sparse_matrix.h
+++ b/internal/ceres/block_sparse_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,14 +35,17 @@
#define CERES_INTERNAL_BLOCK_SPARSE_MATRIX_H_
#include <memory>
+#include <random>
#include "ceres/block_structure.h"
+#include "ceres/compressed_row_sparse_matrix.h"
+#include "ceres/context_impl.h"
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/sparse_matrix.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class TripletSparseMatrix;
@@ -54,7 +57,7 @@
//
// internal/ceres/block_structure.h
//
-class CERES_EXPORT_INTERNAL BlockSparseMatrix : public SparseMatrix {
+class CERES_NO_EXPORT BlockSparseMatrix final : public SparseMatrix {
public:
// Construct a block sparse matrix with a fully initialized
  // CompressedRowBlockStructure object. The matrix takes over
@@ -62,33 +65,64 @@
//
// TODO(sameeragarwal): Add a function which will validate legal
// CompressedRowBlockStructure objects.
- explicit BlockSparseMatrix(CompressedRowBlockStructure* block_structure);
+ explicit BlockSparseMatrix(CompressedRowBlockStructure* block_structure,
+ bool use_page_locked_memory = false);
+ ~BlockSparseMatrix();
- BlockSparseMatrix();
BlockSparseMatrix(const BlockSparseMatrix&) = delete;
void operator=(const BlockSparseMatrix&) = delete;
- virtual ~BlockSparseMatrix();
-
// Implementation of SparseMatrix interface.
- void SetZero() final;
- void RightMultiply(const double* x, double* y) const final;
- void LeftMultiply(const double* x, double* y) const final;
+ void SetZero() override final;
+ void SetZero(ContextImpl* context, int num_threads) override final;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final;
+ void RightMultiplyAndAccumulate(const double* x,
+ double* y,
+ ContextImpl* context,
+ int num_threads) const final;
+ void LeftMultiplyAndAccumulate(const double* x, double* y) const final;
+ void LeftMultiplyAndAccumulate(const double* x,
+ double* y,
+ ContextImpl* context,
+ int num_threads) const final;
void SquaredColumnNorm(double* x) const final;
+ void SquaredColumnNorm(double* x,
+ ContextImpl* context,
+ int num_threads) const final;
void ScaleColumns(const double* scale) final;
+ void ScaleColumns(const double* scale,
+ ContextImpl* context,
+ int num_threads) final;
+
+ // Convert to CompressedRowSparseMatrix
+ std::unique_ptr<CompressedRowSparseMatrix> ToCompressedRowSparseMatrix()
+ const;
+ // Create CompressedRowSparseMatrix corresponding to transposed matrix
+ std::unique_ptr<CompressedRowSparseMatrix>
+ ToCompressedRowSparseMatrixTranspose() const;
+ // Copy values to CompressedRowSparseMatrix that has compatible structure
+ void UpdateCompressedRowSparseMatrix(
+ CompressedRowSparseMatrix* crs_matrix) const;
+ // Copy values to CompressedRowSparseMatrix that has structure of transposed
+ // matrix
+ void UpdateCompressedRowSparseMatrixTranspose(
+ CompressedRowSparseMatrix* crs_matrix) const;
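+  //
+  // Illustrative usage sketch (the matrix name `A` is an assumption, not part
+  // of the API): the To* methods allocate a new CRS matrix, while the Update*
+  // methods only copy values into an existing, structurally compatible one.
+  //
+  //   std::unique_ptr<CompressedRowSparseMatrix> crs =
+  //       A.ToCompressedRowSparseMatrix();
+  //   // ... the values of A change, its structure stays fixed ...
+  //   A.UpdateCompressedRowSparseMatrix(crs.get());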
void ToDenseMatrix(Matrix* dense_matrix) const final;
void ToTextFile(FILE* file) const final;
+ void AddTransposeBlockStructure();
+
// clang-format off
int num_rows() const final { return num_rows_; }
int num_cols() const final { return num_cols_; }
int num_nonzeros() const final { return num_nonzeros_; }
- const double* values() const final { return values_.get(); }
- double* mutable_values() final { return values_.get(); }
+ const double* values() const final { return values_; }
+ double* mutable_values() final { return values_; }
// clang-format on
void ToTripletSparseMatrix(TripletSparseMatrix* matrix) const;
const CompressedRowBlockStructure* block_structure() const;
+ const CompressedRowBlockStructure* transpose_block_structure() const;
// Append the contents of m to the bottom of this matrix. m must
// have the same column blocks structure as this matrix.
@@ -97,7 +131,7 @@
// Delete the bottom delta_rows_blocks.
void DeleteRowBlocks(int delta_row_blocks);
- static BlockSparseMatrix* CreateDiagonalMatrix(
+ static std::unique_ptr<BlockSparseMatrix> CreateDiagonalMatrix(
const double* diagonal, const std::vector<Block>& column_blocks);
struct RandomMatrixOptions {
@@ -122,18 +156,23 @@
// Create a random BlockSparseMatrix whose entries are normally
// distributed and whose structure is determined by
// RandomMatrixOptions.
- //
- // Caller owns the result.
- static BlockSparseMatrix* CreateRandomMatrix(
- const RandomMatrixOptions& options);
+ static std::unique_ptr<BlockSparseMatrix> CreateRandomMatrix(
+ const RandomMatrixOptions& options,
+ std::mt19937& prng,
+ bool use_page_locked_memory = false);
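+  //
+  // Illustrative sketch (the parameter values below are arbitrary, taken from
+  // the unit tests, not recommendations):
+  //
+  //   BlockSparseMatrix::RandomMatrixOptions options;
+  //   options.num_row_blocks = 20;
+  //   options.min_row_block_size = 1;
+  //   options.max_row_block_size = 4;
+  //   options.num_col_blocks = 10;
+  //   options.min_col_block_size = 1;
+  //   options.max_col_block_size = 3;
+  //   options.block_density = 0.25;
+  //   std::mt19937 prng;
+  //   auto matrix = BlockSparseMatrix::CreateRandomMatrix(options, prng);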
private:
+ double* AllocateValues(int size);
+ void FreeValues(double*& values);
+
+ const bool use_page_locked_memory_;
int num_rows_;
int num_cols_;
int num_nonzeros_;
int max_num_nonzeros_;
- std::unique_ptr<double[]> values_;
+ double* values_;
std::unique_ptr<CompressedRowBlockStructure> block_structure_;
+ std::unique_ptr<CompressedRowBlockStructure> transpose_block_structure_;
};
// A number of algorithms like the SchurEliminator do not need
@@ -142,9 +181,9 @@
//
// BlockSparseDataMatrix a struct that carries these two bits of
// information
-class BlockSparseMatrixData {
+class CERES_NO_EXPORT BlockSparseMatrixData {
public:
- BlockSparseMatrixData(const BlockSparseMatrix& m)
+ explicit BlockSparseMatrixData(const BlockSparseMatrix& m)
: block_structure_(m.block_structure()), values_(m.values()){};
BlockSparseMatrixData(const CompressedRowBlockStructure* block_structure,
@@ -161,7 +200,11 @@
const double* values_;
};
-} // namespace internal
-} // namespace ceres
+std::unique_ptr<CompressedRowBlockStructure> CreateTranspose(
+ const CompressedRowBlockStructure& bs);
+
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_BLOCK_SPARSE_MATRIX_H_
diff --git a/internal/ceres/block_sparse_matrix_test.cc b/internal/ceres/block_sparse_matrix_test.cc
index 02d3fb1..4a524f9 100644
--- a/internal/ceres/block_sparse_matrix_test.cc
+++ b/internal/ceres/block_sparse_matrix_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,10 +30,14 @@
#include "ceres/block_sparse_matrix.h"
+#include <algorithm>
#include <memory>
+#include <random>
#include <string>
+#include <vector>
#include "ceres/casts.h"
+#include "ceres/crs_matrix.h"
#include "ceres/internal/eigen.h"
#include "ceres/linear_least_squares_problems.h"
#include "ceres/triplet_sparse_matrix.h"
@@ -43,102 +47,371 @@
namespace ceres {
namespace internal {
+namespace {
+
+std::unique_ptr<BlockSparseMatrix> CreateTestMatrixFromId(int id) {
+ if (id == 0) {
+ // Create the following block sparse matrix:
+ // [ 1 2 0 0 0 0 ]
+ // [ 3 4 0 0 0 0 ]
+ // [ 0 0 5 6 7 0 ]
+ // [ 0 0 8 9 10 0 ]
+ CompressedRowBlockStructure* bs = new CompressedRowBlockStructure;
+ bs->cols = {
+ // Block size 2, position 0.
+ Block(2, 0),
+ // Block size 3, position 2.
+ Block(3, 2),
+ // Block size 1, position 5.
+ Block(1, 5),
+ };
+ bs->rows = {CompressedRow(1), CompressedRow(1)};
+ bs->rows[0].block = Block(2, 0);
+ bs->rows[0].cells = {Cell(0, 0)};
+
+ bs->rows[1].block = Block(2, 2);
+ bs->rows[1].cells = {Cell(1, 4)};
+ auto m = std::make_unique<BlockSparseMatrix>(bs);
+ EXPECT_NE(m, nullptr);
+ EXPECT_EQ(m->num_rows(), 4);
+ EXPECT_EQ(m->num_cols(), 6);
+ EXPECT_EQ(m->num_nonzeros(), 10);
+ double* values = m->mutable_values();
+ for (int i = 0; i < 10; ++i) {
+ values[i] = i + 1;
+ }
+ return m;
+ } else if (id == 1) {
+ // Create the following block sparse matrix:
+ // [ 1 2 0 5 6 0 ]
+ // [ 3 4 0 7 8 0 ]
+ // [ 0 0 9 0 0 0 ]
+ CompressedRowBlockStructure* bs = new CompressedRowBlockStructure;
+ bs->cols = {
+ // Block size 2, position 0.
+ Block(2, 0),
+ // Block size 1, position 2.
+ Block(1, 2),
+ // Block size 2, position 3.
+ Block(2, 3),
+ // Block size 1, position 5.
+ Block(1, 5),
+ };
+ bs->rows = {CompressedRow(2), CompressedRow(1)};
+ bs->rows[0].block = Block(2, 0);
+ bs->rows[0].cells = {Cell(0, 0), Cell(2, 4)};
+
+ bs->rows[1].block = Block(1, 2);
+ bs->rows[1].cells = {Cell(1, 8)};
+ auto m = std::make_unique<BlockSparseMatrix>(bs);
+ EXPECT_NE(m, nullptr);
+ EXPECT_EQ(m->num_rows(), 3);
+ EXPECT_EQ(m->num_cols(), 6);
+ EXPECT_EQ(m->num_nonzeros(), 9);
+ double* values = m->mutable_values();
+ for (int i = 0; i < 9; ++i) {
+ values[i] = i + 1;
+ }
+ return m;
+ } else if (id == 2) {
+ // Create the following block sparse matrix:
+ // [ 1 2 0 | 6 7 0 ]
+ // [ 3 4 0 | 8 9 0 ]
+ // [ 0 0 5 | 0 0 10]
+ // With cells of the left submatrix preceding cells of the right submatrix
+ CompressedRowBlockStructure* bs = new CompressedRowBlockStructure;
+ bs->cols = {
+ // Block size 2, position 0.
+ Block(2, 0),
+ // Block size 1, position 2.
+ Block(1, 2),
+ // Block size 2, position 3.
+ Block(2, 3),
+ // Block size 1, position 5.
+ Block(1, 5),
+ };
+ bs->rows = {CompressedRow(2), CompressedRow(1)};
+ bs->rows[0].block = Block(2, 0);
+ bs->rows[0].cells = {Cell(0, 0), Cell(2, 5)};
+
+ bs->rows[1].block = Block(1, 2);
+ bs->rows[1].cells = {Cell(1, 4), Cell(3, 9)};
+ auto m = std::make_unique<BlockSparseMatrix>(bs);
+ EXPECT_NE(m, nullptr);
+ EXPECT_EQ(m->num_rows(), 3);
+ EXPECT_EQ(m->num_cols(), 6);
+ EXPECT_EQ(m->num_nonzeros(), 10);
+ double* values = m->mutable_values();
+ for (int i = 0; i < 10; ++i) {
+ values[i] = i + 1;
+ }
+ return m;
+ }
+ return nullptr;
+}
+} // namespace
+
+const int kNumThreads = 4;
+
class BlockSparseMatrixTest : public ::testing::Test {
protected:
void SetUp() final {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(2));
+ std::unique_ptr<LinearLeastSquaresProblem> problem =
+ CreateLinearLeastSquaresProblemFromId(2);
CHECK(problem != nullptr);
- A_.reset(down_cast<BlockSparseMatrix*>(problem->A.release()));
+ a_.reset(down_cast<BlockSparseMatrix*>(problem->A.release()));
- problem.reset(CreateLinearLeastSquaresProblemFromId(1));
+ problem = CreateLinearLeastSquaresProblemFromId(1);
CHECK(problem != nullptr);
- B_.reset(down_cast<TripletSparseMatrix*>(problem->A.release()));
+ b_.reset(down_cast<TripletSparseMatrix*>(problem->A.release()));
- CHECK_EQ(A_->num_rows(), B_->num_rows());
- CHECK_EQ(A_->num_cols(), B_->num_cols());
- CHECK_EQ(A_->num_nonzeros(), B_->num_nonzeros());
+ CHECK_EQ(a_->num_rows(), b_->num_rows());
+ CHECK_EQ(a_->num_cols(), b_->num_cols());
+ CHECK_EQ(a_->num_nonzeros(), b_->num_nonzeros());
+ context_.EnsureMinimumThreads(kNumThreads);
+
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = 1000;
+ options.min_row_block_size = 1;
+ options.max_row_block_size = 8;
+ options.num_col_blocks = 100;
+ options.min_col_block_size = 1;
+ options.max_col_block_size = 8;
+ options.block_density = 0.05;
+
+ std::mt19937 rng;
+ c_ = BlockSparseMatrix::CreateRandomMatrix(options, rng);
}
- std::unique_ptr<BlockSparseMatrix> A_;
- std::unique_ptr<TripletSparseMatrix> B_;
+ std::unique_ptr<BlockSparseMatrix> a_;
+ std::unique_ptr<TripletSparseMatrix> b_;
+ std::unique_ptr<BlockSparseMatrix> c_;
+ ContextImpl context_;
};
TEST_F(BlockSparseMatrixTest, SetZeroTest) {
- A_->SetZero();
- EXPECT_EQ(13, A_->num_nonzeros());
+ a_->SetZero();
+ EXPECT_EQ(13, a_->num_nonzeros());
}
-TEST_F(BlockSparseMatrixTest, RightMultiplyTest) {
- Vector y_a = Vector::Zero(A_->num_rows());
- Vector y_b = Vector::Zero(A_->num_rows());
- for (int i = 0; i < A_->num_cols(); ++i) {
- Vector x = Vector::Zero(A_->num_cols());
+TEST_F(BlockSparseMatrixTest, RightMultiplyAndAccumulateTest) {
+ Vector y_a = Vector::Zero(a_->num_rows());
+ Vector y_b = Vector::Zero(a_->num_rows());
+ for (int i = 0; i < a_->num_cols(); ++i) {
+ Vector x = Vector::Zero(a_->num_cols());
x[i] = 1.0;
- A_->RightMultiply(x.data(), y_a.data());
- B_->RightMultiply(x.data(), y_b.data());
+ a_->RightMultiplyAndAccumulate(x.data(), y_a.data());
+ b_->RightMultiplyAndAccumulate(x.data(), y_b.data());
EXPECT_LT((y_a - y_b).norm(), 1e-12);
}
}
-TEST_F(BlockSparseMatrixTest, LeftMultiplyTest) {
- Vector y_a = Vector::Zero(A_->num_cols());
- Vector y_b = Vector::Zero(A_->num_cols());
- for (int i = 0; i < A_->num_rows(); ++i) {
- Vector x = Vector::Zero(A_->num_rows());
+TEST_F(BlockSparseMatrixTest, RightMultiplyAndAccumulateParallelTest) {
+ Vector y_0 = Vector::Random(a_->num_rows());
+ Vector y_s = y_0;
+ Vector y_p = y_0;
+
+ Vector x = Vector::Random(a_->num_cols());
+ a_->RightMultiplyAndAccumulate(x.data(), y_s.data());
+
+ a_->RightMultiplyAndAccumulate(x.data(), y_p.data(), &context_, kNumThreads);
+
+  // The current parallel implementation is expected to be bit-exact.
+ EXPECT_EQ((y_s - y_p).norm(), 0.);
+}
+
+TEST_F(BlockSparseMatrixTest, LeftMultiplyAndAccumulateTest) {
+ Vector y_a = Vector::Zero(a_->num_cols());
+ Vector y_b = Vector::Zero(a_->num_cols());
+ for (int i = 0; i < a_->num_rows(); ++i) {
+ Vector x = Vector::Zero(a_->num_rows());
x[i] = 1.0;
- A_->LeftMultiply(x.data(), y_a.data());
- B_->LeftMultiply(x.data(), y_b.data());
+ a_->LeftMultiplyAndAccumulate(x.data(), y_a.data());
+ b_->LeftMultiplyAndAccumulate(x.data(), y_b.data());
EXPECT_LT((y_a - y_b).norm(), 1e-12);
}
}
+TEST_F(BlockSparseMatrixTest, LeftMultiplyAndAccumulateParallelTest) {
+ Vector y_0 = Vector::Random(a_->num_cols());
+ Vector y_s = y_0;
+ Vector y_p = y_0;
+
+ Vector x = Vector::Random(a_->num_rows());
+ a_->LeftMultiplyAndAccumulate(x.data(), y_s.data());
+
+ a_->LeftMultiplyAndAccumulate(x.data(), y_p.data(), &context_, kNumThreads);
+
+  // The parallel implementation of left products traverses the matrix in a
+  // different order, so the results may differ by floating-point round-off.
+ EXPECT_LT((y_s - y_p).norm(), 1e-12);
+}
+
TEST_F(BlockSparseMatrixTest, SquaredColumnNormTest) {
- Vector y_a = Vector::Zero(A_->num_cols());
- Vector y_b = Vector::Zero(A_->num_cols());
- A_->SquaredColumnNorm(y_a.data());
- B_->SquaredColumnNorm(y_b.data());
+ Vector y_a = Vector::Zero(a_->num_cols());
+ Vector y_b = Vector::Zero(a_->num_cols());
+ a_->SquaredColumnNorm(y_a.data());
+ b_->SquaredColumnNorm(y_b.data());
EXPECT_LT((y_a - y_b).norm(), 1e-12);
}
+TEST_F(BlockSparseMatrixTest, SquaredColumnNormParallelTest) {
+ Vector y_a = Vector::Zero(c_->num_cols());
+ Vector y_b = Vector::Zero(c_->num_cols());
+ c_->SquaredColumnNorm(y_a.data());
+
+ c_->SquaredColumnNorm(y_b.data(), &context_, kNumThreads);
+ EXPECT_LT((y_a - y_b).norm(), 1e-12);
+}
+
+TEST_F(BlockSparseMatrixTest, ScaleColumnsTest) {
+ const Vector scale = Vector::Random(c_->num_cols()).cwiseAbs();
+
+ const Vector x = Vector::Random(c_->num_rows());
+ Vector y_expected = Vector::Zero(c_->num_cols());
+ c_->LeftMultiplyAndAccumulate(x.data(), y_expected.data());
+ y_expected.array() *= scale.array();
+
+ c_->ScaleColumns(scale.data());
+ Vector y_observed = Vector::Zero(c_->num_cols());
+ c_->LeftMultiplyAndAccumulate(x.data(), y_observed.data());
+
+ EXPECT_GT(y_expected.norm(), 1.);
+ EXPECT_LT((y_observed - y_expected).norm(), 1e-12 * y_expected.norm());
+}
+
+TEST_F(BlockSparseMatrixTest, ScaleColumnsParallelTest) {
+ const Vector scale = Vector::Random(c_->num_cols()).cwiseAbs();
+
+ const Vector x = Vector::Random(c_->num_rows());
+ Vector y_expected = Vector::Zero(c_->num_cols());
+ c_->LeftMultiplyAndAccumulate(x.data(), y_expected.data());
+ y_expected.array() *= scale.array();
+
+ c_->ScaleColumns(scale.data(), &context_, kNumThreads);
+ Vector y_observed = Vector::Zero(c_->num_cols());
+ c_->LeftMultiplyAndAccumulate(x.data(), y_observed.data());
+
+ EXPECT_GT(y_expected.norm(), 1.);
+ EXPECT_LT((y_observed - y_expected).norm(), 1e-12 * y_expected.norm());
+}
+
TEST_F(BlockSparseMatrixTest, ToDenseMatrixTest) {
Matrix m_a;
Matrix m_b;
- A_->ToDenseMatrix(&m_a);
- B_->ToDenseMatrix(&m_b);
+ a_->ToDenseMatrix(&m_a);
+ b_->ToDenseMatrix(&m_b);
EXPECT_LT((m_a - m_b).norm(), 1e-12);
}
TEST_F(BlockSparseMatrixTest, AppendRows) {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(2));
+ std::unique_ptr<LinearLeastSquaresProblem> problem =
+ CreateLinearLeastSquaresProblemFromId(2);
std::unique_ptr<BlockSparseMatrix> m(
down_cast<BlockSparseMatrix*>(problem->A.release()));
- A_->AppendRows(*m);
- EXPECT_EQ(A_->num_rows(), 2 * m->num_rows());
- EXPECT_EQ(A_->num_cols(), m->num_cols());
+ a_->AppendRows(*m);
+ EXPECT_EQ(a_->num_rows(), 2 * m->num_rows());
+ EXPECT_EQ(a_->num_cols(), m->num_cols());
- problem.reset(CreateLinearLeastSquaresProblemFromId(1));
+ problem = CreateLinearLeastSquaresProblemFromId(1);
std::unique_ptr<TripletSparseMatrix> m2(
down_cast<TripletSparseMatrix*>(problem->A.release()));
- B_->AppendRows(*m2);
+ b_->AppendRows(*m2);
- Vector y_a = Vector::Zero(A_->num_rows());
- Vector y_b = Vector::Zero(A_->num_rows());
- for (int i = 0; i < A_->num_cols(); ++i) {
- Vector x = Vector::Zero(A_->num_cols());
+ Vector y_a = Vector::Zero(a_->num_rows());
+ Vector y_b = Vector::Zero(a_->num_rows());
+ for (int i = 0; i < a_->num_cols(); ++i) {
+ Vector x = Vector::Zero(a_->num_cols());
x[i] = 1.0;
y_a.setZero();
y_b.setZero();
- A_->RightMultiply(x.data(), y_a.data());
- B_->RightMultiply(x.data(), y_b.data());
+ a_->RightMultiplyAndAccumulate(x.data(), y_a.data());
+ b_->RightMultiplyAndAccumulate(x.data(), y_b.data());
EXPECT_LT((y_a - y_b).norm(), 1e-12);
}
}
+TEST_F(BlockSparseMatrixTest, AppendDeleteRowsTransposedStructure) {
+ auto problem = CreateLinearLeastSquaresProblemFromId(2);
+ std::unique_ptr<BlockSparseMatrix> m(
+ down_cast<BlockSparseMatrix*>(problem->A.release()));
+
+ auto block_structure = a_->block_structure();
+
+  // Several AppendRows and DeleteRowBlocks operations are applied to the
+  // matrix, and the regular and transpose block structures are compared after
+  // each operation.
+ //
+  // A non-negative value encodes the number of row blocks to remove;
+  // -1 encodes appending the matrix m.
+ const int num_row_blocks_to_delete[] = {0, -1, 1, -1, 8, -1, 10};
+ for (auto& t : num_row_blocks_to_delete) {
+ if (t == -1) {
+ a_->AppendRows(*m);
+ } else if (t > 0) {
+ CHECK_GE(block_structure->rows.size(), t);
+ a_->DeleteRowBlocks(t);
+ }
+
+ auto block_structure = a_->block_structure();
+ auto transpose_block_structure = a_->transpose_block_structure();
+ ASSERT_NE(block_structure, nullptr);
+ ASSERT_NE(transpose_block_structure, nullptr);
+
+ EXPECT_EQ(block_structure->rows.size(),
+ transpose_block_structure->cols.size());
+ EXPECT_EQ(block_structure->cols.size(),
+ transpose_block_structure->rows.size());
+
+ std::vector<int> nnz_col(transpose_block_structure->rows.size());
+ for (int i = 0; i < block_structure->cols.size(); ++i) {
+ EXPECT_EQ(block_structure->cols[i].position,
+ transpose_block_structure->rows[i].block.position);
+ const int col_size = transpose_block_structure->rows[i].block.size;
+ EXPECT_EQ(block_structure->cols[i].size, col_size);
+
+ for (auto& col_cell : transpose_block_structure->rows[i].cells) {
+ int matches = 0;
+ const int row_block_id = col_cell.block_id;
+ nnz_col[i] +=
+ col_size * transpose_block_structure->cols[row_block_id].size;
+ for (auto& row_cell : block_structure->rows[row_block_id].cells) {
+ if (row_cell.block_id != i) continue;
+ EXPECT_EQ(row_cell.position, col_cell.position);
+ ++matches;
+ }
+ EXPECT_EQ(matches, 1);
+ }
+ EXPECT_EQ(nnz_col[i], transpose_block_structure->rows[i].nnz);
+ if (i > 0) {
+ nnz_col[i] += nnz_col[i - 1];
+ }
+ EXPECT_EQ(nnz_col[i], transpose_block_structure->rows[i].cumulative_nnz);
+ }
+ for (int i = 0; i < block_structure->rows.size(); ++i) {
+ EXPECT_EQ(block_structure->rows[i].block.position,
+ transpose_block_structure->cols[i].position);
+ EXPECT_EQ(block_structure->rows[i].block.size,
+ transpose_block_structure->cols[i].size);
+
+ for (auto& row_cell : block_structure->rows[i].cells) {
+ int matches = 0;
+ const int col_block_id = row_cell.block_id;
+ for (auto& col_cell :
+ transpose_block_structure->rows[col_block_id].cells) {
+ if (col_cell.block_id != i) continue;
+ EXPECT_EQ(col_cell.position, row_cell.position);
+ ++matches;
+ }
+ EXPECT_EQ(matches, 1);
+ }
+ }
+ }
+}
+
TEST_F(BlockSparseMatrixTest, AppendAndDeleteBlockDiagonalMatrix) {
- const std::vector<Block>& column_blocks = A_->block_structure()->cols;
+ const std::vector<Block>& column_blocks = a_->block_structure()->cols;
const int num_cols =
column_blocks.back().size + column_blocks.back().position;
Vector diagonal(num_cols);
@@ -148,48 +421,48 @@
std::unique_ptr<BlockSparseMatrix> appendage(
BlockSparseMatrix::CreateDiagonalMatrix(diagonal.data(), column_blocks));
- A_->AppendRows(*appendage);
+ a_->AppendRows(*appendage);
Vector y_a, y_b;
- y_a.resize(A_->num_rows());
- y_b.resize(A_->num_rows());
- for (int i = 0; i < A_->num_cols(); ++i) {
- Vector x = Vector::Zero(A_->num_cols());
+ y_a.resize(a_->num_rows());
+ y_b.resize(a_->num_rows());
+ for (int i = 0; i < a_->num_cols(); ++i) {
+ Vector x = Vector::Zero(a_->num_cols());
x[i] = 1.0;
y_a.setZero();
y_b.setZero();
- A_->RightMultiply(x.data(), y_a.data());
- B_->RightMultiply(x.data(), y_b.data());
- EXPECT_LT((y_a.head(B_->num_rows()) - y_b.head(B_->num_rows())).norm(),
+ a_->RightMultiplyAndAccumulate(x.data(), y_a.data());
+ b_->RightMultiplyAndAccumulate(x.data(), y_b.data());
+ EXPECT_LT((y_a.head(b_->num_rows()) - y_b.head(b_->num_rows())).norm(),
1e-12);
- Vector expected_tail = Vector::Zero(A_->num_cols());
+ Vector expected_tail = Vector::Zero(a_->num_cols());
expected_tail(i) = diagonal(i);
- EXPECT_LT((y_a.tail(A_->num_cols()) - expected_tail).norm(), 1e-12);
+ EXPECT_LT((y_a.tail(a_->num_cols()) - expected_tail).norm(), 1e-12);
}
- A_->DeleteRowBlocks(column_blocks.size());
- EXPECT_EQ(A_->num_rows(), B_->num_rows());
- EXPECT_EQ(A_->num_cols(), B_->num_cols());
+ a_->DeleteRowBlocks(column_blocks.size());
+ EXPECT_EQ(a_->num_rows(), b_->num_rows());
+ EXPECT_EQ(a_->num_cols(), b_->num_cols());
- y_a.resize(A_->num_rows());
- y_b.resize(A_->num_rows());
- for (int i = 0; i < A_->num_cols(); ++i) {
- Vector x = Vector::Zero(A_->num_cols());
+ y_a.resize(a_->num_rows());
+ y_b.resize(a_->num_rows());
+ for (int i = 0; i < a_->num_cols(); ++i) {
+ Vector x = Vector::Zero(a_->num_cols());
x[i] = 1.0;
y_a.setZero();
y_b.setZero();
- A_->RightMultiply(x.data(), y_a.data());
- B_->RightMultiply(x.data(), y_b.data());
+ a_->RightMultiplyAndAccumulate(x.data(), y_a.data());
+ b_->RightMultiplyAndAccumulate(x.data(), y_b.data());
EXPECT_LT((y_a - y_b).norm(), 1e-12);
}
}
TEST(BlockSparseMatrix, CreateDiagonalMatrix) {
std::vector<Block> column_blocks;
- column_blocks.push_back(Block(2, 0));
- column_blocks.push_back(Block(1, 2));
- column_blocks.push_back(Block(3, 3));
+ column_blocks.emplace_back(2, 0);
+ column_blocks.emplace_back(1, 2);
+ column_blocks.emplace_back(3, 3);
const int num_cols =
column_blocks.back().size + column_blocks.back().position;
Vector diagonal(num_cols);
@@ -208,11 +481,195 @@
EXPECT_EQ(m->num_rows(), m->num_cols());
Vector x = Vector::Ones(num_cols);
Vector y = Vector::Zero(num_cols);
- m->RightMultiply(x.data(), y.data());
+ m->RightMultiplyAndAccumulate(x.data(), y.data());
for (int i = 0; i < num_cols; ++i) {
EXPECT_NEAR(y[i], diagonal[i], std::numeric_limits<double>::epsilon());
}
}
+TEST(BlockSparseMatrix, ToDenseMatrix) {
+ {
+ std::unique_ptr<BlockSparseMatrix> m = CreateTestMatrixFromId(0);
+ Matrix m_dense;
+ m->ToDenseMatrix(&m_dense);
+ EXPECT_EQ(m_dense.rows(), 4);
+ EXPECT_EQ(m_dense.cols(), 6);
+ Matrix m_expected(4, 6);
+ m_expected << 1, 2, 0, 0, 0, 0, 3, 4, 0, 0, 0, 0, 0, 0, 5, 6, 7, 0, 0, 0, 8,
+ 9, 10, 0;
+ EXPECT_EQ(m_dense, m_expected);
+ }
+
+ {
+ std::unique_ptr<BlockSparseMatrix> m = CreateTestMatrixFromId(1);
+ Matrix m_dense;
+ m->ToDenseMatrix(&m_dense);
+ EXPECT_EQ(m_dense.rows(), 3);
+ EXPECT_EQ(m_dense.cols(), 6);
+ Matrix m_expected(3, 6);
+ m_expected << 1, 2, 0, 5, 6, 0, 3, 4, 0, 7, 8, 0, 0, 0, 9, 0, 0, 0;
+ EXPECT_EQ(m_dense, m_expected);
+ }
+
+ {
+ std::unique_ptr<BlockSparseMatrix> m = CreateTestMatrixFromId(2);
+ Matrix m_dense;
+ m->ToDenseMatrix(&m_dense);
+ EXPECT_EQ(m_dense.rows(), 3);
+ EXPECT_EQ(m_dense.cols(), 6);
+ Matrix m_expected(3, 6);
+ m_expected << 1, 2, 0, 6, 7, 0, 3, 4, 0, 8, 9, 0, 0, 0, 5, 0, 0, 10;
+ EXPECT_EQ(m_dense, m_expected);
+ }
+}
+
+TEST(BlockSparseMatrix, ToCRSMatrix) {
+ {
+ std::unique_ptr<BlockSparseMatrix> m = CreateTestMatrixFromId(0);
+ auto m_crs = m->ToCompressedRowSparseMatrix();
+ std::vector<int> rows_expected = {0, 2, 4, 7, 10};
+ std::vector<int> cols_expected = {0, 1, 0, 1, 2, 3, 4, 2, 3, 4};
+ std::vector<double> values_expected = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
+ for (int i = 0; i < rows_expected.size(); ++i) {
+ EXPECT_EQ(m_crs->rows()[i], rows_expected[i]);
+ }
+ for (int i = 0; i < cols_expected.size(); ++i) {
+ EXPECT_EQ(m_crs->cols()[i], cols_expected[i]);
+ }
+ for (int i = 0; i < values_expected.size(); ++i) {
+ EXPECT_EQ(m_crs->values()[i], values_expected[i]);
+ }
+ }
+ {
+ std::unique_ptr<BlockSparseMatrix> m = CreateTestMatrixFromId(1);
+ auto m_crs = m->ToCompressedRowSparseMatrix();
+ std::vector<int> rows_expected = {0, 4, 8, 9};
+ std::vector<int> cols_expected = {0, 1, 3, 4, 0, 1, 3, 4, 2};
+ std::vector<double> values_expected = {1, 2, 5, 6, 3, 4, 7, 8, 9};
+ for (int i = 0; i < rows_expected.size(); ++i) {
+ EXPECT_EQ(m_crs->rows()[i], rows_expected[i]);
+ }
+ for (int i = 0; i < cols_expected.size(); ++i) {
+ EXPECT_EQ(m_crs->cols()[i], cols_expected[i]);
+ }
+ for (int i = 0; i < values_expected.size(); ++i) {
+ EXPECT_EQ(m_crs->values()[i], values_expected[i]);
+ }
+ }
+ {
+ std::unique_ptr<BlockSparseMatrix> m = CreateTestMatrixFromId(2);
+ auto m_crs = m->ToCompressedRowSparseMatrix();
+ std::vector<int> rows_expected = {0, 4, 8, 10};
+ std::vector<int> cols_expected = {0, 1, 3, 4, 0, 1, 3, 4, 2, 5};
+ std::vector<double> values_expected = {1, 2, 6, 7, 3, 4, 8, 9, 5, 10};
+ for (int i = 0; i < rows_expected.size(); ++i) {
+ EXPECT_EQ(m_crs->rows()[i], rows_expected[i]);
+ }
+ for (int i = 0; i < cols_expected.size(); ++i) {
+ EXPECT_EQ(m_crs->cols()[i], cols_expected[i]);
+ }
+ for (int i = 0; i < values_expected.size(); ++i) {
+ EXPECT_EQ(m_crs->values()[i], values_expected[i]);
+ }
+ }
+}
+
+TEST(BlockSparseMatrix, ToCRSMatrixTranspose) {
+ {
+ std::unique_ptr<BlockSparseMatrix> m = CreateTestMatrixFromId(0);
+ auto m_crs_transpose = m->ToCompressedRowSparseMatrixTranspose();
+ std::vector<int> rows_expected = {0, 2, 4, 6, 8, 10, 10};
+ std::vector<int> cols_expected = {0, 1, 0, 1, 2, 3, 2, 3, 2, 3};
+ std::vector<double> values_expected = {1, 3, 2, 4, 5, 8, 6, 9, 7, 10};
+ EXPECT_EQ(m_crs_transpose->num_nonzeros(), cols_expected.size());
+ EXPECT_EQ(m_crs_transpose->num_rows(), rows_expected.size() - 1);
+ for (int i = 0; i < rows_expected.size(); ++i) {
+ EXPECT_EQ(m_crs_transpose->rows()[i], rows_expected[i]);
+ }
+ for (int i = 0; i < cols_expected.size(); ++i) {
+ EXPECT_EQ(m_crs_transpose->cols()[i], cols_expected[i]);
+ }
+ for (int i = 0; i < values_expected.size(); ++i) {
+ EXPECT_EQ(m_crs_transpose->values()[i], values_expected[i]);
+ }
+ }
+ {
+ std::unique_ptr<BlockSparseMatrix> m = CreateTestMatrixFromId(1);
+ auto m_crs_transpose = m->ToCompressedRowSparseMatrixTranspose();
+ std::vector<int> rows_expected = {0, 2, 4, 5, 7, 9, 9};
+ std::vector<int> cols_expected = {0, 1, 0, 1, 2, 0, 1, 0, 1};
+ std::vector<double> values_expected = {1, 3, 2, 4, 9, 5, 7, 6, 8};
+ EXPECT_EQ(m_crs_transpose->num_nonzeros(), cols_expected.size());
+ EXPECT_EQ(m_crs_transpose->num_rows(), rows_expected.size() - 1);
+ for (int i = 0; i < rows_expected.size(); ++i) {
+ EXPECT_EQ(m_crs_transpose->rows()[i], rows_expected[i]);
+ }
+ for (int i = 0; i < cols_expected.size(); ++i) {
+ EXPECT_EQ(m_crs_transpose->cols()[i], cols_expected[i]);
+ }
+ for (int i = 0; i < values_expected.size(); ++i) {
+ EXPECT_EQ(m_crs_transpose->values()[i], values_expected[i]);
+ }
+ }
+ {
+ std::unique_ptr<BlockSparseMatrix> m = CreateTestMatrixFromId(2);
+ auto m_crs_transpose = m->ToCompressedRowSparseMatrixTranspose();
+ std::vector<int> rows_expected = {0, 2, 4, 5, 7, 9, 10};
+ std::vector<int> cols_expected = {0, 1, 0, 1, 2, 0, 1, 0, 1, 2};
+ std::vector<double> values_expected = {1, 3, 2, 4, 5, 6, 8, 7, 9, 10};
+ EXPECT_EQ(m_crs_transpose->num_nonzeros(), cols_expected.size());
+ EXPECT_EQ(m_crs_transpose->num_rows(), rows_expected.size() - 1);
+ for (int i = 0; i < rows_expected.size(); ++i) {
+ EXPECT_EQ(m_crs_transpose->rows()[i], rows_expected[i]);
+ }
+ for (int i = 0; i < cols_expected.size(); ++i) {
+ EXPECT_EQ(m_crs_transpose->cols()[i], cols_expected[i]);
+ }
+ for (int i = 0; i < values_expected.size(); ++i) {
+ EXPECT_EQ(m_crs_transpose->values()[i], values_expected[i]);
+ }
+ }
+}
+
+TEST(BlockSparseMatrix, CreateTranspose) {
+ constexpr int kNumtrials = 10;
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_col_blocks = 10;
+ options.min_col_block_size = 1;
+ options.max_col_block_size = 3;
+
+ options.num_row_blocks = 20;
+ options.min_row_block_size = 1;
+ options.max_row_block_size = 4;
+ options.block_density = 0.25;
+ std::mt19937 prng;
+
+ for (int trial = 0; trial < kNumtrials; ++trial) {
+ auto a = BlockSparseMatrix::CreateRandomMatrix(options, prng);
+
+ auto ap_bs = std::make_unique<CompressedRowBlockStructure>();
+ *ap_bs = *a->block_structure();
+ BlockSparseMatrix ap(ap_bs.release());
+ std::copy_n(a->values(), a->num_nonzeros(), ap.mutable_values());
+
+ Vector x = Vector::Random(a->num_cols());
+ Vector y = Vector::Random(a->num_rows());
+ Vector a_x = Vector::Zero(a->num_rows());
+ Vector a_t_y = Vector::Zero(a->num_cols());
+ Vector ap_x = Vector::Zero(a->num_rows());
+ Vector ap_t_y = Vector::Zero(a->num_cols());
+ a->RightMultiplyAndAccumulate(x.data(), a_x.data());
+ ap.RightMultiplyAndAccumulate(x.data(), ap_x.data());
+ EXPECT_NEAR((a_x - ap_x).norm() / a_x.norm(),
+ 0.0,
+ std::numeric_limits<double>::epsilon());
+ a->LeftMultiplyAndAccumulate(y.data(), a_t_y.data());
+ ap.LeftMultiplyAndAccumulate(y.data(), ap_t_y.data());
+ EXPECT_NEAR((a_t_y - ap_t_y).norm() / a_t_y.norm(),
+ 0.0,
+ std::numeric_limits<double>::epsilon());
+ }
+}
+
} // namespace internal
} // namespace ceres
diff --git a/internal/ceres/block_structure.cc b/internal/ceres/block_structure.cc
index 39ba082..70f68b2 100644
--- a/internal/ceres/block_structure.cc
+++ b/internal/ceres/block_structure.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,8 +30,11 @@
#include "ceres/block_structure.h"
-namespace ceres {
-namespace internal {
+#include <vector>
+
+#include "glog/logging.h"
+
+namespace ceres::internal {
bool CellLessThan(const Cell& lhs, const Cell& rhs) {
if (lhs.block_id == rhs.block_id) {
@@ -40,5 +43,28 @@
return (lhs.block_id < rhs.block_id);
}
-} // namespace internal
-} // namespace ceres
+std::vector<Block> Tail(const std::vector<Block>& blocks, int n) {
+ CHECK_LE(n, blocks.size());
+ std::vector<Block> tail;
+ const int num_blocks = blocks.size();
+ const int start = num_blocks - n;
+
+ int position = 0;
+ tail.reserve(n);
+ for (int i = start; i < num_blocks; ++i) {
+ tail.emplace_back(blocks[i].size, position);
+ position += blocks[i].size;
+ }
+
+ return tail;
+}
+
+int SumSquaredSizes(const std::vector<Block>& blocks) {
+ int sum = 0;
+ for (const auto& b : blocks) {
+ sum += b.size * b.size;
+ }
+ return sum;
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/block_structure.h b/internal/ceres/block_structure.h
index d49d7d3..9500fbb 100644
--- a/internal/ceres/block_structure.h
+++ b/internal/ceres/block_structure.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -41,58 +41,158 @@
#include <cstdint>
#include <vector>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
+// This file is included in source files that are compiled with nvcc. The nvcc
+// shipped with Ubuntu 20.04 does not support some C++17 features, including
+// nested namespace definitions.
namespace ceres {
namespace internal {
-typedef int32_t BlockSize;
+using BlockSize = int32_t;
-struct Block {
- Block() : size(-1), position(-1) {}
- Block(int size_, int position_) : size(size_), position(position_) {}
+struct CERES_NO_EXPORT Block {
+ Block() = default;
+ Block(int size_, int position_) noexcept : size(size_), position(position_) {}
- BlockSize size;
- int position; // Position along the row/column.
+ BlockSize size{-1};
+ int position{-1}; // Position along the row/column.
};
-struct Cell {
- Cell() : block_id(-1), position(-1) {}
- Cell(int block_id_, int position_)
+inline bool operator==(const Block& left, const Block& right) noexcept {
+ return (left.size == right.size) && (left.position == right.position);
+}
+
+struct CERES_NO_EXPORT Cell {
+ Cell() = default;
+ Cell(int block_id_, int position_) noexcept
: block_id(block_id_), position(position_) {}
// Column or row block id as the case maybe.
- int block_id;
+ int block_id{-1};
// Where in the values array of the jacobian is this cell located.
- int position;
+ int position{-1};
};
// Order cell by their block_id;
-bool CellLessThan(const Cell& lhs, const Cell& rhs);
+CERES_NO_EXPORT bool CellLessThan(const Cell& lhs, const Cell& rhs);
-struct CompressedList {
- CompressedList() {}
+struct CERES_NO_EXPORT CompressedList {
+ CompressedList() = default;
// Construct a CompressedList with the cells containing num_cells
// entries.
- CompressedList(int num_cells) : cells(num_cells) {}
+ explicit CompressedList(int num_cells) noexcept : cells(num_cells) {}
Block block;
std::vector<Cell> cells;
+ // Number of non-zeros in cells of this row block
+ int nnz{-1};
+  // Number of non-zeros in cells of this and every preceding row block in the
+  // block-sparse matrix
+ int cumulative_nnz{-1};
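+  // For example (illustrative only): if three row blocks contain 4, 6 and 2
+  // non-zeros, their nnz values are 4, 6 and 2 and their cumulative_nnz values
+  // are 4, 10 and 12. The cumulative count is used as the per-block iteration
+  // cost of guided ParallelFor loops over row blocks.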
};
-typedef CompressedList CompressedRow;
-typedef CompressedList CompressedColumn;
+using CompressedRow = CompressedList;
+using CompressedColumn = CompressedList;
-struct CompressedRowBlockStructure {
+// CompressedRowBlockStructure specifies the storage structure of a row block
+// sparse matrix.
+//
+// Consider the following matrix A:
+// A = [A_11 A_12 ...
+// A_21 A_22 ...
+// ...
+// A_m1 A_m2 ... ]
+//
+// A row block sparse matrix is a matrix where the following properties hold:
+// 1. The number of rows in every block A_ij and A_ik are the same.
+// 2. The number of columns in every block A_ij and A_kj are the same.
+// 3. The number of rows in A_ij and A_kj may be different (i != k).
+// 4. The number of columns in A_ij and A_ik may be different (j != k).
+// 5. Any block A_ij may be all 0s, in which case the block is not stored.
+//
+// The structure of the matrix is stored as follows:
+//
+// The `rows' array contains the following information for each row block:
+// - rows[i].block.size: The number of rows in each block A_ij in the row block.
+// - rows[i].block.position: The starting row in the full matrix A of the
+// row block i.
+// - rows[i].cells[j].block_id: The index into the `cols' array corresponding to
+// the non-zero blocks A_ij.
+// - rows[i].cells[j].position: The index in the `values' array for the contents
+// of block A_ij.
+//
+// The `cols' array contains the following information for each column block:
+// - cols[.].size: The number of columns spanned by the block.
+// - cols[.].position: The starting column in the full matrix A of the block.
+//
+//
+// Example of a row block sparse matrix:
+// block_id: | 0 |1|2 |3 |
+// rows[0]: [ 1 2 0 3 4 0 ]
+// [ 5 6 0 7 8 0 ]
+// rows[1]: [ 0 0 9 0 0 0 ]
+//
+// This matrix is stored as follows:
+//
+// There are four column blocks:
+// cols[0].size = 2
+// cols[0].position = 0
+// cols[1].size = 1
+// cols[1].position = 2
+// cols[2].size = 2
+// cols[2].position = 3
+// cols[3].size = 1
+// cols[3].position = 5
+//
+// The first row block spans two rows, starting at row 0:
+// rows[0].block.size = 2 // This row block spans two rows.
+// rows[0].block.position = 0 // It starts at row 0.
+// rows[0] has two cells, at column blocks 0 and 2:
+// rows[0].cells[0].block_id = 0 // This cell is in column block 0.
+// rows[0].cells[0].position = 0 // See below for an explanation of this.
+// rows[0].cells[1].block_id = 2 // This cell is in column block 2.
+// rows[0].cells[1].position = 4 // See below for an explanation of this.
+//
+// The second row block spans one row, starting at row 2:
+// rows[1].block.size = 1 // This row block spans one row.
+// rows[1].block.position = 2 // It starts at row 2.
+// rows[1] has one cell at column block 1:
+// rows[1].cells[0].block_id = 1 // This cell is in column block 1.
+// rows[1].cells[0].position = 8 // See below for an explanation of this.
+//
+// The values in each block are stored contiguously in row-major order.
+// However, there is no unique way to order the blocks -- it is usually
+// optimized to promote cache coherent access, e.g. ordering it so that
+// Jacobian blocks of parameters of the same type are stored nearby.
+// This is one possible way to store the values of the blocks in a values array:
+// values = { 1, 2, 5, 6, 3, 4, 7, 8, 9 }
+// | | | | // The three blocks.
+// ^ rows[0].cells[0].position = 0
+// ^ rows[0].cells[1].position = 4
+// ^ rows[1].cells[0].position = 8
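+//
+// As an illustrative sketch (not part of the API documentation), the example
+// structure above could be assembled as:
+//
+//   CompressedRowBlockStructure bs;
+//   bs.cols = {Block(2, 0), Block(1, 2), Block(2, 3), Block(1, 5)};
+//   bs.rows.resize(2);
+//   bs.rows[0].block = Block(2, 0);
+//   bs.rows[0].cells = {Cell(0, 0), Cell(2, 4)};
+//   bs.rows[1].block = Block(1, 2);
+//   bs.rows[1].cells = {Cell(1, 8)};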
+struct CERES_NO_EXPORT CompressedRowBlockStructure {
std::vector<Block> cols;
std::vector<CompressedRow> rows;
};
-struct CompressedColumnBlockStructure {
+struct CERES_NO_EXPORT CompressedColumnBlockStructure {
std::vector<Block> rows;
std::vector<CompressedColumn> cols;
};
+inline int NumScalarEntries(const std::vector<Block>& blocks) {
+ if (blocks.empty()) {
+ return 0;
+ }
+
+ auto& block = blocks.back();
+ return block.position + block.size;
+}
+
+std::vector<Block> Tail(const std::vector<Block>& blocks, int n);
+int SumSquaredSizes(const std::vector<Block>& blocks);
+
} // namespace internal
} // namespace ceres
diff --git a/internal/ceres/bundle_adjustment_test_util.h b/internal/ceres/bundle_adjustment_test_util.h
index 074931f..48b833b 100644
--- a/internal/ceres/bundle_adjustment_test_util.h
+++ b/internal/ceres/bundle_adjustment_test_util.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,7 +38,7 @@
#include <string>
#include "ceres/autodiff_cost_function.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/ordered_groups.h"
#include "ceres/problem.h"
#include "ceres/rotation.h"
@@ -46,16 +46,11 @@
#include "ceres/stringprintf.h"
#include "ceres/test_util.h"
#include "ceres/types.h"
-#include "gflags/gflags.h"
#include "glog/logging.h"
-#include "gtest/gtest.h"
namespace ceres {
namespace internal {
-using std::string;
-using std::vector;
-
const bool kAutomaticOrdering = true;
const bool kUserOrdering = false;
@@ -65,8 +60,13 @@
// problem is hard coded in the constructor.
class BundleAdjustmentProblem {
public:
+ BundleAdjustmentProblem(const std::string input_file) {
+ ReadData(input_file);
+ BuildProblem();
+ }
BundleAdjustmentProblem() {
- const string input_file = TestFileAbsolutePath("problem-16-22106-pre.txt");
+ const std::string input_file =
+ TestFileAbsolutePath("problem-16-22106-pre.txt");
ReadData(input_file);
BuildProblem();
}
@@ -82,20 +82,21 @@
Solver::Options* mutable_solver_options() { return &options_; }
// clang-format off
- int num_cameras() const { return num_cameras_; }
- int num_points() const { return num_points_; }
- int num_observations() const { return num_observations_; }
- const int* point_index() const { return point_index_; }
- const int* camera_index() const { return camera_index_; }
- const double* observations() const { return observations_; }
- double* mutable_cameras() { return parameters_; }
- double* mutable_points() { return parameters_ + 9 * num_cameras_; }
+ int num_cameras() const { return num_cameras_; }
+ int num_points() const { return num_points_; }
+ int num_observations() const { return num_observations_; }
+ const int* point_index() const { return point_index_; }
+ const int* camera_index() const { return camera_index_; }
+ const double* observations() const { return observations_; }
+ double* mutable_cameras() { return parameters_; }
+ double* mutable_points() { return parameters_ + 9 * num_cameras_; }
+ const Solver::Options& options() const { return options_; }
// clang-format on
static double kResidualTolerance;
private:
- void ReadData(const string& filename) {
+ void ReadData(const std::string& filename) {
FILE* fptr = fopen(filename.c_str(), "r");
if (!fptr) {
@@ -149,10 +150,11 @@
// point_index()[i] respectively.
double* camera = cameras + 9 * camera_index_[i];
double* point = points + 3 * point_index()[i];
- problem_.AddResidualBlock(cost_function, NULL, camera, point);
+ problem_.AddResidualBlock(cost_function, nullptr, camera, point);
}
- options_.linear_solver_ordering.reset(new ParameterBlockOrdering);
+ options_.linear_solver_ordering =
+ std::make_shared<ParameterBlockOrdering>();
// The points come before the cameras.
for (int i = 0; i < num_points_; ++i) {
@@ -241,7 +243,7 @@
};
double BundleAdjustmentProblem::kResidualTolerance = 1e-4;
-typedef SystemTest<BundleAdjustmentProblem> BundleAdjustmentTest;
+using BundleAdjustmentTest = SystemTest<BundleAdjustmentProblem>;
} // namespace internal
} // namespace ceres
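The new constructor above lets the fixture load an arbitrary BAL-format file instead of the hard-coded problem. A minimal usage sketch follows; it assumes the fixture's existing mutable_problem() accessor (not visible in this hunk) and the public ceres::Solve entry point, and the file name is a placeholder.

// Sketch only: drive the test fixture with the constructor and accessors
// shown above. mutable_problem() is assumed to exist on the fixture, and the
// input path is a placeholder.
#include "ceres/bundle_adjustment_test_util.h"  // in-tree include path
#include "ceres/ceres.h"

void SolveBalProblemSketch() {
  ceres::internal::BundleAdjustmentProblem bal_problem(
      "problem-16-22106-pre.txt");
  ceres::Solver::Summary summary;
  ceres::Solve(bal_problem.options(), bal_problem.mutable_problem(), &summary);
  // summary.final_cost can now be checked against a problem-dependent bound,
  // analogously to how the system tests use kResidualTolerance.
}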
diff --git a/internal/ceres/c_api.cc b/internal/ceres/c_api.cc
index 251cde4..56e1324 100644
--- a/internal/ceres/c_api.cc
+++ b/internal/ceres/c_api.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,6 +35,7 @@
#include "ceres/c_api.h"
#include <iostream>
+#include <memory>
#include <string>
#include <vector>
@@ -64,7 +65,7 @@
// This cost function wraps a C-level function pointer from the user, to bridge
// between C and C++.
-class CallbackCostFunction : public ceres::CostFunction {
+class CERES_NO_EXPORT CallbackCostFunction final : public ceres::CostFunction {
public:
CallbackCostFunction(ceres_cost_function_t cost_function,
void* user_data,
@@ -78,8 +79,6 @@
}
}
- virtual ~CallbackCostFunction() {}
-
bool Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const final {
@@ -94,7 +93,7 @@
// This loss function wraps a C-level function pointer from the user, to bridge
// between C and C++.
-class CallbackLossFunction : public ceres::LossFunction {
+class CallbackLossFunction final : public ceres::LossFunction {
public:
explicit CallbackLossFunction(ceres_loss_function_t loss_function,
void* user_data)
@@ -146,30 +145,31 @@
int num_parameter_blocks,
int* parameter_block_sizes,
double** parameters) {
- Problem* ceres_problem = reinterpret_cast<Problem*>(problem);
+ auto* ceres_problem = reinterpret_cast<Problem*>(problem);
- ceres::CostFunction* callback_cost_function =
- new CallbackCostFunction(cost_function,
- cost_function_data,
- num_residuals,
- num_parameter_blocks,
- parameter_block_sizes);
+ auto callback_cost_function =
+ std::make_unique<CallbackCostFunction>(cost_function,
+ cost_function_data,
+ num_residuals,
+ num_parameter_blocks,
+ parameter_block_sizes);
- ceres::LossFunction* callback_loss_function = NULL;
- if (loss_function != NULL) {
- callback_loss_function =
- new CallbackLossFunction(loss_function, loss_function_data);
+ std::unique_ptr<ceres::LossFunction> callback_loss_function;
+ if (loss_function != nullptr) {
+ callback_loss_function = std::make_unique<CallbackLossFunction>(
+ loss_function, loss_function_data);
}
std::vector<double*> parameter_blocks(parameters,
parameters + num_parameter_blocks);
return reinterpret_cast<ceres_residual_block_id_t*>(
- ceres_problem->AddResidualBlock(
- callback_cost_function, callback_loss_function, parameter_blocks));
+ ceres_problem->AddResidualBlock(callback_cost_function.release(),
+ callback_loss_function.release(),
+ parameter_blocks));
}
void ceres_solve(ceres_problem_t* c_problem) {
- Problem* problem = reinterpret_cast<Problem*>(c_problem);
+ auto* problem = reinterpret_cast<Problem*>(c_problem);
// TODO(keir): Obviously, this way of setting options won't scale or last.
// Instead, figure out a way to specify some of the options without
diff --git a/internal/ceres/c_api_test.cc b/internal/ceres/c_api_test.cc
index 043f6ab..9386765 100644
--- a/internal/ceres/c_api_test.cc
+++ b/internal/ceres/c_api_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -114,20 +114,20 @@
double** parameters,
double* residuals,
double** jacobians) {
- double* measurement = (double*)user_data;
+ auto* measurement = static_cast<double*>(user_data);
double x = measurement[0];
double y = measurement[1];
double m = parameters[0][0];
double c = parameters[1][0];
residuals[0] = y - exp(m * x + c);
- if (jacobians == NULL) {
+ if (jacobians == nullptr) {
return 1;
}
- if (jacobians[0] != NULL) {
+ if (jacobians[0] != nullptr) {
jacobians[0][0] = -x * exp(m * x + c); // dr/dm
}
- if (jacobians[1] != NULL) {
+ if (jacobians[1] != nullptr) {
jacobians[1][0] = -exp(m * x + c); // dr/dc
}
return 1;
@@ -148,8 +148,8 @@
problem,
exponential_residual, // Cost function
&data[2 * i], // Points to the (x,y) measurement
- NULL, // Loss function
- NULL, // Loss function user data
+ nullptr, // Loss function
+ nullptr, // Loss function user data
1, // Number of residuals
2, // Number of parameter blocks
parameter_sizes,
diff --git a/internal/ceres/callbacks.cc b/internal/ceres/callbacks.cc
index 0e0df9d..e6e0644 100644
--- a/internal/ceres/callbacks.cc
+++ b/internal/ceres/callbacks.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,25 +30,24 @@
#include "ceres/callbacks.h"
+#include <algorithm>
#include <iostream> // NO LINT
+#include <string>
#include "ceres/program.h"
#include "ceres/stringprintf.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::string;
+namespace ceres::internal {
StateUpdatingCallback::StateUpdatingCallback(Program* program,
double* parameters)
: program_(program), parameters_(parameters) {}
-StateUpdatingCallback::~StateUpdatingCallback() {}
+StateUpdatingCallback::~StateUpdatingCallback() = default;
CallbackReturnType StateUpdatingCallback::operator()(
- const IterationSummary& summary) {
+ const IterationSummary& /*summary*/) {
program_->StateVectorToParameterBlocks(parameters_);
program_->CopyParameterBlockStateToUserState();
return SOLVER_CONTINUE;
@@ -64,14 +63,12 @@
user_parameters_(user_parameters) {}
GradientProblemSolverStateUpdatingCallback::
- ~GradientProblemSolverStateUpdatingCallback() {}
+ ~GradientProblemSolverStateUpdatingCallback() = default;
CallbackReturnType GradientProblemSolverStateUpdatingCallback::operator()(
const IterationSummary& summary) {
if (summary.step_is_successful) {
- std::copy(internal_parameters_,
- internal_parameters_ + num_parameters_,
- user_parameters_);
+ std::copy_n(internal_parameters_, num_parameters_, user_parameters_);
}
return SOLVER_CONTINUE;
}
@@ -80,44 +77,42 @@
const bool log_to_stdout)
: minimizer_type(minimizer_type), log_to_stdout_(log_to_stdout) {}
-LoggingCallback::~LoggingCallback() {}
+LoggingCallback::~LoggingCallback() = default;
CallbackReturnType LoggingCallback::operator()(
const IterationSummary& summary) {
- string output;
+ std::string output;
if (minimizer_type == LINE_SEARCH) {
- const char* kReportRowFormat =
- "% 4d: f:% 8e d:% 3.2e g:% 3.2e h:% 3.2e "
- "s:% 3.2e e:% 3d it:% 3.2e tt:% 3.2e";
- output = StringPrintf(kReportRowFormat,
- summary.iteration,
- summary.cost,
- summary.cost_change,
- summary.gradient_max_norm,
- summary.step_norm,
- summary.step_size,
- summary.line_search_function_evaluations,
- summary.iteration_time_in_seconds,
- summary.cumulative_time_in_seconds);
+ output = StringPrintf(
+ "% 4d: f:% 8e d:% 3.2e g:% 3.2e h:% 3.2e s:% 3.2e e:% 3d it:% 3.2e "
+ "tt:% 3.2e",
+ summary.iteration,
+ summary.cost,
+ summary.cost_change,
+ summary.gradient_max_norm,
+ summary.step_norm,
+ summary.step_size,
+ summary.line_search_function_evaluations,
+ summary.iteration_time_in_seconds,
+ summary.cumulative_time_in_seconds);
} else if (minimizer_type == TRUST_REGION) {
// clang-format off
if (summary.iteration == 0) {
output = "iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time\n"; // NOLINT
}
- const char* kReportRowFormat =
- "% 4d % 8e % 3.2e % 3.2e % 3.2e % 3.2e % 3.2e % 4d % 3.2e % 3.2e"; // NOLINT
- // clang-format on
- output += StringPrintf(kReportRowFormat,
- summary.iteration,
- summary.cost,
- summary.cost_change,
- summary.gradient_max_norm,
- summary.step_norm,
- summary.relative_decrease,
- summary.trust_region_radius,
- summary.linear_solver_iterations,
- summary.iteration_time_in_seconds,
- summary.cumulative_time_in_seconds);
+ output += StringPrintf(
+ "% 4d % 8e % 3.2e % 3.2e % 3.2e % 3.2e % 3.2e % 4d % 3.2e % 3.2e", // NOLINT
+ // clang-format on
+ summary.iteration,
+ summary.cost,
+ summary.cost_change,
+ summary.gradient_max_norm,
+ summary.step_norm,
+ summary.relative_decrease,
+ summary.trust_region_radius,
+ summary.linear_solver_iterations,
+ summary.iteration_time_in_seconds,
+ summary.cumulative_time_in_seconds);
} else {
LOG(FATAL) << "Unknown minimizer type.";
}
@@ -130,5 +125,4 @@
return SOLVER_CONTINUE;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
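LoggingCallback consumes the same IterationSummary that user code can observe through the public callback interface. A small sketch using only the public Ceres API (Solver::Options::callbacks holds raw pointers to user-owned callbacks):

// Sketch of a user-side iteration callback; uses only the public API.
#include <iostream>

#include "ceres/iteration_callback.h"
#include "ceres/solver.h"

class CostPrinter final : public ceres::IterationCallback {
 public:
  ceres::CallbackReturnType operator()(
      const ceres::IterationSummary& summary) override {
    std::cout << "iter " << summary.iteration << " cost " << summary.cost
              << " |g| " << summary.gradient_max_norm << "\n";
    return ceres::SOLVER_CONTINUE;
  }
};

// Registration before solving:
//   CostPrinter printer;
//   ceres::Solver::Options options;
//   options.callbacks.push_back(&printer);
//   options.update_state_every_iteration = true;  // if the callback reads x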
diff --git a/internal/ceres/callbacks.h b/internal/ceres/callbacks.h
index 47112b8..d3a7657 100644
--- a/internal/ceres/callbacks.h
+++ b/internal/ceres/callbacks.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,20 +33,19 @@
#include <string>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/iteration_callback.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class Program;
// Callback for updating the externally visible state of parameter
// blocks.
-class StateUpdatingCallback : public IterationCallback {
+class CERES_NO_EXPORT StateUpdatingCallback final : public IterationCallback {
public:
StateUpdatingCallback(Program* program, double* parameters);
- virtual ~StateUpdatingCallback();
+ ~StateUpdatingCallback() override;
CallbackReturnType operator()(const IterationSummary& summary) final;
private:
@@ -56,12 +55,13 @@
// Callback for updating the externally visible state of the
// parameters vector for GradientProblemSolver.
-class GradientProblemSolverStateUpdatingCallback : public IterationCallback {
+class CERES_NO_EXPORT GradientProblemSolverStateUpdatingCallback final
+ : public IterationCallback {
public:
GradientProblemSolverStateUpdatingCallback(int num_parameters,
const double* internal_parameters,
double* user_parameters);
- virtual ~GradientProblemSolverStateUpdatingCallback();
+ ~GradientProblemSolverStateUpdatingCallback() override;
CallbackReturnType operator()(const IterationSummary& summary) final;
private:
@@ -72,10 +72,10 @@
// Callback for logging the state of the minimizer to STDERR or
// STDOUT depending on the user's preferences and logging level.
-class LoggingCallback : public IterationCallback {
+class CERES_NO_EXPORT LoggingCallback final : public IterationCallback {
public:
LoggingCallback(MinimizerType minimizer_type, bool log_to_stdout);
- virtual ~LoggingCallback();
+ ~LoggingCallback() override;
CallbackReturnType operator()(const IterationSummary& summary) final;
private:
@@ -83,7 +83,6 @@
const bool log_to_stdout_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_CALLBACKS_H_
diff --git a/internal/ceres/canonical_views_clustering.cc b/internal/ceres/canonical_views_clustering.cc
index c193735..d74e570 100644
--- a/internal/ceres/canonical_views_clustering.cc
+++ b/internal/ceres/canonical_views_clustering.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,23 +33,20 @@
#include <unordered_map>
#include <unordered_set>
+#include <vector>
#include "ceres/graph.h"
+#include "ceres/internal/export.h"
#include "ceres/map_util.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::vector;
+using IntMap = std::unordered_map<int, int>;
+using IntSet = std::unordered_set<int>;
-typedef std::unordered_map<int, int> IntMap;
-typedef std::unordered_set<int> IntSet;
-
-class CanonicalViewsClustering {
+class CERES_NO_EXPORT CanonicalViewsClustering {
public:
- CanonicalViewsClustering() {}
-
// Compute the canonical views clustering of the vertices of the
// graph. centers will contain the vertices that are the identified
// as the canonical views/cluster centers, and membership is a map
@@ -60,15 +57,15 @@
// are assigned to a cluster with id = kInvalidClusterId.
void ComputeClustering(const CanonicalViewsClusteringOptions& options,
const WeightedGraph<int>& graph,
- vector<int>* centers,
+ std::vector<int>* centers,
IntMap* membership);
private:
void FindValidViews(IntSet* valid_views) const;
- double ComputeClusteringQualityDifference(const int candidate,
- const vector<int>& centers) const;
+ double ComputeClusteringQualityDifference(
+ int candidate, const std::vector<int>& centers) const;
void UpdateCanonicalViewAssignments(const int canonical_view);
- void ComputeClusterMembership(const vector<int>& centers,
+ void ComputeClusterMembership(const std::vector<int>& centers,
IntMap* membership) const;
CanonicalViewsClusteringOptions options_;
@@ -83,20 +80,20 @@
void ComputeCanonicalViewsClustering(
const CanonicalViewsClusteringOptions& options,
const WeightedGraph<int>& graph,
- vector<int>* centers,
+ std::vector<int>* centers,
IntMap* membership) {
- time_t start_time = time(NULL);
+ time_t start_time = time(nullptr);
CanonicalViewsClustering cv;
cv.ComputeClustering(options, graph, centers, membership);
VLOG(2) << "Canonical views clustering time (secs): "
- << time(NULL) - start_time;
+ << time(nullptr) - start_time;
}
// Implementation of CanonicalViewsClustering
void CanonicalViewsClustering::ComputeClustering(
const CanonicalViewsClusteringOptions& options,
const WeightedGraph<int>& graph,
- vector<int>* centers,
+ std::vector<int>* centers,
IntMap* membership) {
options_ = options;
CHECK(centers != nullptr);
@@ -107,7 +104,7 @@
IntSet valid_views;
FindValidViews(&valid_views);
- while (valid_views.size() > 0) {
+ while (!valid_views.empty()) {
// Find the next best canonical view.
double best_difference = -std::numeric_limits<double>::max();
int best_view = 0;
@@ -152,7 +149,7 @@
// Computes the difference in the quality score if 'candidate' were
// added to the set of canonical views.
double CanonicalViewsClustering::ComputeClusteringQualityDifference(
- const int candidate, const vector<int>& centers) const {
+ const int candidate, const std::vector<int>& centers) const {
// View score.
double difference =
options_.view_score_weight * graph_->VertexWeight(candidate);
@@ -174,9 +171,9 @@
difference -= options_.size_penalty_weight;
// Orthogonality.
- for (int i = 0; i < centers.size(); ++i) {
+ for (int center : centers) {
difference -= options_.similarity_penalty_weight *
- graph_->EdgeWeight(centers[i], candidate);
+ graph_->EdgeWeight(center, candidate);
}
return difference;
@@ -199,7 +196,7 @@
// Assign a cluster id to each view.
void CanonicalViewsClustering::ComputeClusterMembership(
- const vector<int>& centers, IntMap* membership) const {
+ const std::vector<int>& centers, IntMap* membership) const {
CHECK(membership != nullptr);
membership->clear();
@@ -223,5 +220,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/canonical_views_clustering.h b/internal/ceres/canonical_views_clustering.h
index 465233d..eb05a91 100644
--- a/internal/ceres/canonical_views_clustering.h
+++ b/internal/ceres/canonical_views_clustering.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -45,10 +45,10 @@
#include <vector>
#include "ceres/graph.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
struct CanonicalViewsClusteringOptions;
@@ -95,13 +95,13 @@
// It is possible depending on the configuration of the clustering
// algorithm that some of the vertices may not be assigned to any
// cluster. In this case they are assigned to a cluster with id = -1;
-CERES_EXPORT_INTERNAL void ComputeCanonicalViewsClustering(
+CERES_NO_EXPORT void ComputeCanonicalViewsClustering(
const CanonicalViewsClusteringOptions& options,
const WeightedGraph<int>& graph,
std::vector<int>* centers,
std::unordered_map<int, int>* membership);
-struct CERES_EXPORT_INTERNAL CanonicalViewsClusteringOptions {
+struct CERES_NO_EXPORT CanonicalViewsClusteringOptions {
// The minimum number of canonical views to compute.
int min_views = 3;
@@ -119,7 +119,8 @@
double view_score_weight = 0.0;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_CANONICAL_VIEWS_CLUSTERING_H_
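The options above weight cluster size, inter-cluster similarity, and per-view scores against each other. A hedged sketch of driving the internal entry point follows; the penalty values are illustrative rather than library defaults, and construction of the WeightedGraph<int> (views as vertices, co-visibility as edge weights) is elided since its interface is not part of this hunk.

// Sketch only: wire CanonicalViewsClusteringOptions into the clustering call.
// Penalty weights are illustrative values, not library defaults; graph
// construction is elided (see ceres/graph.h).
#include <unordered_map>
#include <vector>

#include "ceres/canonical_views_clustering.h"
#include "ceres/graph.h"

void ClusterViews(const ceres::internal::WeightedGraph<int>& view_graph) {
  ceres::internal::CanonicalViewsClusteringOptions options;
  options.min_views = 3;                     // default shown above
  options.size_penalty_weight = 5.0;         // illustrative
  options.similarity_penalty_weight = 10.0;  // illustrative
  options.view_score_weight = 0.0;           // default shown above

  std::vector<int> centers;                 // canonical view ids
  std::unordered_map<int, int> membership;  // view id -> cluster id (-1 if none)
  ceres::internal::ComputeCanonicalViewsClustering(
      options, view_graph, &centers, &membership);
}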
diff --git a/internal/ceres/canonical_views_clustering_test.cc b/internal/ceres/canonical_views_clustering_test.cc
index 0593d65..fa79582 100644
--- a/internal/ceres/canonical_views_clustering_test.cc
+++ b/internal/ceres/canonical_views_clustering_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,8 +36,7 @@
#include "ceres/graph.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
const int kVertexIds[] = {0, 1, 2, 3};
class CanonicalViewsTest : public ::testing::Test {
@@ -139,5 +138,4 @@
EXPECT_EQ(centers_[0], kVertexIds[1]);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/casts.h b/internal/ceres/casts.h
index d137071..af944520 100644
--- a/internal/ceres/casts.h
+++ b/internal/ceres/casts.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,14 +32,13 @@
#define CERES_INTERNAL_CASTS_H_
#include <cassert>
-#include <cstddef> // For NULL.
namespace ceres {
// Identity metafunction.
template <class T>
struct identity_ {
- typedef T type;
+ using type = T;
};
// Use implicit_cast as a safe version of static_cast or const_cast
@@ -86,6 +85,7 @@
// if (dynamic_cast<Subclass2>(foo)) HandleASubclass2Object(foo);
// You should design the code some other way not to need this.
+// TODO(sameeragarwal): Modernize this.
template <typename To, typename From> // use like this: down_cast<T*>(foo);
inline To down_cast(From* f) { // so we only accept pointers
// Ensures that To is a sub-type of From *. This test is here only
@@ -95,11 +95,11 @@
// TODO(csilvers): This should use COMPILE_ASSERT.
if (false) {
- implicit_cast<From*, To>(NULL);
+ implicit_cast<From*, To>(nullptr);
}
// uses RTTI in dbg and fastbuild. asserts are disabled in opt builds.
- assert(f == NULL || dynamic_cast<To>(f) != NULL); // NOLINT
+ assert(f == nullptr || dynamic_cast<To>(f) != nullptr); // NOLINT
return static_cast<To>(f);
}
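A short illustration of the usage pattern documented above, with hypothetical types: in debug builds the dynamic_cast assertion fires if the claimed type is wrong, while release builds pay only for a static_cast.

// Sketch of the documented down_cast usage with hypothetical types.
#include "ceres/casts.h"

struct Shape { virtual ~Shape() = default; };
struct Circle final : Shape { double radius = 1.0; };

double RadiusOf(Shape* shape) {
  // The caller guarantees that shape really points at a Circle.
  auto* circle = ceres::down_cast<Circle*>(shape);
  return circle->radius;
}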
diff --git a/internal/ceres/cgnr_linear_operator.h b/internal/ceres/cgnr_linear_operator.h
deleted file mode 100644
index beb8bbc..0000000
--- a/internal/ceres/cgnr_linear_operator.h
+++ /dev/null
@@ -1,120 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: keir@google.com (Keir Mierle)
-
-#ifndef CERES_INTERNAL_CGNR_LINEAR_OPERATOR_H_
-#define CERES_INTERNAL_CGNR_LINEAR_OPERATOR_H_
-
-#include <algorithm>
-#include <memory>
-
-#include "ceres/internal/eigen.h"
-#include "ceres/linear_operator.h"
-
-namespace ceres {
-namespace internal {
-
-class SparseMatrix;
-
-// A linear operator which takes a matrix A and a diagonal vector D and
-// performs products of the form
-//
-// (A^T A + D^T D)x
-//
-// This is used to implement iterative general sparse linear solving with
-// conjugate gradients, where A is the Jacobian and D is a regularizing
-// parameter. A brief proof that D^T D is the correct regularizer:
-//
-// Given a regularized least squares problem:
-//
-// min ||Ax - b||^2 + ||Dx||^2
-// x
-//
-// First expand into matrix notation:
-//
-// (Ax - b)^T (Ax - b) + xD^TDx
-//
-// Then multiply out to get:
-//
-// = xA^TAx - 2b^T Ax + b^Tb + xD^TDx
-//
-// Take the derivative:
-//
-// 0 = 2A^TAx - 2A^T b + 2 D^TDx
-// 0 = A^TAx - A^T b + D^TDx
-// 0 = (A^TA + D^TD)x - A^T b
-//
-// Thus, the symmetric system we need to solve for CGNR is
-//
-// Sx = z
-//
-// with S = A^TA + D^TD
-// and z = A^T b
-//
-// Note: This class is not thread safe, since it uses some temporary storage.
-class CgnrLinearOperator : public LinearOperator {
- public:
- CgnrLinearOperator(const LinearOperator& A, const double* D)
- : A_(A), D_(D), z_(new double[A.num_rows()]) {}
- virtual ~CgnrLinearOperator() {}
-
- void RightMultiply(const double* x, double* y) const final {
- std::fill(z_.get(), z_.get() + A_.num_rows(), 0.0);
-
- // z = Ax
- A_.RightMultiply(x, z_.get());
-
- // y = y + Atz
- A_.LeftMultiply(z_.get(), y);
-
- // y = y + DtDx
- if (D_ != NULL) {
- int n = A_.num_cols();
- VectorRef(y, n).array() +=
- ConstVectorRef(D_, n).array().square() * ConstVectorRef(x, n).array();
- }
- }
-
- void LeftMultiply(const double* x, double* y) const final {
- RightMultiply(x, y);
- }
-
- int num_rows() const final { return A_.num_cols(); }
- int num_cols() const final { return A_.num_cols(); }
-
- private:
- const LinearOperator& A_;
- const double* D_;
- std::unique_ptr<double[]> z_;
-};
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_INTERNAL_CGNR_LINEAR_OPERATOR_H_
diff --git a/internal/ceres/cgnr_solver.cc b/internal/ceres/cgnr_solver.cc
index 9dba1cf..da63484 100644
--- a/internal/ceres/cgnr_solver.cc
+++ b/internal/ceres/cgnr_solver.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,20 +30,99 @@
#include "ceres/cgnr_solver.h"
+#include <memory>
+#include <utility>
+
#include "ceres/block_jacobi_preconditioner.h"
-#include "ceres/cgnr_linear_operator.h"
#include "ceres/conjugate_gradients_solver.h"
+#include "ceres/cuda_sparse_matrix.h"
+#include "ceres/cuda_vector.h"
#include "ceres/internal/eigen.h"
#include "ceres/linear_solver.h"
#include "ceres/subset_preconditioner.h"
#include "ceres/wall_time.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-CgnrSolver::CgnrSolver(const LinearSolver::Options& options)
- : options_(options) {
+// A linear operator which takes a matrix A and a diagonal vector D and
+// performs products of the form
+//
+// (A^T A + D^T D)x
+//
+// This is used to implement iterative general sparse linear solving with
+// conjugate gradients, where A is the Jacobian and D is a regularizing
+// parameter. A brief proof that D^T D is the correct regularizer:
+//
+// Given a regularized least squares problem:
+//
+// min ||Ax - b||^2 + ||Dx||^2
+// x
+//
+// First expand into matrix notation:
+//
+// (Ax - b)^T (Ax - b) + x^T D^T D x
+//
+// Then multiply out to get:
+//
+// = x^T A^T A x - 2 b^T A x + b^T b + x^T D^T D x
+//
+// Take the derivative:
+//
+// 0 = 2A^TAx - 2A^T b + 2 D^TDx
+// 0 = A^TAx - A^T b + D^TDx
+// 0 = (A^TA + D^TD)x - A^T b
+//
+// Thus, the symmetric system we need to solve for CGNR is
+//
+// Sx = z
+//
+// with S = A^TA + D^TD
+// and z = A^T b
+//
+// Note: This class is not thread safe, since it uses some temporary storage.
+class CERES_NO_EXPORT CgnrLinearOperator final
+ : public ConjugateGradientsLinearOperator<Vector> {
+ public:
+ CgnrLinearOperator(const LinearOperator& A,
+ const double* D,
+ ContextImpl* context,
+ int num_threads)
+ : A_(A),
+ D_(D),
+ z_(Vector::Zero(A.num_rows())),
+ context_(context),
+ num_threads_(num_threads) {}
+
+ void RightMultiplyAndAccumulate(const Vector& x, Vector& y) final {
+ // z = Ax
+ // y = y + Atz
+ z_.setZero();
+ A_.RightMultiplyAndAccumulate(x, z_, context_, num_threads_);
+ A_.LeftMultiplyAndAccumulate(z_, y, context_, num_threads_);
+
+ // y = y + DtDx
+ if (D_ != nullptr) {
+ int n = A_.num_cols();
+ ParallelAssign(
+ context_,
+ num_threads_,
+ y,
+ y.array() + ConstVectorRef(D_, n).array().square() * x.array());
+ }
+ }
+
+ private:
+ const LinearOperator& A_;
+ const double* D_;
+ Vector z_;
+
+ ContextImpl* context_;
+ int num_threads_;
+};
+
+CgnrSolver::CgnrSolver(LinearSolver::Options options)
+ : options_(std::move(options)) {
if (options_.preconditioner_type != JACOBI &&
options_.preconditioner_type != IDENTITY &&
options_.preconditioner_type != SUBSET) {
@@ -54,7 +133,14 @@
}
}
-CgnrSolver::~CgnrSolver() {}
+CgnrSolver::~CgnrSolver() {
+ for (int i = 0; i < 4; ++i) {
+ if (scratch_[i]) {
+ delete scratch_[i];
+ scratch_[i] = nullptr;
+ }
+ }
+}
LinearSolver::Summary CgnrSolver::SolveImpl(
BlockSparseMatrix* A,
@@ -62,48 +148,244 @@
const LinearSolver::PerSolveOptions& per_solve_options,
double* x) {
EventLogger event_logger("CgnrSolver::Solve");
-
- // Form z = Atb.
- Vector z(A->num_cols());
- z.setZero();
- A->LeftMultiply(b, z.data());
-
if (!preconditioner_) {
+ Preconditioner::Options preconditioner_options;
+ preconditioner_options.type = options_.preconditioner_type;
+ preconditioner_options.subset_preconditioner_start_row_block =
+ options_.subset_preconditioner_start_row_block;
+ preconditioner_options.sparse_linear_algebra_library_type =
+ options_.sparse_linear_algebra_library_type;
+ preconditioner_options.ordering_type = options_.ordering_type;
+ preconditioner_options.num_threads = options_.num_threads;
+ preconditioner_options.context = options_.context;
+
if (options_.preconditioner_type == JACOBI) {
- preconditioner_.reset(new BlockJacobiPreconditioner(*A));
+ preconditioner_ = std::make_unique<BlockSparseJacobiPreconditioner>(
+ preconditioner_options, *A);
} else if (options_.preconditioner_type == SUBSET) {
- Preconditioner::Options preconditioner_options;
- preconditioner_options.type = SUBSET;
- preconditioner_options.subset_preconditioner_start_row_block =
- options_.subset_preconditioner_start_row_block;
- preconditioner_options.sparse_linear_algebra_library_type =
- options_.sparse_linear_algebra_library_type;
- preconditioner_options.use_postordering = options_.use_postordering;
- preconditioner_options.num_threads = options_.num_threads;
- preconditioner_options.context = options_.context;
- preconditioner_.reset(
- new SubsetPreconditioner(preconditioner_options, *A));
+ preconditioner_ =
+ std::make_unique<SubsetPreconditioner>(preconditioner_options, *A);
+ } else {
+ preconditioner_ = std::make_unique<IdentityPreconditioner>(A->num_cols());
}
}
+ preconditioner_->Update(*A, per_solve_options.D);
- if (preconditioner_) {
- preconditioner_->Update(*A, per_solve_options.D);
+ ConjugateGradientsSolverOptions cg_options;
+ cg_options.min_num_iterations = options_.min_num_iterations;
+ cg_options.max_num_iterations = options_.max_num_iterations;
+ cg_options.residual_reset_period = options_.residual_reset_period;
+ cg_options.q_tolerance = per_solve_options.q_tolerance;
+ cg_options.r_tolerance = per_solve_options.r_tolerance;
+ cg_options.context = options_.context;
+ cg_options.num_threads = options_.num_threads;
+
+ // lhs = AtA + DtD
+ CgnrLinearOperator lhs(
+ *A, per_solve_options.D, options_.context, options_.num_threads);
+ // rhs = Atb.
+ Vector rhs(A->num_cols());
+ rhs.setZero();
+ A->LeftMultiplyAndAccumulate(
+ b, rhs.data(), options_.context, options_.num_threads);
+
+ cg_solution_ = Vector::Zero(A->num_cols());
+ for (int i = 0; i < 4; ++i) {
+ if (scratch_[i] == nullptr) {
+ scratch_[i] = new Vector(A->num_cols());
+ }
}
-
- LinearSolver::PerSolveOptions cg_per_solve_options = per_solve_options;
- cg_per_solve_options.preconditioner = preconditioner_.get();
-
- // Solve (AtA + DtD)x = z (= Atb).
- VectorRef(x, A->num_cols()).setZero();
- CgnrLinearOperator lhs(*A, per_solve_options.D);
event_logger.AddEvent("Setup");
- ConjugateGradientsSolver conjugate_gradient_solver(options_);
- LinearSolver::Summary summary =
- conjugate_gradient_solver.Solve(&lhs, z.data(), cg_per_solve_options, x);
+ LinearOperatorAdapter preconditioner(*preconditioner_);
+ auto summary = ConjugateGradientsSolver(
+ cg_options, lhs, rhs, preconditioner, scratch_, cg_solution_);
+ VectorRef(x, A->num_cols()) = cg_solution_;
event_logger.AddEvent("Solve");
return summary;
}
-} // namespace internal
-} // namespace ceres
+#ifndef CERES_NO_CUDA
+
+// A linear operator which takes a matrix A and a diagonal vector D and
+// performs products of the form
+//
+// (A^T A + D^T D)x
+//
+// This is used to implement iterative general sparse linear solving with
+// conjugate gradients, where A is the Jacobian and D is a regularizing
+// parameter. A brief proof is included in cgnr_linear_operator.h.
+class CERES_NO_EXPORT CudaCgnrLinearOperator final
+ : public ConjugateGradientsLinearOperator<CudaVector> {
+ public:
+ CudaCgnrLinearOperator(CudaSparseMatrix& A,
+ const CudaVector& D,
+ CudaVector* z)
+ : A_(A), D_(D), z_(z) {}
+
+ void RightMultiplyAndAccumulate(const CudaVector& x, CudaVector& y) final {
+ // z = Ax
+ z_->SetZero();
+ A_.RightMultiplyAndAccumulate(x, z_);
+
+ // y = y + Atz
+ // = y + AtAx
+ A_.LeftMultiplyAndAccumulate(*z_, &y);
+
+ // y = y + DtDx
+ y.DtDxpy(D_, x);
+ }
+
+ private:
+ CudaSparseMatrix& A_;
+ const CudaVector& D_;
+ CudaVector* z_ = nullptr;
+};
+
+class CERES_NO_EXPORT CudaIdentityPreconditioner final
+ : public CudaPreconditioner {
+ public:
+ void Update(const CompressedRowSparseMatrix& A, const double* D) final {}
+ void RightMultiplyAndAccumulate(const CudaVector& x, CudaVector& y) final {
+ y.Axpby(1.0, x, 1.0);
+ }
+};
+
+// This class wraps the existing CPU Jacobi preconditioner, caches the structure
+// of the block diagonal, and for each CGNR solve updates the values on the CPU
+// and then copies them over to the GPU.
+class CERES_NO_EXPORT CudaJacobiPreconditioner final
+ : public CudaPreconditioner {
+ public:
+ explicit CudaJacobiPreconditioner(Preconditioner::Options options,
+ const CompressedRowSparseMatrix& A)
+ : options_(std::move(options)),
+ cpu_preconditioner_(options_, A),
+ m_(options_.context, cpu_preconditioner_.matrix()) {}
+ ~CudaJacobiPreconditioner() = default;
+
+ void Update(const CompressedRowSparseMatrix& A, const double* D) final {
+ cpu_preconditioner_.Update(A, D);
+ m_.CopyValuesFromCpu(cpu_preconditioner_.matrix());
+ }
+
+ void RightMultiplyAndAccumulate(const CudaVector& x, CudaVector& y) final {
+ m_.RightMultiplyAndAccumulate(x, &y);
+ }
+
+ private:
+ Preconditioner::Options options_;
+ BlockCRSJacobiPreconditioner cpu_preconditioner_;
+ CudaSparseMatrix m_;
+};
+
+CudaCgnrSolver::CudaCgnrSolver(LinearSolver::Options options)
+ : options_(std::move(options)) {}
+
+CudaCgnrSolver::~CudaCgnrSolver() {
+ for (int i = 0; i < 4; ++i) {
+ if (scratch_[i]) {
+ delete scratch_[i];
+ scratch_[i] = nullptr;
+ }
+ }
+}
+
+std::unique_ptr<CudaCgnrSolver> CudaCgnrSolver::Create(
+ LinearSolver::Options options, std::string* error) {
+ CHECK(error != nullptr);
+ if (options.preconditioner_type != IDENTITY &&
+ options.preconditioner_type != JACOBI) {
+ *error =
+ "CudaCgnrSolver does not support preconditioner type " +
+ std::string(PreconditionerTypeToString(options.preconditioner_type)) +
+ ". ";
+ return nullptr;
+ }
+ CHECK(options.context->IsCudaInitialized())
+ << "CudaCgnrSolver requires CUDA initialization.";
+ auto solver = std::make_unique<CudaCgnrSolver>(options);
+ return solver;
+}
+
+void CudaCgnrSolver::CpuToGpuTransfer(const CompressedRowSparseMatrix& A,
+ const double* b,
+ const double* D) {
+ if (A_ == nullptr) {
+ // Assume structure is not cached, do an initialization and structural copy.
+ A_ = std::make_unique<CudaSparseMatrix>(options_.context, A);
+ b_ = std::make_unique<CudaVector>(options_.context, A.num_rows());
+ x_ = std::make_unique<CudaVector>(options_.context, A.num_cols());
+ Atb_ = std::make_unique<CudaVector>(options_.context, A.num_cols());
+ Ax_ = std::make_unique<CudaVector>(options_.context, A.num_rows());
+ D_ = std::make_unique<CudaVector>(options_.context, A.num_cols());
+
+ Preconditioner::Options preconditioner_options;
+ preconditioner_options.type = options_.preconditioner_type;
+ preconditioner_options.subset_preconditioner_start_row_block =
+ options_.subset_preconditioner_start_row_block;
+ preconditioner_options.sparse_linear_algebra_library_type =
+ options_.sparse_linear_algebra_library_type;
+ preconditioner_options.ordering_type = options_.ordering_type;
+ preconditioner_options.num_threads = options_.num_threads;
+ preconditioner_options.context = options_.context;
+
+ if (options_.preconditioner_type == JACOBI) {
+ preconditioner_ =
+ std::make_unique<CudaJacobiPreconditioner>(preconditioner_options, A);
+ } else {
+ preconditioner_ = std::make_unique<CudaIdentityPreconditioner>();
+ }
+ for (int i = 0; i < 4; ++i) {
+ scratch_[i] = new CudaVector(options_.context, A.num_cols());
+ }
+ } else {
+ // Assume structure is cached, do a value copy.
+ A_->CopyValuesFromCpu(A);
+ }
+ b_->CopyFromCpu(ConstVectorRef(b, A.num_rows()));
+ D_->CopyFromCpu(ConstVectorRef(D, A.num_cols()));
+}
+
+LinearSolver::Summary CudaCgnrSolver::SolveImpl(
+ CompressedRowSparseMatrix* A,
+ const double* b,
+ const LinearSolver::PerSolveOptions& per_solve_options,
+ double* x) {
+ EventLogger event_logger("CudaCgnrSolver::Solve");
+ LinearSolver::Summary summary;
+ summary.num_iterations = 0;
+ summary.termination_type = LinearSolverTerminationType::FATAL_ERROR;
+
+ CpuToGpuTransfer(*A, b, per_solve_options.D);
+ event_logger.AddEvent("CPU to GPU Transfer");
+ preconditioner_->Update(*A, per_solve_options.D);
+ event_logger.AddEvent("Preconditioner Update");
+
+ // Form z = Atb.
+ Atb_->SetZero();
+ A_->LeftMultiplyAndAccumulate(*b_, Atb_.get());
+
+ // Solve (AtA + DtD)x = z (= Atb).
+ x_->SetZero();
+ CudaCgnrLinearOperator lhs(*A_, *D_, Ax_.get());
+
+ event_logger.AddEvent("Setup");
+
+ ConjugateGradientsSolverOptions cg_options;
+ cg_options.min_num_iterations = options_.min_num_iterations;
+ cg_options.max_num_iterations = options_.max_num_iterations;
+ cg_options.residual_reset_period = options_.residual_reset_period;
+ cg_options.q_tolerance = per_solve_options.q_tolerance;
+ cg_options.r_tolerance = per_solve_options.r_tolerance;
+
+ summary = ConjugateGradientsSolver(
+ cg_options, lhs, *Atb_, *preconditioner_, scratch_, *x_);
+ x_->CopyTo(x);
+ event_logger.AddEvent("Solve");
+ return summary;
+}
+
+#endif // CERES_NO_CUDA
+
+} // namespace ceres::internal
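Both the CPU and CUDA operators above implement products with A^T A + D^T D, so CGNR is solving the normal equations of the regularized problem min_x ||Ax - b||^2 + ||Dx||^2. A dense Eigen sketch, purely illustrative and using no Ceres types, checks that the normal-equations solution matches the stacked least-squares solution.

// Dense illustrative check of the CGNR normal equations, using Eigen only:
// solving (A^T A + D^T D) x = A^T b must agree with the least-squares
// solution of the stacked system [A; D] x = [b; 0].
#include <iostream>

#include "Eigen/Dense"

int main() {
  Eigen::MatrixXd A(3, 2);
  A << 1, 2,
       3, 4,
       5, 6;
  Eigen::VectorXd b(3);
  b << 1, -1, 2;
  Eigen::VectorXd d(2);
  d << 0.5, 0.25;  // Diagonal of the regularizer D.

  // Normal-equations route: (A^T A + D^T D) x = A^T b.
  Eigen::MatrixXd DtD =
      Eigen::MatrixXd(d.array().square().matrix().asDiagonal());
  Eigen::VectorXd x_normal =
      (A.transpose() * A + DtD).ldlt().solve(A.transpose() * b);

  // Stacked least-squares route: minimize ||A x - b||^2 + ||D x||^2.
  Eigen::MatrixXd stacked(5, 2);
  stacked.topRows(3) = A;
  stacked.bottomRows(2) = Eigen::MatrixXd(d.asDiagonal());
  Eigen::VectorXd rhs = Eigen::VectorXd::Zero(5);
  rhs.head(3) = b;
  Eigen::VectorXd x_stacked = stacked.colPivHouseholderQr().solve(rhs);

  // The two solutions agree up to round-off.
  std::cout << (x_normal - x_stacked).norm() << "\n";
  return 0;
}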
diff --git a/internal/ceres/cgnr_solver.h b/internal/ceres/cgnr_solver.h
index bc701c0..c634538 100644
--- a/internal/ceres/cgnr_solver.h
+++ b/internal/ceres/cgnr_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,10 +33,13 @@
#include <memory>
+#include "ceres/conjugate_gradients_solver.h"
+#include "ceres/cuda_sparse_matrix.h"
+#include "ceres/cuda_vector.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class Preconditioner;
@@ -49,12 +52,12 @@
//
// as required for solving for x in the least squares sense. Currently only
// block diagonal preconditioning is supported.
-class CgnrSolver : public BlockSparseMatrixSolver {
+class CERES_NO_EXPORT CgnrSolver final : public BlockSparseMatrixSolver {
public:
- explicit CgnrSolver(const LinearSolver::Options& options);
+ explicit CgnrSolver(LinearSolver::Options options);
CgnrSolver(const CgnrSolver&) = delete;
void operator=(const CgnrSolver&) = delete;
- virtual ~CgnrSolver();
+ ~CgnrSolver() override;
Summary SolveImpl(BlockSparseMatrix* A,
const double* b,
@@ -64,9 +67,50 @@
private:
const LinearSolver::Options options_;
std::unique_ptr<Preconditioner> preconditioner_;
+ Vector cg_solution_;
+ Vector* scratch_[4] = {nullptr, nullptr, nullptr, nullptr};
};
-} // namespace internal
-} // namespace ceres
+#ifndef CERES_NO_CUDA
+class CudaPreconditioner : public ConjugateGradientsLinearOperator<CudaVector> {
+ public:
+ virtual void Update(const CompressedRowSparseMatrix& A, const double* D) = 0;
+ virtual ~CudaPreconditioner() = default;
+};
+
+// A Cuda-accelerated version of CgnrSolver.
+// This solver assumes that the sparsity structure of A remains constant for its
+// lifetime.
+class CERES_NO_EXPORT CudaCgnrSolver final
+ : public CompressedRowSparseMatrixSolver {
+ public:
+ explicit CudaCgnrSolver(LinearSolver::Options options);
+ static std::unique_ptr<CudaCgnrSolver> Create(LinearSolver::Options options,
+ std::string* error);
+ ~CudaCgnrSolver() override;
+
+ Summary SolveImpl(CompressedRowSparseMatrix* A,
+ const double* b,
+ const LinearSolver::PerSolveOptions& per_solve_options,
+ double* x) final;
+
+ private:
+ void CpuToGpuTransfer(const CompressedRowSparseMatrix& A,
+ const double* b,
+ const double* D);
+
+ LinearSolver::Options options_;
+ std::unique_ptr<CudaSparseMatrix> A_;
+ std::unique_ptr<CudaVector> b_;
+ std::unique_ptr<CudaVector> x_;
+ std::unique_ptr<CudaVector> Atb_;
+ std::unique_ptr<CudaVector> Ax_;
+ std::unique_ptr<CudaVector> D_;
+ std::unique_ptr<CudaPreconditioner> preconditioner_;
+ CudaVector* scratch_[4] = {nullptr, nullptr, nullptr, nullptr};
+};
+#endif // CERES_NO_CUDA
+
+} // namespace ceres::internal
#endif // CERES_INTERNAL_CGNR_SOLVER_H_
diff --git a/internal/ceres/compressed_col_sparse_matrix_utils.cc b/internal/ceres/compressed_col_sparse_matrix_utils.cc
index e1f6bb8..5a25e31 100644
--- a/internal/ceres/compressed_col_sparse_matrix_utils.cc
+++ b/internal/ceres/compressed_col_sparse_matrix_utils.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,33 +33,24 @@
#include <algorithm>
#include <vector>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::vector;
-
-void CompressedColumnScalarMatrixToBlockMatrix(const int* scalar_rows,
- const int* scalar_cols,
- const vector<int>& row_blocks,
- const vector<int>& col_blocks,
- vector<int>* block_rows,
- vector<int>* block_cols) {
+void CompressedColumnScalarMatrixToBlockMatrix(
+ const int* scalar_rows,
+ const int* scalar_cols,
+ const std::vector<Block>& row_blocks,
+ const std::vector<Block>& col_blocks,
+ std::vector<int>* block_rows,
+ std::vector<int>* block_cols) {
CHECK(block_rows != nullptr);
CHECK(block_cols != nullptr);
block_rows->clear();
block_cols->clear();
- const int num_row_blocks = row_blocks.size();
const int num_col_blocks = col_blocks.size();
- vector<int> row_block_starts(num_row_blocks);
- for (int i = 0, cursor = 0; i < num_row_blocks; ++i) {
- row_block_starts[i] = cursor;
- cursor += row_blocks[i];
- }
-
// This loop extracts the block sparsity of the scalar sparse matrix
// It does so by iterating over the columns, but only considering
// the columns corresponding to the first element of each column
@@ -71,52 +62,46 @@
for (int col_block = 0; col_block < num_col_blocks; ++col_block) {
int column_size = 0;
for (int idx = scalar_cols[c]; idx < scalar_cols[c + 1]; ++idx) {
- vector<int>::const_iterator it = std::lower_bound(
- row_block_starts.begin(), row_block_starts.end(), scalar_rows[idx]);
- // Since we are using lower_bound, it will return the row id
- // where the row block starts. For everything but the first row
- // of the block, where these values will be the same, we can
- // skip, as we only need the first row to detect the presence of
- // the block.
+ auto it = std::lower_bound(row_blocks.begin(),
+ row_blocks.end(),
+ scalar_rows[idx],
+ [](const Block& block, double value) {
+ return block.position < value;
+ });
+ // Since we are using lower_bound, it will return the row id where the row
+ // block starts. For everything but the first row of the block, where
+ // these values will be the same, we can skip, as we only need the first
+ // row to detect the presence of the block.
//
- // For rows all but the first row in the last row block,
- // lower_bound will return row_block_starts.end(), but those can
- // be skipped like the rows in other row blocks too.
- if (it == row_block_starts.end() || *it != scalar_rows[idx]) {
+ // For rows all but the first row in the last row block, lower_bound will
+ // return row_blocks_.end(), but those can be skipped like the rows in
+      // return row_blocks.end(), but those can be skipped like the rows in
+ if (it == row_blocks.end() || it->position != scalar_rows[idx]) {
continue;
}
- block_rows->push_back(it - row_block_starts.begin());
+ block_rows->push_back(it - row_blocks.begin());
++column_size;
}
block_cols->push_back(block_cols->back() + column_size);
- c += col_blocks[col_block];
+ c += col_blocks[col_block].size;
}
}
-void BlockOrderingToScalarOrdering(const vector<int>& blocks,
- const vector<int>& block_ordering,
- vector<int>* scalar_ordering) {
+void BlockOrderingToScalarOrdering(const std::vector<Block>& blocks,
+ const std::vector<int>& block_ordering,
+ std::vector<int>* scalar_ordering) {
CHECK_EQ(blocks.size(), block_ordering.size());
const int num_blocks = blocks.size();
-
- // block_starts = [0, block1, block1 + block2 ..]
- vector<int> block_starts(num_blocks);
- for (int i = 0, cursor = 0; i < num_blocks; ++i) {
- block_starts[i] = cursor;
- cursor += blocks[i];
- }
-
- scalar_ordering->resize(block_starts.back() + blocks.back());
+ scalar_ordering->resize(NumScalarEntries(blocks));
int cursor = 0;
for (int i = 0; i < num_blocks; ++i) {
const int block_id = block_ordering[i];
- const int block_size = blocks[block_id];
- int block_position = block_starts[block_id];
+ const int block_size = blocks[block_id].size;
+ int block_position = blocks[block_id].position;
for (int j = 0; j < block_size; ++j) {
(*scalar_ordering)[cursor++] = block_position++;
}
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/compressed_col_sparse_matrix_utils.h b/internal/ceres/compressed_col_sparse_matrix_utils.h
index d442e1a..e9e067f 100644
--- a/internal/ceres/compressed_col_sparse_matrix_utils.h
+++ b/internal/ceres/compressed_col_sparse_matrix_utils.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,12 +31,14 @@
#ifndef CERES_INTERNAL_COMPRESSED_COL_SPARSE_MATRIX_UTILS_H_
#define CERES_INTERNAL_COMPRESSED_COL_SPARSE_MATRIX_UTILS_H_
+#include <algorithm>
#include <vector>
-#include "ceres/internal/port.h"
+#include "ceres/block_structure.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Extract the block sparsity pattern of the scalar compressed columns
// matrix and return it in compressed column form. The compressed
@@ -48,19 +50,19 @@
// and column block j, then it is expected that A contains at least
// one non-zero entry corresponding to the top left entry of c_ij,
// as that entry is used to detect the presence of a non-zero c_ij.
-CERES_EXPORT_INTERNAL void CompressedColumnScalarMatrixToBlockMatrix(
+CERES_NO_EXPORT void CompressedColumnScalarMatrixToBlockMatrix(
const int* scalar_rows,
const int* scalar_cols,
- const std::vector<int>& row_blocks,
- const std::vector<int>& col_blocks,
+ const std::vector<Block>& row_blocks,
+ const std::vector<Block>& col_blocks,
std::vector<int>* block_rows,
std::vector<int>* block_cols);
// Given a set of blocks and a permutation of these blocks, compute
// the corresponding "scalar" ordering, where the scalar ordering of
// size sum(blocks).
-CERES_EXPORT_INTERNAL void BlockOrderingToScalarOrdering(
- const std::vector<int>& blocks,
+CERES_NO_EXPORT void BlockOrderingToScalarOrdering(
+ const std::vector<Block>& blocks,
const std::vector<int>& block_ordering,
std::vector<int>* scalar_ordering);
@@ -139,7 +141,8 @@
SolveUpperTriangularInPlace(num_cols, rows, cols, values, solution);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_COMPRESSED_COL_SPARSE_MATRIX_UTILS_H_
diff --git a/internal/ceres/compressed_col_sparse_matrix_utils_test.cc b/internal/ceres/compressed_col_sparse_matrix_utils_test.cc
index 339c064..b1c8219 100644
--- a/internal/ceres/compressed_col_sparse_matrix_utils_test.cc
+++ b/internal/ceres/compressed_col_sparse_matrix_utils_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,53 +32,30 @@
#include <algorithm>
#include <numeric>
+#include <vector>
#include "Eigen/SparseCore"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
TEST(_, BlockPermutationToScalarPermutation) {
- vector<int> blocks;
// Block structure
// 0 --1- ---2--- ---3--- 4
// [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
- blocks.push_back(1);
- blocks.push_back(2);
- blocks.push_back(3);
- blocks.push_back(3);
- blocks.push_back(1);
-
+ std::vector<Block> blocks{{1, 0}, {2, 1}, {3, 3}, {3, 6}, {1, 9}};
// Block ordering
// [1, 0, 2, 4, 5]
- vector<int> block_ordering;
- block_ordering.push_back(1);
- block_ordering.push_back(0);
- block_ordering.push_back(2);
- block_ordering.push_back(4);
- block_ordering.push_back(3);
+ std::vector<int> block_ordering{{1, 0, 2, 4, 3}};
// Expected ordering
// [1, 2, 0, 3, 4, 5, 9, 6, 7, 8]
- vector<int> expected_scalar_ordering;
- expected_scalar_ordering.push_back(1);
- expected_scalar_ordering.push_back(2);
- expected_scalar_ordering.push_back(0);
- expected_scalar_ordering.push_back(3);
- expected_scalar_ordering.push_back(4);
- expected_scalar_ordering.push_back(5);
- expected_scalar_ordering.push_back(9);
- expected_scalar_ordering.push_back(6);
- expected_scalar_ordering.push_back(7);
- expected_scalar_ordering.push_back(8);
+ std::vector<int> expected_scalar_ordering{{1, 2, 0, 3, 4, 5, 9, 6, 7, 8}};
- vector<int> scalar_ordering;
+ std::vector<int> scalar_ordering;
BlockOrderingToScalarOrdering(blocks, block_ordering, &scalar_ordering);
EXPECT_EQ(scalar_ordering.size(), expected_scalar_ordering.size());
for (int i = 0; i < expected_scalar_ordering.size(); ++i) {
@@ -86,19 +63,17 @@
}
}
-static void FillBlock(const vector<int>& row_blocks,
- const vector<int>& col_blocks,
+static void FillBlock(const std::vector<Block>& row_blocks,
+ const std::vector<Block>& col_blocks,
const int row_block_id,
const int col_block_id,
- vector<Eigen::Triplet<double>>* triplets) {
- const int row_offset =
- std::accumulate(&row_blocks[0], &row_blocks[row_block_id], 0);
- const int col_offset =
- std::accumulate(&col_blocks[0], &col_blocks[col_block_id], 0);
- for (int r = 0; r < row_blocks[row_block_id]; ++r) {
- for (int c = 0; c < col_blocks[col_block_id]; ++c) {
+ std::vector<Eigen::Triplet<double>>* triplets) {
+ for (int r = 0; r < row_blocks[row_block_id].size; ++r) {
+ for (int c = 0; c < col_blocks[col_block_id].size; ++c) {
triplets->push_back(
- Eigen::Triplet<double>(row_offset + r, col_offset + c, 1.0));
+ Eigen::Triplet<double>(row_blocks[row_block_id].position + r,
+ col_blocks[col_block_id].position + c,
+ 1.0));
}
}
}
@@ -112,23 +87,13 @@
// [2] x x
// num_nonzeros = 1 + 3 + 4 + 4 + 1 + 2 = 15
- vector<int> col_blocks;
- col_blocks.push_back(1);
- col_blocks.push_back(2);
- col_blocks.push_back(3);
- col_blocks.push_back(2);
+ std::vector<Block> col_blocks{{1, 0}, {2, 1}, {3, 3}, {2, 5}};
+ const int num_cols = NumScalarEntries(col_blocks);
- vector<int> row_blocks;
- row_blocks.push_back(1);
- row_blocks.push_back(2);
- row_blocks.push_back(2);
+ std::vector<Block> row_blocks{{1, 0}, {2, 1}, {2, 3}};
+ const int num_rows = NumScalarEntries(row_blocks);
- const int num_rows =
- std::accumulate(row_blocks.begin(), row_blocks.end(), 0.0);
- const int num_cols =
- std::accumulate(col_blocks.begin(), col_blocks.end(), 0.0);
-
- vector<Eigen::Triplet<double>> triplets;
+ std::vector<Eigen::Triplet<double>> triplets;
FillBlock(row_blocks, col_blocks, 0, 0, &triplets);
FillBlock(row_blocks, col_blocks, 2, 0, &triplets);
FillBlock(row_blocks, col_blocks, 1, 1, &triplets);
@@ -138,23 +103,11 @@
Eigen::SparseMatrix<double> sparse_matrix(num_rows, num_cols);
sparse_matrix.setFromTriplets(triplets.begin(), triplets.end());
- vector<int> expected_compressed_block_rows;
- expected_compressed_block_rows.push_back(0);
- expected_compressed_block_rows.push_back(2);
- expected_compressed_block_rows.push_back(1);
- expected_compressed_block_rows.push_back(2);
- expected_compressed_block_rows.push_back(0);
- expected_compressed_block_rows.push_back(1);
+ const std::vector<int> expected_compressed_block_rows{{0, 2, 1, 2, 0, 1}};
+ const std::vector<int> expected_compressed_block_cols{{0, 2, 4, 5, 6}};
- vector<int> expected_compressed_block_cols;
- expected_compressed_block_cols.push_back(0);
- expected_compressed_block_cols.push_back(2);
- expected_compressed_block_cols.push_back(4);
- expected_compressed_block_cols.push_back(5);
- expected_compressed_block_cols.push_back(6);
-
- vector<int> compressed_block_rows;
- vector<int> compressed_block_cols;
+ std::vector<int> compressed_block_rows;
+ std::vector<int> compressed_block_cols;
CompressedColumnScalarMatrixToBlockMatrix(sparse_matrix.innerIndexPtr(),
sparse_matrix.outerIndexPtr(),
row_blocks,
@@ -168,47 +121,26 @@
class SolveUpperTriangularTest : public ::testing::Test {
protected:
- void SetUp() {
- cols.resize(5);
- rows.resize(7);
- values.resize(7);
+ const std::vector<int>& cols() const { return cols_; }
+ const std::vector<int>& rows() const { return rows_; }
+ const std::vector<double>& values() const { return values_; }
- cols[0] = 0;
- rows[0] = 0;
- values[0] = 0.50754;
-
- cols[1] = 1;
- rows[1] = 1;
- values[1] = 0.80483;
-
- cols[2] = 2;
- rows[2] = 1;
- values[2] = 0.14120;
- rows[3] = 2;
- values[3] = 0.3;
-
- cols[3] = 4;
- rows[4] = 0;
- values[4] = 0.77696;
- rows[5] = 1;
- values[5] = 0.41860;
- rows[6] = 3;
- values[6] = 0.88979;
-
- cols[4] = 7;
- }
-
- vector<int> cols;
- vector<int> rows;
- vector<double> values;
+ private:
+ const std::vector<int> cols_ = {0, 1, 2, 4, 7};
+ const std::vector<int> rows_ = {0, 1, 1, 2, 0, 1, 3};
+ const std::vector<double> values_ = {
+ 0.50754, 0.80483, 0.14120, 0.3, 0.77696, 0.41860, 0.88979};
};
TEST_F(SolveUpperTriangularTest, SolveInPlace) {
double rhs_and_solution[] = {1.0, 1.0, 2.0, 2.0};
const double expected[] = {-1.4706, -1.0962, 6.6667, 2.2477};
- SolveUpperTriangularInPlace<int>(
- cols.size() - 1, &rows[0], &cols[0], &values[0], rhs_and_solution);
+ SolveUpperTriangularInPlace<int>(cols().size() - 1,
+ rows().data(),
+ cols().data(),
+ values().data(),
+ rhs_and_solution);
for (int i = 0; i < 4; ++i) {
EXPECT_NEAR(rhs_and_solution[i], expected[i], 1e-4) << i;
@@ -219,8 +151,11 @@
double rhs_and_solution[] = {1.0, 1.0, 2.0, 2.0};
double expected[] = {1.970288, 1.242498, 6.081864, -0.057255};
- SolveUpperTriangularTransposeInPlace<int>(
- cols.size() - 1, &rows[0], &cols[0], &values[0], rhs_and_solution);
+ SolveUpperTriangularTransposeInPlace<int>(cols().size() - 1,
+ rows().data(),
+ cols().data(),
+ values().data(),
+ rhs_and_solution);
for (int i = 0; i < 4; ++i) {
EXPECT_NEAR(rhs_and_solution[i], expected[i], 1e-4) << i;
@@ -237,13 +172,16 @@
// clang-format on
for (int i = 0; i < 4; ++i) {
- SolveRTRWithSparseRHS<int>(
- cols.size() - 1, &rows[0], &cols[0], &values[0], i, solution);
+ SolveRTRWithSparseRHS<int>(cols().size() - 1,
+ rows().data(),
+ cols().data(),
+ values().data(),
+ i,
+ solution);
for (int j = 0; j < 4; ++j) {
EXPECT_NEAR(solution[j], expected[4 * i + j], 1e-3) << i;
}
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/compressed_row_jacobian_writer.cc b/internal/ceres/compressed_row_jacobian_writer.cc
index 8e7e3e7..007346d 100644
--- a/internal/ceres/compressed_row_jacobian_writer.cc
+++ b/internal/ceres/compressed_row_jacobian_writer.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,7 +30,10 @@
#include "ceres/compressed_row_jacobian_writer.h"
+#include <algorithm>
#include <iterator>
+#include <memory>
+#include <string>
#include <utility>
#include <vector>
@@ -41,65 +44,73 @@
#include "ceres/residual_block.h"
#include "ceres/scratch_evaluate_preparer.h"
-namespace ceres {
-namespace internal {
-
-using std::adjacent_find;
-using std::make_pair;
-using std::pair;
-using std::vector;
-
+namespace ceres::internal {
void CompressedRowJacobianWriter::PopulateJacobianRowAndColumnBlockVectors(
const Program* program, CompressedRowSparseMatrix* jacobian) {
- const vector<ParameterBlock*>& parameter_blocks = program->parameter_blocks();
- vector<int>& col_blocks = *(jacobian->mutable_col_blocks());
+ const auto& parameter_blocks = program->parameter_blocks();
+ auto& col_blocks = *(jacobian->mutable_col_blocks());
col_blocks.resize(parameter_blocks.size());
+ int col_pos = 0;
for (int i = 0; i < parameter_blocks.size(); ++i) {
- col_blocks[i] = parameter_blocks[i]->LocalSize();
+ col_blocks[i].size = parameter_blocks[i]->TangentSize();
+ col_blocks[i].position = col_pos;
+ col_pos += col_blocks[i].size;
}
- const vector<ResidualBlock*>& residual_blocks = program->residual_blocks();
- vector<int>& row_blocks = *(jacobian->mutable_row_blocks());
+ const auto& residual_blocks = program->residual_blocks();
+ auto& row_blocks = *(jacobian->mutable_row_blocks());
row_blocks.resize(residual_blocks.size());
+ int row_pos = 0;
for (int i = 0; i < residual_blocks.size(); ++i) {
- row_blocks[i] = residual_blocks[i]->NumResiduals();
+ row_blocks[i].size = residual_blocks[i]->NumResiduals();
+ row_blocks[i].position = row_pos;
+ row_pos += row_blocks[i].size;
}
}
void CompressedRowJacobianWriter::GetOrderedParameterBlocks(
const Program* program,
int residual_id,
- vector<pair<int, int>>* evaluated_jacobian_blocks) {
- const ResidualBlock* residual_block = program->residual_blocks()[residual_id];
+ std::vector<std::pair<int, int>>* evaluated_jacobian_blocks) {
+ auto residual_block = program->residual_blocks()[residual_id];
const int num_parameter_blocks = residual_block->NumParameterBlocks();
for (int j = 0; j < num_parameter_blocks; ++j) {
- const ParameterBlock* parameter_block =
- residual_block->parameter_blocks()[j];
+ auto parameter_block = residual_block->parameter_blocks()[j];
if (!parameter_block->IsConstant()) {
evaluated_jacobian_blocks->push_back(
- make_pair(parameter_block->index(), j));
+ std::make_pair(parameter_block->index(), j));
}
}
- sort(evaluated_jacobian_blocks->begin(), evaluated_jacobian_blocks->end());
+ std::sort(evaluated_jacobian_blocks->begin(),
+ evaluated_jacobian_blocks->end());
}
-SparseMatrix* CompressedRowJacobianWriter::CreateJacobian() const {
- const vector<ResidualBlock*>& residual_blocks = program_->residual_blocks();
+std::unique_ptr<SparseMatrix> CompressedRowJacobianWriter::CreateJacobian()
+ const {
+ const auto& residual_blocks = program_->residual_blocks();
- int total_num_residuals = program_->NumResiduals();
- int total_num_effective_parameters = program_->NumEffectiveParameters();
+ const int total_num_residuals = program_->NumResiduals();
+ const int total_num_effective_parameters = program_->NumEffectiveParameters();
// Count the number of jacobian nonzeros.
- int num_jacobian_nonzeros = 0;
- for (int i = 0; i < residual_blocks.size(); ++i) {
- ResidualBlock* residual_block = residual_blocks[i];
+ //
+  // We use an unsigned int here so that we can compare it to INT_MAX without
+  // triggering overflow behaviour.
+ unsigned int num_jacobian_nonzeros = total_num_effective_parameters;
+ for (auto* residual_block : residual_blocks) {
const int num_residuals = residual_block->NumResiduals();
const int num_parameter_blocks = residual_block->NumParameterBlocks();
for (int j = 0; j < num_parameter_blocks; ++j) {
- ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
+ auto parameter_block = residual_block->parameter_blocks()[j];
if (!parameter_block->IsConstant()) {
- num_jacobian_nonzeros += num_residuals * parameter_block->LocalSize();
+ num_jacobian_nonzeros += num_residuals * parameter_block->TangentSize();
+ if (num_jacobian_nonzeros > std::numeric_limits<int>::max()) {
+ LOG(ERROR) << "Unable to create Jacobian matrix: Too many entries in "
+ "the Jacobian matrix. num_jacobian_nonzeros = "
+ << num_jacobian_nonzeros;
+ return nullptr;
+ }
}
}
}
@@ -108,41 +119,41 @@
// Allocate more space than needed to store the jacobian so that when the LM
// algorithm adds the diagonal, no reallocation is necessary. This reduces
// peak memory usage significantly.
- CompressedRowSparseMatrix* jacobian = new CompressedRowSparseMatrix(
+ auto jacobian = std::make_unique<CompressedRowSparseMatrix>(
total_num_residuals,
total_num_effective_parameters,
- num_jacobian_nonzeros + total_num_effective_parameters);
+ static_cast<int>(num_jacobian_nonzeros));
- // At this stage, the CompressedRowSparseMatrix is an invalid state. But this
- // seems to be the only way to construct it without doing a memory copy.
+  // At this stage, the CompressedRowSparseMatrix is in an invalid state. But
+  // this seems to be the only way to construct it without doing a memory
+  // copy.
int* rows = jacobian->mutable_rows();
int* cols = jacobian->mutable_cols();
int row_pos = 0;
rows[0] = 0;
- for (int i = 0; i < residual_blocks.size(); ++i) {
- const ResidualBlock* residual_block = residual_blocks[i];
+ for (auto* residual_block : residual_blocks) {
const int num_parameter_blocks = residual_block->NumParameterBlocks();
// Count the number of derivatives for a row of this residual block and
// build a list of active parameter block indices.
int num_derivatives = 0;
- vector<int> parameter_indices;
+ std::vector<int> parameter_indices;
for (int j = 0; j < num_parameter_blocks; ++j) {
- ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
+ auto parameter_block = residual_block->parameter_blocks()[j];
if (!parameter_block->IsConstant()) {
parameter_indices.push_back(parameter_block->index());
- num_derivatives += parameter_block->LocalSize();
+ num_derivatives += parameter_block->TangentSize();
}
}
// Sort the parameters by their position in the state vector.
- sort(parameter_indices.begin(), parameter_indices.end());
+ std::sort(parameter_indices.begin(), parameter_indices.end());
if (adjacent_find(parameter_indices.begin(), parameter_indices.end()) !=
parameter_indices.end()) {
std::string parameter_block_description;
for (int j = 0; j < num_parameter_blocks; ++j) {
- ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
+ auto parameter_block = residual_block->parameter_blocks()[j];
parameter_block_description += parameter_block->ToString() + "\n";
}
LOG(FATAL) << "Ceres internal error: "
@@ -163,16 +174,14 @@
// parameter vector. This code mirrors that in Write(), where jacobian
// values are updated.
int col_pos = 0;
- for (int j = 0; j < parameter_indices.size(); ++j) {
- ParameterBlock* parameter_block =
- program_->parameter_blocks()[parameter_indices[j]];
- const int parameter_block_size = parameter_block->LocalSize();
+ for (int parameter_index : parameter_indices) {
+ auto parameter_block = program_->parameter_blocks()[parameter_index];
+ const int parameter_block_size = parameter_block->TangentSize();
for (int r = 0; r < num_residuals; ++r) {
// This is the position in the values array of the jacobian where this
// row of the jacobian block should go.
const int column_block_begin = rows[row_pos + r] + col_pos;
-
for (int c = 0; c < parameter_block_size; ++c) {
cols[column_block_begin + c] = parameter_block->delta_offset() + c;
}
@@ -181,9 +190,10 @@
}
row_pos += num_residuals;
}
- CHECK_EQ(num_jacobian_nonzeros, rows[total_num_residuals]);
+ CHECK_EQ(num_jacobian_nonzeros - total_num_effective_parameters,
+ rows[total_num_residuals]);
- PopulateJacobianRowAndColumnBlockVectors(program_, jacobian);
+ PopulateJacobianRowAndColumnBlockVectors(program_, jacobian.get());
return jacobian;
}
@@ -192,17 +202,15 @@
int residual_offset,
double** jacobians,
SparseMatrix* base_jacobian) {
- CompressedRowSparseMatrix* jacobian =
- down_cast<CompressedRowSparseMatrix*>(base_jacobian);
+ auto* jacobian = down_cast<CompressedRowSparseMatrix*>(base_jacobian);
double* jacobian_values = jacobian->mutable_values();
const int* jacobian_rows = jacobian->rows();
- const ResidualBlock* residual_block =
- program_->residual_blocks()[residual_id];
+ auto residual_block = program_->residual_blocks()[residual_id];
const int num_residuals = residual_block->NumResiduals();
- vector<pair<int, int>> evaluated_jacobian_blocks;
+ std::vector<std::pair<int, int>> evaluated_jacobian_blocks;
GetOrderedParameterBlocks(program_, residual_id, &evaluated_jacobian_blocks);
// Where in the current row does the jacobian for a parameter block begin.
@@ -210,11 +218,11 @@
// Iterate over the jacobian blocks in increasing order of their
// positions in the reduced parameter vector.
- for (int i = 0; i < evaluated_jacobian_blocks.size(); ++i) {
- const ParameterBlock* parameter_block =
- program_->parameter_blocks()[evaluated_jacobian_blocks[i].first];
- const int argument = evaluated_jacobian_blocks[i].second;
- const int parameter_block_size = parameter_block->LocalSize();
+ for (auto& evaluated_jacobian_block : evaluated_jacobian_blocks) {
+ auto parameter_block =
+ program_->parameter_blocks()[evaluated_jacobian_block.first];
+ const int argument = evaluated_jacobian_block.second;
+ const int parameter_block_size = parameter_block->TangentSize();
// Copy one row of the jacobian block at a time.
for (int r = 0; r < num_residuals; ++r) {
@@ -235,5 +243,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
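
The hunk above counts Jacobian nonzeros in an unsigned int and bails out once the count exceeds INT_MAX, instead of silently overflowing a signed int when sizing the CompressedRowSparseMatrix. The following standalone sketch reproduces just that guard with made-up residual/parameter sizes; FakeBlock and CountJacobianNonzeros are illustrative stand-ins, not Ceres API.

#include <cstdio>
#include <limits>
#include <vector>

// Hypothetical residual/parameter sizes standing in for Program's blocks.
struct FakeBlock {
  int num_residuals;
  std::vector<int> tangent_sizes;
};

// Returns -1 when the nonzero count would not fit in an int, mirroring the
// early-return with LOG(ERROR) in CreateJacobian().
int CountJacobianNonzeros(const std::vector<FakeBlock>& blocks,
                          int total_num_effective_parameters) {
  unsigned int num_nonzeros = total_num_effective_parameters;  // diagonal slack
  for (const auto& block : blocks) {
    for (int tangent_size : block.tangent_sizes) {
      num_nonzeros += block.num_residuals * tangent_size;
      if (num_nonzeros > std::numeric_limits<int>::max()) {
        return -1;  // too many entries; caller should bail out
      }
    }
  }
  return static_cast<int>(num_nonzeros);
}

int main() {
  std::vector<FakeBlock> blocks = {{2, {3, 3}}, {3, {6}}};
  std::printf("nnz = %d\n", CountJacobianNonzeros(blocks, 9));  // nnz = 39
}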
diff --git a/internal/ceres/compressed_row_jacobian_writer.h b/internal/ceres/compressed_row_jacobian_writer.h
index b1251ca..6fc40e9 100644
--- a/internal/ceres/compressed_row_jacobian_writer.h
+++ b/internal/ceres/compressed_row_jacobian_writer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,20 +33,21 @@
#ifndef CERES_INTERNAL_COMPRESSED_ROW_JACOBIAN_WRITER_H_
#define CERES_INTERNAL_COMPRESSED_ROW_JACOBIAN_WRITER_H_
+#include <memory>
#include <utility>
#include <vector>
#include "ceres/evaluator.h"
+#include "ceres/internal/export.h"
#include "ceres/scratch_evaluate_preparer.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class CompressedRowSparseMatrix;
class Program;
class SparseMatrix;
-class CompressedRowJacobianWriter {
+class CERES_NO_EXPORT CompressedRowJacobianWriter {
public:
CompressedRowJacobianWriter(Evaluator::Options /* ignored */,
Program* program)
@@ -89,11 +90,12 @@
// assumed by the cost functions, use scratch space to store the
// jacobians temporarily then copy them over to the larger jacobian
// in the Write() function.
- ScratchEvaluatePreparer* CreateEvaluatePreparers(int num_threads) {
+ std::unique_ptr<ScratchEvaluatePreparer[]> CreateEvaluatePreparers(
+ int num_threads) {
return ScratchEvaluatePreparer::Create(*program_, num_threads);
}
- SparseMatrix* CreateJacobian() const;
+ std::unique_ptr<SparseMatrix> CreateJacobian() const;
void Write(int residual_id,
int residual_offset,
@@ -104,7 +106,6 @@
Program* program_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_COMPRESSED_ROW_JACOBIAN_WRITER_H_
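
The header now returns std::unique_ptr from CreateJacobian and CreateEvaluatePreparers, so the old "caller owns the result" comments can be dropped. A tiny sketch of the idiom with placeholder types (SparseLike and CrsLike are made up):

#include <memory>

struct SparseLike {
  virtual ~SparseLike() = default;
};
struct CrsLike : SparseLike {};

// Before: SparseLike* Create();  // Caller owns the result.
// After: ownership is explicit in the signature, no comment needed.
std::unique_ptr<SparseLike> Create() { return std::make_unique<CrsLike>(); }

int main() {
  auto jacobian = Create();  // released automatically at end of scope
  (void)jacobian;
}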
diff --git a/internal/ceres/compressed_row_sparse_matrix.cc b/internal/ceres/compressed_row_sparse_matrix.cc
index 900586c..21697f8 100644
--- a/internal/ceres/compressed_row_sparse_matrix.cc
+++ b/internal/ceres/compressed_row_sparse_matrix.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,24 +31,24 @@
#include "ceres/compressed_row_sparse_matrix.h"
#include <algorithm>
+#include <functional>
+#include <memory>
#include <numeric>
+#include <random>
#include <vector>
+#include "ceres/context_impl.h"
#include "ceres/crs_matrix.h"
-#include "ceres/internal/port.h"
-#include "ceres/random.h"
+#include "ceres/internal/export.h"
+#include "ceres/parallel_for.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
-
+namespace ceres::internal {
namespace {
// Helper functor used by the constructor for reordering the contents
-// of a TripletSparseMatrix. This comparator assumes thay there are no
+// of a TripletSparseMatrix. This comparator assumes that there are no
// duplicates in the pair of arrays rows and cols, i.e., there are no
// indices i and j (not equal to each other) s.t.
//
@@ -104,7 +104,7 @@
const int c = cols[idx];
const int transpose_idx = transpose_rows[c]++;
transpose_cols[transpose_idx] = r;
- if (values != NULL && transpose_values != NULL) {
+ if (values != nullptr && transpose_values != nullptr) {
transpose_values[transpose_idx] = values[idx];
}
}
@@ -118,10 +118,12 @@
transpose_rows[0] = 0;
}
+template <class RandomNormalFunctor>
void AddRandomBlock(const int num_rows,
const int num_cols,
const int row_block_begin,
const int col_block_begin,
+ RandomNormalFunctor&& randn,
std::vector<int>* rows,
std::vector<int>* cols,
std::vector<double>* values) {
@@ -129,19 +131,21 @@
for (int c = 0; c < num_cols; ++c) {
rows->push_back(row_block_begin + r);
cols->push_back(col_block_begin + c);
- values->push_back(RandNormal());
+ values->push_back(randn());
}
}
}
+template <class RandomNormalFunctor>
void AddSymmetricRandomBlock(const int num_rows,
const int row_block_begin,
+ RandomNormalFunctor&& randn,
std::vector<int>* rows,
std::vector<int>* cols,
std::vector<double>* values) {
for (int r = 0; r < num_rows; ++r) {
for (int c = r; c < num_rows; ++c) {
- const double v = RandNormal();
+ const double v = randn();
rows->push_back(row_block_begin + r);
cols->push_back(row_block_begin + c);
values->push_back(v);
@@ -162,7 +166,7 @@
int max_num_nonzeros) {
num_rows_ = num_rows;
num_cols_ = num_cols;
- storage_type_ = UNSYMMETRIC;
+ storage_type_ = StorageType::UNSYMMETRIC;
rows_.resize(num_rows + 1, 0);
cols_.resize(max_num_nonzeros, 0);
values_.resize(max_num_nonzeros, 0.0);
@@ -174,18 +178,20 @@
cols_.size() * sizeof(double); // NOLINT
}
-CompressedRowSparseMatrix* CompressedRowSparseMatrix::FromTripletSparseMatrix(
+std::unique_ptr<CompressedRowSparseMatrix>
+CompressedRowSparseMatrix::FromTripletSparseMatrix(
const TripletSparseMatrix& input) {
return CompressedRowSparseMatrix::FromTripletSparseMatrix(input, false);
}
-CompressedRowSparseMatrix*
+std::unique_ptr<CompressedRowSparseMatrix>
CompressedRowSparseMatrix::FromTripletSparseMatrixTransposed(
const TripletSparseMatrix& input) {
return CompressedRowSparseMatrix::FromTripletSparseMatrix(input, true);
}
-CompressedRowSparseMatrix* CompressedRowSparseMatrix::FromTripletSparseMatrix(
+std::unique_ptr<CompressedRowSparseMatrix>
+CompressedRowSparseMatrix::FromTripletSparseMatrix(
const TripletSparseMatrix& input, bool transpose) {
int num_rows = input.num_rows();
int num_cols = input.num_cols();
@@ -199,7 +205,7 @@
}
// index is the list of indices into the TripletSparseMatrix input.
- vector<int> index(input.num_nonzeros(), 0);
+ std::vector<int> index(input.num_nonzeros(), 0);
for (int i = 0; i < input.num_nonzeros(); ++i) {
index[i] = i;
}
@@ -214,8 +220,8 @@
input.num_nonzeros() * sizeof(int) + // NOLINT
input.num_nonzeros() * sizeof(double)); // NOLINT
- CompressedRowSparseMatrix* output =
- new CompressedRowSparseMatrix(num_rows, num_cols, input.num_nonzeros());
+ auto output = std::make_unique<CompressedRowSparseMatrix>(
+ num_rows, num_cols, input.num_nonzeros());
if (num_rows == 0) {
// No data to copy.
@@ -251,7 +257,7 @@
num_rows_ = num_rows;
num_cols_ = num_rows;
- storage_type_ = UNSYMMETRIC;
+ storage_type_ = StorageType::UNSYMMETRIC;
rows_.resize(num_rows + 1);
cols_.resize(num_rows);
values_.resize(num_rows);
@@ -266,28 +272,43 @@
CHECK_EQ(num_nonzeros(), num_rows);
}
-CompressedRowSparseMatrix::~CompressedRowSparseMatrix() {}
+CompressedRowSparseMatrix::~CompressedRowSparseMatrix() = default;
void CompressedRowSparseMatrix::SetZero() {
std::fill(values_.begin(), values_.end(), 0);
}
-// TODO(sameeragarwal): Make RightMultiply and LeftMultiply
-// block-aware for higher performance.
-void CompressedRowSparseMatrix::RightMultiply(const double* x,
- double* y) const {
+// TODO(sameeragarwal): Make RightMultiplyAndAccumulate and
+// LeftMultiplyAndAccumulate block-aware for higher performance.
+void CompressedRowSparseMatrix::RightMultiplyAndAccumulate(
+ const double* x, double* y, ContextImpl* context, int num_threads) const {
+ if (storage_type_ != StorageType::UNSYMMETRIC) {
+ RightMultiplyAndAccumulate(x, y);
+ return;
+ }
+
+ auto values = values_.data();
+ auto rows = rows_.data();
+ auto cols = cols_.data();
+
+ ParallelFor(
+ context, 0, num_rows_, num_threads, [values, rows, cols, x, y](int row) {
+ for (int idx = rows[row]; idx < rows[row + 1]; ++idx) {
+ const int c = cols[idx];
+ const double v = values[idx];
+ y[row] += v * x[c];
+ }
+ });
+}
+
+void CompressedRowSparseMatrix::RightMultiplyAndAccumulate(const double* x,
+ double* y) const {
CHECK(x != nullptr);
CHECK(y != nullptr);
- if (storage_type_ == UNSYMMETRIC) {
- for (int r = 0; r < num_rows_; ++r) {
- for (int idx = rows_[r]; idx < rows_[r + 1]; ++idx) {
- const int c = cols_[idx];
- const double v = values_[idx];
- y[r] += v * x[c];
- }
- }
- } else if (storage_type_ == UPPER_TRIANGULAR) {
+ if (storage_type_ == StorageType::UNSYMMETRIC) {
+ RightMultiplyAndAccumulate(x, y, nullptr, 1);
+ } else if (storage_type_ == StorageType::UPPER_TRIANGULAR) {
// Because of their block structure, we will have entries that lie
// above (below) the diagonal for lower (upper) triangular matrices,
// so the loops below need to account for this.
@@ -313,7 +334,7 @@
}
}
}
- } else if (storage_type_ == LOWER_TRIANGULAR) {
+ } else if (storage_type_ == StorageType::LOWER_TRIANGULAR) {
for (int r = 0; r < num_rows_; ++r) {
int idx = rows_[r];
const int idx_end = rows_[r + 1];
@@ -336,19 +357,21 @@
}
}
-void CompressedRowSparseMatrix::LeftMultiply(const double* x, double* y) const {
+void CompressedRowSparseMatrix::LeftMultiplyAndAccumulate(const double* x,
+ double* y) const {
CHECK(x != nullptr);
CHECK(y != nullptr);
- if (storage_type_ == UNSYMMETRIC) {
+ if (storage_type_ == StorageType::UNSYMMETRIC) {
for (int r = 0; r < num_rows_; ++r) {
for (int idx = rows_[r]; idx < rows_[r + 1]; ++idx) {
y[cols_[idx]] += values_[idx] * x[r];
}
}
} else {
- // Since the matrix is symmetric, LeftMultiply = RightMultiply.
- RightMultiply(x, y);
+ // Since the matrix is symmetric, LeftMultiplyAndAccumulate =
+ // RightMultiplyAndAccumulate.
+ RightMultiplyAndAccumulate(x, y);
}
}
@@ -356,11 +379,11 @@
CHECK(x != nullptr);
std::fill(x, x + num_cols_, 0.0);
- if (storage_type_ == UNSYMMETRIC) {
+ if (storage_type_ == StorageType::UNSYMMETRIC) {
for (int idx = 0; idx < rows_[num_rows_]; ++idx) {
x[cols_[idx]] += values_[idx] * values_[idx];
}
- } else if (storage_type_ == UPPER_TRIANGULAR) {
+ } else if (storage_type_ == StorageType::UPPER_TRIANGULAR) {
// Because of their block structure, we will have entries that lie
// above (below) the diagonal for lower (upper) triangular
// matrices, so the loops below need to account for this.
@@ -386,7 +409,7 @@
}
}
}
- } else if (storage_type_ == LOWER_TRIANGULAR) {
+ } else if (storage_type_ == StorageType::LOWER_TRIANGULAR) {
for (int r = 0; r < num_rows_; ++r) {
int idx = rows_[r];
const int idx_end = rows_[r + 1];
@@ -431,7 +454,7 @@
void CompressedRowSparseMatrix::DeleteRows(int delta_rows) {
CHECK_GE(delta_rows, 0);
CHECK_LE(delta_rows, num_rows_);
- CHECK_EQ(storage_type_, UNSYMMETRIC);
+ CHECK_EQ(storage_type_, StorageType::UNSYMMETRIC);
num_rows_ -= delta_rows;
rows_.resize(num_rows_ + 1);
@@ -447,7 +470,7 @@
int num_row_blocks = 0;
int num_rows = 0;
while (num_row_blocks < row_blocks_.size() && num_rows < num_rows_) {
- num_rows += row_blocks_[num_row_blocks];
+ num_rows += row_blocks_[num_row_blocks].size;
++num_row_blocks;
}
@@ -455,7 +478,7 @@
}
void CompressedRowSparseMatrix::AppendRows(const CompressedRowSparseMatrix& m) {
- CHECK_EQ(storage_type_, UNSYMMETRIC);
+ CHECK_EQ(storage_type_, StorageType::UNSYMMETRIC);
CHECK_EQ(m.num_cols(), num_cols_);
CHECK((row_blocks_.empty() && m.row_blocks().empty()) ||
@@ -533,17 +556,17 @@
values_.resize(num_nonzeros);
}
-CompressedRowSparseMatrix* CompressedRowSparseMatrix::CreateBlockDiagonalMatrix(
- const double* diagonal, const vector<int>& blocks) {
- int num_rows = 0;
+std::unique_ptr<CompressedRowSparseMatrix>
+CompressedRowSparseMatrix::CreateBlockDiagonalMatrix(
+ const double* diagonal, const std::vector<Block>& blocks) {
+ const int num_rows = NumScalarEntries(blocks);
int num_nonzeros = 0;
- for (int i = 0; i < blocks.size(); ++i) {
- num_rows += blocks[i];
- num_nonzeros += blocks[i] * blocks[i];
+ for (auto& block : blocks) {
+ num_nonzeros += block.size * block.size;
}
- CompressedRowSparseMatrix* matrix =
- new CompressedRowSparseMatrix(num_rows, num_rows, num_nonzeros);
+ auto matrix = std::make_unique<CompressedRowSparseMatrix>(
+ num_rows, num_rows, num_nonzeros);
int* rows = matrix->mutable_rows();
int* cols = matrix->mutable_cols();
@@ -552,16 +575,17 @@
int idx_cursor = 0;
int col_cursor = 0;
- for (int i = 0; i < blocks.size(); ++i) {
- const int block_size = blocks[i];
- for (int r = 0; r < block_size; ++r) {
+ for (auto& block : blocks) {
+ for (int r = 0; r < block.size; ++r) {
*(rows++) = idx_cursor;
- values[idx_cursor + r] = diagonal[col_cursor + r];
- for (int c = 0; c < block_size; ++c, ++idx_cursor) {
+ if (diagonal != nullptr) {
+ values[idx_cursor + r] = diagonal[col_cursor + r];
+ }
+ for (int c = 0; c < block.size; ++c, ++idx_cursor) {
*(cols++) = col_cursor + c;
}
}
- col_cursor += block_size;
+ col_cursor += block.size;
}
*rows = idx_cursor;
@@ -573,19 +597,20 @@
return matrix;
}
-CompressedRowSparseMatrix* CompressedRowSparseMatrix::Transpose() const {
- CompressedRowSparseMatrix* transpose =
- new CompressedRowSparseMatrix(num_cols_, num_rows_, num_nonzeros());
+std::unique_ptr<CompressedRowSparseMatrix>
+CompressedRowSparseMatrix::Transpose() const {
+ auto transpose = std::make_unique<CompressedRowSparseMatrix>(
+ num_cols_, num_rows_, num_nonzeros());
switch (storage_type_) {
- case UNSYMMETRIC:
- transpose->set_storage_type(UNSYMMETRIC);
+ case StorageType::UNSYMMETRIC:
+ transpose->set_storage_type(StorageType::UNSYMMETRIC);
break;
- case LOWER_TRIANGULAR:
- transpose->set_storage_type(UPPER_TRIANGULAR);
+ case StorageType::LOWER_TRIANGULAR:
+ transpose->set_storage_type(StorageType::UPPER_TRIANGULAR);
break;
- case UPPER_TRIANGULAR:
- transpose->set_storage_type(LOWER_TRIANGULAR);
+ case StorageType::UPPER_TRIANGULAR:
+ transpose->set_storage_type(StorageType::LOWER_TRIANGULAR);
break;
default:
LOG(FATAL) << "Unknown storage type: " << storage_type_;
@@ -612,14 +637,16 @@
return transpose;
}
-CompressedRowSparseMatrix* CompressedRowSparseMatrix::CreateRandomMatrix(
- CompressedRowSparseMatrix::RandomMatrixOptions options) {
+std::unique_ptr<CompressedRowSparseMatrix>
+CompressedRowSparseMatrix::CreateRandomMatrix(
+ CompressedRowSparseMatrix::RandomMatrixOptions options,
+ std::mt19937& prng) {
CHECK_GT(options.num_row_blocks, 0);
CHECK_GT(options.min_row_block_size, 0);
CHECK_GT(options.max_row_block_size, 0);
CHECK_LE(options.min_row_block_size, options.max_row_block_size);
- if (options.storage_type == UNSYMMETRIC) {
+ if (options.storage_type == StorageType::UNSYMMETRIC) {
CHECK_GT(options.num_col_blocks, 0);
CHECK_GT(options.min_col_block_size, 0);
CHECK_GT(options.max_col_block_size, 0);
@@ -634,33 +661,42 @@
CHECK_GT(options.block_density, 0.0);
CHECK_LE(options.block_density, 1.0);
- vector<int> row_blocks;
- vector<int> col_blocks;
+ std::vector<Block> row_blocks;
+ row_blocks.reserve(options.num_row_blocks);
+ std::vector<Block> col_blocks;
+ col_blocks.reserve(options.num_col_blocks);
+
+ std::uniform_int_distribution<int> col_distribution(
+ options.min_col_block_size, options.max_col_block_size);
+ std::uniform_int_distribution<int> row_distribution(
+ options.min_row_block_size, options.max_row_block_size);
+ std::uniform_real_distribution<double> uniform01(0.0, 1.0);
+ std::normal_distribution<double> standard_normal_distribution;
// Generate the row block structure.
+ int row_pos = 0;
for (int i = 0; i < options.num_row_blocks; ++i) {
// Generate a random integer in [min_row_block_size, max_row_block_size]
- const int delta_block_size =
- Uniform(options.max_row_block_size - options.min_row_block_size);
- row_blocks.push_back(options.min_row_block_size + delta_block_size);
+ row_blocks.emplace_back(row_distribution(prng), row_pos);
+ row_pos += row_blocks.back().size;
}
- if (options.storage_type == UNSYMMETRIC) {
+ if (options.storage_type == StorageType::UNSYMMETRIC) {
// Generate the col block structure.
+ int col_pos = 0;
for (int i = 0; i < options.num_col_blocks; ++i) {
// Generate a random integer in [min_col_block_size, max_col_block_size]
- const int delta_block_size =
- Uniform(options.max_col_block_size - options.min_col_block_size);
- col_blocks.push_back(options.min_col_block_size + delta_block_size);
+ col_blocks.emplace_back(col_distribution(prng), col_pos);
+ col_pos += col_blocks.back().size;
}
} else {
// Symmetric matrices (LOWER_TRIANGULAR or UPPER_TRIANGULAR);
col_blocks = row_blocks;
}
- vector<int> tsm_rows;
- vector<int> tsm_cols;
- vector<double> tsm_values;
+ std::vector<int> tsm_rows;
+ std::vector<int> tsm_cols;
+ std::vector<double> tsm_values;
// For ease of construction, we are going to generate the
// CompressedRowSparseMatrix by generating it as a
@@ -679,51 +715,55 @@
for (int r = 0; r < options.num_row_blocks; ++r) {
int col_block_begin = 0;
for (int c = 0; c < options.num_col_blocks; ++c) {
- if (((options.storage_type == UPPER_TRIANGULAR) && (r > c)) ||
- ((options.storage_type == LOWER_TRIANGULAR) && (r < c))) {
- col_block_begin += col_blocks[c];
+ if (((options.storage_type == StorageType::UPPER_TRIANGULAR) &&
+ (r > c)) ||
+ ((options.storage_type == StorageType::LOWER_TRIANGULAR) &&
+ (r < c))) {
+ col_block_begin += col_blocks[c].size;
continue;
}
// Randomly determine if this block is present or not.
- if (RandDouble() <= options.block_density) {
+ if (uniform01(prng) <= options.block_density) {
+ auto randn = [&standard_normal_distribution, &prng] {
+ return standard_normal_distribution(prng);
+ };
// If the matrix is symmetric, then we take care to generate
// symmetric diagonal blocks.
- if (options.storage_type == UNSYMMETRIC || r != c) {
- AddRandomBlock(row_blocks[r],
- col_blocks[c],
+ if (options.storage_type == StorageType::UNSYMMETRIC || r != c) {
+ AddRandomBlock(row_blocks[r].size,
+ col_blocks[c].size,
row_block_begin,
col_block_begin,
+ randn,
&tsm_rows,
&tsm_cols,
&tsm_values);
} else {
- AddSymmetricRandomBlock(row_blocks[r],
+ AddSymmetricRandomBlock(row_blocks[r].size,
row_block_begin,
+ randn,
&tsm_rows,
&tsm_cols,
&tsm_values);
}
}
- col_block_begin += col_blocks[c];
+ col_block_begin += col_blocks[c].size;
}
- row_block_begin += row_blocks[r];
+ row_block_begin += row_blocks[r].size;
}
}
- const int num_rows = std::accumulate(row_blocks.begin(), row_blocks.end(), 0);
- const int num_cols = std::accumulate(col_blocks.begin(), col_blocks.end(), 0);
+ const int num_rows = NumScalarEntries(row_blocks);
+ const int num_cols = NumScalarEntries(col_blocks);
const bool kDoNotTranspose = false;
- CompressedRowSparseMatrix* matrix =
- CompressedRowSparseMatrix::FromTripletSparseMatrix(
- TripletSparseMatrix(
- num_rows, num_cols, tsm_rows, tsm_cols, tsm_values),
- kDoNotTranspose);
+ auto matrix = CompressedRowSparseMatrix::FromTripletSparseMatrix(
+ TripletSparseMatrix(num_rows, num_cols, tsm_rows, tsm_cols, tsm_values),
+ kDoNotTranspose);
(*matrix->mutable_row_blocks()) = row_blocks;
(*matrix->mutable_col_blocks()) = col_blocks;
matrix->set_storage_type(options.storage_type);
return matrix;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
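
The new multi-threaded RightMultiplyAndAccumulate above hands one CRS row per ParallelFor task; rows are disjoint, so no locking on y is needed. The sketch below mirrors that per-row access pattern with plain std::thread so it compiles standalone; it illustrates the data layout only, not the ParallelFor scheduler, and the 3x3 matrix is made up.

#include <cstdio>
#include <thread>
#include <vector>

// y += A * x for a CRS matrix, rows split across threads. Each thread owns a
// disjoint set of rows, so no synchronization on y is required.
void RightMultiplyAndAccumulate(const std::vector<int>& rows,
                                const std::vector<int>& cols,
                                const std::vector<double>& values,
                                const std::vector<double>& x,
                                std::vector<double>& y,
                                int num_threads) {
  const int num_rows = static_cast<int>(y.size());
  std::vector<std::thread> workers;
  for (int t = 0; t < num_threads; ++t) {
    workers.emplace_back([&, t] {
      for (int row = t; row < num_rows; row += num_threads) {
        for (int idx = rows[row]; idx < rows[row + 1]; ++idx) {
          y[row] += values[idx] * x[cols[idx]];
        }
      }
    });
  }
  for (auto& w : workers) w.join();
}

int main() {
  // 3x3 example:  [1 0 2; 0 3 0; 4 0 5]
  std::vector<int> rows = {0, 2, 3, 5};
  std::vector<int> cols = {0, 2, 1, 0, 2};
  std::vector<double> values = {1, 2, 3, 4, 5};
  std::vector<double> x = {1, 1, 1}, y(3, 0.0);
  RightMultiplyAndAccumulate(rows, cols, values, x, y, 2);
  std::printf("%g %g %g\n", y[0], y[1], y[2]);  // 3 3 9
}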
diff --git a/internal/ceres/compressed_row_sparse_matrix.h b/internal/ceres/compressed_row_sparse_matrix.h
index 0a1b945..36c8895 100644
--- a/internal/ceres/compressed_row_sparse_matrix.h
+++ b/internal/ceres/compressed_row_sparse_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,9 +31,13 @@
#ifndef CERES_INTERNAL_COMPRESSED_ROW_SPARSE_MATRIX_H_
#define CERES_INTERNAL_COMPRESSED_ROW_SPARSE_MATRIX_H_
+#include <memory>
+#include <random>
#include <vector>
-#include "ceres/internal/port.h"
+#include "ceres/block_structure.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/sparse_matrix.h"
#include "ceres/types.h"
#include "glog/logging.h"
@@ -44,11 +48,12 @@
namespace internal {
+class ContextImpl;
class TripletSparseMatrix;
-class CERES_EXPORT_INTERNAL CompressedRowSparseMatrix : public SparseMatrix {
+class CERES_NO_EXPORT CompressedRowSparseMatrix : public SparseMatrix {
public:
- enum StorageType {
+ enum class StorageType {
UNSYMMETRIC,
// Matrix is assumed to be symmetric but only the lower triangular
// part of the matrix is stored.
@@ -63,9 +68,7 @@
// entries.
//
// The storage type of the matrix is set to UNSYMMETRIC.
- //
- // Caller owns the result.
- static CompressedRowSparseMatrix* FromTripletSparseMatrix(
+ static std::unique_ptr<CompressedRowSparseMatrix> FromTripletSparseMatrix(
const TripletSparseMatrix& input);
// Create a matrix with the same content as the TripletSparseMatrix
@@ -73,10 +76,8 @@
// entries.
//
// The storage type of the matrix is set to UNSYMMETRIC.
- //
- // Caller owns the result.
- static CompressedRowSparseMatrix* FromTripletSparseMatrixTransposed(
- const TripletSparseMatrix& input);
+ static std::unique_ptr<CompressedRowSparseMatrix>
+ FromTripletSparseMatrixTransposed(const TripletSparseMatrix& input);
// Use this constructor only if you know what you are doing. This
// creates a "blank" matrix with the appropriate amount of memory
@@ -100,10 +101,14 @@
CompressedRowSparseMatrix(const double* diagonal, int num_rows);
// SparseMatrix interface.
- virtual ~CompressedRowSparseMatrix();
+ ~CompressedRowSparseMatrix() override;
void SetZero() final;
- void RightMultiply(const double* x, double* y) const final;
- void LeftMultiply(const double* x, double* y) const final;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final;
+ void RightMultiplyAndAccumulate(const double* x,
+ double* y,
+ ContextImpl* context,
+ int num_threads) const final;
+ void LeftMultiplyAndAccumulate(const double* x, double* y) const final;
void SquaredColumnNorm(double* x) const final;
void ScaleColumns(const double* scale) final;
void ToDenseMatrix(Matrix* dense_matrix) const final;
@@ -111,8 +116,8 @@
int num_rows() const final { return num_rows_; }
int num_cols() const final { return num_cols_; }
int num_nonzeros() const final { return rows_[num_rows_]; }
- const double* values() const final { return &values_[0]; }
- double* mutable_values() final { return &values_[0]; }
+ const double* values() const final { return values_.data(); }
+ double* mutable_values() final { return values_.data(); }
// Delete the bottom delta_rows.
// num_rows -= delta_rows
@@ -124,7 +129,7 @@
void ToCRSMatrix(CRSMatrix* matrix) const;
- CompressedRowSparseMatrix* Transpose() const;
+ std::unique_ptr<CompressedRowSparseMatrix> Transpose() const;
// Destructive array resizing method.
void SetMaxNumNonZeros(int num_nonzeros);
@@ -134,30 +139,28 @@
void set_num_cols(const int num_cols) { num_cols_ = num_cols; }
// Low level access methods that expose the structure of the matrix.
- const int* cols() const { return &cols_[0]; }
- int* mutable_cols() { return &cols_[0]; }
+ const int* cols() const { return cols_.data(); }
+ int* mutable_cols() { return cols_.data(); }
- const int* rows() const { return &rows_[0]; }
- int* mutable_rows() { return &rows_[0]; }
+ const int* rows() const { return rows_.data(); }
+ int* mutable_rows() { return rows_.data(); }
- const StorageType storage_type() const { return storage_type_; }
+ StorageType storage_type() const { return storage_type_; }
void set_storage_type(const StorageType storage_type) {
storage_type_ = storage_type;
}
- const std::vector<int>& row_blocks() const { return row_blocks_; }
- std::vector<int>* mutable_row_blocks() { return &row_blocks_; }
+ const std::vector<Block>& row_blocks() const { return row_blocks_; }
+ std::vector<Block>* mutable_row_blocks() { return &row_blocks_; }
- const std::vector<int>& col_blocks() const { return col_blocks_; }
- std::vector<int>* mutable_col_blocks() { return &col_blocks_; }
+ const std::vector<Block>& col_blocks() const { return col_blocks_; }
+ std::vector<Block>* mutable_col_blocks() { return &col_blocks_; }
// Create a block diagonal CompressedRowSparseMatrix with the given
// block structure. The individual blocks are assumed to be laid out
// contiguously in the diagonal array, one block at a time.
- //
- // Caller owns the result.
- static CompressedRowSparseMatrix* CreateBlockDiagonalMatrix(
- const double* diagonal, const std::vector<int>& blocks);
+ static std::unique_ptr<CompressedRowSparseMatrix> CreateBlockDiagonalMatrix(
+ const double* diagonal, const std::vector<Block>& blocks);
// Options struct to control the generation of random block sparse
// matrices in compressed row sparse format.
@@ -169,7 +172,7 @@
// given bounds.
//
// Then we walk the block structure of the resulting matrix, and with
- // probability block_density detemine whether they are structurally
+ // probability block_density determine whether they are structurally
// zero or not. If the answer is no, then we generate entries for the
// block which are distributed normally.
struct RandomMatrixOptions {
@@ -180,7 +183,7 @@
// (lower triangular) part. In this case, num_col_blocks,
// min_col_block_size and max_col_block_size will be ignored and
// assumed to be equal to the corresponding row settings.
- StorageType storage_type = UNSYMMETRIC;
+ StorageType storage_type = StorageType::UNSYMMETRIC;
int num_row_blocks = 0;
int min_row_block_size = 0;
@@ -198,13 +201,11 @@
// Create a random CompressedRowSparseMatrix whose entries are
// normally distributed and whose structure is determined by
// RandomMatrixOptions.
- //
- // Caller owns the result.
- static CompressedRowSparseMatrix* CreateRandomMatrix(
- RandomMatrixOptions options);
+ static std::unique_ptr<CompressedRowSparseMatrix> CreateRandomMatrix(
+ RandomMatrixOptions options, std::mt19937& prng);
private:
- static CompressedRowSparseMatrix* FromTripletSparseMatrix(
+ static std::unique_ptr<CompressedRowSparseMatrix> FromTripletSparseMatrix(
const TripletSparseMatrix& input, bool transpose);
int num_rows_;
@@ -215,15 +216,34 @@
StorageType storage_type_;
// If the matrix has an underlying block structure, then it can also
- // carry with it row and column block sizes. This is auxilliary and
+ // carry with it row and column block sizes. This is auxiliary and
// optional information for use by algorithms operating on the
// matrix. The class itself does not make use of this information in
// any way.
- std::vector<int> row_blocks_;
- std::vector<int> col_blocks_;
+ std::vector<Block> row_blocks_;
+ std::vector<Block> col_blocks_;
};
+inline std::ostream& operator<<(std::ostream& s,
+ CompressedRowSparseMatrix::StorageType type) {
+ switch (type) {
+ case CompressedRowSparseMatrix::StorageType::UNSYMMETRIC:
+ s << "UNSYMMETRIC";
+ break;
+ case CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR:
+ s << "UPPER_TRIANGULAR";
+ break;
+ case CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR:
+ s << "LOWER_TRIANGULAR";
+ break;
+ default:
+ s << "UNKNOWN CompressedRowSparseMatrix::StorageType";
+ }
+ return s;
+}
} // namespace internal
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_INTERNAL_COMPRESSED_ROW_SPARSE_MATRIX_H_
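
row_blocks_ and col_blocks_ are now vectors of Block rather than bare sizes. Block itself comes from ceres/block_structure.h, which this patch does not show; judging from the usage above it carries a size and a cumulative position. The stand-in below is inferred from that usage only (field names match the patch, everything else is assumed):

#include <cstdio>
#include <vector>

// Stand-in for ceres::internal::Block, inferred from usage in this patch.
struct Block {
  Block(int size, int position) : size(size), position(position) {}
  int size = 0;      // number of scalar rows/cols in the block
  int position = 0;  // offset of the block's first scalar row/col
};

// Stand-in for NumScalarEntries(): total scalar dimension of a block layout.
int NumScalarEntries(const std::vector<Block>& blocks) {
  if (blocks.empty()) return 0;
  return blocks.back().position + blocks.back().size;
}

int main() {
  // Same layout as the tests above: block sizes 1, 2, 2 at positions 0, 1, 3.
  std::vector<Block> blocks{{1, 0}, {2, 1}, {2, 3}};
  std::printf("num scalar rows = %d\n", NumScalarEntries(blocks));  // 5
}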
diff --git a/internal/ceres/compressed_row_sparse_matrix_test.cc b/internal/ceres/compressed_row_sparse_matrix_test.cc
index 91f3ba4..b6bcdb7 100644
--- a/internal/ceres/compressed_row_sparse_matrix_test.cc
+++ b/internal/ceres/compressed_row_sparse_matrix_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,23 +30,24 @@
#include "ceres/compressed_row_sparse_matrix.h"
+#include <algorithm>
#include <memory>
#include <numeric>
+#include <random>
+#include <string>
+#include <vector>
#include "Eigen/SparseCore"
#include "ceres/casts.h"
+#include "ceres/context_impl.h"
#include "ceres/crs_matrix.h"
#include "ceres/internal/eigen.h"
#include "ceres/linear_least_squares_problems.h"
-#include "ceres/random.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
static void CompareMatrices(const SparseMatrix* a, const SparseMatrix* b) {
EXPECT_EQ(a->num_rows(), b->num_rows());
@@ -62,9 +63,8 @@
Vector y_a = Vector::Zero(num_rows);
Vector y_b = Vector::Zero(num_rows);
- a->RightMultiply(x.data(), y_a.data());
- b->RightMultiply(x.data(), y_b.data());
-
+ a->RightMultiplyAndAccumulate(x.data(), y_a.data());
+ b->RightMultiplyAndAccumulate(x.data(), y_b.data());
EXPECT_EQ((y_a - y_b).norm(), 0);
}
}
@@ -72,24 +72,26 @@
class CompressedRowSparseMatrixTest : public ::testing::Test {
protected:
void SetUp() final {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(1));
+ auto problem = CreateLinearLeastSquaresProblemFromId(1);
CHECK(problem != nullptr);
tsm.reset(down_cast<TripletSparseMatrix*>(problem->A.release()));
- crsm.reset(CompressedRowSparseMatrix::FromTripletSparseMatrix(*tsm));
+ crsm = CompressedRowSparseMatrix::FromTripletSparseMatrix(*tsm);
num_rows = tsm->num_rows();
num_cols = tsm->num_cols();
- vector<int>* row_blocks = crsm->mutable_row_blocks();
+ std::vector<Block>* row_blocks = crsm->mutable_row_blocks();
row_blocks->resize(num_rows);
- std::fill(row_blocks->begin(), row_blocks->end(), 1);
-
- vector<int>* col_blocks = crsm->mutable_col_blocks();
+ for (int i = 0; i < row_blocks->size(); ++i) {
+ (*row_blocks)[i] = Block(1, i);
+ }
+ std::vector<Block>* col_blocks = crsm->mutable_col_blocks();
col_blocks->resize(num_cols);
- std::fill(col_blocks->begin(), col_blocks->end(), 1);
+ for (int i = 0; i < col_blocks->size(); ++i) {
+ (*col_blocks)[i] = Block(1, i);
+ }
}
int num_rows;
@@ -132,8 +134,8 @@
tsm_appendage.Resize(i, num_cols);
tsm->AppendRows(tsm_appendage);
- std::unique_ptr<CompressedRowSparseMatrix> crsm_appendage(
- CompressedRowSparseMatrix::FromTripletSparseMatrix(tsm_appendage));
+ auto crsm_appendage =
+ CompressedRowSparseMatrix::FromTripletSparseMatrix(tsm_appendage);
crsm->AppendRows(*crsm_appendage);
CompareMatrices(tsm.get(), crsm.get());
@@ -143,34 +145,33 @@
TEST_F(CompressedRowSparseMatrixTest, AppendAndDeleteBlockDiagonalMatrix) {
int num_diagonal_rows = crsm->num_cols();
- std::unique_ptr<double[]> diagonal(new double[num_diagonal_rows]);
+ auto diagonal = std::make_unique<double[]>(num_diagonal_rows);
for (int i = 0; i < num_diagonal_rows; ++i) {
diagonal[i] = i;
}
- vector<int> row_and_column_blocks;
- row_and_column_blocks.push_back(1);
- row_and_column_blocks.push_back(2);
- row_and_column_blocks.push_back(2);
+ std::vector<Block> row_and_column_blocks;
+ row_and_column_blocks.emplace_back(1, 0);
+ row_and_column_blocks.emplace_back(2, 1);
+ row_and_column_blocks.emplace_back(2, 3);
- const vector<int> pre_row_blocks = crsm->row_blocks();
- const vector<int> pre_col_blocks = crsm->col_blocks();
+ const std::vector<Block> pre_row_blocks = crsm->row_blocks();
+ const std::vector<Block> pre_col_blocks = crsm->col_blocks();
- std::unique_ptr<CompressedRowSparseMatrix> appendage(
- CompressedRowSparseMatrix::CreateBlockDiagonalMatrix(
- diagonal.get(), row_and_column_blocks));
+ auto appendage = CompressedRowSparseMatrix::CreateBlockDiagonalMatrix(
+ diagonal.get(), row_and_column_blocks);
crsm->AppendRows(*appendage);
- const vector<int> post_row_blocks = crsm->row_blocks();
- const vector<int> post_col_blocks = crsm->col_blocks();
+ const std::vector<Block> post_row_blocks = crsm->row_blocks();
+ const std::vector<Block> post_col_blocks = crsm->col_blocks();
- vector<int> expected_row_blocks = pre_row_blocks;
+ std::vector<Block> expected_row_blocks = pre_row_blocks;
expected_row_blocks.insert(expected_row_blocks.end(),
row_and_column_blocks.begin(),
row_and_column_blocks.end());
- vector<int> expected_col_blocks = pre_col_blocks;
+ std::vector<Block> expected_col_blocks = pre_col_blocks;
EXPECT_EQ(expected_row_blocks, crsm->row_blocks());
EXPECT_EQ(expected_col_blocks, crsm->col_blocks());
@@ -210,19 +211,18 @@
}
TEST(CompressedRowSparseMatrix, CreateBlockDiagonalMatrix) {
- vector<int> blocks;
- blocks.push_back(1);
- blocks.push_back(2);
- blocks.push_back(2);
+ std::vector<Block> blocks;
+ blocks.emplace_back(1, 0);
+ blocks.emplace_back(2, 1);
+ blocks.emplace_back(2, 3);
Vector diagonal(5);
for (int i = 0; i < 5; ++i) {
diagonal(i) = i + 1;
}
- std::unique_ptr<CompressedRowSparseMatrix> matrix(
- CompressedRowSparseMatrix::CreateBlockDiagonalMatrix(diagonal.data(),
- blocks));
+ auto matrix = CompressedRowSparseMatrix::CreateBlockDiagonalMatrix(
+ diagonal.data(), blocks);
EXPECT_EQ(matrix->num_rows(), 5);
EXPECT_EQ(matrix->num_cols(), 5);
@@ -235,13 +235,13 @@
x.setOnes();
y.setZero();
- matrix->RightMultiply(x.data(), y.data());
+ matrix->RightMultiplyAndAccumulate(x.data(), y.data());
for (int i = 0; i < diagonal.size(); ++i) {
EXPECT_EQ(y[i], diagonal[i]);
}
y.setZero();
- matrix->LeftMultiply(x.data(), y.data());
+ matrix->LeftMultiplyAndAccumulate(x.data(), y.data());
for (int i = 0; i < diagonal.size(); ++i) {
EXPECT_EQ(y[i], diagonal[i]);
}
@@ -253,9 +253,9 @@
TEST(CompressedRowSparseMatrix, Transpose) {
// 0 1 0 2 3 0
- // 4 6 7 0 0 8
- // 9 10 0 11 12 0
- // 13 0 14 15 9 0
+ // 4 5 6 0 0 7
+ // 8 9 0 10 11 0
+ // 12 0 13 14 15 0
// 0 16 17 0 0 0
// Block structure:
@@ -270,10 +270,10 @@
int* rows = matrix.mutable_rows();
int* cols = matrix.mutable_cols();
double* values = matrix.mutable_values();
- matrix.mutable_row_blocks()->push_back(3);
- matrix.mutable_row_blocks()->push_back(3);
- matrix.mutable_col_blocks()->push_back(4);
- matrix.mutable_col_blocks()->push_back(2);
+ matrix.mutable_row_blocks()->emplace_back(3, 0);
+ matrix.mutable_row_blocks()->emplace_back(3, 3);
+ matrix.mutable_col_blocks()->emplace_back(4, 0);
+ matrix.mutable_col_blocks()->emplace_back(2, 4);
rows[0] = 0;
cols[0] = 1;
@@ -303,9 +303,9 @@
cols[16] = 2;
rows[5] = 17;
- std::copy(values, values + 17, cols);
+ std::iota(values, values + 17, 1);
- std::unique_ptr<CompressedRowSparseMatrix> transpose(matrix.Transpose());
+ auto transpose = matrix.Transpose();
ASSERT_EQ(transpose->row_blocks().size(), matrix.col_blocks().size());
for (int i = 0; i < transpose->row_blocks().size(); ++i) {
@@ -326,6 +326,7 @@
}
TEST(CompressedRowSparseMatrix, FromTripletSparseMatrix) {
+ std::mt19937 prng;
TripletSparseMatrix::RandomMatrixOptions options;
options.num_rows = 5;
options.num_cols = 7;
@@ -333,10 +334,8 @@
const int kNumTrials = 10;
for (int i = 0; i < kNumTrials; ++i) {
- std::unique_ptr<TripletSparseMatrix> tsm(
- TripletSparseMatrix::CreateRandomMatrix(options));
- std::unique_ptr<CompressedRowSparseMatrix> crsm(
- CompressedRowSparseMatrix::FromTripletSparseMatrix(*tsm));
+ auto tsm = TripletSparseMatrix::CreateRandomMatrix(options, prng);
+ auto crsm = CompressedRowSparseMatrix::FromTripletSparseMatrix(*tsm);
Matrix expected;
tsm->ToDenseMatrix(&expected);
@@ -352,6 +351,7 @@
}
TEST(CompressedRowSparseMatrix, FromTripletSparseMatrixTransposed) {
+ std::mt19937 prng;
TripletSparseMatrix::RandomMatrixOptions options;
options.num_rows = 5;
options.num_cols = 7;
@@ -359,10 +359,9 @@
const int kNumTrials = 10;
for (int i = 0; i < kNumTrials; ++i) {
- std::unique_ptr<TripletSparseMatrix> tsm(
- TripletSparseMatrix::CreateRandomMatrix(options));
- std::unique_ptr<CompressedRowSparseMatrix> crsm(
- CompressedRowSparseMatrix::FromTripletSparseMatrixTransposed(*tsm));
+ auto tsm = TripletSparseMatrix::CreateRandomMatrix(options, prng);
+ auto crsm =
+ CompressedRowSparseMatrix::FromTripletSparseMatrixTransposed(*tsm);
Matrix tmp;
tsm->ToDenseMatrix(&tmp);
@@ -378,31 +377,33 @@
}
}
-typedef ::testing::tuple<CompressedRowSparseMatrix::StorageType> Param;
+using Param = ::testing::tuple<CompressedRowSparseMatrix::StorageType>;
static std::string ParamInfoToString(testing::TestParamInfo<Param> info) {
if (::testing::get<0>(info.param) ==
- CompressedRowSparseMatrix::UPPER_TRIANGULAR) {
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR) {
return "UPPER";
}
if (::testing::get<0>(info.param) ==
- CompressedRowSparseMatrix::LOWER_TRIANGULAR) {
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR) {
return "LOWER";
}
return "UNSYMMETRIC";
}
-class RightMultiplyTest : public ::testing::TestWithParam<Param> {};
+class RightMultiplyAndAccumulateTest : public ::testing::TestWithParam<Param> {
+};
-TEST_P(RightMultiplyTest, _) {
+TEST_P(RightMultiplyAndAccumulateTest, _) {
const int kMinNumBlocks = 1;
const int kMaxNumBlocks = 10;
const int kMinBlockSize = 1;
const int kMaxBlockSize = 5;
const int kNumTrials = 10;
-
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform(0.5, 1.0);
for (int num_blocks = kMinNumBlocks; num_blocks < kMaxNumBlocks;
++num_blocks) {
for (int trial = 0; trial < kNumTrials; ++trial) {
@@ -414,10 +415,10 @@
options.num_row_blocks = 2 * num_blocks;
options.min_row_block_size = kMinBlockSize;
options.max_row_block_size = kMaxBlockSize;
- options.block_density = std::max(0.5, RandDouble());
+ options.block_density = uniform(prng);
options.storage_type = ::testing::get<0>(param);
- std::unique_ptr<CompressedRowSparseMatrix> matrix(
- CompressedRowSparseMatrix::CreateRandomMatrix(options));
+ auto matrix =
+ CompressedRowSparseMatrix::CreateRandomMatrix(options, prng);
const int num_rows = matrix->num_rows();
const int num_cols = matrix->num_cols();
@@ -426,16 +427,16 @@
Vector actual_y(num_rows);
actual_y.setZero();
- matrix->RightMultiply(x.data(), actual_y.data());
+ matrix->RightMultiplyAndAccumulate(x.data(), actual_y.data());
Matrix dense;
matrix->ToDenseMatrix(&dense);
Vector expected_y;
if (::testing::get<0>(param) ==
- CompressedRowSparseMatrix::UPPER_TRIANGULAR) {
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR) {
expected_y = dense.selfadjointView<Eigen::Upper>() * x;
} else if (::testing::get<0>(param) ==
- CompressedRowSparseMatrix::LOWER_TRIANGULAR) {
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR) {
expected_y = dense.selfadjointView<Eigen::Lower>() * x;
} else {
expected_y = dense * x;
@@ -457,21 +458,22 @@
INSTANTIATE_TEST_SUITE_P(
CompressedRowSparseMatrix,
- RightMultiplyTest,
- ::testing::Values(CompressedRowSparseMatrix::LOWER_TRIANGULAR,
- CompressedRowSparseMatrix::UPPER_TRIANGULAR,
- CompressedRowSparseMatrix::UNSYMMETRIC),
+ RightMultiplyAndAccumulateTest,
+ ::testing::Values(CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR,
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR,
+ CompressedRowSparseMatrix::StorageType::UNSYMMETRIC),
ParamInfoToString);
-class LeftMultiplyTest : public ::testing::TestWithParam<Param> {};
+class LeftMultiplyAndAccumulateTest : public ::testing::TestWithParam<Param> {};
-TEST_P(LeftMultiplyTest, _) {
+TEST_P(LeftMultiplyAndAccumulateTest, _) {
const int kMinNumBlocks = 1;
const int kMaxNumBlocks = 10;
const int kMinBlockSize = 1;
const int kMaxBlockSize = 5;
const int kNumTrials = 10;
-
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform(0.5, 1.0);
for (int num_blocks = kMinNumBlocks; num_blocks < kMaxNumBlocks;
++num_blocks) {
for (int trial = 0; trial < kNumTrials; ++trial) {
@@ -483,10 +485,10 @@
options.num_row_blocks = 2 * num_blocks;
options.min_row_block_size = kMinBlockSize;
options.max_row_block_size = kMaxBlockSize;
- options.block_density = std::max(0.5, RandDouble());
+ options.block_density = uniform(prng);
options.storage_type = ::testing::get<0>(param);
- std::unique_ptr<CompressedRowSparseMatrix> matrix(
- CompressedRowSparseMatrix::CreateRandomMatrix(options));
+ auto matrix =
+ CompressedRowSparseMatrix::CreateRandomMatrix(options, prng);
const int num_rows = matrix->num_rows();
const int num_cols = matrix->num_cols();
@@ -495,16 +497,16 @@
Vector actual_y(num_cols);
actual_y.setZero();
- matrix->LeftMultiply(x.data(), actual_y.data());
+ matrix->LeftMultiplyAndAccumulate(x.data(), actual_y.data());
Matrix dense;
matrix->ToDenseMatrix(&dense);
Vector expected_y;
if (::testing::get<0>(param) ==
- CompressedRowSparseMatrix::UPPER_TRIANGULAR) {
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR) {
expected_y = dense.selfadjointView<Eigen::Upper>() * x;
} else if (::testing::get<0>(param) ==
- CompressedRowSparseMatrix::LOWER_TRIANGULAR) {
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR) {
expected_y = dense.selfadjointView<Eigen::Lower>() * x;
} else {
expected_y = dense.transpose() * x;
@@ -526,10 +528,10 @@
INSTANTIATE_TEST_SUITE_P(
CompressedRowSparseMatrix,
- LeftMultiplyTest,
- ::testing::Values(CompressedRowSparseMatrix::LOWER_TRIANGULAR,
- CompressedRowSparseMatrix::UPPER_TRIANGULAR,
- CompressedRowSparseMatrix::UNSYMMETRIC),
+ LeftMultiplyAndAccumulateTest,
+ ::testing::Values(CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR,
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR,
+ CompressedRowSparseMatrix::StorageType::UNSYMMETRIC),
ParamInfoToString);
class SquaredColumnNormTest : public ::testing::TestWithParam<Param> {};
@@ -540,7 +542,8 @@
const int kMinBlockSize = 1;
const int kMaxBlockSize = 5;
const int kNumTrials = 10;
-
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform(0.5, 1.0);
for (int num_blocks = kMinNumBlocks; num_blocks < kMaxNumBlocks;
++num_blocks) {
for (int trial = 0; trial < kNumTrials; ++trial) {
@@ -552,10 +555,10 @@
options.num_row_blocks = 2 * num_blocks;
options.min_row_block_size = kMinBlockSize;
options.max_row_block_size = kMaxBlockSize;
- options.block_density = std::max(0.5, RandDouble());
+ options.block_density = uniform(prng);
options.storage_type = ::testing::get<0>(param);
- std::unique_ptr<CompressedRowSparseMatrix> matrix(
- CompressedRowSparseMatrix::CreateRandomMatrix(options));
+ auto matrix =
+ CompressedRowSparseMatrix::CreateRandomMatrix(options, prng);
const int num_cols = matrix->num_cols();
Vector actual(num_cols);
@@ -566,11 +569,11 @@
matrix->ToDenseMatrix(&dense);
Vector expected;
if (::testing::get<0>(param) ==
- CompressedRowSparseMatrix::UPPER_TRIANGULAR) {
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR) {
const Matrix full = dense.selfadjointView<Eigen::Upper>();
expected = full.colwise().squaredNorm();
} else if (::testing::get<0>(param) ==
- CompressedRowSparseMatrix::LOWER_TRIANGULAR) {
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR) {
const Matrix full = dense.selfadjointView<Eigen::Lower>();
expected = full.colwise().squaredNorm();
} else {
@@ -592,12 +595,78 @@
INSTANTIATE_TEST_SUITE_P(
CompressedRowSparseMatrix,
SquaredColumnNormTest,
- ::testing::Values(CompressedRowSparseMatrix::LOWER_TRIANGULAR,
- CompressedRowSparseMatrix::UPPER_TRIANGULAR,
- CompressedRowSparseMatrix::UNSYMMETRIC),
+ ::testing::Values(CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR,
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR,
+ CompressedRowSparseMatrix::StorageType::UNSYMMETRIC),
ParamInfoToString);
+const int kMaxNumThreads = 8;
+class CompressedRowSparseMatrixParallelTest
+ : public ::testing::TestWithParam<int> {
+ void SetUp() final { context_.EnsureMinimumThreads(kMaxNumThreads); }
+
+ protected:
+ ContextImpl context_;
+};
+
+TEST_P(CompressedRowSparseMatrixParallelTest,
+ RightMultiplyAndAccumulateUnsymmetric) {
+ const int kMinNumBlocks = 1;
+ const int kMaxNumBlocks = 10;
+ const int kMinBlockSize = 1;
+ const int kMaxBlockSize = 5;
+ const int kNumTrials = 10;
+ const int kNumThreads = GetParam();
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform(0.5, 1.0);
+ for (int num_blocks = kMinNumBlocks; num_blocks < kMaxNumBlocks;
+ ++num_blocks) {
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ CompressedRowSparseMatrix::RandomMatrixOptions options;
+ options.num_col_blocks = num_blocks;
+ options.min_col_block_size = kMinBlockSize;
+ options.max_col_block_size = kMaxBlockSize;
+ options.num_row_blocks = 2 * num_blocks;
+ options.min_row_block_size = kMinBlockSize;
+ options.max_row_block_size = kMaxBlockSize;
+ options.block_density = uniform(prng);
+ options.storage_type =
+ CompressedRowSparseMatrix::StorageType::UNSYMMETRIC;
+ auto matrix =
+ CompressedRowSparseMatrix::CreateRandomMatrix(options, prng);
+ const int num_rows = matrix->num_rows();
+ const int num_cols = matrix->num_cols();
+
+ Vector x(num_cols);
+ x.setRandom();
+
+ Vector actual_y(num_rows);
+ actual_y.setZero();
+ matrix->RightMultiplyAndAccumulate(
+ x.data(), actual_y.data(), &context_, kNumThreads);
+
+ Matrix dense;
+ matrix->ToDenseMatrix(&dense);
+ Vector expected_y = dense * x;
+
+ ASSERT_NEAR((expected_y - actual_y).norm() / actual_y.norm(),
+ 0.0,
+ std::numeric_limits<double>::epsilon() * 10)
+ << "\n"
+ << dense << "x:\n"
+ << x.transpose() << "\n"
+ << "expected: \n"
+ << expected_y.transpose() << "\n"
+ << "actual: \n"
+ << actual_y.transpose();
+ }
+ }
+}
+INSTANTIATE_TEST_SUITE_P(ParallelProducts,
+ CompressedRowSparseMatrixParallelTest,
+ ::testing::Values(1, 2, 4, 8),
+ ::testing::PrintToStringParamName());
+
// TODO(sameeragarwal) Add tests for the random matrix creation methods.
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
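
The tests above replace the global RandDouble()/RandNormal()/Uniform() helpers with an explicitly seeded std::mt19937 plus standard distributions, which keeps the random matrices reproducible from run to run. A minimal sketch of that pattern (only the 0.5-1.0 density range is taken from the tests; the rest is illustrative):

#include <cstdio>
#include <random>

int main() {
  // A default-constructed std::mt19937 uses a fixed seed, so test runs are
  // reproducible without any global random state.
  std::mt19937 prng;
  std::uniform_real_distribution<double> uniform(0.5, 1.0);  // block density
  std::normal_distribution<double> standard_normal;          // matrix entries

  const double density = uniform(prng);
  const double value = standard_normal(prng);
  std::printf("density = %f, value = %f\n", density, value);
}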
diff --git a/internal/ceres/concurrent_queue.h b/internal/ceres/concurrent_queue.h
index a04d147..5f490ab 100644
--- a/internal/ceres/concurrent_queue.h
+++ b/internal/ceres/concurrent_queue.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,8 +38,7 @@
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// A thread-safe multi-producer, multi-consumer queue for queueing items that
// are typically handled asynchronously by multiple threads. The ConcurrentQueue
@@ -78,7 +77,7 @@
class ConcurrentQueue {
public:
// Defaults the queue to blocking on Wait calls.
- ConcurrentQueue() : wait_(true) {}
+ ConcurrentQueue() = default;
// Atomically push an element onto the queue. If a thread was waiting for an
// element, wake it up.
@@ -149,10 +148,9 @@
std::queue<T> queue_;
// If true, signals that callers of Wait will block waiting to pop an
// element off the queue.
- bool wait_;
+ bool wait_{true};
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_CONCURRENT_QUEUE_H_
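
The comments above describe ConcurrentQueue as a blocking multi-producer, multi-consumer queue; wait_ starts out true, so Wait blocks until an element arrives. A minimal producer/consumer sketch, assuming Push(const T&) and bool Wait(T*) members along the lines suggested by those comments (treat the exact signatures as assumptions rather than the authoritative interface):

    ConcurrentQueue<int> queue;

    std::thread producer([&queue] {
      for (int i = 0; i < 10; ++i) {
        queue.Push(i);  // Wakes a waiting consumer, if any.
      }
    });

    std::thread consumer([&queue] {
      int value = 0;
      for (int i = 0; i < 10; ++i) {
        queue.Wait(&value);  // Blocks until an element is available.
        // ... process value ...
      }
    });

    producer.join();
    consumer.join();
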
diff --git a/internal/ceres/concurrent_queue_test.cc b/internal/ceres/concurrent_queue_test.cc
index 430111a..db1446f 100644
--- a/internal/ceres/concurrent_queue_test.cc
+++ b/internal/ceres/concurrent_queue_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,20 +28,16 @@
//
// Author: vitus@google.com (Michael Vitus)
-// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
-
-#ifdef CERES_USE_CXX_THREADS
+#include "ceres/concurrent_queue.h"
#include <chrono>
#include <thread>
-#include "ceres/concurrent_queue.h"
+#include "ceres/internal/config.h"
#include "gmock/gmock.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// A basic test of push and pop.
TEST(ConcurrentQueue, PushPop) {
@@ -300,7 +296,4 @@
EXPECT_EQ(13456, value);
}
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_USE_CXX_THREADS
+} // namespace ceres::internal
diff --git a/internal/ceres/conditioned_cost_function.cc b/internal/ceres/conditioned_cost_function.cc
index fb4c52a..5c826a9 100644
--- a/internal/ceres/conditioned_cost_function.cc
+++ b/internal/ceres/conditioned_cost_function.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -98,7 +98,7 @@
double** conditioner_derivative_pointer2 =
&conditioner_derivative_pointer;
if (!jacobians) {
- conditioner_derivative_pointer2 = NULL;
+ conditioner_derivative_pointer2 = nullptr;
}
double unconditioned_residual = residuals[r];
diff --git a/internal/ceres/conditioned_cost_function_test.cc b/internal/ceres/conditioned_cost_function_test.cc
index f21f84c..e5938bc 100644
--- a/internal/ceres/conditioned_cost_function_test.cc
+++ b/internal/ceres/conditioned_cost_function_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -85,7 +85,7 @@
VectorRef v2_vector(v2, kTestCostFunctionSize, 1);
Matrix identity(kTestCostFunctionSize, kTestCostFunctionSize);
identity.setIdentity();
- NormalPrior* difference_cost_function = new NormalPrior(identity, v2_vector);
+ auto* difference_cost_function = new NormalPrior(identity, v2_vector);
std::vector<CostFunction*> conditioners;
for (int i = 0; i < kTestCostFunctionSize; i++) {
@@ -127,7 +127,7 @@
VectorRef v2_vector(v2, kTestCostFunctionSize, 1);
Matrix identity =
Matrix::Identity(kTestCostFunctionSize, kTestCostFunctionSize);
- NormalPrior* difference_cost_function = new NormalPrior(identity, v2_vector);
+ auto* difference_cost_function = new NormalPrior(identity, v2_vector);
CostFunction* conditioner = new LinearCostFunction(2, 7);
std::vector<CostFunction*> conditioners;
for (int i = 0; i < kTestCostFunctionSize; i++) {
diff --git a/internal/ceres/conjugate_gradients_solver.cc b/internal/ceres/conjugate_gradients_solver.cc
deleted file mode 100644
index 3019628..0000000
--- a/internal/ceres/conjugate_gradients_solver.cc
+++ /dev/null
@@ -1,252 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
-//
-// A preconditioned conjugate gradients solver
-// (ConjugateGradientsSolver) for positive semidefinite linear
-// systems.
-//
-// We have also augmented the termination criterion used by this
-// solver to support not just residual based termination but also
-// termination based on decrease in the value of the quadratic model
-// that CG optimizes.
-
-#include "ceres/conjugate_gradients_solver.h"
-
-#include <cmath>
-#include <cstddef>
-
-#include "ceres/internal/eigen.h"
-#include "ceres/linear_operator.h"
-#include "ceres/stringprintf.h"
-#include "ceres/types.h"
-#include "glog/logging.h"
-
-namespace ceres {
-namespace internal {
-namespace {
-
-bool IsZeroOrInfinity(double x) { return ((x == 0.0) || std::isinf(x)); }
-
-} // namespace
-
-ConjugateGradientsSolver::ConjugateGradientsSolver(
- const LinearSolver::Options& options)
- : options_(options) {}
-
-LinearSolver::Summary ConjugateGradientsSolver::Solve(
- LinearOperator* A,
- const double* b,
- const LinearSolver::PerSolveOptions& per_solve_options,
- double* x) {
- CHECK(A != nullptr);
- CHECK(x != nullptr);
- CHECK(b != nullptr);
- CHECK_EQ(A->num_rows(), A->num_cols());
-
- LinearSolver::Summary summary;
- summary.termination_type = LINEAR_SOLVER_NO_CONVERGENCE;
- summary.message = "Maximum number of iterations reached.";
- summary.num_iterations = 0;
-
- const int num_cols = A->num_cols();
- VectorRef xref(x, num_cols);
- ConstVectorRef bref(b, num_cols);
-
- const double norm_b = bref.norm();
- if (norm_b == 0.0) {
- xref.setZero();
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
- summary.message = "Convergence. |b| = 0.";
- return summary;
- }
-
- Vector r(num_cols);
- Vector p(num_cols);
- Vector z(num_cols);
- Vector tmp(num_cols);
-
- const double tol_r = per_solve_options.r_tolerance * norm_b;
-
- tmp.setZero();
- A->RightMultiply(x, tmp.data());
- r = bref - tmp;
- double norm_r = r.norm();
- if (options_.min_num_iterations == 0 && norm_r <= tol_r) {
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
- summary.message =
- StringPrintf("Convergence. |r| = %e <= %e.", norm_r, tol_r);
- return summary;
- }
-
- double rho = 1.0;
-
- // Initial value of the quadratic model Q = x'Ax - 2 * b'x.
- double Q0 = -1.0 * xref.dot(bref + r);
-
- for (summary.num_iterations = 1;; ++summary.num_iterations) {
- // Apply preconditioner
- if (per_solve_options.preconditioner != NULL) {
- z.setZero();
- per_solve_options.preconditioner->RightMultiply(r.data(), z.data());
- } else {
- z = r;
- }
-
- double last_rho = rho;
- rho = r.dot(z);
- if (IsZeroOrInfinity(rho)) {
- summary.termination_type = LINEAR_SOLVER_FAILURE;
- summary.message = StringPrintf("Numerical failure. rho = r'z = %e.", rho);
- break;
- }
-
- if (summary.num_iterations == 1) {
- p = z;
- } else {
- double beta = rho / last_rho;
- if (IsZeroOrInfinity(beta)) {
- summary.termination_type = LINEAR_SOLVER_FAILURE;
- summary.message = StringPrintf(
- "Numerical failure. beta = rho_n / rho_{n-1} = %e, "
- "rho_n = %e, rho_{n-1} = %e",
- beta,
- rho,
- last_rho);
- break;
- }
- p = z + beta * p;
- }
-
- Vector& q = z;
- q.setZero();
- A->RightMultiply(p.data(), q.data());
- const double pq = p.dot(q);
- if ((pq <= 0) || std::isinf(pq)) {
- summary.termination_type = LINEAR_SOLVER_NO_CONVERGENCE;
- summary.message = StringPrintf(
- "Matrix is indefinite, no more progress can be made. "
- "p'q = %e. |p| = %e, |q| = %e",
- pq,
- p.norm(),
- q.norm());
- break;
- }
-
- const double alpha = rho / pq;
- if (std::isinf(alpha)) {
- summary.termination_type = LINEAR_SOLVER_FAILURE;
- summary.message = StringPrintf(
- "Numerical failure. alpha = rho / pq = %e, rho = %e, pq = %e.",
- alpha,
- rho,
- pq);
- break;
- }
-
- xref = xref + alpha * p;
-
- // Ideally we would just use the update r = r - alpha*q to keep
- // track of the residual vector. However this estimate tends to
- // drift over time due to round off errors. Thus every
- // residual_reset_period iterations, we calculate the residual as
- // r = b - Ax. We do not do this every iteration because this
- // requires an additional matrix vector multiply which would
- // double the complexity of the CG algorithm.
- if (summary.num_iterations % options_.residual_reset_period == 0) {
- tmp.setZero();
- A->RightMultiply(x, tmp.data());
- r = bref - tmp;
- } else {
- r = r - alpha * q;
- }
-
- // Quadratic model based termination.
- // Q1 = x'Ax - 2 * b' x.
- const double Q1 = -1.0 * xref.dot(bref + r);
-
- // For PSD matrices A, let
- //
- // Q(x) = x'Ax - 2b'x
- //
- // be the cost of the quadratic function defined by A and b. Then,
- // the solver terminates at iteration i if
- //
- // i * (Q(x_i) - Q(x_i-1)) / Q(x_i) < q_tolerance.
- //
- // This termination criterion is more useful when using CG to
- // solve the Newton step. This particular convergence test comes
- // from Stephen Nash's work on truncated Newton
- // methods. References:
- //
- // 1. Stephen G. Nash & Ariela Sofer, Assessing A Search
- // Direction Within A Truncated Newton Method, Operation
- // Research Letters 9(1990) 219-221.
- //
- // 2. Stephen G. Nash, A Survey of Truncated Newton Methods,
- // Journal of Computational and Applied Mathematics,
- // 124(1-2), 45-59, 2000.
- //
- const double zeta = summary.num_iterations * (Q1 - Q0) / Q1;
- if (zeta < per_solve_options.q_tolerance &&
- summary.num_iterations >= options_.min_num_iterations) {
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
- summary.message =
- StringPrintf("Iteration: %d Convergence: zeta = %e < %e. |r| = %e",
- summary.num_iterations,
- zeta,
- per_solve_options.q_tolerance,
- r.norm());
- break;
- }
- Q0 = Q1;
-
- // Residual based termination.
- norm_r = r.norm();
- if (norm_r <= tol_r &&
- summary.num_iterations >= options_.min_num_iterations) {
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
- summary.message =
- StringPrintf("Iteration: %d Convergence. |r| = %e <= %e.",
- summary.num_iterations,
- norm_r,
- tol_r);
- break;
- }
-
- if (summary.num_iterations >= options_.max_num_iterations) {
- break;
- }
- }
-
- return summary;
-}
-
-} // namespace internal
-} // namespace ceres
diff --git a/internal/ceres/conjugate_gradients_solver.h b/internal/ceres/conjugate_gradients_solver.h
index f79ca49..84383ea 100644
--- a/internal/ceres/conjugate_gradients_solver.h
+++ b/internal/ceres/conjugate_gradients_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,40 +34,278 @@
#ifndef CERES_INTERNAL_CONJUGATE_GRADIENTS_SOLVER_H_
#define CERES_INTERNAL_CONJUGATE_GRADIENTS_SOLVER_H_
-#include "ceres/internal/port.h"
+#include <cmath>
+#include <cstddef>
+#include <utility>
+
+#include "ceres/eigen_vector_ops.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
+#include "ceres/linear_operator.h"
#include "ceres/linear_solver.h"
+#include "ceres/stringprintf.h"
+#include "ceres/types.h"
+#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-class LinearOperator;
-
-// This class implements the now classical Conjugate Gradients
-// algorithm of Hestenes & Stiefel for solving postive semidefinite
-// linear sytems. Optionally it can use a preconditioner also to
-// reduce the condition number of the linear system and improve the
-// convergence rate. Modern references for Conjugate Gradients are the
-// books by Yousef Saad and Trefethen & Bau. This implementation of CG
-// has been augmented with additional termination tests that are
-// needed for forcing early termination when used as part of an
-// inexact Newton solver.
-//
-// For more details see the documentation for
-// LinearSolver::PerSolveOptions::r_tolerance and
-// LinearSolver::PerSolveOptions::q_tolerance in linear_solver.h.
-class CERES_EXPORT_INTERNAL ConjugateGradientsSolver : public LinearSolver {
+// Interface for the linear operator used by ConjugateGradientsSolver.
+template <typename DenseVectorType>
+class ConjugateGradientsLinearOperator {
public:
- explicit ConjugateGradientsSolver(const LinearSolver::Options& options);
- Summary Solve(LinearOperator* A,
- const double* b,
- const LinearSolver::PerSolveOptions& per_solve_options,
- double* x) final;
-
- private:
- const LinearSolver::Options options_;
+ ~ConjugateGradientsLinearOperator() = default;
+ virtual void RightMultiplyAndAccumulate(const DenseVectorType& x,
+ DenseVectorType& y) = 0;
};
-} // namespace internal
-} // namespace ceres
+// Adapter class that makes LinearOperator appear like an instance of
+// ConjugateGradientsLinearOperator.
+class LinearOperatorAdapter : public ConjugateGradientsLinearOperator<Vector> {
+ public:
+ LinearOperatorAdapter(LinearOperator& linear_operator)
+ : linear_operator_(linear_operator) {}
+
+ void RightMultiplyAndAccumulate(const Vector& x, Vector& y) final {
+ linear_operator_.RightMultiplyAndAccumulate(x, y);
+ }
+
+ private:
+ LinearOperator& linear_operator_;
+};
+
+// Options to control the ConjugateGradientsSolver. For detailed documentation
+// for each of these options see linear_solver.h
+struct ConjugateGradientsSolverOptions {
+ int min_num_iterations = 1;
+ int max_num_iterations = 1;
+ int residual_reset_period = 10;
+ double r_tolerance = 0.0;
+ double q_tolerance = 0.0;
+ ContextImpl* context = nullptr;
+ int num_threads = 1;
+};
+
+// This function implements the now classical Conjugate Gradients algorithm of
+// Hestenes & Stiefel for solving positive semidefinite linear systems.
+// Optionally it can use a preconditioner also to reduce the condition number of
+// the linear system and improve the convergence rate. Modern references for
+// Conjugate Gradients are the books by Yousef Saad and Trefethen & Bau. This
+// implementation of CG has been augmented with additional termination tests
+// that are needed for forcing early termination when used as part of an inexact
+// Newton solver.
+//
+// This implementation is templated over DenseVectorType and then in turn on
+// ConjugateGradientsLinearOperator, which allows us to write an abstract
+// implementation of the Conjugate Gradients algorithm without worrying about how
+// these objects are implemented or where they are stored. In particular it
+// allows us to have a single implementation that works on CPU and GPU based
+// matrices and vectors.
+//
+// scratch must contain pointers to four DenseVector objects of the same size as
+// rhs and solution. By asking the user for scratch space, we guarantee that we
+// will not perform any allocations inside this function.
+template <typename DenseVectorType>
+LinearSolver::Summary ConjugateGradientsSolver(
+ const ConjugateGradientsSolverOptions options,
+ ConjugateGradientsLinearOperator<DenseVectorType>& lhs,
+ const DenseVectorType& rhs,
+ ConjugateGradientsLinearOperator<DenseVectorType>& preconditioner,
+ DenseVectorType* scratch[4],
+ DenseVectorType& solution) {
+ auto IsZeroOrInfinity = [](double x) {
+ return ((x == 0.0) || std::isinf(x));
+ };
+
+ DenseVectorType& p = *scratch[0];
+ DenseVectorType& r = *scratch[1];
+ DenseVectorType& z = *scratch[2];
+ DenseVectorType& tmp = *scratch[3];
+
+ LinearSolver::Summary summary;
+ summary.termination_type = LinearSolverTerminationType::NO_CONVERGENCE;
+ summary.message = "Maximum number of iterations reached.";
+ summary.num_iterations = 0;
+
+ const double norm_rhs = Norm(rhs, options.context, options.num_threads);
+ if (norm_rhs == 0.0) {
+ SetZero(solution, options.context, options.num_threads);
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
+ summary.message = "Convergence. |b| = 0.";
+ return summary;
+ }
+
+ const double tol_r = options.r_tolerance * norm_rhs;
+
+ SetZero(tmp, options.context, options.num_threads);
+ lhs.RightMultiplyAndAccumulate(solution, tmp);
+
+ // r = rhs - tmp
+ Axpby(1.0, rhs, -1.0, tmp, r, options.context, options.num_threads);
+
+ double norm_r = Norm(r, options.context, options.num_threads);
+ if (options.min_num_iterations == 0 && norm_r <= tol_r) {
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
+ summary.message =
+ StringPrintf("Convergence. |r| = %e <= %e.", norm_r, tol_r);
+ return summary;
+ }
+
+ double rho = 1.0;
+
+ // Initial value of the quadratic model Q = x'Ax - 2 * b'x.
+ // double Q0 = -1.0 * solution.dot(rhs + r);
+ Axpby(1.0, rhs, 1.0, r, tmp, options.context, options.num_threads);
+ double Q0 = -Dot(solution, tmp, options.context, options.num_threads);
+
+ for (summary.num_iterations = 1;; ++summary.num_iterations) {
+ SetZero(z, options.context, options.num_threads);
+ preconditioner.RightMultiplyAndAccumulate(r, z);
+
+ const double last_rho = rho;
+ // rho = r.dot(z);
+ rho = Dot(r, z, options.context, options.num_threads);
+ if (IsZeroOrInfinity(rho)) {
+ summary.termination_type = LinearSolverTerminationType::FAILURE;
+ summary.message = StringPrintf("Numerical failure. rho = r'z = %e.", rho);
+ break;
+ }
+
+ if (summary.num_iterations == 1) {
+ Copy(z, p, options.context, options.num_threads);
+ } else {
+ const double beta = rho / last_rho;
+ if (IsZeroOrInfinity(beta)) {
+ summary.termination_type = LinearSolverTerminationType::FAILURE;
+ summary.message = StringPrintf(
+ "Numerical failure. beta = rho_n / rho_{n-1} = %e, "
+ "rho_n = %e, rho_{n-1} = %e",
+ beta,
+ rho,
+ last_rho);
+ break;
+ }
+ // p = z + beta * p;
+ Axpby(1.0, z, beta, p, p, options.context, options.num_threads);
+ }
+
+ DenseVectorType& q = z;
+ SetZero(q, options.context, options.num_threads);
+ lhs.RightMultiplyAndAccumulate(p, q);
+ const double pq = Dot(p, q, options.context, options.num_threads);
+ if ((pq <= 0) || std::isinf(pq)) {
+ summary.termination_type = LinearSolverTerminationType::NO_CONVERGENCE;
+ summary.message = StringPrintf(
+ "Matrix is indefinite, no more progress can be made. "
+ "p'q = %e. |p| = %e, |q| = %e",
+ pq,
+ Norm(p, options.context, options.num_threads),
+ Norm(q, options.context, options.num_threads));
+ break;
+ }
+
+ const double alpha = rho / pq;
+ if (std::isinf(alpha)) {
+ summary.termination_type = LinearSolverTerminationType::FAILURE;
+ summary.message = StringPrintf(
+ "Numerical failure. alpha = rho / pq = %e, rho = %e, pq = %e.",
+ alpha,
+ rho,
+ pq);
+ break;
+ }
+
+ // solution = solution + alpha * p;
+ Axpby(1.0,
+ solution,
+ alpha,
+ p,
+ solution,
+ options.context,
+ options.num_threads);
+
+ // Ideally we would just use the update r = r - alpha*q to keep
+ // track of the residual vector. However this estimate tends to
+ // drift over time due to round off errors. Thus every
+ // residual_reset_period iterations, we calculate the residual as
+ // r = b - Ax. We do not do this every iteration because this
+ // requires an additional matrix vector multiply which would
+ // double the complexity of the CG algorithm.
+ if (summary.num_iterations % options.residual_reset_period == 0) {
+ SetZero(tmp, options.context, options.num_threads);
+ lhs.RightMultiplyAndAccumulate(solution, tmp);
+ Axpby(1.0, rhs, -1.0, tmp, r, options.context, options.num_threads);
+ // r = rhs - tmp;
+ } else {
+ Axpby(1.0, r, -alpha, q, r, options.context, options.num_threads);
+ // r = r - alpha * q;
+ }
+
+ // Quadratic model based termination.
+ // Q1 = x'Ax - 2 * b' x.
+ // const double Q1 = -1.0 * solution.dot(rhs + r);
+ Axpby(1.0, rhs, 1.0, r, tmp, options.context, options.num_threads);
+ const double Q1 = -Dot(solution, tmp, options.context, options.num_threads);
+
+ // For PSD matrices A, let
+ //
+ // Q(x) = x'Ax - 2b'x
+ //
+ // be the cost of the quadratic function defined by A and b. Then,
+ // the solver terminates at iteration i if
+ //
+ // i * (Q(x_i) - Q(x_i-1)) / Q(x_i) < q_tolerance.
+ //
+ // This termination criterion is more useful when using CG to
+ // solve the Newton step. This particular convergence test comes
+ // from Stephen Nash's work on truncated Newton
+ // methods. References:
+ //
+ // 1. Stephen G. Nash & Ariela Sofer, Assessing A Search
+ //     Direction Within A Truncated Newton Method, Operations
+ // Research Letters 9(1990) 219-221.
+ //
+ // 2. Stephen G. Nash, A Survey of Truncated Newton Methods,
+ // Journal of Computational and Applied Mathematics,
+ // 124(1-2), 45-59, 2000.
+ //
+ const double zeta = summary.num_iterations * (Q1 - Q0) / Q1;
+ if (zeta < options.q_tolerance &&
+ summary.num_iterations >= options.min_num_iterations) {
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
+ summary.message =
+ StringPrintf("Iteration: %d Convergence: zeta = %e < %e. |r| = %e",
+ summary.num_iterations,
+ zeta,
+ options.q_tolerance,
+ Norm(r, options.context, options.num_threads));
+ break;
+ }
+ Q0 = Q1;
+
+ // Residual based termination.
+ norm_r = Norm(r, options.context, options.num_threads);
+ if (norm_r <= tol_r &&
+ summary.num_iterations >= options.min_num_iterations) {
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
+ summary.message =
+ StringPrintf("Iteration: %d Convergence. |r| = %e <= %e.",
+ summary.num_iterations,
+ norm_r,
+ tol_r);
+ break;
+ }
+
+ if (summary.num_iterations >= options.max_num_iterations) {
+ break;
+ }
+ }
+
+ return summary;
+}
+
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_CONJUGATE_GRADIENTS_SOLVER_H_
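
Because the solver is templated on ConjugateGradientsLinearOperator, any operator that can apply y += A * x can be plugged in as the left-hand side or the preconditioner. A minimal sketch with a hypothetical EigenDenseOperator wrapping an Eigen matrix (the operator name, A, b and n are illustrative and not part of Ceres); it mirrors the scratch-vector plumbing used by the tests below:

    class EigenDenseOperator final
        : public ConjugateGradientsLinearOperator<Vector> {
     public:
      explicit EigenDenseOperator(Matrix a) : a_(std::move(a)) {}
      void RightMultiplyAndAccumulate(const Vector& x, Vector& y) final {
        y += a_ * x;  // Accumulate; do not overwrite y.
      }

     private:
      Matrix a_;
    };

    ConjugateGradientsSolverOptions options;
    options.max_num_iterations = 100;
    options.r_tolerance = 1e-9;

    EigenDenseOperator lhs(A);
    EigenDenseOperator preconditioner(Matrix::Identity(n, n));

    Vector x = Vector::Zero(n);
    Vector scratch[4] = {
        Vector::Zero(n), Vector::Zero(n), Vector::Zero(n), Vector::Zero(n)};
    Vector* scratch_ptr[4] = {
        &scratch[0], &scratch[1], &scratch[2], &scratch[3]};

    LinearSolver::Summary summary = ConjugateGradientsSolver(
        options, lhs, b, preconditioner, scratch_ptr, x);
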
diff --git a/internal/ceres/conjugate_gradients_solver_test.cc b/internal/ceres/conjugate_gradients_solver_test.cc
index b11e522..4727564 100644
--- a/internal/ceres/conjugate_gradients_solver_test.cc
+++ b/internal/ceres/conjugate_gradients_solver_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,12 +37,12 @@
#include "ceres/internal/eigen.h"
#include "ceres/linear_solver.h"
+#include "ceres/preconditioner.h"
#include "ceres/triplet_sparse_matrix.h"
#include "ceres/types.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST(ConjugateGradientTest, Solves3x3IdentitySystem) {
double diagonal[] = {1.0, 1.0, 1.0};
@@ -59,17 +59,27 @@
x(1) = 1;
x(2) = 1;
- LinearSolver::Options options;
- options.max_num_iterations = 10;
+ ConjugateGradientsSolverOptions cg_options;
+ cg_options.min_num_iterations = 1;
+ cg_options.max_num_iterations = 10;
+ cg_options.residual_reset_period = 20;
+ cg_options.q_tolerance = 0.0;
+ cg_options.r_tolerance = 1e-9;
- LinearSolver::PerSolveOptions per_solve_options;
- per_solve_options.r_tolerance = 1e-9;
+ Vector scratch[4];
+ for (int i = 0; i < 4; ++i) {
+ scratch[i] = Vector::Zero(A->num_cols());
+ }
- ConjugateGradientsSolver solver(options);
- LinearSolver::Summary summary =
- solver.Solve(A.get(), b.data(), per_solve_options, x.data());
+ IdentityPreconditioner identity(A->num_cols());
+ LinearOperatorAdapter lhs(*A);
+ LinearOperatorAdapter preconditioner(identity);
+ Vector* scratch_array[4] = {
+ &scratch[0], &scratch[1], &scratch[2], &scratch[3]};
+ auto summary = ConjugateGradientsSolver(
+ cg_options, lhs, b, preconditioner, scratch_array, x);
- EXPECT_EQ(summary.termination_type, LINEAR_SOLVER_SUCCESS);
+ EXPECT_EQ(summary.termination_type, LinearSolverTerminationType::SUCCESS);
ASSERT_EQ(summary.num_iterations, 1);
ASSERT_DOUBLE_EQ(1, x(0));
@@ -115,22 +125,31 @@
x(1) = 1;
x(2) = 1;
- LinearSolver::Options options;
- options.max_num_iterations = 10;
+ ConjugateGradientsSolverOptions cg_options;
+ cg_options.min_num_iterations = 1;
+ cg_options.max_num_iterations = 10;
+ cg_options.residual_reset_period = 20;
+ cg_options.q_tolerance = 0.0;
+ cg_options.r_tolerance = 1e-9;
- LinearSolver::PerSolveOptions per_solve_options;
- per_solve_options.r_tolerance = 1e-9;
+ Vector scratch[4];
+ for (int i = 0; i < 4; ++i) {
+ scratch[i] = Vector::Zero(A->num_cols());
+ }
+ Vector* scratch_array[4] = {
+ &scratch[0], &scratch[1], &scratch[2], &scratch[3]};
+ IdentityPreconditioner identity(A->num_cols());
+ LinearOperatorAdapter lhs(*A);
+ LinearOperatorAdapter preconditioner(identity);
- ConjugateGradientsSolver solver(options);
- LinearSolver::Summary summary =
- solver.Solve(A.get(), b.data(), per_solve_options, x.data());
+ auto summary = ConjugateGradientsSolver(
+ cg_options, lhs, b, preconditioner, scratch_array, x);
- EXPECT_EQ(summary.termination_type, LINEAR_SOLVER_SUCCESS);
+ EXPECT_EQ(summary.termination_type, LinearSolverTerminationType::SUCCESS);
ASSERT_DOUBLE_EQ(0, x(0));
ASSERT_DOUBLE_EQ(1, x(1));
ASSERT_DOUBLE_EQ(2, x(2));
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/context.cc b/internal/ceres/context.cc
index 55e7635..e5d85f6 100644
--- a/internal/ceres/context.cc
+++ b/internal/ceres/context.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,6 +34,8 @@
namespace ceres {
+Context::Context() = default;
Context* Context::Create() { return new internal::ContextImpl(); }
+Context::~Context() = default;
} // namespace ceres
diff --git a/internal/ceres/context_impl.cc b/internal/ceres/context_impl.cc
index 20fe5cb..2b9d9cc 100644
--- a/internal/ceres/context_impl.cc
+++ b/internal/ceres/context_impl.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,13 +30,167 @@
#include "ceres/context_impl.h"
-namespace ceres {
-namespace internal {
+#include <string>
+
+#include "ceres/internal/config.h"
+#include "ceres/stringprintf.h"
+#include "ceres/wall_time.h"
+
+#ifndef CERES_NO_CUDA
+#include "cublas_v2.h"
+#include "cuda_runtime.h"
+#include "cusolverDn.h"
+#endif // CERES_NO_CUDA
+
+namespace ceres::internal {
+
+ContextImpl::ContextImpl() = default;
+
+#ifndef CERES_NO_CUDA
+void ContextImpl::TearDown() {
+ if (cusolver_handle_ != nullptr) {
+ cusolverDnDestroy(cusolver_handle_);
+ cusolver_handle_ = nullptr;
+ }
+ if (cublas_handle_ != nullptr) {
+ cublasDestroy(cublas_handle_);
+ cublas_handle_ = nullptr;
+ }
+ if (cusparse_handle_ != nullptr) {
+ cusparseDestroy(cusparse_handle_);
+ cusparse_handle_ = nullptr;
+ }
+ for (auto& s : streams_) {
+ if (s != nullptr) {
+ cudaStreamDestroy(s);
+ s = nullptr;
+ }
+ }
+ is_cuda_initialized_ = false;
+}
+
+std::string ContextImpl::CudaConfigAsString() const {
+ return ceres::internal::StringPrintf(
+ "======================= CUDA Device Properties ======================\n"
+ "Cuda version : %d.%d\n"
+ "Device ID : %d\n"
+ "Device name : %s\n"
+ "Total GPU memory : %6.f MiB\n"
+ "GPU memory available : %6.f MiB\n"
+ "Compute capability : %d.%d\n"
+ "Warp size : %d\n"
+ "Max threads per block : %d\n"
+ "Max threads per dim : %d %d %d\n"
+ "Max grid size : %d %d %d\n"
+ "Multiprocessor count : %d\n"
+ "cudaMallocAsync supported : %s\n"
+ "====================================================================",
+ cuda_version_major_,
+ cuda_version_minor_,
+ gpu_device_id_in_use_,
+ gpu_device_properties_.name,
+ gpu_device_properties_.totalGlobalMem / 1024.0 / 1024.0,
+ GpuMemoryAvailable() / 1024.0 / 1024.0,
+ gpu_device_properties_.major,
+ gpu_device_properties_.minor,
+ gpu_device_properties_.warpSize,
+ gpu_device_properties_.maxThreadsPerBlock,
+ gpu_device_properties_.maxThreadsDim[0],
+ gpu_device_properties_.maxThreadsDim[1],
+ gpu_device_properties_.maxThreadsDim[2],
+ gpu_device_properties_.maxGridSize[0],
+ gpu_device_properties_.maxGridSize[1],
+ gpu_device_properties_.maxGridSize[2],
+ gpu_device_properties_.multiProcessorCount,
+ // In CUDA 12.0.0+ cudaDeviceProp has field memoryPoolsSupported, but it
+ // is not available in older versions
+ is_cuda_memory_pools_supported_ ? "Yes" : "No");
+}
+
+size_t ContextImpl::GpuMemoryAvailable() const {
+ size_t free, total;
+ cudaMemGetInfo(&free, &total);
+ return free;
+}
+
+bool ContextImpl::InitCuda(std::string* message) {
+ if (is_cuda_initialized_) {
+ return true;
+ }
+ CHECK_EQ(cudaGetDevice(&gpu_device_id_in_use_), cudaSuccess);
+ int cuda_version;
+ CHECK_EQ(cudaRuntimeGetVersion(&cuda_version), cudaSuccess);
+ cuda_version_major_ = cuda_version / 1000;
+ cuda_version_minor_ = (cuda_version % 1000) / 10;
+ CHECK_EQ(
+ cudaGetDeviceProperties(&gpu_device_properties_, gpu_device_id_in_use_),
+ cudaSuccess);
+#if CUDART_VERSION >= 11020
+ int is_cuda_memory_pools_supported;
+ CHECK_EQ(cudaDeviceGetAttribute(&is_cuda_memory_pools_supported,
+ cudaDevAttrMemoryPoolsSupported,
+ gpu_device_id_in_use_),
+ cudaSuccess);
+ is_cuda_memory_pools_supported_ = is_cuda_memory_pools_supported == 1;
+#endif
+ VLOG(3) << "\n" << CudaConfigAsString();
+ EventLogger event_logger("InitCuda");
+ if (cublasCreate(&cublas_handle_) != CUBLAS_STATUS_SUCCESS) {
+ *message =
+ "CUDA initialization failed because cuBLAS::cublasCreate failed.";
+ cublas_handle_ = nullptr;
+ return false;
+ }
+ event_logger.AddEvent("cublasCreate");
+ if (cusolverDnCreate(&cusolver_handle_) != CUSOLVER_STATUS_SUCCESS) {
+ *message =
+ "CUDA initialization failed because cuSolverDN::cusolverDnCreate "
+ "failed.";
+ TearDown();
+ return false;
+ }
+ event_logger.AddEvent("cusolverDnCreate");
+ if (cusparseCreate(&cusparse_handle_) != CUSPARSE_STATUS_SUCCESS) {
+ *message =
+ "CUDA initialization failed because cuSPARSE::cusparseCreate failed.";
+ TearDown();
+ return false;
+ }
+ event_logger.AddEvent("cusparseCreate");
+ for (auto& s : streams_) {
+ if (cudaStreamCreateWithFlags(&s, cudaStreamNonBlocking) != cudaSuccess) {
+ *message =
+ "CUDA initialization failed because CUDA::cudaStreamCreateWithFlags "
+ "failed.";
+ TearDown();
+ return false;
+ }
+ }
+ event_logger.AddEvent("cudaStreamCreateWithFlags");
+ if (cusolverDnSetStream(cusolver_handle_, DefaultStream()) !=
+ CUSOLVER_STATUS_SUCCESS ||
+ cublasSetStream(cublas_handle_, DefaultStream()) !=
+ CUBLAS_STATUS_SUCCESS ||
+ cusparseSetStream(cusparse_handle_, DefaultStream()) !=
+ CUSPARSE_STATUS_SUCCESS) {
+ *message = "CUDA initialization failed because SetStream failed.";
+ TearDown();
+ return false;
+ }
+ event_logger.AddEvent("SetStream");
+ is_cuda_initialized_ = true;
+ return true;
+}
+#endif // CERES_NO_CUDA
+
+ContextImpl::~ContextImpl() {
+#ifndef CERES_NO_CUDA
+ TearDown();
+#endif // CERES_NO_CUDA
+}
void ContextImpl::EnsureMinimumThreads(int num_threads) {
-#ifdef CERES_USE_CXX_THREADS
thread_pool.Resize(num_threads);
-#endif // CERES_USE_CXX_THREADS
}
-} // namespace internal
-} // namespace ceres
+
+} // namespace ceres::internal
diff --git a/internal/ceres/context_impl.h b/internal/ceres/context_impl.h
index 574d1ef..46692e6 100644
--- a/internal/ceres/context_impl.h
+++ b/internal/ceres/context_impl.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,37 +33,115 @@
// This include must come before any #ifndef check on Ceres compile options.
// clang-format off
-#include "ceres/internal/port.h"
-// clanf-format on
+#include "ceres/internal/config.h"
+// clang-format on
+
+#include <string>
#include "ceres/context.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-#ifdef CERES_USE_CXX_THREADS
+#ifndef CERES_NO_CUDA
+#include "cublas_v2.h"
+#include "cuda_runtime.h"
+#include "cusolverDn.h"
+#include "cusparse.h"
+#endif // CERES_NO_CUDA
+
#include "ceres/thread_pool.h"
-#endif // CERES_USE_CXX_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-class CERES_EXPORT_INTERNAL ContextImpl : public Context {
+class CERES_NO_EXPORT ContextImpl final : public Context {
public:
- ContextImpl() {}
+ ContextImpl();
+ ~ContextImpl() override;
ContextImpl(const ContextImpl&) = delete;
void operator=(const ContextImpl&) = delete;
- virtual ~ContextImpl() {}
-
// When compiled with C++ threading support, resize the thread pool to have
// at min(num_thread, num_hardware_threads) where num_hardware_threads is
// defined by the hardware. Otherwise this call is a no-op.
void EnsureMinimumThreads(int num_threads);
-#ifdef CERES_USE_CXX_THREADS
ThreadPool thread_pool;
-#endif // CERES_USE_CXX_THREADS
+
+#ifndef CERES_NO_CUDA
+ // Note on Ceres' use of CUDA Devices on multi-GPU systems:
+ // 1. On a multi-GPU system, if nothing special is done, the "default" CUDA
+ // device will be used, which is device 0.
+ // 2. If the user masks out GPUs using the CUDA_VISIBLE_DEVICES environment
+ // variable, Ceres will still use device 0 visible to the program, but
+ // device 0 will be the first GPU indicated in the environment variable.
+ // 3. If the user explicitly selects a GPU in the host process before calling
+ // Ceres, Ceres will use that GPU.
+
+ // Note on Ceres' use of CUDA Streams:
+ // Most of operations on the GPU are performed using a single stream. In
+ // those cases DefaultStream() should be used. This ensures that operations
+ // are stream-ordered, and may overlap with CPU processing with no
+ // additional effort.
+ //
+ // a. Single-stream workloads
+ // - Only use default stream
+ // - Return control to the callee without synchronization whenever possible
+ // - Stream synchronization occurs only after GPU to CPU transfers, and is
+ // handled by CudaBuffer
+ //
+ // b. Multi-stream workloads
+ // Multi-stream workloads are more restricted in order to make it harder to
+ // introduce a race condition.
+ // - Should always synchronize the default stream on entry
+ // - Should always synchronize all utilized streams on exit
+ // - Should not make any assumptions on one of streams_[] being default
+ //
+ // With those rules in place
+ //  - All single-stream asynchronous workloads are serialized using the
+ //    default stream
+ //  - Multi-stream workloads always wait for single-stream workloads to
+ //    finish and leave no running computations on exit.
+ // This slightly penalizes multi-stream workloads, but makes it easier to
+ // avoid race conditions when a multi-stream workload depends on the results
+ // of any preceding GPU computations.
+
+ // Initializes cuBLAS, cuSOLVER, and cuSPARSE contexts, creates an
+ // asynchronous CUDA stream, and associates the stream with the contexts.
+ // Returns true iff initialization was successful; otherwise returns false and
+ // stores a human-readable error message in *message.
+ bool InitCuda(std::string* message);
+ void TearDown();
+ inline bool IsCudaInitialized() const { return is_cuda_initialized_; }
+ // Returns a human-readable string describing the capabilities of the current
+ // CUDA device. CudaConfigAsString can only be called after InitCuda has been
+ // called.
+ std::string CudaConfigAsString() const;
+ // Returns the number of bytes of available global memory on the current CUDA
+ // device. If it is called before InitCuda, it returns 0.
+ size_t GpuMemoryAvailable() const;
+
+ cusolverDnHandle_t cusolver_handle_ = nullptr;
+ cublasHandle_t cublas_handle_ = nullptr;
+
+ // Default stream.
+ // Kernel invocations and memory copies on this stream can be left without
+ // synchronization.
+ cudaStream_t DefaultStream() { return streams_[0]; }
+ static constexpr int kNumCudaStreams = 2;
+ cudaStream_t streams_[kNumCudaStreams] = {0};
+
+ cusparseHandle_t cusparse_handle_ = nullptr;
+ bool is_cuda_initialized_ = false;
+ int gpu_device_id_in_use_ = -1;
+ cudaDeviceProp gpu_device_properties_;
+ bool is_cuda_memory_pools_supported_ = false;
+ int cuda_version_major_ = 0;
+ int cuda_version_minor_ = 0;
+#endif // CERES_NO_CUDA
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_CONTEXT_IMPL_H_
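
The CUDA members above are lazily initialized: nothing is created until InitCuda is called, and TearDown (invoked from the destructor) releases the handles and streams again. A minimal sketch of the expected call pattern, assuming CERES_NO_CUDA is not defined (MaybeInitCuda is an illustrative helper, not part of Ceres):

    bool MaybeInitCuda(ContextImpl* context) {
      std::string message;
      if (!context->InitCuda(&message)) {
        LOG(WARNING) << "CUDA is not available: " << message;
        return false;  // The caller falls back to a CPU code path.
      }
      VLOG(3) << context->CudaConfigAsString();
      // Single-stream GPU work should be issued on the default stream.
      cudaStream_t stream = context->DefaultStream();
      (void)stream;
      return true;
    }
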
diff --git a/internal/ceres/coordinate_descent_minimizer.cc b/internal/ceres/coordinate_descent_minimizer.cc
index 93096ac..53986ee 100644
--- a/internal/ceres/coordinate_descent_minimizer.cc
+++ b/internal/ceres/coordinate_descent_minimizer.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,8 +32,11 @@
#include <algorithm>
#include <iterator>
+#include <map>
#include <memory>
#include <numeric>
+#include <set>
+#include <string>
#include <vector>
#include "ceres/evaluator.h"
@@ -49,36 +52,32 @@
#include "ceres/trust_region_minimizer.h"
#include "ceres/trust_region_strategy.h"
-namespace ceres {
-namespace internal {
-
-using std::map;
-using std::max;
-using std::min;
-using std::set;
-using std::string;
-using std::vector;
+namespace ceres::internal {
CoordinateDescentMinimizer::CoordinateDescentMinimizer(ContextImpl* context)
: context_(context) {
CHECK(context_ != nullptr);
}
-CoordinateDescentMinimizer::~CoordinateDescentMinimizer() {}
+CoordinateDescentMinimizer::~CoordinateDescentMinimizer() = default;
bool CoordinateDescentMinimizer::Init(
const Program& program,
const ProblemImpl::ParameterMap& parameter_map,
const ParameterBlockOrdering& ordering,
- string* error) {
+ std::string* /*error*/) {
parameter_blocks_.clear();
independent_set_offsets_.clear();
independent_set_offsets_.push_back(0);
// Serialize the OrderedGroups into a vector of parameter block
// offsets for parallel access.
- map<ParameterBlock*, int> parameter_block_index;
- map<int, set<double*>> group_to_elements = ordering.group_to_elements();
+
+ // TODO(sameeragarwal): Investigate if parameter_block_index should be an
+ // ordered or an unordered container.
+ std::map<ParameterBlock*, int> parameter_block_index;
+ std::map<int, std::set<double*>> group_to_elements =
+ ordering.group_to_elements();
for (const auto& g_t_e : group_to_elements) {
const auto& elements = g_t_e.second;
for (double* parameter_block : elements) {
@@ -93,10 +92,11 @@
// The ordering does not have to contain all parameter blocks, so
// assign zero offsets/empty independent sets to these parameter
// blocks.
- const vector<ParameterBlock*>& parameter_blocks = program.parameter_blocks();
- for (int i = 0; i < parameter_blocks.size(); ++i) {
- if (!ordering.IsMember(parameter_blocks[i]->mutable_user_state())) {
- parameter_blocks_.push_back(parameter_blocks[i]);
+ const std::vector<ParameterBlock*>& parameter_blocks =
+ program.parameter_blocks();
+ for (auto* parameter_block : parameter_blocks) {
+ if (!ordering.IsMember(parameter_block->mutable_user_state())) {
+ parameter_blocks_.push_back(parameter_block);
independent_set_offsets_.push_back(independent_set_offsets_.back());
}
}
@@ -104,9 +104,9 @@
// Compute the set of residual blocks that depend on each parameter
// block.
residual_blocks_.resize(parameter_block_index.size());
- const vector<ResidualBlock*>& residual_blocks = program.residual_blocks();
- for (int i = 0; i < residual_blocks.size(); ++i) {
- ResidualBlock* residual_block = residual_blocks[i];
+ const std::vector<ResidualBlock*>& residual_blocks =
+ program.residual_blocks();
+ for (auto* residual_block : residual_blocks) {
const int num_parameter_blocks = residual_block->NumParameterBlocks();
for (int j = 0; j < num_parameter_blocks; ++j) {
ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
@@ -127,16 +127,15 @@
void CoordinateDescentMinimizer::Minimize(const Minimizer::Options& options,
double* parameters,
- Solver::Summary* summary) {
+ Solver::Summary* /*summary*/) {
// Set the state and mark all parameter blocks constant.
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- ParameterBlock* parameter_block = parameter_blocks_[i];
+ for (auto* parameter_block : parameter_blocks_) {
parameter_block->SetState(parameters + parameter_block->state_offset());
parameter_block->SetConstant();
}
- std::unique_ptr<LinearSolver*[]> linear_solvers(
- new LinearSolver*[options.num_threads]);
+ std::vector<std::unique_ptr<LinearSolver>> linear_solvers(
+ options.num_threads);
LinearSolver::Options linear_solver_options;
linear_solver_options.type = DENSE_QR;
@@ -155,9 +154,9 @@
}
const int num_inner_iteration_threads =
- min(options.num_threads, num_problems);
+ std::min(options.num_threads, num_problems);
evaluator_options_.num_threads =
- max(1, options.num_threads / num_inner_iteration_threads);
+ std::max(1, options.num_threads / num_inner_iteration_threads);
// The parameter blocks in each independent set can be optimized
// in parallel, since they do not co-occur in any residual block.
@@ -170,9 +169,11 @@
ParameterBlock* parameter_block = parameter_blocks_[j];
const int old_index = parameter_block->index();
const int old_delta_offset = parameter_block->delta_offset();
+ const int old_state_offset = parameter_block->state_offset();
parameter_block->SetVarying();
parameter_block->set_index(0);
parameter_block->set_delta_offset(0);
+ parameter_block->set_state_offset(0);
Program inner_program;
inner_program.mutable_parameter_blocks()->push_back(parameter_block);
@@ -188,24 +189,21 @@
// we are fine.
Solver::Summary inner_summary;
Solve(&inner_program,
- linear_solvers[thread_id],
- parameters + parameter_block->state_offset(),
+ linear_solvers[thread_id].get(),
+ parameters + old_state_offset,
&inner_summary);
parameter_block->set_index(old_index);
parameter_block->set_delta_offset(old_delta_offset);
+ parameter_block->set_state_offset(old_state_offset);
parameter_block->SetState(parameters +
parameter_block->state_offset());
parameter_block->SetConstant();
});
}
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- parameter_blocks_[i]->SetVarying();
- }
-
- for (int i = 0; i < options.num_threads; ++i) {
- delete linear_solvers[i];
+ for (auto* parameter_block : parameter_blocks_) {
+ parameter_block->SetVarying();
}
}
@@ -218,20 +216,19 @@
summary->initial_cost = 0.0;
summary->fixed_cost = 0.0;
summary->final_cost = 0.0;
- string error;
+ std::string error;
Minimizer::Options minimizer_options;
- minimizer_options.evaluator.reset(
- Evaluator::Create(evaluator_options_, program, &error));
+ minimizer_options.evaluator =
+ Evaluator::Create(evaluator_options_, program, &error);
CHECK(minimizer_options.evaluator != nullptr);
- minimizer_options.jacobian.reset(
- minimizer_options.evaluator->CreateJacobian());
+ minimizer_options.jacobian = minimizer_options.evaluator->CreateJacobian();
CHECK(minimizer_options.jacobian != nullptr);
TrustRegionStrategy::Options trs_options;
trs_options.linear_solver = linear_solver;
- minimizer_options.trust_region_strategy.reset(
- TrustRegionStrategy::Create(trs_options));
+ minimizer_options.trust_region_strategy =
+ TrustRegionStrategy::Create(trs_options);
CHECK(minimizer_options.trust_region_strategy != nullptr);
minimizer_options.is_silent = true;
@@ -242,8 +239,10 @@
bool CoordinateDescentMinimizer::IsOrderingValid(
const Program& program,
const ParameterBlockOrdering& ordering,
- string* message) {
- const map<int, set<double*>>& group_to_elements =
+ std::string* message) {
+ // TODO(sameeragarwal): Investigate if this should be an ordered or an
+ // unordered group.
+ const std::map<int, std::set<double*>>& group_to_elements =
ordering.group_to_elements();
// Verify that each group is an independent set
@@ -263,13 +262,12 @@
// of independent sets of decreasing size and invert it. This
// seems to work better in practice, i.e., Cameras before
// points.
-ParameterBlockOrdering* CoordinateDescentMinimizer::CreateOrdering(
- const Program& program) {
- std::unique_ptr<ParameterBlockOrdering> ordering(new ParameterBlockOrdering);
+std::shared_ptr<ParameterBlockOrdering>
+CoordinateDescentMinimizer::CreateOrdering(const Program& program) {
+ auto ordering = std::make_shared<ParameterBlockOrdering>();
ComputeRecursiveIndependentSetOrdering(program, ordering.get());
ordering->Reverse();
- return ordering.release();
+ return ordering;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
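
CoordinateDescentMinimizer is the machinery behind Ceres' inner iterations: each independent set of parameter blocks is optimized in parallel while the remaining blocks are held constant. From the public API it is reached through Solver::Options; a minimal sketch, assuming a bundle-adjustment style problem where points and cameras are user-owned parameter arrays:

    ceres::Solver::Options options;
    options.use_inner_iterations = true;

    // Optional: provide the ordering explicitly. Groups are processed in
    // increasing order, so this mirrors the "cameras before points" ordering
    // that CreateOrdering produces by default.
    options.inner_iteration_ordering =
        std::make_shared<ceres::ParameterBlockOrdering>();
    for (double* camera : cameras) {
      options.inner_iteration_ordering->AddElementToGroup(camera, 0);
    }
    for (double* point : points) {
      options.inner_iteration_ordering->AddElementToGroup(point, 1);
    }
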
diff --git a/internal/ceres/coordinate_descent_minimizer.h b/internal/ceres/coordinate_descent_minimizer.h
index 7d17d53..8fc5dd7 100644
--- a/internal/ceres/coordinate_descent_minimizer.h
+++ b/internal/ceres/coordinate_descent_minimizer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,6 +31,7 @@
#ifndef CERES_INTERNAL_COORDINATE_DESCENT_MINIMIZER_H_
#define CERES_INTERNAL_COORDINATE_DESCENT_MINIMIZER_H_
+#include <memory>
#include <string>
#include <vector>
@@ -40,8 +41,7 @@
#include "ceres/problem_impl.h"
#include "ceres/solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class Program;
class LinearSolver;
@@ -56,7 +56,7 @@
//
// The minimizer assumes that none of the parameter blocks in the
// program are constant.
-class CoordinateDescentMinimizer : public Minimizer {
+class CERES_NO_EXPORT CoordinateDescentMinimizer final : public Minimizer {
public:
explicit CoordinateDescentMinimizer(ContextImpl* context);
@@ -66,7 +66,7 @@
std::string* error);
// Minimizer interface.
- virtual ~CoordinateDescentMinimizer();
+ ~CoordinateDescentMinimizer() override;
void Minimize(const Minimizer::Options& options,
double* parameters,
@@ -81,7 +81,8 @@
// of independent sets of decreasing size and invert it. This
// seems to work better in practice, i.e., Cameras before
// points.
- static ParameterBlockOrdering* CreateOrdering(const Program& program);
+ static std::shared_ptr<ParameterBlockOrdering> CreateOrdering(
+ const Program& program);
private:
void Solve(Program* program,
@@ -102,7 +103,6 @@
ContextImpl* context_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_COORDINATE_DESCENT_MINIMIZER_H_
diff --git a/internal/ceres/corrector.cc b/internal/ceres/corrector.cc
index 6a79a06..d9b80cd 100644
--- a/internal/ceres/corrector.cc
+++ b/internal/ceres/corrector.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,8 +36,7 @@
#include "ceres/internal/eigen.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
Corrector::Corrector(const double sq_norm, const double rho[3]) {
CHECK_GE(sq_norm, 0.0);
@@ -88,7 +87,7 @@
// We now require that the first derivative of the loss function be
// positive only if the second derivative is positive. This is
// because when the second derivative is non-positive, we do not use
- // the second order correction suggested by BANS and instead use a
+ // the second order correction suggested by BAMS and instead use a
// simpler first order strategy which does not use a division by the
// gradient of the loss function.
CHECK_GT(rho[1], 0.0);
@@ -111,8 +110,8 @@
}
void Corrector::CorrectResiduals(const int num_rows, double* residuals) {
- DCHECK(residuals != NULL);
- // Equation 11 in BANS.
+ DCHECK(residuals != nullptr);
+ // Equation 11 in BAMS.
VectorRef(residuals, num_rows) *= residual_scaling_;
}
@@ -120,8 +119,8 @@
const int num_cols,
double* residuals,
double* jacobian) {
- DCHECK(residuals != NULL);
- DCHECK(jacobian != NULL);
+ DCHECK(residuals != nullptr);
+ DCHECK(jacobian != nullptr);
// The common case (rho[2] <= 0).
if (alpha_sq_norm_ == 0.0) {
@@ -129,7 +128,7 @@
return;
}
- // Equation 11 in BANS.
+ // Equation 11 in BAMS.
//
// J = sqrt(rho) * (J - alpha^2 r * r' J)
//
@@ -155,5 +154,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/corrector.h b/internal/ceres/corrector.h
index 3e11cdc..2216a96 100644
--- a/internal/ceres/corrector.h
+++ b/internal/ceres/corrector.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,15 +30,15 @@
//
// Class definition for the object that is responsible for applying a
// second order correction to the Gauss-Newton based on the ideas in
-// BANS by Triggs et al.
+// BAMS by Triggs et al.
#ifndef CERES_INTERNAL_CORRECTOR_H_
#define CERES_INTERNAL_CORRECTOR_H_
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Corrector is responsible for applying the second order correction
// to the residual and jacobian of a least squares problem based on a
@@ -47,8 +47,8 @@
// The key idea here is to look at the expressions for the robustified
// gauss newton approximation and then take its square root to get the
// corresponding corrections to the residual and jacobian. For the
-// full expressions see Eq. 10 and 11 in BANS by Triggs et al.
-class CERES_EXPORT_INTERNAL Corrector {
+// full expressions see Eq. 10 and 11 in BAMS by Triggs et al.
+class CERES_NO_EXPORT Corrector {
public:
// The constructor takes the squared norm, the value, the first and
// second derivatives of the LossFunction. It precalculates some of
@@ -86,7 +86,8 @@
double residual_scaling_;
double alpha_sq_norm_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_CORRECTOR_H_
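
Corrector is applied per residual block: given the squared norm of the residuals and the loss function value and derivatives rho, it rescales the residuals and Jacobian so that the plain Gauss-Newton approximation of the corrected problem matches the robustified one. A minimal sketch of the intended call sequence, assuming caller-provided loss_function, sq_norm, residuals, jacobian, num_rows and num_cols, and assuming the companion CorrectJacobian method whose parameter list appears in the hunk above; the ordering matters because the Jacobian correction reads the uncorrected residuals:

    double rho[3];
    loss_function->Evaluate(sq_norm, rho);  // rho = {rho(s), rho'(s), rho''(s)}

    Corrector corrector(sq_norm, rho);
    // Correct the Jacobian before the residuals; the Jacobian correction
    // uses the uncorrected residual values.
    corrector.CorrectJacobian(num_rows, num_cols, residuals, jacobian);
    corrector.CorrectResiduals(num_rows, residuals);
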
diff --git a/internal/ceres/corrector_test.cc b/internal/ceres/corrector_test.cc
index 951041e..0548336 100644
--- a/internal/ceres/corrector_test.cc
+++ b/internal/ceres/corrector_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,11 +32,10 @@
#include <algorithm>
#include <cmath>
-#include <cstdlib>
#include <cstring>
+#include <random>
#include "ceres/internal/eigen.h"
-#include "ceres/random.h"
#include "gtest/gtest.h"
namespace ceres {
@@ -160,18 +159,19 @@
// and hessians.
Matrix c_hess(2, 2);
Vector c_grad(2);
-
- srand(5);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform01(0.0, 1.0);
for (int iter = 0; iter < 10000; ++iter) {
// Initialize the jacobian and residual.
- for (int i = 0; i < 2 * 3; ++i) jacobian[i] = RandDouble();
- for (int i = 0; i < 3; ++i) residuals[i] = RandDouble();
+ for (double& jacobian_entry : jacobian) jacobian_entry = uniform01(prng);
+ for (double& residual : residuals) residual = uniform01(prng);
const double sq_norm = res.dot(res);
rho[0] = sq_norm;
- rho[1] = RandDouble();
- rho[2] = 2.0 * RandDouble() - 1.0;
+ rho[1] = uniform01(prng);
+ rho[2] = uniform01(
+ prng, std::uniform_real_distribution<double>::param_type(-1, 1));
// If rho[2] > 0, then the curvature correction to the correction
// and the gauss newton approximation will match. Otherwise, we
@@ -227,10 +227,11 @@
Matrix c_hess(2, 2);
Vector c_grad(2);
- srand(5);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform01(0.0, 1.0);
for (int iter = 0; iter < 10000; ++iter) {
// Initialize the jacobian.
- for (int i = 0; i < 2 * 3; ++i) jacobian[i] = RandDouble();
+ for (double& jacobian_entry : jacobian) jacobian_entry = uniform01(prng);
// Zero residuals
res.setZero();
@@ -238,8 +239,9 @@
const double sq_norm = res.dot(res);
rho[0] = sq_norm;
- rho[1] = RandDouble();
- rho[2] = 2 * RandDouble() - 1.0;
+ rho[1] = uniform01(prng);
+ rho[2] = uniform01(
+ prng, std::uniform_real_distribution<double>::param_type(-1, 1));
// Ground truth values.
g_res = sqrt(rho[1]) * res;
diff --git a/internal/ceres/float_cxsparse.cc b/internal/ceres/cost_function.cc
similarity index 80%
copy from internal/ceres/float_cxsparse.cc
copy to internal/ceres/cost_function.cc
index 6c68830..543348f 100644
--- a/internal/ceres/float_cxsparse.cc
+++ b/internal/ceres/cost_function.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -27,21 +27,15 @@
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
+// keir@google.com (Keir Mierle)
-#include "ceres/float_cxsparse.h"
-
-#if !defined(CERES_NO_CXSPARSE)
+#include "ceres/cost_function.h"
namespace ceres {
-namespace internal {
-std::unique_ptr<SparseCholesky> FloatCXSparseCholesky::Create(
- OrderingType ordering_type) {
- LOG(FATAL) << "FloatCXSparseCholesky is not available.";
- return std::unique_ptr<SparseCholesky>();
-}
+CostFunction::CostFunction(CostFunction&& other) noexcept = default;
+CostFunction& CostFunction::operator=(CostFunction&& other) noexcept = default;
+CostFunction::CostFunction() : num_residuals_(0) {}
+CostFunction::~CostFunction() = default;
-} // namespace internal
} // namespace ceres
-
-#endif // !defined(CERES_NO_CXSPARSE)
diff --git a/internal/ceres/cost_function_to_functor_test.cc b/internal/ceres/cost_function_to_functor_test.cc
index 11f47e3..bc081f4 100644
--- a/internal/ceres/cost_function_to_functor_test.cc
+++ b/internal/ceres/cost_function_to_functor_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,16 +32,16 @@
#include <cstdint>
#include <memory>
+#include <utility>
+#include <vector>
#include "ceres/autodiff_cost_function.h"
#include "ceres/dynamic_autodiff_cost_function.h"
#include "ceres/dynamic_cost_function_to_functor.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::vector;
const double kTolerance = 1e-18;
static void ExpectCostFunctionsAreEqual(
@@ -50,9 +50,9 @@
EXPECT_EQ(cost_function.num_residuals(),
actual_cost_function.num_residuals());
const int num_residuals = cost_function.num_residuals();
- const vector<int32_t>& parameter_block_sizes =
+ const std::vector<int32_t>& parameter_block_sizes =
cost_function.parameter_block_sizes();
- const vector<int32_t>& actual_parameter_block_sizes =
+ const std::vector<int32_t>& actual_parameter_block_sizes =
actual_cost_function.parameter_block_sizes();
EXPECT_EQ(parameter_block_sizes.size(), actual_parameter_block_sizes.size());
@@ -92,9 +92,9 @@
}
EXPECT_TRUE(
- cost_function.Evaluate(parameter_blocks.get(), residuals.get(), NULL));
+ cost_function.Evaluate(parameter_blocks.get(), residuals.get(), nullptr));
EXPECT_TRUE(actual_cost_function.Evaluate(
- parameter_blocks.get(), actual_residuals.get(), NULL));
+ parameter_blocks.get(), actual_residuals.get(), nullptr));
for (int i = 0; i < num_residuals; ++i) {
EXPECT_NEAR(residuals[i], actual_residuals[i], kTolerance)
<< "residual id: " << i;
@@ -302,11 +302,11 @@
// Check that AutoDiff(Functor1) == AutoDiff(CostToFunctor(AutoDiff(Functor1)))
#define TEST_BODY(Functor1) \
TEST(CostFunctionToFunctor, Functor1) { \
- typedef AutoDiffCostFunction<Functor1, 2, PARAMETER_BLOCK_SIZES> \
- CostFunction1; \
- typedef CostFunctionToFunctor<2, PARAMETER_BLOCK_SIZES> FunctionToFunctor; \
- typedef AutoDiffCostFunction<FunctionToFunctor, 2, PARAMETER_BLOCK_SIZES> \
- CostFunction2; \
+ using CostFunction1 = \
+ AutoDiffCostFunction<Functor1, 2, PARAMETER_BLOCK_SIZES>; \
+ using FunctionToFunctor = CostFunctionToFunctor<2, PARAMETER_BLOCK_SIZES>; \
+ using CostFunction2 = \
+ AutoDiffCostFunction<FunctionToFunctor, 2, PARAMETER_BLOCK_SIZES>; \
\
std::unique_ptr<CostFunction> cost_function(new CostFunction2( \
new FunctionToFunctor(new CostFunction1(new Functor1)))); \
@@ -376,10 +376,9 @@
}
TEST(CostFunctionToFunctor, DynamicCostFunctionToFunctor) {
- DynamicAutoDiffCostFunction<DynamicTwoParameterBlockFunctor>*
- actual_cost_function(
- new DynamicAutoDiffCostFunction<DynamicTwoParameterBlockFunctor>(
- new DynamicTwoParameterBlockFunctor));
+ auto* actual_cost_function(
+ new DynamicAutoDiffCostFunction<DynamicTwoParameterBlockFunctor>(
+ new DynamicTwoParameterBlockFunctor));
actual_cost_function->AddParameterBlock(2);
actual_cost_function->AddParameterBlock(2);
actual_cost_function->SetNumResiduals(2);
@@ -393,5 +392,39 @@
ExpectCostFunctionsAreEqual(cost_function, *actual_cost_function);
}
-} // namespace internal
-} // namespace ceres
+TEST(CostFunctionToFunctor, UniquePtrArgumentForwarding) {
+ auto cost_function = std::make_unique<
+ AutoDiffCostFunction<CostFunctionToFunctor<ceres::DYNAMIC, 2, 2>,
+ ceres::DYNAMIC,
+ 2,
+ 2>>(
+ std::make_unique<CostFunctionToFunctor<ceres::DYNAMIC, 2, 2>>(
+ std::make_unique<
+ AutoDiffCostFunction<TwoParameterBlockFunctor, 2, 2, 2>>()),
+ 2);
+
+ auto actual_cost_function = std::make_unique<
+ AutoDiffCostFunction<TwoParameterBlockFunctor, 2, 2, 2>>();
+ ExpectCostFunctionsAreEqual(*cost_function, *actual_cost_function);
+}
+
+TEST(CostFunctionToFunctor, DynamicCostFunctionToFunctorUniquePtr) {
+ auto actual_cost_function = std::make_unique<
+ DynamicAutoDiffCostFunction<DynamicTwoParameterBlockFunctor>>();
+ actual_cost_function->AddParameterBlock(2);
+ actual_cost_function->AddParameterBlock(2);
+ actual_cost_function->SetNumResiduals(2);
+
+ // Use deduction guides for a more compact variable definition
+ DynamicAutoDiffCostFunction cost_function(
+ std::make_unique<DynamicCostFunctionToFunctor>(
+ std::move(actual_cost_function)));
+ cost_function.AddParameterBlock(2);
+ cost_function.AddParameterBlock(2);
+ cost_function.SetNumResiduals(2);
+
+ ExpectCostFunctionsAreEqual(cost_function,
+ *cost_function.functor().function());
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/covariance.cc b/internal/ceres/covariance.cc
index 8e240ff..50da029 100644
--- a/internal/ceres/covariance.cc
+++ b/internal/ceres/covariance.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,25 +39,22 @@
namespace ceres {
-using std::make_pair;
-using std::pair;
-using std::vector;
-
Covariance::Covariance(const Covariance::Options& options) {
- impl_.reset(new internal::CovarianceImpl(options));
+ impl_ = std::make_unique<internal::CovarianceImpl>(options);
}
-Covariance::~Covariance() {}
+Covariance::~Covariance() = default;
bool Covariance::Compute(
- const vector<pair<const double*, const double*>>& covariance_blocks,
+ const std::vector<std::pair<const double*, const double*>>&
+ covariance_blocks,
Problem* problem) {
- return impl_->Compute(covariance_blocks, problem->impl_.get());
+ return impl_->Compute(covariance_blocks, problem->mutable_impl());
}
-bool Covariance::Compute(const vector<const double*>& parameter_blocks,
+bool Covariance::Compute(const std::vector<const double*>& parameter_blocks,
Problem* problem) {
- return impl_->Compute(parameter_blocks, problem->impl_.get());
+ return impl_->Compute(parameter_blocks, problem->mutable_impl());
}
bool Covariance::GetCovarianceBlock(const double* parameter_block1,
@@ -80,7 +77,7 @@
}
bool Covariance::GetCovarianceMatrix(
- const vector<const double*>& parameter_blocks,
+ const std::vector<const double*>& parameter_blocks,
double* covariance_matrix) const {
return impl_->GetCovarianceMatrixInTangentOrAmbientSpace(parameter_blocks,
true, // ambient
diff --git a/internal/ceres/covariance_impl.cc b/internal/ceres/covariance_impl.cc
index 1f86707..6e8362d 100644
--- a/internal/ceres/covariance_impl.cc
+++ b/internal/ceres/covariance_impl.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -57,35 +57,22 @@
#include "ceres/wall_time.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::swap;
+namespace ceres::internal {
using CovarianceBlocks = std::vector<std::pair<const double*, const double*>>;
CovarianceImpl::CovarianceImpl(const Covariance::Options& options)
: options_(options), is_computed_(false), is_valid_(false) {
-#ifdef CERES_NO_THREADS
- if (options_.num_threads > 1) {
- LOG(WARNING) << "No threading support is compiled into this binary; "
- << "only options.num_threads = 1 is supported. Switching "
- << "to single threaded mode.";
- options_.num_threads = 1;
- }
-#endif
-
evaluate_options_.num_threads = options_.num_threads;
evaluate_options_.apply_loss_function = options_.apply_loss_function;
}
-CovarianceImpl::~CovarianceImpl() {}
+CovarianceImpl::~CovarianceImpl() = default;
template <typename T>
void CheckForDuplicates(std::vector<T> blocks) {
- sort(blocks.begin(), blocks.end());
- typename std::vector<T>::iterator it =
- std::adjacent_find(blocks.begin(), blocks.end());
+ std::sort(blocks.begin(), blocks.end());
+ auto it = std::adjacent_find(blocks.begin(), blocks.end());
if (it != blocks.end()) {
// In case there are duplicates, we search for their location.
std::map<T, std::vector<int>> blocks_map;
@@ -117,7 +104,7 @@
covariance_blocks);
problem_ = problem;
parameter_block_to_row_index_.clear();
- covariance_matrix_.reset(NULL);
+ covariance_matrix_ = nullptr;
is_valid_ = (ComputeCovarianceSparsity(covariance_blocks, problem) &&
ComputeCovarianceValues());
is_computed_ = true;
@@ -162,10 +149,10 @@
const int block1_size = block1->Size();
const int block2_size = block2->Size();
- const int block1_local_size = block1->LocalSize();
- const int block2_local_size = block2->LocalSize();
+ const int block1_tangent_size = block1->TangentSize();
+ const int block2_tangent_size = block2->TangentSize();
if (!lift_covariance_to_ambient_space) {
- MatrixRef(covariance_block, block1_local_size, block2_local_size)
+ MatrixRef(covariance_block, block1_tangent_size, block2_tangent_size)
.setZero();
} else {
MatrixRef(covariance_block, block1_size, block2_size).setZero();
@@ -177,7 +164,7 @@
const double* parameter_block2 = original_parameter_block2;
const bool transpose = parameter_block1 > parameter_block2;
if (transpose) {
- swap(parameter_block1, parameter_block2);
+ std::swap(parameter_block1, parameter_block2);
}
// Find where in the covariance matrix the block is located.
@@ -191,7 +178,7 @@
const int* cols_begin = cols + rows[row_begin];
// The only part that requires work is walking the compressed column
- // vector to determine where the set of columns correspnding to the
+ // vector to determine where the set of columns corresponding to the
// covariance block begin.
int offset = 0;
while (cols_begin[offset] != col_begin && offset < row_size) {
@@ -209,34 +196,34 @@
FindOrDie(parameter_map, const_cast<double*>(parameter_block1));
ParameterBlock* block2 =
FindOrDie(parameter_map, const_cast<double*>(parameter_block2));
- const LocalParameterization* local_param1 = block1->local_parameterization();
- const LocalParameterization* local_param2 = block2->local_parameterization();
+ const Manifold* manifold1 = block1->manifold();
+ const Manifold* manifold2 = block2->manifold();
const int block1_size = block1->Size();
- const int block1_local_size = block1->LocalSize();
+ const int block1_tangent_size = block1->TangentSize();
const int block2_size = block2->Size();
- const int block2_local_size = block2->LocalSize();
+ const int block2_tangent_size = block2->TangentSize();
- ConstMatrixRef cov(
- covariance_matrix_->values() + rows[row_begin], block1_size, row_size);
+ ConstMatrixRef cov(covariance_matrix_->values() + rows[row_begin],
+ block1_tangent_size,
+ row_size);
- // Fast path when there are no local parameterizations or if the
- // user does not want it lifted to the ambient space.
- if ((local_param1 == NULL && local_param2 == NULL) ||
+ // Fast path when there are no manifolds or if the user does not want it
+ // lifted to the ambient space.
+ if ((manifold1 == nullptr && manifold2 == nullptr) ||
!lift_covariance_to_ambient_space) {
if (transpose) {
- MatrixRef(covariance_block, block2_local_size, block1_local_size) =
- cov.block(0, offset, block1_local_size, block2_local_size)
+ MatrixRef(covariance_block, block2_tangent_size, block1_tangent_size) =
+ cov.block(0, offset, block1_tangent_size, block2_tangent_size)
.transpose();
} else {
- MatrixRef(covariance_block, block1_local_size, block2_local_size) =
- cov.block(0, offset, block1_local_size, block2_local_size);
+ MatrixRef(covariance_block, block1_tangent_size, block2_tangent_size) =
+ cov.block(0, offset, block1_tangent_size, block2_tangent_size);
}
return true;
}
- // If local parameterizations are used then the covariance that has
- // been computed is in the tangent space and it needs to be lifted
- // back to the ambient space.
+ // If manifolds are used then the covariance that has been computed is in the
+ // tangent space and it needs to be lifted back to the ambient space.
//
// This is given by the formula
//
@@ -249,36 +236,37 @@
// See Result 5.11 on page 142 of Hartley & Zisserman (2nd Edition)
// for a proof.
//
- // TODO(sameeragarwal): Add caching of local parameterization, so
- // that they are computed just once per parameter block.
- Matrix block1_jacobian(block1_size, block1_local_size);
- if (local_param1 == NULL) {
+ // TODO(sameeragarwal): Add caching the manifold plus_jacobian, so that they
+ // are computed just once per parameter block.
+ Matrix block1_jacobian(block1_size, block1_tangent_size);
+ if (manifold1 == nullptr) {
block1_jacobian.setIdentity();
} else {
- local_param1->ComputeJacobian(parameter_block1, block1_jacobian.data());
+ manifold1->PlusJacobian(parameter_block1, block1_jacobian.data());
}
- Matrix block2_jacobian(block2_size, block2_local_size);
+ Matrix block2_jacobian(block2_size, block2_tangent_size);
// Fast path if the user is requesting a diagonal block.
if (parameter_block1 == parameter_block2) {
block2_jacobian = block1_jacobian;
} else {
- if (local_param2 == NULL) {
+ if (manifold2 == nullptr) {
block2_jacobian.setIdentity();
} else {
- local_param2->ComputeJacobian(parameter_block2, block2_jacobian.data());
+ manifold2->PlusJacobian(parameter_block2, block2_jacobian.data());
}
}
if (transpose) {
MatrixRef(covariance_block, block2_size, block1_size) =
block2_jacobian *
- cov.block(0, offset, block1_local_size, block2_local_size).transpose() *
+ cov.block(0, offset, block1_tangent_size, block2_tangent_size)
+ .transpose() *
block1_jacobian.transpose();
} else {
MatrixRef(covariance_block, block1_size, block2_size) =
block1_jacobian *
- cov.block(0, offset, block1_local_size, block2_local_size) *
+ cov.block(0, offset, block1_tangent_size, block2_tangent_size) *
block2_jacobian.transpose();
}
@@ -309,7 +297,7 @@
if (lift_covariance_to_ambient_space) {
parameter_sizes.push_back(block->Size());
} else {
- parameter_sizes.push_back(block->LocalSize());
+ parameter_sizes.push_back(block->TangentSize());
}
}
std::partial_sum(parameter_sizes.begin(),
@@ -322,9 +310,8 @@
// Assemble the blocks in the covariance matrix.
MatrixRef covariance(covariance_matrix, covariance_size, covariance_size);
const int num_threads = options_.num_threads;
- std::unique_ptr<double[]> workspace(
- new double[num_threads * max_covariance_block_size *
- max_covariance_block_size]);
+ auto workspace = std::make_unique<double[]>(
+ num_threads * max_covariance_block_size * max_covariance_block_size);
bool success = true;
@@ -383,8 +370,7 @@
std::vector<ResidualBlock*> residual_blocks;
problem->GetResidualBlocks(&residual_blocks);
- for (int i = 0; i < residual_blocks.size(); ++i) {
- ResidualBlock* residual_block = residual_blocks[i];
+ for (auto* residual_block : residual_blocks) {
parameter_blocks_in_use.insert(residual_block->parameter_blocks(),
residual_block->parameter_blocks() +
residual_block->NumParameterBlocks());
@@ -394,8 +380,7 @@
std::vector<double*>& active_parameter_blocks =
evaluate_options_.parameter_blocks;
active_parameter_blocks.clear();
- for (int i = 0; i < all_parameter_blocks.size(); ++i) {
- double* parameter_block = all_parameter_blocks[i];
+ for (auto* parameter_block : all_parameter_blocks) {
ParameterBlock* block = FindOrDie(parameter_map, parameter_block);
if (!block->IsConstant() && (parameter_blocks_in_use.count(block) > 0)) {
active_parameter_blocks.push_back(parameter_block);
@@ -411,10 +396,9 @@
// ordering of parameter blocks just constructed.
int num_rows = 0;
parameter_block_to_row_index_.clear();
- for (int i = 0; i < active_parameter_blocks.size(); ++i) {
- double* parameter_block = active_parameter_blocks[i];
+ for (auto* parameter_block : active_parameter_blocks) {
const int parameter_block_size =
- problem->ParameterBlockLocalSize(parameter_block);
+ problem->ParameterBlockTangentSize(parameter_block);
parameter_block_to_row_index_[parameter_block] = num_rows;
num_rows += parameter_block_size;
}
@@ -424,9 +408,7 @@
// triangular part of the matrix.
int num_nonzeros = 0;
CovarianceBlocks covariance_blocks;
- for (int i = 0; i < original_covariance_blocks.size(); ++i) {
- const std::pair<const double*, const double*>& block_pair =
- original_covariance_blocks[i];
+ for (const auto& block_pair : original_covariance_blocks) {
if (constant_parameter_blocks_.count(block_pair.first) > 0 ||
constant_parameter_blocks_.count(block_pair.second) > 0) {
continue;
@@ -434,8 +416,8 @@
int index1 = FindOrDie(parameter_block_to_row_index_, block_pair.first);
int index2 = FindOrDie(parameter_block_to_row_index_, block_pair.second);
- const int size1 = problem->ParameterBlockLocalSize(block_pair.first);
- const int size2 = problem->ParameterBlockLocalSize(block_pair.second);
+ const int size1 = problem->ParameterBlockTangentSize(block_pair.first);
+ const int size2 = problem->ParameterBlockTangentSize(block_pair.second);
num_nonzeros += size1 * size2;
// Make sure we are constructing a block upper triangular matrix.
@@ -447,9 +429,9 @@
}
}
- if (covariance_blocks.size() == 0) {
+ if (covariance_blocks.empty()) {
VLOG(2) << "No non-zero covariance blocks found";
- covariance_matrix_.reset(NULL);
+ covariance_matrix_ = nullptr;
return true;
}
@@ -459,8 +441,8 @@
std::sort(covariance_blocks.begin(), covariance_blocks.end());
// Fill the sparsity pattern of the covariance matrix.
- covariance_matrix_.reset(
- new CompressedRowSparseMatrix(num_rows, num_rows, num_nonzeros));
+ covariance_matrix_ = std::make_unique<CompressedRowSparseMatrix>(
+ num_rows, num_rows, num_nonzeros);
int* rows = covariance_matrix_->mutable_rows();
int* cols = covariance_matrix_->mutable_cols();
@@ -480,20 +462,18 @@
int cursor = 0; // index into the covariance matrix.
for (const auto& entry : parameter_block_to_row_index_) {
const double* row_block = entry.first;
- const int row_block_size = problem->ParameterBlockLocalSize(row_block);
+ const int row_block_size = problem->ParameterBlockTangentSize(row_block);
int row_begin = entry.second;
// Iterate over the covariance blocks contained in this row block
// and count the number of columns in this row block.
int num_col_blocks = 0;
- int num_columns = 0;
for (int j = i; j < covariance_blocks.size(); ++j, ++num_col_blocks) {
const std::pair<const double*, const double*>& block_pair =
covariance_blocks[j];
if (block_pair.first != row_block) {
break;
}
- num_columns += problem->ParameterBlockLocalSize(block_pair.second);
}
// Fill out all the compressed rows for this parameter block.
@@ -501,7 +481,8 @@
rows[row_begin + r] = cursor;
for (int c = 0; c < num_col_blocks; ++c) {
const double* col_block = covariance_blocks[i + c].second;
- const int col_block_size = problem->ParameterBlockLocalSize(col_block);
+ const int col_block_size =
+ problem->ParameterBlockTangentSize(col_block);
int col_begin = FindOrDie(parameter_block_to_row_index_, col_block);
for (int k = 0; k < col_block_size; ++k) {
cols[cursor++] = col_begin++;
@@ -556,13 +537,13 @@
"CovarianceImpl::ComputeCovarianceValuesUsingSparseQR");
#ifndef CERES_NO_SUITESPARSE
- if (covariance_matrix_.get() == NULL) {
+ if (covariance_matrix_ == nullptr) {
// Nothing to do, all zeros covariance matrix.
return true;
}
CRSMatrix jacobian;
- problem_->Evaluate(evaluate_options_, NULL, NULL, NULL, &jacobian);
+ problem_->Evaluate(evaluate_options_, nullptr, nullptr, nullptr, &jacobian);
event_logger.AddEvent("Evaluate");
// Construct a compressed column form of the Jacobian.
@@ -601,11 +582,11 @@
cholmod_jacobian.nrow = num_rows;
cholmod_jacobian.ncol = num_cols;
cholmod_jacobian.nzmax = num_nonzeros;
- cholmod_jacobian.nz = NULL;
- cholmod_jacobian.p = reinterpret_cast<void*>(&transpose_rows[0]);
- cholmod_jacobian.i = reinterpret_cast<void*>(&transpose_cols[0]);
- cholmod_jacobian.x = reinterpret_cast<void*>(&transpose_values[0]);
- cholmod_jacobian.z = NULL;
+ cholmod_jacobian.nz = nullptr;
+ cholmod_jacobian.p = reinterpret_cast<void*>(transpose_rows.data());
+ cholmod_jacobian.i = reinterpret_cast<void*>(transpose_cols.data());
+ cholmod_jacobian.x = reinterpret_cast<void*>(transpose_values.data());
+ cholmod_jacobian.z = nullptr;
cholmod_jacobian.stype = 0; // Matrix is not symmetric.
cholmod_jacobian.itype = CHOLMOD_LONG;
cholmod_jacobian.xtype = CHOLMOD_REAL;
@@ -616,8 +597,8 @@
cholmod_common cc;
cholmod_l_start(&cc);
- cholmod_sparse* R = NULL;
- SuiteSparse_long* permutation = NULL;
+ cholmod_sparse* R = nullptr;
+ SuiteSparse_long* permutation = nullptr;
// Compute a Q-less QR factorization of the Jacobian. Since we are
// only interested in inverting J'J = R'R, we do not need Q. This
@@ -632,13 +613,15 @@
// more efficient, both in runtime as well as the quality of
// ordering computed. So, it maybe worth doing that analysis
// separately.
- const SuiteSparse_long rank = SuiteSparseQR<double>(SPQR_ORDERING_BESTAMD,
- SPQR_DEFAULT_TOL,
- cholmod_jacobian.ncol,
- &cholmod_jacobian,
- &R,
- &permutation,
- &cc);
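+  // A negative column_pivot_threshold selects SPQR's default tolerance;
+  // otherwise the user-specified threshold controls column pivoting.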
+ const SuiteSparse_long rank = SuiteSparseQR<double>(
+ SPQR_ORDERING_BESTAMD,
+ options_.column_pivot_threshold < 0 ? SPQR_DEFAULT_TOL
+ : options_.column_pivot_threshold,
+ static_cast<int64_t>(cholmod_jacobian.ncol),
+ &cholmod_jacobian,
+ &R,
+ &permutation,
+ &cc);
event_logger.AddEvent("Numeric Factorization");
if (R == nullptr) {
LOG(ERROR) << "Something is wrong. SuiteSparseQR returned R = nullptr.";
@@ -648,9 +631,9 @@
}
if (rank < cholmod_jacobian.ncol) {
- LOG(ERROR) << "Jacobian matrix is rank deficient. "
- << "Number of columns: " << cholmod_jacobian.ncol
- << " rank: " << rank;
+ LOG(WARNING) << "Jacobian matrix is rank deficient. "
+ << "Number of columns: " << cholmod_jacobian.ncol
+ << " rank: " << rank;
free(permutation);
cholmod_l_free_sparse(&R, &cc);
cholmod_l_finish(&cc);
@@ -682,7 +665,7 @@
// Since the covariance matrix is symmetric, the i^th row and column
// are equal.
const int num_threads = options_.num_threads;
- std::unique_ptr<double[]> workspace(new double[num_threads * num_cols]);
+ auto workspace = std::make_unique<double[]>(num_threads * num_cols);
problem_->context()->EnsureMinimumThreads(num_threads);
ParallelFor(
@@ -721,13 +704,13 @@
bool CovarianceImpl::ComputeCovarianceValuesUsingDenseSVD() {
EventLogger event_logger(
"CovarianceImpl::ComputeCovarianceValuesUsingDenseSVD");
- if (covariance_matrix_.get() == NULL) {
+ if (covariance_matrix_ == nullptr) {
// Nothing to do, all zeros covariance matrix.
return true;
}
CRSMatrix jacobian;
- problem_->Evaluate(evaluate_options_, NULL, NULL, NULL, &jacobian);
+ problem_->Evaluate(evaluate_options_, nullptr, nullptr, nullptr, &jacobian);
event_logger.AddEvent("Evaluate");
Matrix dense_jacobian(jacobian.num_rows, jacobian.num_cols);
@@ -812,20 +795,20 @@
bool CovarianceImpl::ComputeCovarianceValuesUsingEigenSparseQR() {
EventLogger event_logger(
"CovarianceImpl::ComputeCovarianceValuesUsingEigenSparseQR");
- if (covariance_matrix_.get() == NULL) {
+ if (covariance_matrix_ == nullptr) {
// Nothing to do, all zeros covariance matrix.
return true;
}
CRSMatrix jacobian;
- problem_->Evaluate(evaluate_options_, NULL, NULL, NULL, &jacobian);
+ problem_->Evaluate(evaluate_options_, nullptr, nullptr, nullptr, &jacobian);
event_logger.AddEvent("Evaluate");
- typedef Eigen::SparseMatrix<double, Eigen::ColMajor> EigenSparseMatrix;
+ using EigenSparseMatrix = Eigen::SparseMatrix<double, Eigen::ColMajor>;
// Convert the matrix to column major order as required by SparseQR.
EigenSparseMatrix sparse_jacobian =
- Eigen::MappedSparseMatrix<double, Eigen::RowMajor>(
+ Eigen::Map<Eigen::SparseMatrix<double, Eigen::RowMajor>>(
jacobian.num_rows,
jacobian.num_cols,
static_cast<int>(jacobian.values.size()),
@@ -834,19 +817,23 @@
jacobian.values.data());
event_logger.AddEvent("ConvertToSparseMatrix");
- Eigen::SparseQR<EigenSparseMatrix, Eigen::COLAMDOrdering<int>> qr_solver(
- sparse_jacobian);
+ Eigen::SparseQR<EigenSparseMatrix, Eigen::COLAMDOrdering<int>> qr;
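+  // Only override Eigen's default pivot threshold when the user has supplied
+  // a positive value.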
+ if (options_.column_pivot_threshold > 0) {
+ qr.setPivotThreshold(options_.column_pivot_threshold);
+ }
+
+ qr.compute(sparse_jacobian);
event_logger.AddEvent("QRDecomposition");
- if (qr_solver.info() != Eigen::Success) {
+ if (qr.info() != Eigen::Success) {
LOG(ERROR) << "Eigen::SparseQR decomposition failed.";
return false;
}
- if (qr_solver.rank() < jacobian.num_cols) {
+ if (qr.rank() < jacobian.num_cols) {
LOG(ERROR) << "Jacobian matrix is rank deficient. "
<< "Number of columns: " << jacobian.num_cols
- << " rank: " << qr_solver.rank();
+ << " rank: " << qr.rank();
return false;
}
@@ -856,7 +843,7 @@
// Compute the inverse column permutation used by QR factorization.
Eigen::PermutationMatrix<Eigen::Dynamic, Eigen::Dynamic> inverse_permutation =
- qr_solver.colsPermutation().inverse();
+ qr.colsPermutation().inverse();
// The following loop exploits the fact that the i^th column of A^{-1}
// is given by the solution to the linear system
@@ -869,7 +856,7 @@
// are equal.
const int num_cols = jacobian.num_cols;
const int num_threads = options_.num_threads;
- std::unique_ptr<double[]> workspace(new double[num_threads * num_cols]);
+ auto workspace = std::make_unique<double[]>(num_threads * num_cols);
problem_->context()->EnsureMinimumThreads(num_threads);
ParallelFor(
@@ -879,9 +866,9 @@
if (row_end != row_begin) {
double* solution = workspace.get() + thread_id * num_cols;
SolveRTRWithSparseRHS<int>(num_cols,
- qr_solver.matrixR().innerIndexPtr(),
- qr_solver.matrixR().outerIndexPtr(),
- &qr_solver.matrixR().data().value(0),
+ qr.matrixR().innerIndexPtr(),
+ qr.matrixR().outerIndexPtr(),
+ &qr.matrixR().data().value(0),
inverse_permutation.indices().coeff(r),
solution);
@@ -899,5 +886,4 @@
return true;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/covariance_impl.h b/internal/ceres/covariance_impl.h
index 394a04b..9ff7982 100644
--- a/internal/ceres/covariance_impl.h
+++ b/internal/ceres/covariance_impl.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,16 +38,16 @@
#include <vector>
#include "ceres/covariance.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/problem_impl.h"
#include "ceres/suitesparse.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class CompressedRowSparseMatrix;
-class CERES_EXPORT_INTERNAL CovarianceImpl {
+class CERES_NO_EXPORT CovarianceImpl {
public:
explicit CovarianceImpl(const Covariance::Options& options);
~CovarianceImpl();
@@ -95,7 +95,8 @@
std::unique_ptr<CompressedRowSparseMatrix> covariance_matrix_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_COVARIANCE_IMPL_H_
diff --git a/internal/ceres/covariance_test.cc b/internal/ceres/covariance_test.cc
index 229173f..40d9b0e 100644
--- a/internal/ceres/covariance_test.cc
+++ b/internal/ceres/covariance_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,12 +36,14 @@
#include <map>
#include <memory>
#include <utility>
+#include <vector>
#include "ceres/autodiff_cost_function.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/cost_function.h"
#include "ceres/covariance_impl.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/internal/config.h"
+#include "ceres/manifold.h"
#include "ceres/map_util.h"
#include "ceres/problem_impl.h"
#include "gtest/gtest.h"
@@ -49,11 +51,6 @@
namespace ceres {
namespace internal {
-using std::make_pair;
-using std::map;
-using std::pair;
-using std::vector;
-
class UnaryCostFunction : public CostFunction {
public:
UnaryCostFunction(const int num_residuals,
@@ -71,19 +68,19 @@
residuals[i] = 1;
}
- if (jacobians == NULL) {
+ if (jacobians == nullptr) {
return true;
}
- if (jacobians[0] != NULL) {
- copy(jacobian_.begin(), jacobian_.end(), jacobians[0]);
+ if (jacobians[0] != nullptr) {
+ std::copy(jacobian_.begin(), jacobian_.end(), jacobians[0]);
}
return true;
}
private:
- vector<double> jacobian_;
+ std::vector<double> jacobian_;
};
class BinaryCostFunction : public CostFunction {
@@ -109,47 +106,24 @@
residuals[i] = 2;
}
- if (jacobians == NULL) {
+ if (jacobians == nullptr) {
return true;
}
- if (jacobians[0] != NULL) {
- copy(jacobian1_.begin(), jacobian1_.end(), jacobians[0]);
+ if (jacobians[0] != nullptr) {
+ std::copy(jacobian1_.begin(), jacobian1_.end(), jacobians[0]);
}
- if (jacobians[1] != NULL) {
- copy(jacobian2_.begin(), jacobian2_.end(), jacobians[1]);
+ if (jacobians[1] != nullptr) {
+ std::copy(jacobian2_.begin(), jacobian2_.end(), jacobians[1]);
}
return true;
}
private:
- vector<double> jacobian1_;
- vector<double> jacobian2_;
-};
-
-// x_plus_delta = delta * x;
-class PolynomialParameterization : public LocalParameterization {
- public:
- virtual ~PolynomialParameterization() {}
-
- bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const final {
- x_plus_delta[0] = delta[0] * x[0];
- x_plus_delta[1] = delta[0] * x[1];
- return true;
- }
-
- bool ComputeJacobian(const double* x, double* jacobian) const final {
- jacobian[0] = x[0];
- jacobian[1] = x[1];
- return true;
- }
-
- int GlobalSize() const final { return 2; }
- int LocalSize() const final { return 1; }
+ std::vector<double> jacobian1_;
+ std::vector<double> jacobian2_;
};
TEST(CovarianceImpl, ComputeCovarianceSparsity) {
@@ -165,13 +139,13 @@
// Add in random order
Vector junk_jacobian = Vector::Zero(10);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 1, junk_jacobian.data()), NULL, block1);
+ new UnaryCostFunction(1, 1, junk_jacobian.data()), nullptr, block1);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 4, junk_jacobian.data()), NULL, block4);
+ new UnaryCostFunction(1, 4, junk_jacobian.data()), nullptr, block4);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 3, junk_jacobian.data()), NULL, block3);
+ new UnaryCostFunction(1, 3, junk_jacobian.data()), nullptr, block3);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 2, junk_jacobian.data()), NULL, block2);
+ new UnaryCostFunction(1, 2, junk_jacobian.data()), nullptr, block2);
// Sparsity pattern
//
@@ -206,13 +180,13 @@
6, 7, 8, 9};
// clang-format on
- vector<pair<const double*, const double*>> covariance_blocks;
- covariance_blocks.push_back(make_pair(block1, block1));
- covariance_blocks.push_back(make_pair(block4, block4));
- covariance_blocks.push_back(make_pair(block2, block2));
- covariance_blocks.push_back(make_pair(block3, block3));
- covariance_blocks.push_back(make_pair(block2, block3));
- covariance_blocks.push_back(make_pair(block4, block1)); // reversed
+ std::vector<std::pair<const double*, const double*>> covariance_blocks;
+ covariance_blocks.emplace_back(block1, block1);
+ covariance_blocks.emplace_back(block4, block4);
+ covariance_blocks.emplace_back(block2, block2);
+ covariance_blocks.emplace_back(block3, block3);
+ covariance_blocks.emplace_back(block2, block3);
+ covariance_blocks.emplace_back(block4, block1); // reversed
Covariance::Options options;
CovarianceImpl covariance_impl(options);
@@ -251,13 +225,13 @@
// Add in random order
Vector junk_jacobian = Vector::Zero(10);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 1, junk_jacobian.data()), NULL, block1);
+ new UnaryCostFunction(1, 1, junk_jacobian.data()), nullptr, block1);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 4, junk_jacobian.data()), NULL, block4);
+ new UnaryCostFunction(1, 4, junk_jacobian.data()), nullptr, block4);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 3, junk_jacobian.data()), NULL, block3);
+ new UnaryCostFunction(1, 3, junk_jacobian.data()), nullptr, block3);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 2, junk_jacobian.data()), NULL, block2);
+ new UnaryCostFunction(1, 2, junk_jacobian.data()), nullptr, block2);
problem.SetParameterBlockConstant(block3);
// Sparsity pattern
@@ -287,13 +261,13 @@
3, 4, 5, 6};
// clang-format on
- vector<pair<const double*, const double*>> covariance_blocks;
- covariance_blocks.push_back(make_pair(block1, block1));
- covariance_blocks.push_back(make_pair(block4, block4));
- covariance_blocks.push_back(make_pair(block2, block2));
- covariance_blocks.push_back(make_pair(block3, block3));
- covariance_blocks.push_back(make_pair(block2, block3));
- covariance_blocks.push_back(make_pair(block4, block1)); // reversed
+ std::vector<std::pair<const double*, const double*>> covariance_blocks;
+ covariance_blocks.emplace_back(block1, block1);
+ covariance_blocks.emplace_back(block4, block4);
+ covariance_blocks.emplace_back(block2, block2);
+ covariance_blocks.emplace_back(block3, block3);
+ covariance_blocks.emplace_back(block2, block3);
+ covariance_blocks.emplace_back(block4, block1); // reversed
Covariance::Options options;
CovarianceImpl covariance_impl(options);
@@ -332,12 +306,12 @@
// Add in random order
Vector junk_jacobian = Vector::Zero(10);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 1, junk_jacobian.data()), NULL, block1);
+ new UnaryCostFunction(1, 1, junk_jacobian.data()), nullptr, block1);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 4, junk_jacobian.data()), NULL, block4);
+ new UnaryCostFunction(1, 4, junk_jacobian.data()), nullptr, block4);
problem.AddParameterBlock(block3, 3);
problem.AddResidualBlock(
- new UnaryCostFunction(1, 2, junk_jacobian.data()), NULL, block2);
+ new UnaryCostFunction(1, 2, junk_jacobian.data()), nullptr, block2);
// Sparsity pattern
//
@@ -366,13 +340,13 @@
3, 4, 5, 6};
// clang-format on
- vector<pair<const double*, const double*>> covariance_blocks;
- covariance_blocks.push_back(make_pair(block1, block1));
- covariance_blocks.push_back(make_pair(block4, block4));
- covariance_blocks.push_back(make_pair(block2, block2));
- covariance_blocks.push_back(make_pair(block3, block3));
- covariance_blocks.push_back(make_pair(block2, block3));
- covariance_blocks.push_back(make_pair(block4, block1)); // reversed
+ std::vector<std::pair<const double*, const double*>> covariance_blocks;
+ covariance_blocks.emplace_back(block1, block1);
+ covariance_blocks.emplace_back(block4, block4);
+ covariance_blocks.emplace_back(block2, block2);
+ covariance_blocks.emplace_back(block3, block3);
+ covariance_blocks.emplace_back(block2, block3);
+ covariance_blocks.emplace_back(block4, block1); // reversed
Covariance::Options options;
CovarianceImpl covariance_impl(options);
@@ -398,9 +372,42 @@
}
}
+// x_plus_delta = delta * x;
+class PolynomialManifold : public Manifold {
+ public:
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const final {
+ x_plus_delta[0] = delta[0] * x[0];
+ x_plus_delta[1] = delta[0] * x[1];
+ return true;
+ }
+
+ bool Minus(const double* y, const double* x, double* y_minus_x) const final {
+ LOG(FATAL) << "Should not be called";
+ return true;
+ }
+
+ bool PlusJacobian(const double* x, double* jacobian) const final {
+ jacobian[0] = x[0];
+ jacobian[1] = x[1];
+ return true;
+ }
+
+ bool MinusJacobian(const double* x, double* jacobian) const final {
+ LOG(FATAL) << "Should not be called";
+ return true;
+ }
+
+ int AmbientSize() const final { return 2; }
+ int TangentSize() const final { return 1; }
+};
+
class CovarianceTest : public ::testing::Test {
protected:
- typedef map<const double*, pair<int, int>> BoundsMap;
+ // TODO(sameeragarwal): Investigate if this should be an ordered or an
+ // unordered map.
+ using BoundsMap = std::map<const double*, std::pair<int, int>>;
void SetUp() override {
double* x = parameters_;
@@ -416,44 +423,46 @@
{
double jacobian[] = {1.0, 0.0, 0.0, 1.0};
- problem_.AddResidualBlock(new UnaryCostFunction(2, 2, jacobian), NULL, x);
+ problem_.AddResidualBlock(
+ new UnaryCostFunction(2, 2, jacobian), nullptr, x);
}
{
double jacobian[] = {2.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 2.0};
- problem_.AddResidualBlock(new UnaryCostFunction(3, 3, jacobian), NULL, y);
+ problem_.AddResidualBlock(
+ new UnaryCostFunction(3, 3, jacobian), nullptr, y);
}
{
double jacobian = 5.0;
problem_.AddResidualBlock(
- new UnaryCostFunction(1, 1, &jacobian), NULL, z);
+ new UnaryCostFunction(1, 1, &jacobian), nullptr, z);
}
{
double jacobian1[] = {1.0, 2.0, 3.0};
double jacobian2[] = {-5.0, -6.0};
problem_.AddResidualBlock(
- new BinaryCostFunction(1, 3, 2, jacobian1, jacobian2), NULL, y, x);
+ new BinaryCostFunction(1, 3, 2, jacobian1, jacobian2), nullptr, y, x);
}
{
double jacobian1[] = {2.0};
double jacobian2[] = {3.0, -2.0};
problem_.AddResidualBlock(
- new BinaryCostFunction(1, 1, 2, jacobian1, jacobian2), NULL, z, x);
+ new BinaryCostFunction(1, 1, 2, jacobian1, jacobian2), nullptr, z, x);
}
- all_covariance_blocks_.push_back(make_pair(x, x));
- all_covariance_blocks_.push_back(make_pair(y, y));
- all_covariance_blocks_.push_back(make_pair(z, z));
- all_covariance_blocks_.push_back(make_pair(x, y));
- all_covariance_blocks_.push_back(make_pair(x, z));
- all_covariance_blocks_.push_back(make_pair(y, z));
+ all_covariance_blocks_.emplace_back(x, x);
+ all_covariance_blocks_.emplace_back(y, y);
+ all_covariance_blocks_.emplace_back(z, z);
+ all_covariance_blocks_.emplace_back(x, y);
+ all_covariance_blocks_.emplace_back(x, z);
+ all_covariance_blocks_.emplace_back(y, z);
- column_bounds_[x] = make_pair(0, 2);
- column_bounds_[y] = make_pair(2, 5);
- column_bounds_[z] = make_pair(5, 6);
+ column_bounds_[x] = std::make_pair(0, 2);
+ column_bounds_[y] = std::make_pair(2, 5);
+ column_bounds_[z] = std::make_pair(5, 6);
}
// Computes covariance in ambient space.
@@ -481,7 +490,7 @@
// Generate all possible combination of block pairs and check if the
// covariance computation is correct.
for (int i = 0; i <= 64; ++i) {
- vector<pair<const double*, const double*>> covariance_blocks;
+ std::vector<std::pair<const double*, const double*>> covariance_blocks;
if (i & 1) {
covariance_blocks.push_back(all_covariance_blocks_[0]);
}
@@ -509,9 +518,9 @@
Covariance covariance(options);
EXPECT_TRUE(covariance.Compute(covariance_blocks, &problem_));
- for (int i = 0; i < covariance_blocks.size(); ++i) {
- const double* block1 = covariance_blocks[i].first;
- const double* block2 = covariance_blocks[i].second;
+ for (auto& covariance_block : covariance_blocks) {
+ const double* block1 = covariance_block.first;
+ const double* block2 = covariance_block.second;
// block1, block2
GetCovarianceBlockAndCompare(block1,
block2,
@@ -574,7 +583,7 @@
double parameters_[6];
Problem problem_;
- vector<pair<const double*, const double*>> all_covariance_blocks_;
+ std::vector<std::pair<const double*, const double*>> all_covariance_blocks_;
BoundsMap column_bounds_;
BoundsMap local_column_bounds_;
};
@@ -628,8 +637,6 @@
ComputeAndCompareCovarianceBlocks(options, expected_covariance);
}
-#ifdef CERES_USE_OPENMP
-
TEST_F(CovarianceTest, ThreadedNormalBehavior) {
// J
//
@@ -680,8 +687,6 @@
ComputeAndCompareCovarianceBlocks(options, expected_covariance);
}
-#endif // CERES_USE_OPENMP
-
TEST_F(CovarianceTest, ConstantParameterBlock) {
problem_.SetParameterBlockConstant(parameters_);
@@ -733,15 +738,15 @@
ComputeAndCompareCovarianceBlocks(options, expected_covariance);
}
-TEST_F(CovarianceTest, LocalParameterization) {
+TEST_F(CovarianceTest, Manifold) {
double* x = parameters_;
double* y = x + 2;
- problem_.SetParameterization(x, new PolynomialParameterization);
+ problem_.SetManifold(x, new PolynomialManifold);
- vector<int> subset;
+ std::vector<int> subset;
subset.push_back(2);
- problem_.SetParameterization(y, new SubsetParameterization(3, subset));
+ problem_.SetManifold(y, new SubsetManifold(3, subset));
// Raw Jacobian: J
//
@@ -792,20 +797,20 @@
ComputeAndCompareCovarianceBlocks(options, expected_covariance);
}
-TEST_F(CovarianceTest, LocalParameterizationInTangentSpace) {
+TEST_F(CovarianceTest, ManifoldInTangentSpace) {
double* x = parameters_;
double* y = x + 2;
double* z = y + 3;
- problem_.SetParameterization(x, new PolynomialParameterization);
+ problem_.SetManifold(x, new PolynomialManifold);
- vector<int> subset;
+ std::vector<int> subset;
subset.push_back(2);
- problem_.SetParameterization(y, new SubsetParameterization(3, subset));
+ problem_.SetManifold(y, new SubsetManifold(3, subset));
- local_column_bounds_[x] = make_pair(0, 1);
- local_column_bounds_[y] = make_pair(1, 3);
- local_column_bounds_[z] = make_pair(3, 4);
+ local_column_bounds_[x] = std::make_pair(0, 1);
+ local_column_bounds_[y] = std::make_pair(1, 3);
+ local_column_bounds_[z] = std::make_pair(3, 4);
// Raw Jacobian: J
//
@@ -855,22 +860,22 @@
ComputeAndCompareCovarianceBlocksInTangentSpace(options, expected_covariance);
}
-TEST_F(CovarianceTest, LocalParameterizationInTangentSpaceWithConstantBlocks) {
+TEST_F(CovarianceTest, ManifoldInTangentSpaceWithConstantBlocks) {
double* x = parameters_;
double* y = x + 2;
double* z = y + 3;
- problem_.SetParameterization(x, new PolynomialParameterization);
+ problem_.SetManifold(x, new PolynomialManifold);
problem_.SetParameterBlockConstant(x);
- vector<int> subset;
+ std::vector<int> subset;
subset.push_back(2);
- problem_.SetParameterization(y, new SubsetParameterization(3, subset));
+ problem_.SetManifold(y, new SubsetManifold(3, subset));
problem_.SetParameterBlockConstant(y);
- local_column_bounds_[x] = make_pair(0, 1);
- local_column_bounds_[y] = make_pair(1, 3);
- local_column_bounds_[z] = make_pair(3, 4);
+ local_column_bounds_[x] = std::make_pair(0, 1);
+ local_column_bounds_[y] = std::make_pair(1, 3);
+ local_column_bounds_[z] = std::make_pair(3, 4);
// Raw Jacobian: J
//
@@ -941,7 +946,7 @@
// -15 -18 3 6 13 0
// 6 -4 0 0 0 29
- // 3.4142 is the smallest eigen value of J'J. The following matrix
+ // 3.4142 is the smallest eigenvalue of J'J. The following matrix
// was obtained by dropping the eigenvector corresponding to this
// eigenvalue.
// clang-format off
@@ -980,7 +985,7 @@
double* x = parameters_;
double* y = x + 2;
double* z = y + 3;
- vector<const double*> parameter_blocks;
+ std::vector<const double*> parameter_blocks;
parameter_blocks.push_back(x);
parameter_blocks.push_back(y);
parameter_blocks.push_back(z);
@@ -1009,7 +1014,7 @@
double* x = parameters_;
double* y = x + 2;
double* z = y + 3;
- vector<const double*> parameter_blocks;
+ std::vector<const double*> parameter_blocks;
parameter_blocks.push_back(x);
parameter_blocks.push_back(y);
parameter_blocks.push_back(z);
@@ -1038,17 +1043,17 @@
double* y = x + 2;
double* z = y + 3;
- problem_.SetParameterization(x, new PolynomialParameterization);
+ problem_.SetManifold(x, new PolynomialManifold);
- vector<int> subset;
+ std::vector<int> subset;
subset.push_back(2);
- problem_.SetParameterization(y, new SubsetParameterization(3, subset));
+ problem_.SetManifold(y, new SubsetManifold(3, subset));
- local_column_bounds_[x] = make_pair(0, 1);
- local_column_bounds_[y] = make_pair(1, 3);
- local_column_bounds_[z] = make_pair(3, 4);
+ local_column_bounds_[x] = std::make_pair(0, 1);
+ local_column_bounds_[y] = std::make_pair(1, 3);
+ local_column_bounds_[z] = std::make_pair(3, 4);
- vector<const double*> parameter_blocks;
+ std::vector<const double*> parameter_blocks;
parameter_blocks.push_back(x);
parameter_blocks.push_back(y);
parameter_blocks.push_back(z);
@@ -1077,7 +1082,7 @@
Covariance covariance(options);
double* x = parameters_;
double* y = x + 2;
- vector<const double*> parameter_blocks;
+ std::vector<const double*> parameter_blocks;
parameter_blocks.push_back(x);
parameter_blocks.push_back(x);
parameter_blocks.push_back(y);
@@ -1085,11 +1090,11 @@
EXPECT_DEATH_IF_SUPPORTED(covariance.Compute(parameter_blocks, &problem_),
"Covariance::Compute called with duplicate blocks "
"at indices \\(0, 1\\) and \\(2, 3\\)");
- vector<pair<const double*, const double*>> covariance_blocks;
- covariance_blocks.push_back(make_pair(x, x));
- covariance_blocks.push_back(make_pair(x, x));
- covariance_blocks.push_back(make_pair(y, y));
- covariance_blocks.push_back(make_pair(y, y));
+ std::vector<std::pair<const double*, const double*>> covariance_blocks;
+ covariance_blocks.emplace_back(x, x);
+ covariance_blocks.emplace_back(x, x);
+ covariance_blocks.emplace_back(y, y);
+ covariance_blocks.emplace_back(y, y);
EXPECT_DEATH_IF_SUPPORTED(covariance.Compute(covariance_blocks, &problem_),
"Covariance::Compute called with duplicate blocks "
"at indices \\(0, 1\\) and \\(2, 3\\)");
@@ -1104,44 +1109,46 @@
{
double jacobian[] = {1.0, 0.0, 0.0, 1.0};
- problem_.AddResidualBlock(new UnaryCostFunction(2, 2, jacobian), NULL, x);
+ problem_.AddResidualBlock(
+ new UnaryCostFunction(2, 2, jacobian), nullptr, x);
}
{
double jacobian[] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0};
- problem_.AddResidualBlock(new UnaryCostFunction(3, 3, jacobian), NULL, y);
+ problem_.AddResidualBlock(
+ new UnaryCostFunction(3, 3, jacobian), nullptr, y);
}
{
double jacobian = 5.0;
problem_.AddResidualBlock(
- new UnaryCostFunction(1, 1, &jacobian), NULL, z);
+ new UnaryCostFunction(1, 1, &jacobian), nullptr, z);
}
{
double jacobian1[] = {0.0, 0.0, 0.0};
double jacobian2[] = {-5.0, -6.0};
problem_.AddResidualBlock(
- new BinaryCostFunction(1, 3, 2, jacobian1, jacobian2), NULL, y, x);
+ new BinaryCostFunction(1, 3, 2, jacobian1, jacobian2), nullptr, y, x);
}
{
double jacobian1[] = {2.0};
double jacobian2[] = {3.0, -2.0};
problem_.AddResidualBlock(
- new BinaryCostFunction(1, 1, 2, jacobian1, jacobian2), NULL, z, x);
+ new BinaryCostFunction(1, 1, 2, jacobian1, jacobian2), nullptr, z, x);
}
- all_covariance_blocks_.push_back(make_pair(x, x));
- all_covariance_blocks_.push_back(make_pair(y, y));
- all_covariance_blocks_.push_back(make_pair(z, z));
- all_covariance_blocks_.push_back(make_pair(x, y));
- all_covariance_blocks_.push_back(make_pair(x, z));
- all_covariance_blocks_.push_back(make_pair(y, z));
+ all_covariance_blocks_.emplace_back(x, x);
+ all_covariance_blocks_.emplace_back(y, y);
+ all_covariance_blocks_.emplace_back(z, z);
+ all_covariance_blocks_.emplace_back(x, y);
+ all_covariance_blocks_.emplace_back(x, z);
+ all_covariance_blocks_.emplace_back(y, z);
- column_bounds_[x] = make_pair(0, 2);
- column_bounds_[y] = make_pair(2, 5);
- column_bounds_[z] = make_pair(5, 6);
+ column_bounds_[x] = std::make_pair(0, 2);
+ column_bounds_[y] = std::make_pair(2, 5);
+ column_bounds_[z] = std::make_pair(5, 6);
}
};
@@ -1197,22 +1204,22 @@
}
};
-TEST(Covariance, ZeroSizedLocalParameterizationGetCovariance) {
+TEST(Covariance, ZeroSizedManifoldGetCovariance) {
double x = 0.0;
double y = 1.0;
Problem problem;
problem.AddResidualBlock(LinearCostFunction::Create(), nullptr, &x, &y);
- problem.SetParameterization(&y, new SubsetParameterization(1, {0}));
+ problem.SetManifold(&y, new SubsetManifold(1, {0}));
// J = [-1 0]
// [ 0 0]
Covariance::Options options;
options.algorithm_type = DENSE_SVD;
Covariance covariance(options);
- vector<pair<const double*, const double*>> covariance_blocks;
- covariance_blocks.push_back(std::make_pair(&x, &x));
- covariance_blocks.push_back(std::make_pair(&x, &y));
- covariance_blocks.push_back(std::make_pair(&y, &x));
- covariance_blocks.push_back(std::make_pair(&y, &y));
+ std::vector<std::pair<const double*, const double*>> covariance_blocks;
+ covariance_blocks.emplace_back(&x, &x);
+ covariance_blocks.emplace_back(&x, &y);
+ covariance_blocks.emplace_back(&y, &x);
+ covariance_blocks.emplace_back(&y, &y);
EXPECT_TRUE(covariance.Compute(covariance_blocks, &problem));
double value = -1;
@@ -1232,22 +1239,22 @@
EXPECT_NEAR(value, 0.0, std::numeric_limits<double>::epsilon());
}
-TEST(Covariance, ZeroSizedLocalParameterizationGetCovarianceInTangentSpace) {
+TEST(Covariance, ZeroSizedManifoldGetCovarianceInTangentSpace) {
double x = 0.0;
double y = 1.0;
Problem problem;
problem.AddResidualBlock(LinearCostFunction::Create(), nullptr, &x, &y);
- problem.SetParameterization(&y, new SubsetParameterization(1, {0}));
+ problem.SetManifold(&y, new SubsetManifold(1, {0}));
// J = [-1 0]
// [ 0 0]
Covariance::Options options;
options.algorithm_type = DENSE_SVD;
Covariance covariance(options);
- vector<pair<const double*, const double*>> covariance_blocks;
- covariance_blocks.push_back(std::make_pair(&x, &x));
- covariance_blocks.push_back(std::make_pair(&x, &y));
- covariance_blocks.push_back(std::make_pair(&y, &x));
- covariance_blocks.push_back(std::make_pair(&y, &y));
+ std::vector<std::pair<const double*, const double*>> covariance_blocks;
+ covariance_blocks.emplace_back(&x, &x);
+ covariance_blocks.emplace_back(&x, &y);
+ covariance_blocks.emplace_back(&y, &x);
+ covariance_blocks.emplace_back(&y, &y);
EXPECT_TRUE(covariance.Compute(covariance_blocks, &problem));
double value = -1;
@@ -1270,8 +1277,8 @@
void SetUp() final {
num_parameter_blocks_ = 2000;
parameter_block_size_ = 5;
- parameters_.reset(
- new double[parameter_block_size_ * num_parameter_blocks_]);
+ parameters_ = std::make_unique<double[]>(parameter_block_size_ *
+ num_parameter_blocks_);
Matrix jacobian(parameter_block_size_, parameter_block_size_);
for (int i = 0; i < num_parameter_blocks_; ++i) {
@@ -1282,11 +1289,11 @@
problem_.AddResidualBlock(
new UnaryCostFunction(
parameter_block_size_, parameter_block_size_, jacobian.data()),
- NULL,
+ nullptr,
block_i);
for (int j = i; j < num_parameter_blocks_; ++j) {
double* block_j = parameters_.get() + j * parameter_block_size_;
- all_covariance_blocks_.push_back(make_pair(block_i, block_j));
+ all_covariance_blocks_.emplace_back(block_i, block_j);
}
}
}
@@ -1339,16 +1346,16 @@
int num_parameter_blocks_;
Problem problem_;
- vector<pair<const double*, const double*>> all_covariance_blocks_;
+ std::vector<std::pair<const double*, const double*>> all_covariance_blocks_;
};
-#if !defined(CERES_NO_SUITESPARSE) && defined(CERES_USE_OPENMP)
+#if !defined(CERES_NO_SUITESPARSE)
TEST_F(LargeScaleCovarianceTest, Parallel) {
ComputeAndCompare(SPARSE_QR, SUITE_SPARSE, 4);
}
-#endif // !defined(CERES_NO_SUITESPARSE) && defined(CERES_USE_OPENMP)
+#endif // !defined(CERES_NO_SUITESPARSE)
} // namespace internal
} // namespace ceres
diff --git a/internal/ceres/cubic_interpolation_test.cc b/internal/ceres/cubic_interpolation_test.cc
index 3907d22..c02951e 100644
--- a/internal/ceres/cubic_interpolation_test.cc
+++ b/internal/ceres/cubic_interpolation_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,8 +36,7 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
static constexpr double kTolerance = 1e-12;
@@ -226,7 +225,7 @@
const double b,
const double c,
const double d) {
- values_.reset(new double[kDataDimension * kNumSamples]);
+ values_ = std::make_unique<double[]>(kDataDimension * kNumSamples);
for (int x = 0; x < kNumSamples; ++x) {
for (int dim = 0; dim < kDataDimension; ++dim) {
@@ -335,7 +334,7 @@
template <int kDataDimension>
void RunPolynomialInterpolationTest(const Eigen::Matrix3d& coeff) {
- values_.reset(new double[kNumRows * kNumCols * kDataDimension]);
+ values_ = std::make_unique<double[]>(kNumRows * kNumCols * kDataDimension);
coeff_ = coeff;
double* v = values_.get();
for (int r = 0; r < kNumRows; ++r) {
@@ -530,5 +529,4 @@
kTolerance);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/cuda_block_sparse_crs_view.cc b/internal/ceres/cuda_block_sparse_crs_view.cc
new file mode 100644
index 0000000..7564d52
--- /dev/null
+++ b/internal/ceres/cuda_block_sparse_crs_view.cc
@@ -0,0 +1,103 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#include "ceres/cuda_block_sparse_crs_view.h"
+
+#ifndef CERES_NO_CUDA
+
+#include "ceres/cuda_kernels_bsm_to_crs.h"
+
+namespace ceres::internal {
+
+CudaBlockSparseCRSView::CudaBlockSparseCRSView(const BlockSparseMatrix& bsm,
+ ContextImpl* context)
+ : context_(context) {
+ block_structure_ = std::make_unique<CudaBlockSparseStructure>(
+ *bsm.block_structure(), context);
+ CudaBuffer<int32_t> rows(context, bsm.num_rows() + 1);
+ CudaBuffer<int32_t> cols(context, bsm.num_nonzeros());
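+  // Compute the CRS row and column index arrays on the GPU directly from the
+  // block-sparse structure.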
+ FillCRSStructure(block_structure_->num_row_blocks(),
+ bsm.num_rows(),
+ block_structure_->first_cell_in_row_block(),
+ block_structure_->cells(),
+ block_structure_->row_blocks(),
+ block_structure_->col_blocks(),
+ rows.data(),
+ cols.data(),
+ context->DefaultStream(),
+ context->is_cuda_memory_pools_supported_);
+ is_crs_compatible_ = block_structure_->IsCrsCompatible();
+  // If the matrix is CRS-compatible, we can drop the block-structure and do
+  // not need streamed_buffer_.
+ if (is_crs_compatible_) {
+ VLOG(3) << "Block-sparse matrix is compatible with CRS, discarding "
+ "block-structure";
+ block_structure_ = nullptr;
+ } else {
+ streamed_buffer_ = std::make_unique<CudaStreamedBuffer<double>>(
+ context_, kMaxTemporaryArraySize);
+ }
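+  // The CRS structure is now final: wrap it in a CudaSparseMatrix and copy in
+  // the values from the block-sparse matrix.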
+ crs_matrix_ = std::make_unique<CudaSparseMatrix>(
+ bsm.num_cols(), std::move(rows), std::move(cols), context);
+ UpdateValues(bsm);
+}
+
+void CudaBlockSparseCRSView::UpdateValues(const BlockSparseMatrix& bsm) {
+ if (is_crs_compatible_) {
+ // Values of CRS-compatible matrices can be copied as-is
+ CHECK_EQ(cudaSuccess,
+ cudaMemcpyAsync(crs_matrix_->mutable_values(),
+ bsm.values(),
+ bsm.num_nonzeros() * sizeof(double),
+ cudaMemcpyHostToDevice,
+ context_->DefaultStream()));
+ return;
+ }
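+  // For non-CRS-compatible layouts, stream the host values to the GPU in
+  // chunks; for each transferred chunk the callback permutes the values into
+  // their CRS positions using the cached block structure.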
+ streamed_buffer_->CopyToGpu(
+ bsm.values(),
+ bsm.num_nonzeros(),
+ [bs = block_structure_.get(), crs = crs_matrix_.get()](
+ const double* values, int num_values, int offset, auto stream) {
+ PermuteToCRS(offset,
+ num_values,
+ bs->num_row_blocks(),
+ bs->first_cell_in_row_block(),
+ bs->cells(),
+ bs->row_blocks(),
+ bs->col_blocks(),
+ crs->rows(),
+ values,
+ crs->mutable_values(),
+ stream);
+ });
+}
+
+} // namespace ceres::internal
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/cuda_block_sparse_crs_view.h b/internal/ceres/cuda_block_sparse_crs_view.h
new file mode 100644
index 0000000..58ef618
--- /dev/null
+++ b/internal/ceres/cuda_block_sparse_crs_view.h
@@ -0,0 +1,108 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+//
+
+#ifndef CERES_INTERNAL_CUDA_BLOCK_SPARSE_CRS_VIEW_H_
+#define CERES_INTERNAL_CUDA_BLOCK_SPARSE_CRS_VIEW_H_
+
+#include "ceres/internal/config.h"
+
+#ifndef CERES_NO_CUDA
+
+#include <memory>
+
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/cuda_block_structure.h"
+#include "ceres/cuda_buffer.h"
+#include "ceres/cuda_sparse_matrix.h"
+#include "ceres/cuda_streamed_buffer.h"
+
+namespace ceres::internal {
+// We use cuSPARSE library for SpMV operations. However, it does not support
+// block-sparse format with varying size of the blocks. Thus, we perform the
+// following operations in order to compute products of block-sparse matrices
+// and dense vectors on gpu:
+// - Once per block-sparse structure update:
+// - Compute CRS structure from block-sparse structure and check if values of
+// block-sparse matrix would have the same order as values of CRS matrix
+// - Once per block-sparse values update:
+// - Update values in CRS matrix with values of block-sparse matrix
+//
+// Only block-sparse matrices with sequential order of cells are supported.
+//
+// UpdateValues method updates values:
+// - In a single host-to-device copy for matrices with CRS-compatible value
+// layout
+// - Simultaneously transferring and permuting values using CudaStreamedBuffer
+// otherwise
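+//
+// A minimal usage sketch (illustrative only; assumes an initialized
+// ContextImpl `context`, a BlockSparseMatrix `bsm` with sequential cell
+// order, and CudaVectors `x_gpu`, `y_gpu` of matching sizes):
+//
+//   CudaBlockSparseCRSView view(bsm, &context);
+//   // ... values of bsm change between solver iterations ...
+//   view.UpdateValues(bsm);
+//   view.RightMultiplyAndAccumulate(x_gpu, &y_gpu);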
+class CERES_NO_EXPORT CudaBlockSparseCRSView {
+ public:
+ // Initializes the internal CRS matrix using the structure and values of the
+ // block-sparse matrix. For block-sparse matrices whose value layout differs
+ // from CRS, the block-sparse structure is also stored.
+ CudaBlockSparseCRSView(const BlockSparseMatrix& bsm, ContextImpl* context);
+
+ const CudaSparseMatrix* crs_matrix() const { return crs_matrix_.get(); }
+ CudaSparseMatrix* mutable_crs_matrix() { return crs_matrix_.get(); }
+
+ // Update values of crs_matrix_ using values of block-sparse matrix.
+ // Assumes that bsm has the same block-sparse structure as matrix that was
+ // used for construction.
+ void UpdateValues(const BlockSparseMatrix& bsm);
+
+ // Returns true if block-sparse matrix had CRS-compatible value layout
+ bool IsCrsCompatible() const { return is_crs_compatible_; }
+
+ void LeftMultiplyAndAccumulate(const CudaVector& x, CudaVector* y) const {
+ crs_matrix()->LeftMultiplyAndAccumulate(x, y);
+ }
+
+ void RightMultiplyAndAccumulate(const CudaVector& x, CudaVector* y) const {
+ crs_matrix()->RightMultiplyAndAccumulate(x, y);
+ }
+
+ private:
+ // The value permutation kernel performs a single element-wise operation per
+ // thread, thus permuting values in chunks of 1024 * 1024 doubles (8
+ // megabytes of block-sparse values) is a reasonable granularity.
+ static constexpr int kMaxTemporaryArraySize = 1 * 1024 * 1024;
+ std::unique_ptr<CudaSparseMatrix> crs_matrix_;
+ // Only created if block-sparse matrix has non-CRS value layout
+ std::unique_ptr<CudaStreamedBuffer<double>> streamed_buffer_;
+ // Only stored if block-sparse matrix has non-CRS value layout
+ std::unique_ptr<CudaBlockSparseStructure> block_structure_;
+ bool is_crs_compatible_;
+ ContextImpl* context_;
+};
+
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
+#endif // CERES_INTERNAL_CUDA_BLOCK_SPARSE_CRS_VIEW_H_
diff --git a/internal/ceres/cuda_block_sparse_crs_view_test.cc b/internal/ceres/cuda_block_sparse_crs_view_test.cc
new file mode 100644
index 0000000..7d7d46c
--- /dev/null
+++ b/internal/ceres/cuda_block_sparse_crs_view_test.cc
@@ -0,0 +1,164 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#include "ceres/cuda_block_sparse_crs_view.h"
+
+#include <glog/logging.h>
+#include <gtest/gtest.h>
+
+#include <memory>
+#include <numeric>
+#include <random>
+#include <thread>
+
+#ifndef CERES_NO_CUDA
+
+namespace ceres::internal {
+class CudaBlockSparseCRSViewTest : public ::testing::Test {
+ protected:
+ void SetUp() final {
+ std::string message;
+ CHECK(context_.InitCuda(&message))
+ << "InitCuda() failed because: " << message;
+
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = 1234;
+ options.min_row_block_size = 1;
+ options.max_row_block_size = 10;
+ options.num_col_blocks = 567;
+ options.min_col_block_size = 1;
+ options.max_col_block_size = 10;
+ options.block_density = 0.2;
+ std::mt19937 rng;
+
+ // Block-sparse matrix with order of values different from CRS
+ block_sparse_non_crs_compatible_ =
+ BlockSparseMatrix::CreateRandomMatrix(options, rng, true);
+ std::iota(block_sparse_non_crs_compatible_->mutable_values(),
+ block_sparse_non_crs_compatible_->mutable_values() +
+ block_sparse_non_crs_compatible_->num_nonzeros(),
+ 1);
+
+ options.max_row_block_size = 1;
+ // Block-sparse matrix with CRS order of values (row-blocks are rows)
+ block_sparse_crs_compatible_rows_ =
+ BlockSparseMatrix::CreateRandomMatrix(options, rng, true);
+ std::iota(block_sparse_crs_compatible_rows_->mutable_values(),
+ block_sparse_crs_compatible_rows_->mutable_values() +
+ block_sparse_crs_compatible_rows_->num_nonzeros(),
+ 1);
+ // Block-sparse matrix with CRS order of values (single cell per row-block)
+ auto bs = std::make_unique<CompressedRowBlockStructure>(
+ *block_sparse_non_crs_compatible_->block_structure());
+
+ int num_nonzeros = 0;
+ for (auto& r : bs->rows) {
+ const int num_cells = r.cells.size();
+ if (num_cells > 1) {
+ std::uniform_int_distribution<int> uniform_cell(0, num_cells - 1);
+ const int selected_cell = uniform_cell(rng);
+ std::swap(r.cells[0], r.cells[selected_cell]);
+ r.cells.resize(1);
+ }
+ const int row_block_size = r.block.size;
+ for (auto& c : r.cells) {
+ c.position = num_nonzeros;
+ const int col_block_size = bs->cols[c.block_id].size;
+ num_nonzeros += col_block_size * row_block_size;
+ }
+ }
+ block_sparse_crs_compatible_single_cell_ =
+ std::make_unique<BlockSparseMatrix>(bs.release());
+ std::iota(block_sparse_crs_compatible_single_cell_->mutable_values(),
+ block_sparse_crs_compatible_single_cell_->mutable_values() +
+ block_sparse_crs_compatible_single_cell_->num_nonzeros(),
+ 1);
+ }
+
+ void Compare(const BlockSparseMatrix& bsm, const CudaSparseMatrix& csm) {
+ ASSERT_EQ(csm.num_cols(), bsm.num_cols());
+ ASSERT_EQ(csm.num_rows(), bsm.num_rows());
+ ASSERT_EQ(csm.num_nonzeros(), bsm.num_nonzeros());
+ const int num_rows = bsm.num_rows();
+ const int num_cols = bsm.num_cols();
+ Vector x(num_cols);
+ Vector y(num_rows);
+ CudaVector x_cuda(&context_, num_cols);
+ CudaVector y_cuda(&context_, num_rows);
+ Vector y_cuda_host(num_rows);
+
+ for (int i = 0; i < num_cols; ++i) {
+ x.setZero();
+ y.setZero();
+ y_cuda.SetZero();
+ x[i] = 1.;
+ x_cuda.CopyFromCpu(x);
+ csm.RightMultiplyAndAccumulate(x_cuda, &y_cuda);
+ bsm.RightMultiplyAndAccumulate(
+ x.data(), y.data(), &context_, std::thread::hardware_concurrency());
+ y_cuda.CopyTo(&y_cuda_host);
+ // There is at most one non-zero product per row and the values are small
+ // integers, thus we expect an exact match
+ EXPECT_EQ((y - y_cuda_host).squaredNorm(), 0.);
+ }
+ }
+
+ std::unique_ptr<BlockSparseMatrix> block_sparse_non_crs_compatible_;
+ std::unique_ptr<BlockSparseMatrix> block_sparse_crs_compatible_rows_;
+ std::unique_ptr<BlockSparseMatrix> block_sparse_crs_compatible_single_cell_;
+ ContextImpl context_;
+};
+
+TEST_F(CudaBlockSparseCRSViewTest, CreateUpdateValuesNonCompatible) {
+ auto view =
+ CudaBlockSparseCRSView(*block_sparse_non_crs_compatible_, &context_);
+ ASSERT_EQ(view.IsCrsCompatible(), false);
+
+ auto matrix = view.crs_matrix();
+ Compare(*block_sparse_non_crs_compatible_, *matrix);
+}
+
+TEST_F(CudaBlockSparseCRSViewTest, CreateUpdateValuesCompatibleRows) {
+ auto view =
+ CudaBlockSparseCRSView(*block_sparse_crs_compatible_rows_, &context_);
+ ASSERT_EQ(view.IsCrsCompatible(), true);
+
+ auto matrix = view.crs_matrix();
+ Compare(*block_sparse_crs_compatible_rows_, *matrix);
+}
+
+TEST_F(CudaBlockSparseCRSViewTest, CreateUpdateValuesCompatibleSingleCell) {
+ auto view = CudaBlockSparseCRSView(*block_sparse_crs_compatible_single_cell_,
+ &context_);
+ ASSERT_EQ(view.IsCrsCompatible(), true);
+
+ auto matrix = view.crs_matrix();
+ Compare(*block_sparse_crs_compatible_single_cell_, *matrix);
+}
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/cuda_block_structure.cc b/internal/ceres/cuda_block_structure.cc
new file mode 100644
index 0000000..3685775
--- /dev/null
+++ b/internal/ceres/cuda_block_structure.cc
@@ -0,0 +1,234 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#include "ceres/cuda_block_structure.h"
+
+#ifndef CERES_NO_CUDA
+
+namespace ceres::internal {
+namespace {
+// Dimension of a sorted array of blocks
+inline int Dimension(const std::vector<Block>& blocks) {
+ if (blocks.empty()) {
+ return 0;
+ }
+ const auto& last = blocks.back();
+ return last.size + last.position;
+}
+} // namespace
+CudaBlockSparseStructure::CudaBlockSparseStructure(
+ const CompressedRowBlockStructure& block_structure, ContextImpl* context)
+ : CudaBlockSparseStructure(block_structure, 0, context) {}
+
+CudaBlockSparseStructure::CudaBlockSparseStructure(
+ const CompressedRowBlockStructure& block_structure,
+ const int num_col_blocks_e,
+ ContextImpl* context)
+ : first_cell_in_row_block_(context),
+ value_offset_row_block_f_(context),
+ cells_(context),
+ row_blocks_(context),
+ col_blocks_(context) {
+ // Row blocks extracted from CompressedRowBlockStructure::rows
+ std::vector<Block> row_blocks;
+ // Column blocks can be reused as-is
+ const auto& col_blocks = block_structure.cols;
+
+ // Row block offset is an index of the first cell corresponding to row block
+ std::vector<int> first_cell_in_row_block;
+ // Offset of the first value in the first non-empty row-block of F sub-matrix
+ std::vector<int> value_offset_row_block_f;
+ // Flat array of all cells from all row-blocks
+ std::vector<Cell> cells;
+
+ int f_values_offset = -1;
+ num_nonzeros_e_ = 0;
+ is_crs_compatible_ = true;
+ num_row_blocks_ = block_structure.rows.size();
+ num_col_blocks_ = col_blocks.size();
+
+ row_blocks.reserve(num_row_blocks_);
+ first_cell_in_row_block.reserve(num_row_blocks_ + 1);
+ value_offset_row_block_f.reserve(num_row_blocks_ + 1);
+ num_nonzeros_ = 0;
+ // Block-sparse matrices arising from the block-Jacobian writer are expected
+ // to have a sequential layout (for partitioned matrices it is expected that
+ // both the E and F sub-matrices have a sequential layout).
+ bool sequential_layout = true;
+ int row_block_id = 0;
+ num_row_blocks_e_ = 0;
+ for (; row_block_id < num_row_blocks_; ++row_block_id) {
+ const auto& r = block_structure.rows[row_block_id];
+ const int row_block_size = r.block.size;
+ const int num_cells = r.cells.size();
+
+ if (num_col_blocks_e == 0 || r.cells.size() == 0 ||
+ r.cells[0].block_id >= num_col_blocks_e) {
+ break;
+ }
+ num_row_blocks_e_ = row_block_id + 1;
+ // In the E sub-matrix there is exactly one E cell per row-block;
+ // since E cells are stored separately from F cells, CRS-compatibility of
+ // the F sub-matrix only breaks if there are more than 2 cells in the
+ // row-block (that is, more than 1 cell in the F sub-matrix)
+ if (num_cells > 2 && row_block_size > 1) {
+ is_crs_compatible_ = false;
+ }
+ row_blocks.emplace_back(r.block);
+ first_cell_in_row_block.push_back(cells.size());
+
+ for (int cell_id = 0; cell_id < num_cells; ++cell_id) {
+ const auto& c = r.cells[cell_id];
+ const int col_block_size = col_blocks[c.block_id].size;
+ const int cell_size = col_block_size * row_block_size;
+ cells.push_back(c);
+ if (cell_id == 0) {
+ DCHECK(c.position == num_nonzeros_e_);
+ num_nonzeros_e_ += cell_size;
+ } else {
+ if (f_values_offset == -1) {
+ num_nonzeros_ = c.position;
+ f_values_offset = c.position;
+ }
+ sequential_layout &= c.position == num_nonzeros_;
+ num_nonzeros_ += cell_size;
+ if (cell_id == 1) {
+ // Correct value_offset_row_block_f for empty row-blocks of F
+ // preceding this one
+ for (auto it = value_offset_row_block_f.rbegin();
+ it != value_offset_row_block_f.rend();
+ ++it) {
+ if (*it != -1) break;
+ *it = c.position;
+ }
+ value_offset_row_block_f.push_back(c.position);
+ }
+ }
+ }
+ if (num_cells == 1) {
+ value_offset_row_block_f.push_back(-1);
+ }
+ }
+ for (; row_block_id < num_row_blocks_; ++row_block_id) {
+ const auto& r = block_structure.rows[row_block_id];
+ const int row_block_size = r.block.size;
+ const int num_cells = r.cells.size();
+ // After num_row_blocks_e_ row-blocks, there should be no cells in the E
+ // sub-matrix. Thus CRS-compatibility of the F sub-matrix breaks if there is
+ // more than one cell in the row-block
+ if (num_cells > 1 && row_block_size > 1) {
+ is_crs_compatible_ = false;
+ }
+ row_blocks.emplace_back(r.block);
+ first_cell_in_row_block.push_back(cells.size());
+
+ if (r.cells.empty()) {
+ value_offset_row_block_f.push_back(-1);
+ } else {
+ // Back-fill offsets of empty F row-blocks that precede this one with the
+ // position of the first F value of this row-block.
+ for (auto it = value_offset_row_block_f.rbegin();
+ it != value_offset_row_block_f.rend();
+ ++it) {
+ if (*it != -1) break;
+ *it = r.cells[0].position;
+ }
+ value_offset_row_block_f.push_back(r.cells[0].position);
+ }
+ for (const auto& c : r.cells) {
+ const int col_block_size = col_blocks[c.block_id].size;
+ const int cell_size = col_block_size * row_block_size;
+ cells.push_back(c);
+ DCHECK(c.block_id >= num_col_blocks_e);
+ if (f_values_offset == -1) {
+ num_nonzeros_ = c.position;
+ f_values_offset = c.position;
+ }
+ sequential_layout &= c.position == num_nonzeros_;
+ num_nonzeros_ += cell_size;
+ }
+ }
+
+ if (f_values_offset == -1) {
+ f_values_offset = num_nonzeros_e_;
+ num_nonzeros_ = num_nonzeros_e_;
+ }
+ // Fill non-zero offsets for the last rows of F submatrix
+ for (auto it = value_offset_row_block_f.rbegin();
+ it != value_offset_row_block_f.rend();
+ ++it) {
+ if (*it != -1) break;
+ *it = num_nonzeros_;
+ }
+ value_offset_row_block_f.push_back(num_nonzeros_);
+ CHECK_EQ(num_nonzeros_e_, f_values_offset);
+ first_cell_in_row_block.push_back(cells.size());
+ num_cells_ = cells.size();
+
+ num_rows_ = Dimension(row_blocks);
+ num_cols_ = Dimension(col_blocks);
+
+ CHECK(sequential_layout);
+
+ if (VLOG_IS_ON(3)) {
+ const size_t first_cell_in_row_block_size =
+ first_cell_in_row_block.size() * sizeof(int);
+ const size_t cells_size = cells.size() * sizeof(Cell);
+ const size_t row_blocks_size = row_blocks.size() * sizeof(Block);
+ const size_t col_blocks_size = col_blocks.size() * sizeof(Block);
+ const size_t total_size = first_cell_in_row_block_size + cells_size +
+ col_blocks_size + row_blocks_size;
+ const double ratio =
+ (100. * total_size) / (num_nonzeros_ * (sizeof(int) + sizeof(double)) +
+ num_rows_ * sizeof(int));
+ VLOG(3) << "\nCudaBlockSparseStructure:\n"
+ "\tRow block offsets: "
+ << first_cell_in_row_block_size
+ << " bytes\n"
+ "\tColumn blocks: "
+ << col_blocks_size
+ << " bytes\n"
+ "\tRow blocks: "
+ << row_blocks_size
+ << " bytes\n"
+ "\tCells: "
+ << cells_size << " bytes\n\tTotal: " << total_size
+ << " bytes of GPU memory (" << ratio << "% of CRS matrix size)";
+ }
+
+ first_cell_in_row_block_.CopyFromCpuVector(first_cell_in_row_block);
+ cells_.CopyFromCpuVector(cells);
+ row_blocks_.CopyFromCpuVector(row_blocks);
+ col_blocks_.CopyFromCpuVector(col_blocks);
+ if (num_col_blocks_e || num_row_blocks_e_) {
+ value_offset_row_block_f_.CopyFromCpuVector(value_offset_row_block_f);
+ }
+}
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/cuda_block_structure.h b/internal/ceres/cuda_block_structure.h
new file mode 100644
index 0000000..6da6fdd
--- /dev/null
+++ b/internal/ceres/cuda_block_structure.h
@@ -0,0 +1,120 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#ifndef CERES_INTERNAL_CUDA_BLOCK_STRUCTURE_H_
+#define CERES_INTERNAL_CUDA_BLOCK_STRUCTURE_H_
+
+#include "ceres/internal/config.h"
+
+#ifndef CERES_NO_CUDA
+
+#include "ceres/block_structure.h"
+#include "ceres/cuda_buffer.h"
+
+namespace ceres::internal {
+class CudaBlockStructureTest;
+
+// This class stores a read-only block-sparse structure in gpu memory.
+// Invariants are the same as those of CompressedRowBlockStructure.
+// In order to simplify allocation and copying data to gpu, cells from all
+// row-blocks are stored in a single array sequentially. Array
+// first_cell_in_row_block of size num_row_blocks + 1 allows to identify range
+// of cells corresponding to a row-block. Cells corresponding to i-th row-block
+// are stored in sub-array cells[first_cell_in_row_block[i]; ...
+// first_cell_in_row_block[i + 1] - 1], and their order is preserved.
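+//
+// Illustrative (host-side) sketch of how cells of row-block i are addressed
+// in this layout, assuming the arrays were copied back to the CPU into
+// std::vector<Cell> cells and std::vector<int> first_cell_in_row_block:
+//
+//   for (int i = 0; i < num_row_blocks; ++i) {
+//     for (int c = first_cell_in_row_block[i];
+//          c < first_cell_in_row_block[i + 1];
+//          ++c) {
+//       const Cell& cell = cells[c];  // cell belongs to row-block i
+//     }
+//   }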
+class CERES_NO_EXPORT CudaBlockSparseStructure {
+ public:
+ // CompressedRowBlockStructure contains a vector of CompressedLists, with
+ // each CompressedList containing a vector of Cells. We precompute a flat
+ // array of cells on cpu and transfer it to the gpu.
+ CudaBlockSparseStructure(const CompressedRowBlockStructure& block_structure,
+ ContextImpl* context);
+ // In the case of partitioned matrices, the number of non-zeros in E and the
+ // layout of F are also computed
+ CudaBlockSparseStructure(const CompressedRowBlockStructure& block_structure,
+ const int num_col_blocks_e,
+ ContextImpl* context);
+
+ int num_rows() const { return num_rows_; }
+ int num_cols() const { return num_cols_; }
+ int num_cells() const { return num_cells_; }
+ int num_nonzeros() const { return num_nonzeros_; }
+ // When partitioned matrix constructor was used, returns number of non-zeros
+ // in E sub-matrix
+ int num_nonzeros_e() const { return num_nonzeros_e_; }
+ int num_row_blocks() const { return num_row_blocks_; }
+ int num_row_blocks_e() const { return num_row_blocks_e_; }
+ int num_col_blocks() const { return num_col_blocks_; }
+
+ // Returns true if values from block-sparse matrix (F sub-matrix in
+ // partitioned case) can be copied to CRS matrix as-is. This is possible if
+ // each row-block is stored in CRS order:
+ // - Row-block consists of a single row
+ // - Row-block contains a single cell
+ bool IsCrsCompatible() const { return is_crs_compatible_; }
+
+ // Device pointer to array of num_row_blocks + 1 indices of the first cell of
+ // row block
+ const int* first_cell_in_row_block() const {
+ return first_cell_in_row_block_.data();
+ }
+ // Device pointer to array of num_row_blocks + 1 indices of the first value in
+ // this or subsequent row-blocks of submatrix F
+ const int* value_offset_row_block_f() const {
+ return value_offset_row_block_f_.data();
+ }
+ // Device pointer to array of num_cells cells, sorted by row-block
+ const Cell* cells() const { return cells_.data(); }
+ // Device pointer to array of row blocks
+ const Block* row_blocks() const { return row_blocks_.data(); }
+ // Device pointer to array of column blocks
+ const Block* col_blocks() const { return col_blocks_.data(); }
+
+ private:
+ int num_rows_;
+ int num_cols_;
+ int num_cells_;
+ int num_nonzeros_;
+ int num_nonzeros_e_;
+ int num_row_blocks_;
+ int num_row_blocks_e_;
+ int num_col_blocks_;
+ bool is_crs_compatible_;
+ CudaBuffer<int> first_cell_in_row_block_;
+ CudaBuffer<int> value_offset_row_block_f_;
+ CudaBuffer<Cell> cells_;
+ CudaBuffer<Block> row_blocks_;
+ CudaBuffer<Block> col_blocks_;
+ friend class CudaBlockStructureTest;
+};
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
+#endif  // CERES_INTERNAL_CUDA_BLOCK_STRUCTURE_H_
diff --git a/internal/ceres/cuda_block_structure_test.cc b/internal/ceres/cuda_block_structure_test.cc
new file mode 100644
index 0000000..daff431
--- /dev/null
+++ b/internal/ceres/cuda_block_structure_test.cc
@@ -0,0 +1,144 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#include "ceres/internal/config.h"
+
+#ifndef CERES_NO_CUDA
+
+#include <glog/logging.h>
+#include <gtest/gtest.h>
+
+#include <memory>
+#include <numeric>
+#include <random>
+
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/cuda_block_structure.h"
+
+namespace ceres::internal {
+
+class CudaBlockStructureTest : public ::testing::Test {
+ protected:
+ void SetUp() final {
+ std::string message;
+ CHECK(context_.InitCuda(&message))
+ << "InitCuda() failed because: " << message;
+
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = 1234;
+ options.min_row_block_size = 1;
+ options.max_row_block_size = 10;
+ options.num_col_blocks = 567;
+ options.min_col_block_size = 1;
+ options.max_col_block_size = 10;
+ options.block_density = 0.2;
+ std::mt19937 rng;
+ A_ = BlockSparseMatrix::CreateRandomMatrix(options, rng);
+ std::iota(
+ A_->mutable_values(), A_->mutable_values() + A_->num_nonzeros(), 1);
+ }
+
+ std::vector<Cell> GetCells(const CudaBlockSparseStructure& structure) {
+ const auto& cuda_buffer = structure.cells_;
+ std::vector<Cell> cells(cuda_buffer.size());
+ cuda_buffer.CopyToCpu(cells.data(), cells.size());
+ return cells;
+ }
+ std::vector<Block> GetRowBlocks(const CudaBlockSparseStructure& structure) {
+ const auto& cuda_buffer = structure.row_blocks_;
+ std::vector<Block> blocks(cuda_buffer.size());
+ cuda_buffer.CopyToCpu(blocks.data(), blocks.size());
+ return blocks;
+ }
+ std::vector<Block> GetColBlocks(const CudaBlockSparseStructure& structure) {
+ const auto& cuda_buffer = structure.col_blocks_;
+ std::vector<Block> blocks(cuda_buffer.size());
+ cuda_buffer.CopyToCpu(blocks.data(), blocks.size());
+ return blocks;
+ }
+ std::vector<int> GetRowBlockOffsets(
+ const CudaBlockSparseStructure& structure) {
+ const auto& cuda_buffer = structure.first_cell_in_row_block_;
+ std::vector<int> first_cell_in_row_block(cuda_buffer.size());
+ cuda_buffer.CopyToCpu(first_cell_in_row_block.data(),
+ first_cell_in_row_block.size());
+ return first_cell_in_row_block;
+ }
+
+ std::unique_ptr<BlockSparseMatrix> A_;
+ ContextImpl context_;
+};
+
+TEST_F(CudaBlockStructureTest, StructureIdentity) {
+ auto block_structure = A_->block_structure();
+ const int num_row_blocks = block_structure->rows.size();
+ const int num_col_blocks = block_structure->cols.size();
+
+ CudaBlockSparseStructure cuda_block_structure(*block_structure, &context_);
+
+ ASSERT_EQ(cuda_block_structure.num_rows(), A_->num_rows());
+ ASSERT_EQ(cuda_block_structure.num_cols(), A_->num_cols());
+ ASSERT_EQ(cuda_block_structure.num_nonzeros(), A_->num_nonzeros());
+ ASSERT_EQ(cuda_block_structure.num_row_blocks(), num_row_blocks);
+ ASSERT_EQ(cuda_block_structure.num_col_blocks(), num_col_blocks);
+
+ std::vector<Block> blocks = GetColBlocks(cuda_block_structure);
+ ASSERT_EQ(blocks.size(), num_col_blocks);
+ for (int i = 0; i < num_col_blocks; ++i) {
+ EXPECT_EQ(block_structure->cols[i].position, blocks[i].position);
+ EXPECT_EQ(block_structure->cols[i].size, blocks[i].size);
+ }
+
+ std::vector<Cell> cells = GetCells(cuda_block_structure);
+ std::vector<int> first_cell_in_row_block =
+ GetRowBlockOffsets(cuda_block_structure);
+ blocks = GetRowBlocks(cuda_block_structure);
+
+ ASSERT_EQ(blocks.size(), num_row_blocks);
+ ASSERT_EQ(first_cell_in_row_block.size(), num_row_blocks + 1);
+ ASSERT_EQ(first_cell_in_row_block.back(), cells.size());
+
+ for (int i = 0; i < num_row_blocks; ++i) {
+ const int num_cells = block_structure->rows[i].cells.size();
+ EXPECT_EQ(blocks[i].position, block_structure->rows[i].block.position);
+ EXPECT_EQ(blocks[i].size, block_structure->rows[i].block.size);
+ const int first_cell = first_cell_in_row_block[i];
+ const int last_cell = first_cell_in_row_block[i + 1];
+ ASSERT_EQ(last_cell - first_cell, num_cells);
+ for (int j = 0; j < num_cells; ++j) {
+ EXPECT_EQ(cells[first_cell + j].block_id,
+ block_structure->rows[i].cells[j].block_id);
+ EXPECT_EQ(cells[first_cell + j].position,
+ block_structure->rows[i].cells[j].position);
+ }
+ }
+}
+
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/cuda_buffer.h b/internal/ceres/cuda_buffer.h
new file mode 100644
index 0000000..40048fd
--- /dev/null
+++ b/internal/ceres/cuda_buffer.h
@@ -0,0 +1,172 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#ifndef CERES_INTERNAL_CUDA_BUFFER_H_
+#define CERES_INTERNAL_CUDA_BUFFER_H_
+
+#include "ceres/context_impl.h"
+#include "ceres/internal/config.h"
+
+#ifndef CERES_NO_CUDA
+
+#include <vector>
+
+#include "cuda_runtime.h"
+#include "glog/logging.h"
+
+namespace ceres::internal {
+// An encapsulated buffer to maintain GPU memory, and handle transfers between
+// GPU and system memory. It is the responsibility of the user to ensure that
+// the appropriate GPU device is selected before each subroutine is called. This
+// is particularly important when using multiple GPU devices on different CPU
+// threads, since active Cuda devices are determined by the cuda runtime on a
+// per-thread basis.
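+//
+// A minimal usage sketch (illustrative only; assumes an initialized
+// ContextImpl `context`):
+//
+//   CudaBuffer<double> buffer(&context);
+//   std::vector<double> host(1024, 1.0);
+//   buffer.CopyFromCpuVector(host);              // async host-to-device copy
+//   buffer.CopyToCpu(host.data(), host.size());  // device-to-host copy + sync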
+template <typename T>
+class CudaBuffer {
+ public:
+ explicit CudaBuffer(ContextImpl* context) : context_(context) {}
+ CudaBuffer(ContextImpl* context, int size) : context_(context) {
+ Reserve(size);
+ }
+
+ CudaBuffer(CudaBuffer&& other)
+ : data_(other.data_), size_(other.size_), context_(other.context_) {
+ other.data_ = nullptr;
+ other.size_ = 0;
+ }
+
+ CudaBuffer(const CudaBuffer&) = delete;
+ CudaBuffer& operator=(const CudaBuffer&) = delete;
+
+ ~CudaBuffer() {
+ if (data_ != nullptr) {
+ CHECK_EQ(cudaFree(data_), cudaSuccess);
+ }
+ }
+
+ // Grow the GPU memory buffer if needed to accommodate data of the specified
+ // size
+ void Reserve(const size_t size) {
+ if (size > size_) {
+ if (data_ != nullptr) {
+ CHECK_EQ(cudaFree(data_), cudaSuccess);
+ }
+ CHECK_EQ(cudaMalloc(&data_, size * sizeof(T)), cudaSuccess)
+ << "Failed to allocate " << size * sizeof(T)
+ << " bytes of GPU memory";
+ size_ = size;
+ }
+ }
+
+ // Perform an asynchronous copy from CPU memory to GPU memory managed by this
+ // CudaBuffer instance on the context's default stream.
+ void CopyFromCpu(const T* data, const size_t size) {
+ Reserve(size);
+ CHECK_EQ(cudaMemcpyAsync(data_,
+ data,
+ size * sizeof(T),
+ cudaMemcpyHostToDevice,
+ context_->DefaultStream()),
+ cudaSuccess);
+ }
+
+ // Perform an asynchronous copy from a vector in CPU memory to GPU memory
+ // managed by this CudaBuffer instance.
+ void CopyFromCpuVector(const std::vector<T>& data) {
+ Reserve(data.size());
+ CHECK_EQ(cudaMemcpyAsync(data_,
+ data.data(),
+ data.size() * sizeof(T),
+ cudaMemcpyHostToDevice,
+ context_->DefaultStream()),
+ cudaSuccess);
+ }
+
+ // Perform an asynchronous copy from another GPU memory array to the GPU
+ // memory managed by this CudaBuffer instance on the context's default
+ // stream.
+ void CopyFromGPUArray(const T* data, const size_t size) {
+ Reserve(size);
+ CHECK_EQ(cudaMemcpyAsync(data_,
+ data,
+ size * sizeof(T),
+ cudaMemcpyDeviceToDevice,
+ context_->DefaultStream()),
+ cudaSuccess);
+ }
+
+ // Copy data from the GPU memory managed by this CudaBuffer instance to CPU
+ // memory. It is the caller's responsibility to ensure that the CPU memory
+ // pointer is valid, i.e. it is not null, and that it points to memory of
+ // at least this->size() size. This method ensures all previously dispatched
+ // GPU operations on the context's default stream have completed before
+ // copying the data to CPU memory.
+ void CopyToCpu(T* data, const size_t size) const {
+ CHECK(data_ != nullptr);
+ CHECK_EQ(cudaMemcpyAsync(data,
+ data_,
+ size * sizeof(T),
+ cudaMemcpyDeviceToHost,
+ context_->DefaultStream()),
+ cudaSuccess);
+ CHECK_EQ(cudaStreamSynchronize(context_->DefaultStream()), cudaSuccess);
+ }
+
+ // Copy N items from another GPU memory array to the GPU memory managed by
+ // this CudaBuffer instance, growing this buffer's size if needed. This copy
+ // is asynchronous, and operates on the context's default stream.
+ void CopyNItemsFrom(int n, const CudaBuffer<T>& other) {
+ Reserve(n);
+ CHECK(other.data_ != nullptr);
+ CHECK(data_ != nullptr);
+ CHECK_EQ(cudaMemcpyAsync(data_,
+ other.data_,
+ n * sizeof(T),
+ cudaMemcpyDeviceToDevice,
+ context_->DefaultStream()),
+ cudaSuccess);
+ }
+
+ // Return a pointer to the GPU memory managed by this CudaBuffer instance.
+ T* data() { return data_; }
+ const T* data() const { return data_; }
+ // Return the number of items of type T that can fit in the GPU memory
+ // allocated so far by this CudaBuffer instance.
+ size_t size() const { return size_; }
+
+ private:
+ T* data_ = nullptr;
+ size_t size_ = 0;
+ ContextImpl* context_ = nullptr;
+};
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
+
+#endif // CERES_INTERNAL_CUDA_BUFFER_H_
diff --git a/internal/ceres/cuda_dense_cholesky_test.cc b/internal/ceres/cuda_dense_cholesky_test.cc
new file mode 100644
index 0000000..b74a75a
--- /dev/null
+++ b/internal/ceres/cuda_dense_cholesky_test.cc
@@ -0,0 +1,332 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#include <string>
+
+#include "ceres/dense_cholesky.h"
+#include "ceres/internal/config.h"
+#include "ceres/internal/eigen.h"
+#include "glog/logging.h"
+#include "gtest/gtest.h"
+
+namespace ceres::internal {
+
+#ifndef CERES_NO_CUDA
+
+TEST(CUDADenseCholesky, InvalidOptionOnCreate) {
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ auto dense_cuda_solver = CUDADenseCholesky::Create(options);
+ EXPECT_EQ(dense_cuda_solver, nullptr);
+}
+
+// Tests the CUDA Cholesky solver with a simple 4x4 matrix.
+TEST(CUDADenseCholesky, Cholesky4x4Matrix) {
+ Eigen::Matrix4d A;
+ // clang-format off
+ A << 4, 12, -16, 0,
+ 12, 37, -43, 0,
+ -16, -43, 98, 0,
+ 0, 0, 0, 1;
+ // clang-format on
+
+ Vector b = Eigen::Vector4d::Ones();
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = CUDA;
+ auto dense_cuda_solver = CUDADenseCholesky::Create(options);
+ ASSERT_NE(dense_cuda_solver, nullptr);
+ std::string error_string;
+ ASSERT_EQ(dense_cuda_solver->Factorize(A.cols(), A.data(), &error_string),
+ LinearSolverTerminationType::SUCCESS);
+ Eigen::Vector4d x = Eigen::Vector4d::Zero();
+ ASSERT_EQ(dense_cuda_solver->Solve(b.data(), x.data(), &error_string),
+ LinearSolverTerminationType::SUCCESS);
+ static const double kEpsilon = std::numeric_limits<double>::epsilon() * 10;
+ const Eigen::Vector4d x_expected(113.75 / 3.0, -31.0 / 3.0, 5.0 / 3.0, 1.0);
+ EXPECT_NEAR((x[0] - x_expected[0]) / x_expected[0], 0.0, kEpsilon);
+ EXPECT_NEAR((x[1] - x_expected[1]) / x_expected[1], 0.0, kEpsilon);
+ EXPECT_NEAR((x[2] - x_expected[2]) / x_expected[2], 0.0, kEpsilon);
+ EXPECT_NEAR((x[3] - x_expected[3]) / x_expected[3], 0.0, kEpsilon);
+}
+
+TEST(CUDADenseCholesky, SingularMatrix) {
+ Eigen::Matrix3d A;
+ // clang-format off
+ A << 1, 0, 0,
+ 0, 1, 0,
+ 0, 0, 0;
+ // clang-format on
+
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = CUDA;
+ auto dense_cuda_solver = CUDADenseCholesky::Create(options);
+ ASSERT_NE(dense_cuda_solver, nullptr);
+ std::string error_string;
+ ASSERT_EQ(dense_cuda_solver->Factorize(A.cols(), A.data(), &error_string),
+ LinearSolverTerminationType::FAILURE);
+}
+
+TEST(CUDADenseCholesky, NegativeMatrix) {
+ Eigen::Matrix3d A;
+ // clang-format off
+ A << 1, 0, 0,
+ 0, 1, 0,
+ 0, 0, -1;
+ // clang-format on
+
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = CUDA;
+ auto dense_cuda_solver = CUDADenseCholesky::Create(options);
+ ASSERT_NE(dense_cuda_solver, nullptr);
+ std::string error_string;
+ ASSERT_EQ(dense_cuda_solver->Factorize(A.cols(), A.data(), &error_string),
+ LinearSolverTerminationType::FAILURE);
+}
+
+TEST(CUDADenseCholesky, MustFactorizeBeforeSolve) {
+ const Eigen::Vector3d b = Eigen::Vector3d::Ones();
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = CUDA;
+ auto dense_cuda_solver = CUDADenseCholesky::Create(options);
+ ASSERT_NE(dense_cuda_solver, nullptr);
+ std::string error_string;
+ ASSERT_EQ(dense_cuda_solver->Solve(b.data(), nullptr, &error_string),
+ LinearSolverTerminationType::FATAL_ERROR);
+}
+
+TEST(CUDADenseCholesky, Randomized1600x1600Tests) {
+ const int kNumCols = 1600;
+ using LhsType = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>;
+ using RhsType = Eigen::Matrix<double, Eigen::Dynamic, 1>;
+ using SolutionType = Eigen::Matrix<double, Eigen::Dynamic, 1>;
+
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = ceres::CUDA;
+ std::unique_ptr<DenseCholesky> dense_cholesky =
+ CUDADenseCholesky::Create(options);
+
+ const int kNumTrials = 20;
+ for (int i = 0; i < kNumTrials; ++i) {
+ LhsType lhs = LhsType::Random(kNumCols, kNumCols);
+ lhs = lhs.transpose() * lhs;
+ lhs += 1e-3 * LhsType::Identity(kNumCols, kNumCols);
+ SolutionType x_expected = SolutionType::Random(kNumCols);
+ RhsType rhs = lhs * x_expected;
+ SolutionType x_computed = SolutionType::Zero(kNumCols);
+ // Sanity check the random matrix sizes.
+ EXPECT_EQ(lhs.rows(), kNumCols);
+ EXPECT_EQ(lhs.cols(), kNumCols);
+ EXPECT_EQ(rhs.rows(), kNumCols);
+ EXPECT_EQ(rhs.cols(), 1);
+ EXPECT_EQ(x_expected.rows(), kNumCols);
+ EXPECT_EQ(x_expected.cols(), 1);
+ EXPECT_EQ(x_computed.rows(), kNumCols);
+ EXPECT_EQ(x_computed.cols(), 1);
+ LinearSolver::Summary summary;
+ summary.termination_type = dense_cholesky->FactorAndSolve(
+ kNumCols, lhs.data(), rhs.data(), x_computed.data(), &summary.message);
+ ASSERT_EQ(summary.termination_type, LinearSolverTerminationType::SUCCESS);
+ static const double kEpsilon = std::numeric_limits<double>::epsilon() * 3e5;
+ ASSERT_NEAR(
+ (x_computed - x_expected).norm() / x_expected.norm(), 0.0, kEpsilon);
+ }
+}
+
+TEST(CUDADenseCholeskyMixedPrecision, InvalidOptionsOnCreate) {
+ {
+ // Did not ask for CUDA, and did not ask for mixed precision.
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ auto solver = CUDADenseCholeskyMixedPrecision::Create(options);
+ ASSERT_EQ(solver, nullptr);
+ }
+ {
+ // Asked for CUDA, but did not ask for mixed precision.
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = ceres::CUDA;
+ auto solver = CUDADenseCholeskyMixedPrecision::Create(options);
+ ASSERT_EQ(solver, nullptr);
+ }
+}
+
+// Tests the CUDA Cholesky solver with a simple 4x4 matrix.
+TEST(CUDADenseCholeskyMixedPrecision, Cholesky4x4Matrix1Step) {
+ Eigen::Matrix4d A;
+ // clang-format off
+ // A common test Cholesky decomposition test matrix, see :
+ // https://en.wikipedia.org/w/index.php?title=Cholesky_decomposition&oldid=1080607368#Example
+ A << 4, 12, -16, 0,
+ 12, 37, -43, 0,
+ -16, -43, 98, 0,
+ 0, 0, 0, 1;
+ // clang-format on
+
+ const Eigen::Vector4d b = Eigen::Vector4d::Ones();
+ LinearSolver::Options options;
+ options.max_num_refinement_iterations = 0;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = CUDA;
+ options.use_mixed_precision_solves = true;
+ auto solver = CUDADenseCholeskyMixedPrecision::Create(options);
+ ASSERT_NE(solver, nullptr);
+ std::string error_string;
+ ASSERT_EQ(solver->Factorize(A.cols(), A.data(), &error_string),
+ LinearSolverTerminationType::SUCCESS);
+ Eigen::Vector4d x = Eigen::Vector4d::Zero();
+ ASSERT_EQ(solver->Solve(b.data(), x.data(), &error_string),
+ LinearSolverTerminationType::SUCCESS);
+ // A single step of the mixed precision solver will be equivalent to solving
+ // in low precision (FP32). Hence the tolerance is defined w.r.t. FP32 epsilon
+ // instead of FP64 epsilon.
+ static const double kEpsilon = std::numeric_limits<float>::epsilon() * 10;
+ const Eigen::Vector4d x_expected(113.75 / 3.0, -31.0 / 3.0, 5.0 / 3.0, 1.0);
+ EXPECT_NEAR((x[0] - x_expected[0]) / x_expected[0], 0.0, kEpsilon);
+ EXPECT_NEAR((x[1] - x_expected[1]) / x_expected[1], 0.0, kEpsilon);
+ EXPECT_NEAR((x[2] - x_expected[2]) / x_expected[2], 0.0, kEpsilon);
+ EXPECT_NEAR((x[3] - x_expected[3]) / x_expected[3], 0.0, kEpsilon);
+}
+
+// Tests the CUDA Cholesky solver with a simple 4x4 matrix.
+TEST(CUDADenseCholeskyMixedPrecision, Cholesky4x4Matrix4Steps) {
+ Eigen::Matrix4d A;
+ // clang-format off
+ A << 4, 12, -16, 0,
+ 12, 37, -43, 0,
+ -16, -43, 98, 0,
+ 0, 0, 0, 1;
+ // clang-format on
+
+ const Eigen::Vector4d b = Eigen::Vector4d::Ones();
+ LinearSolver::Options options;
+ options.max_num_refinement_iterations = 3;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = CUDA;
+ options.use_mixed_precision_solves = true;
+ auto solver = CUDADenseCholeskyMixedPrecision::Create(options);
+ ASSERT_NE(solver, nullptr);
+ std::string error_string;
+ ASSERT_EQ(solver->Factorize(A.cols(), A.data(), &error_string),
+ LinearSolverTerminationType::SUCCESS);
+ Eigen::Vector4d x = Eigen::Vector4d::Zero();
+ ASSERT_EQ(solver->Solve(b.data(), x.data(), &error_string),
+ LinearSolverTerminationType::SUCCESS);
+ // The error does not reduce beyond four iterations, and stagnates at this
+ // level of precision.
+ static const double kEpsilon = std::numeric_limits<double>::epsilon() * 100;
+ const Eigen::Vector4d x_expected(113.75 / 3.0, -31.0 / 3.0, 5.0 / 3.0, 1.0);
+ EXPECT_NEAR((x[0] - x_expected[0]) / x_expected[0], 0.0, kEpsilon);
+ EXPECT_NEAR((x[1] - x_expected[1]) / x_expected[1], 0.0, kEpsilon);
+ EXPECT_NEAR((x[2] - x_expected[2]) / x_expected[2], 0.0, kEpsilon);
+ EXPECT_NEAR((x[3] - x_expected[3]) / x_expected[3], 0.0, kEpsilon);
+}
+
+TEST(CUDADenseCholeskyMixedPrecision, Randomized1600x1600Tests) {
+ const int kNumCols = 1600;
+ using LhsType = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>;
+ using RhsType = Eigen::Matrix<double, Eigen::Dynamic, 1>;
+ using SolutionType = Eigen::Matrix<double, Eigen::Dynamic, 1>;
+
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = ceres::CUDA;
+ options.use_mixed_precision_solves = true;
+ options.max_num_refinement_iterations = 20;
+ std::unique_ptr<CUDADenseCholeskyMixedPrecision> dense_cholesky =
+ CUDADenseCholeskyMixedPrecision::Create(options);
+
+ const int kNumTrials = 20;
+ for (int i = 0; i < kNumTrials; ++i) {
+ LhsType lhs = LhsType::Random(kNumCols, kNumCols);
+ lhs = lhs.transpose() * lhs;
+ lhs += 1e-3 * LhsType::Identity(kNumCols, kNumCols);
+ SolutionType x_expected = SolutionType::Random(kNumCols);
+ RhsType rhs = lhs * x_expected;
+ SolutionType x_computed = SolutionType::Zero(kNumCols);
+ // Sanity check the random matrix sizes.
+ EXPECT_EQ(lhs.rows(), kNumCols);
+ EXPECT_EQ(lhs.cols(), kNumCols);
+ EXPECT_EQ(rhs.rows(), kNumCols);
+ EXPECT_EQ(rhs.cols(), 1);
+ EXPECT_EQ(x_expected.rows(), kNumCols);
+ EXPECT_EQ(x_expected.cols(), 1);
+ EXPECT_EQ(x_computed.rows(), kNumCols);
+ EXPECT_EQ(x_computed.cols(), 1);
+ LinearSolver::Summary summary;
+ summary.termination_type = dense_cholesky->FactorAndSolve(
+ kNumCols, lhs.data(), rhs.data(), x_computed.data(), &summary.message);
+ ASSERT_EQ(summary.termination_type, LinearSolverTerminationType::SUCCESS);
+ static const double kEpsilon = std::numeric_limits<double>::epsilon() * 1e6;
+ ASSERT_NEAR(
+ (x_computed - x_expected).norm() / x_expected.norm(), 0.0, kEpsilon);
+ }
+}
+
+#endif // CERES_NO_CUDA
+
+} // namespace ceres::internal
diff --git a/internal/ceres/cuda_dense_qr_test.cc b/internal/ceres/cuda_dense_qr_test.cc
new file mode 100644
index 0000000..b1f25e2
--- /dev/null
+++ b/internal/ceres/cuda_dense_qr_test.cc
@@ -0,0 +1,177 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#include <string>
+
+#include "ceres/dense_qr.h"
+#include "ceres/internal/eigen.h"
+#include "glog/logging.h"
+#include "gtest/gtest.h"
+
+namespace ceres::internal {
+
+#ifndef CERES_NO_CUDA
+
+TEST(CUDADenseQR, InvalidOptionOnCreate) {
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ auto dense_cuda_solver = CUDADenseQR::Create(options);
+ EXPECT_EQ(dense_cuda_solver, nullptr);
+}
+
+// Tests the CUDA QR solver with a simple 4x4 matrix.
+TEST(CUDADenseQR, QR4x4Matrix) {
+ Eigen::Matrix4d A;
+ // clang-format off
+ A << 4, 12, -16, 0,
+ 12, 37, -43, 0,
+ -16, -43, 98, 0,
+ 0, 0, 0, 1;
+ // clang-format on
+ const Eigen::Vector4d b = Eigen::Vector4d::Ones();
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = CUDA;
+ auto dense_cuda_solver = CUDADenseQR::Create(options);
+ ASSERT_NE(dense_cuda_solver, nullptr);
+ std::string error_string;
+ ASSERT_EQ(
+ dense_cuda_solver->Factorize(A.rows(), A.cols(), A.data(), &error_string),
+ LinearSolverTerminationType::SUCCESS);
+ Eigen::Vector4d x = Eigen::Vector4d::Zero();
+ ASSERT_EQ(dense_cuda_solver->Solve(b.data(), x.data(), &error_string),
+ LinearSolverTerminationType::SUCCESS);
+ // Empirically observed accuracy of cuSolverDN's QR solver.
+ const double kEpsilon = std::numeric_limits<double>::epsilon() * 1500;
+ const Eigen::Vector4d x_expected(113.75 / 3.0, -31.0 / 3.0, 5.0 / 3.0, 1.0);
+ EXPECT_NEAR((x - x_expected).norm() / x_expected.norm(), 0.0, kEpsilon);
+}
+
+// Tests the CUDA QR solver with a simple 4x2 matrix.
+TEST(CUDADenseQR, QR4x2Matrix) {
+ Eigen::Matrix<double, 4, 2> A;
+ // clang-format off
+ A << 4, 12,
+ 12, 37,
+ -16, -43,
+ 0, 0;
+ // clang-format on
+
+ const std::vector<double> b(4, 1.0);
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = CUDA;
+ auto dense_cuda_solver = CUDADenseQR::Create(options);
+ ASSERT_NE(dense_cuda_solver, nullptr);
+ std::string error_string;
+ ASSERT_EQ(
+ dense_cuda_solver->Factorize(A.rows(), A.cols(), A.data(), &error_string),
+ LinearSolverTerminationType::SUCCESS);
+ std::vector<double> x(2, 0);
+ ASSERT_EQ(dense_cuda_solver->Solve(b.data(), x.data(), &error_string),
+ LinearSolverTerminationType::SUCCESS);
+ // Empirically observed accuracy of cuSolverDN's QR solver.
+ const double kEpsilon = std::numeric_limits<double>::epsilon() * 10;
+ // Solution values computed with Octave.
+ const Eigen::Vector2d x_expected(-1.143410852713177, 0.4031007751937981);
+ EXPECT_NEAR((x[0] - x_expected[0]) / x_expected[0], 0.0, kEpsilon);
+ EXPECT_NEAR((x[1] - x_expected[1]) / x_expected[1], 0.0, kEpsilon);
+}
+
+TEST(CUDADenseQR, MustFactorizeBeforeSolve) {
+ const Eigen::Vector3d b = Eigen::Vector3d::Ones();
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = CUDA;
+ auto dense_cuda_solver = CUDADenseQR::Create(options);
+ ASSERT_NE(dense_cuda_solver, nullptr);
+ std::string error_string;
+ ASSERT_EQ(dense_cuda_solver->Solve(b.data(), nullptr, &error_string),
+ LinearSolverTerminationType::FATAL_ERROR);
+}
+
+TEST(CUDADenseQR, Randomized1600x100Tests) {
+ const int kNumRows = 1600;
+ const int kNumCols = 100;
+ using LhsType = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>;
+ using RhsType = Eigen::Matrix<double, Eigen::Dynamic, 1>;
+ using SolutionType = Eigen::Matrix<double, Eigen::Dynamic, 1>;
+
+ LinearSolver::Options options;
+ ContextImpl context;
+ options.context = &context;
+ std::string error;
+ EXPECT_TRUE(context.InitCuda(&error)) << error;
+ options.dense_linear_algebra_library_type = ceres::CUDA;
+ std::unique_ptr<DenseQR> dense_qr = CUDADenseQR::Create(options);
+
+ const int kNumTrials = 20;
+ for (int i = 0; i < kNumTrials; ++i) {
+ LhsType lhs = LhsType::Random(kNumRows, kNumCols);
+ SolutionType x_expected = SolutionType::Random(kNumCols);
+ RhsType rhs = lhs * x_expected;
+ SolutionType x_computed = SolutionType::Zero(kNumCols);
+ // Sanity check the random matrix sizes.
+ EXPECT_EQ(lhs.rows(), kNumRows);
+ EXPECT_EQ(lhs.cols(), kNumCols);
+ EXPECT_EQ(rhs.rows(), kNumRows);
+ EXPECT_EQ(rhs.cols(), 1);
+ EXPECT_EQ(x_expected.rows(), kNumCols);
+ EXPECT_EQ(x_expected.cols(), 1);
+ EXPECT_EQ(x_computed.rows(), kNumCols);
+ EXPECT_EQ(x_computed.cols(), 1);
+ LinearSolver::Summary summary;
+ summary.termination_type = dense_qr->FactorAndSolve(kNumRows,
+ kNumCols,
+ lhs.data(),
+ rhs.data(),
+ x_computed.data(),
+ &summary.message);
+ ASSERT_EQ(summary.termination_type, LinearSolverTerminationType::SUCCESS);
+ ASSERT_NEAR((x_computed - x_expected).norm() / x_expected.norm(),
+ 0.0,
+ std::numeric_limits<double>::epsilon() * 400);
+ }
+}
+#endif // CERES_NO_CUDA
+
+} // namespace ceres::internal
diff --git a/internal/ceres/cuda_kernels_bsm_to_crs.cu.cc b/internal/ceres/cuda_kernels_bsm_to_crs.cu.cc
new file mode 100644
index 0000000..ee574f0
--- /dev/null
+++ b/internal/ceres/cuda_kernels_bsm_to_crs.cu.cc
@@ -0,0 +1,477 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#include "ceres/cuda_kernels_bsm_to_crs.h"
+
+#include <cuda_runtime.h>
+#include <thrust/execution_policy.h>
+#include <thrust/scan.h>
+
+#include "ceres/block_structure.h"
+#include "ceres/cuda_kernels_utils.h"
+
+namespace ceres {
+namespace internal {
+
+namespace {
+inline auto ThrustCudaStreamExecutionPolicy(cudaStream_t stream) {
+ // par_nosync execution policy was added in Thrust 1.16
+ // https://github.com/NVIDIA/thrust/blob/main/CHANGELOG.md#thrust-1160
+#if THRUST_VERSION < 101700
+ return thrust::cuda::par.on(stream);
+#else
+ return thrust::cuda::par_nosync.on(stream);
+#endif
+}
+
+void* CudaMalloc(size_t size,
+ cudaStream_t stream,
+ bool memory_pools_supported) {
+ void* data = nullptr;
+ // The stream-ordered allocation API is available since CUDA 11.2, but might
+ // not be implemented by a particular device.
+#if CUDART_VERSION < 11020
+#warning \
+ "Stream-ordered allocations are unavailable, consider updating CUDA toolkit to version 11.2+"
+ cudaMalloc(&data, size);
+#else
+ if (memory_pools_supported) {
+ cudaMallocAsync(&data, size, stream);
+ } else {
+ cudaMalloc(&data, size);
+ }
+#endif
+ return data;
+}
+
+void CudaFree(void* data, cudaStream_t stream, bool memory_pools_supported) {
+ // The stream-ordered allocation API is available since CUDA 11.2, but might
+ // not be implemented by a particular device.
+#if CUDART_VERSION < 11020
+#warning \
+ "Stream-ordered allocations are unavailable, consider updating CUDA toolkit to version 11.2+"
+ cudaFree(data);
+#else
+ if (memory_pools_supported) {
+ cudaFreeAsync(data, stream);
+ } else {
+ cudaFree(data);
+ }
+#endif
+}
+template <typename T>
+T* CudaAllocate(size_t num_elements,
+ cudaStream_t stream,
+ bool memory_pools_supported) {
+ T* data = static_cast<T*>(
+ CudaMalloc(num_elements * sizeof(T), stream, memory_pools_supported));
+ return data;
+}
+} // namespace
+
+// Fill row block id and nnz for each row using block-sparse structure
+// represented by a set of flat arrays.
+// Inputs:
+// - num_row_blocks: number of row-blocks in block-sparse structure
+// - first_cell_in_row_block: index of the first cell of the row-block; size:
+// num_row_blocks + 1
+// - cells: cells of block-sparse structure as a continuous array
+// - row_blocks: row blocks of block-sparse structure stored sequentially
+// - col_blocks: column blocks of block-sparse structure stored sequentially
+// Outputs:
+// - rows: rows[i + 1] will contain the number of non-zeros in the i-th row,
+// rows[0] will be set to 0; rows are filled with a shift by one element in
+// order to obtain the row-index array of the CRS matrix via an inclusive scan
+// afterwards
+// - row_block_ids: row_block_ids[i] will be set to the index of the row-block
+// that contains the i-th row.
+// Computation is performed row-block-wise.
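+//
+// Illustrative example: two row-blocks of heights {2, 1} with per-row
+// non-zero counts {3, 3, 2} produce
+//   rows = [0, 3, 3, 2] and row_block_ids = [0, 0, 1];
+// the inclusive scan performed by the caller then yields the CRS row-index
+// array rows = [0, 3, 6, 8].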
+template <bool partitioned = false>
+__global__ void RowBlockIdAndNNZ(
+ const int num_row_blocks,
+ const int num_col_blocks_e,
+ const int num_row_blocks_e,
+ const int* __restrict__ first_cell_in_row_block,
+ const Cell* __restrict__ cells,
+ const Block* __restrict__ row_blocks,
+ const Block* __restrict__ col_blocks,
+ int* __restrict__ rows_e,
+ int* __restrict__ rows_f,
+ int* __restrict__ row_block_ids) {
+ const int row_block_id = blockIdx.x * blockDim.x + threadIdx.x;
+ if (row_block_id > num_row_blocks) {
+ // No synchronization is performed in this kernel, thus it is safe to return
+ return;
+ }
+ if (row_block_id == num_row_blocks) {
+ // one extra thread sets the first element
+ rows_f[0] = 0;
+ if constexpr (partitioned) {
+ rows_e[0] = 0;
+ }
+ return;
+ }
+ const auto& row_block = row_blocks[row_block_id];
+ auto first_cell = cells + first_cell_in_row_block[row_block_id];
+ const auto last_cell = cells + first_cell_in_row_block[row_block_id + 1];
+ [[maybe_unused]] int row_nnz_e = 0;
+ if (partitioned && row_block_id < num_row_blocks_e) {
+ // First cell is a cell from E
+ row_nnz_e = col_blocks[first_cell->block_id].size;
+ ++first_cell;
+ }
+ int row_nnz_f = 0;
+ for (auto cell = first_cell; cell < last_cell; ++cell) {
+ row_nnz_f += col_blocks[cell->block_id].size;
+ }
+ const int first_row = row_block.position;
+ const int last_row = first_row + row_block.size;
+ for (int i = first_row; i < last_row; ++i) {
+ if constexpr (partitioned) {
+ rows_e[i + 1] = row_nnz_e;
+ }
+ rows_f[i + 1] = row_nnz_f;
+ row_block_ids[i] = row_block_id;
+ }
+}
+
+// Row-wise creation of CRS structure
+// Inputs:
+// - num_rows: number of rows in matrix
+// - first_cell_in_row_block: index of the first cell of the row-block; size:
+// num_row_blocks + 1
+// - cells: cells of block-sparse structure as a continuous array
+// - row_blocks: row blocks of block-sparse structure stored sequentially
+// - col_blocks: column blocks of block-sparse structure stored sequentially
+// - row_block_ids: index of row-block that corresponds to row
+// - rows: row-index array of CRS structure
+// Outputs:
+// - cols: column-index array of CRS structure
+// Computation is performed row-wise.
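+//
+// Illustrative example: a row whose cells lie in column blocks at positions
+// {10, 15} with sizes {2, 3} gets the consecutive column indices
+// {10, 11, 15, 16, 17} written into cols (shifted by num_cols_e for the
+// F sub-matrix in the partitioned case).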
+template <bool partitioned>
+__global__ void ComputeColumns(const int num_rows,
+ const int num_row_blocks_e,
+ const int num_col_blocks_e,
+ const int* __restrict__ first_cell_in_row_block,
+ const Cell* __restrict__ cells,
+ const Block* __restrict__ row_blocks,
+ const Block* __restrict__ col_blocks,
+ const int* __restrict__ row_block_ids,
+ const int* __restrict__ rows_e,
+ int* __restrict__ cols_e,
+ const int* __restrict__ rows_f,
+ int* __restrict__ cols_f) {
+ const int row = blockIdx.x * blockDim.x + threadIdx.x;
+ if (row >= num_rows) {
+ // No synchronization is performed in this kernel, thus it is safe to return
+ return;
+ }
+ const int row_block_id = row_block_ids[row];
+ // position in crs matrix
+ auto first_cell = cells + first_cell_in_row_block[row_block_id];
+ const auto last_cell = cells + first_cell_in_row_block[row_block_id + 1];
+ const int num_cols_e = col_blocks[num_col_blocks_e].position;
+ // For each cell of the row-block, only the current row is filled
+ if (partitioned && row_block_id < num_row_blocks_e) {
+ // The first cell is cell from E
+ const auto& col_block = col_blocks[first_cell->block_id];
+ const int col_block_size = col_block.size;
+ int column_idx = col_block.position;
+ int crs_position_e = rows_e[row];
+ // Fill column indices for the elements of the current row within this cell
+ for (int i = 0; i < col_block_size; ++i, ++crs_position_e) {
+ cols_e[crs_position_e] = column_idx++;
+ }
+ ++first_cell;
+ }
+ int crs_position_f = rows_f[row];
+ for (auto cell = first_cell; cell < last_cell; ++cell) {
+ const auto& col_block = col_blocks[cell->block_id];
+ const int col_block_size = col_block.size;
+ int column_idx = col_block.position - num_cols_e;
+ // Fill column indices for the elements of the current row within this cell
+ for (int i = 0; i < col_block_size; ++i, ++crs_position_f) {
+ cols_f[crs_position_f] = column_idx++;
+ }
+ }
+}
+
+void FillCRSStructure(const int num_row_blocks,
+ const int num_rows,
+ const int* first_cell_in_row_block,
+ const Cell* cells,
+ const Block* row_blocks,
+ const Block* col_blocks,
+ int* rows,
+ int* cols,
+ cudaStream_t stream,
+ bool memory_pools_supported) {
+ // Set number of non-zeros per row in rows array and row to row-block map in
+ // row_block_ids array
+ int* row_block_ids =
+ CudaAllocate<int>(num_rows, stream, memory_pools_supported);
+ const int num_blocks_blockwise = NumBlocksInGrid(num_row_blocks + 1);
+ RowBlockIdAndNNZ<false><<<num_blocks_blockwise, kCudaBlockSize, 0, stream>>>(
+ num_row_blocks,
+ 0,
+ 0,
+ first_cell_in_row_block,
+ cells,
+ row_blocks,
+ col_blocks,
+ nullptr,
+ rows,
+ row_block_ids);
+ // Finalize the row-index array of the CRS structure by computing a prefix sum
+ thrust::inclusive_scan(
+ ThrustCudaStreamExecutionPolicy(stream), rows, rows + num_rows + 1, rows);
+
+ // Fill cols array of CRS structure
+ const int num_blocks_rowwise = NumBlocksInGrid(num_rows);
+ ComputeColumns<false><<<num_blocks_rowwise, kCudaBlockSize, 0, stream>>>(
+ num_rows,
+ 0,
+ 0,
+ first_cell_in_row_block,
+ cells,
+ row_blocks,
+ col_blocks,
+ row_block_ids,
+ nullptr,
+ nullptr,
+ rows,
+ cols);
+ CudaFree(row_block_ids, stream, memory_pools_supported);
+}
+
+void FillCRSStructurePartitioned(const int num_row_blocks,
+ const int num_rows,
+ const int num_row_blocks_e,
+ const int num_col_blocks_e,
+ const int num_nonzeros_e,
+ const int* first_cell_in_row_block,
+ const Cell* cells,
+ const Block* row_blocks,
+ const Block* col_blocks,
+ int* rows_e,
+ int* cols_e,
+ int* rows_f,
+ int* cols_f,
+ cudaStream_t stream,
+ bool memory_pools_supported) {
+ // Set number of non-zeros per row in rows array and row to row-block map in
+ // row_block_ids array
+ int* row_block_ids =
+ CudaAllocate<int>(num_rows, stream, memory_pools_supported);
+ const int num_blocks_blockwise = NumBlocksInGrid(num_row_blocks + 1);
+ RowBlockIdAndNNZ<true><<<num_blocks_blockwise, kCudaBlockSize, 0, stream>>>(
+ num_row_blocks,
+ num_col_blocks_e,
+ num_row_blocks_e,
+ first_cell_in_row_block,
+ cells,
+ row_blocks,
+ col_blocks,
+ rows_e,
+ rows_f,
+ row_block_ids);
+ // Finalize the row-index array of the CRS structure by computing a prefix sum
+ thrust::inclusive_scan(ThrustCudaStreamExecutionPolicy(stream),
+ rows_e,
+ rows_e + num_rows + 1,
+ rows_e);
+ thrust::inclusive_scan(ThrustCudaStreamExecutionPolicy(stream),
+ rows_f,
+ rows_f + num_rows + 1,
+ rows_f);
+
+ // Fill cols array of CRS structure
+ const int num_blocks_rowwise = NumBlocksInGrid(num_rows);
+ ComputeColumns<true><<<num_blocks_rowwise, kCudaBlockSize, 0, stream>>>(
+ num_rows,
+ num_row_blocks_e,
+ num_col_blocks_e,
+ first_cell_in_row_block,
+ cells,
+ row_blocks,
+ col_blocks,
+ row_block_ids,
+ rows_e,
+ cols_e,
+ rows_f,
+ cols_f);
+ CudaFree(row_block_ids, stream, memory_pools_supported);
+}
+
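+// Returns the first index in [first, last] whose element does not satisfy the
+// predicate, assuming that all elements satisfying the predicate precede those
+// that do not: a device-side analogue of std::partition_point that operates on
+// indices.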
+template <typename T, typename Predicate>
+__device__ int PartitionPoint(const T* data,
+ int first,
+ int last,
+ Predicate&& predicate) {
+ if (!predicate(data[first])) {
+ return first;
+ }
+ while (last - first > 1) {
+ const auto midpoint = first + (last - first) / 2;
+ if (predicate(data[midpoint])) {
+ first = midpoint;
+ } else {
+ last = midpoint;
+ }
+ }
+ return last;
+}
+
+// Element-wise reordering of block-sparse values.
+// - first_cell_in_row_block: index of the first cell of each row-block
+// - block_sparse_values: segment of block-sparse values starting at
+// block_sparse_offset and containing num_values values
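+//
+// Each thread handles a single value: it locates the owning row-block with a
+// binary search over row-block offsets, scans the cells of that row-block to
+// find the containing cell, and writes the value to its CRS position.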
+template <bool partitioned>
+__global__ void PermuteToCrsKernel(
+ const int block_sparse_offset,
+ const int num_values,
+ const int num_row_blocks,
+ const int num_row_blocks_e,
+ const int* __restrict__ first_cell_in_row_block,
+ const int* __restrict__ value_offset_row_block_f,
+ const Cell* __restrict__ cells,
+ const Block* __restrict__ row_blocks,
+ const Block* __restrict__ col_blocks,
+ const int* __restrict__ crs_rows,
+ const double* __restrict__ block_sparse_values,
+ double* __restrict__ crs_values) {
+ const int value_id = blockIdx.x * blockDim.x + threadIdx.x;
+ if (value_id >= num_values) {
+ return;
+ }
+ const int block_sparse_value_id = value_id + block_sparse_offset;
+ // Find the corresponding row-block with a binary search
+ const int row_block_id =
+ (partitioned
+ ? PartitionPoint(value_offset_row_block_f,
+ 0,
+ num_row_blocks,
+ [block_sparse_value_id] __device__(
+ const int row_block_offset) {
+ return row_block_offset <= block_sparse_value_id;
+ })
+ : PartitionPoint(first_cell_in_row_block,
+ 0,
+ num_row_blocks,
+ [cells, block_sparse_value_id] __device__(
+ const int row_block_offset) {
+ return cells[row_block_offset].position <=
+ block_sparse_value_id;
+ })) -
+ 1;
+ // Find cell and calculate offset within the row with a linear scan
+ const auto& row_block = row_blocks[row_block_id];
+ auto first_cell = cells + first_cell_in_row_block[row_block_id];
+ const auto last_cell = cells + first_cell_in_row_block[row_block_id + 1];
+ const int row_block_size = row_block.size;
+ int num_cols_before = 0;
+ if (partitioned && row_block_id < num_row_blocks_e) {
+ ++first_cell;
+ }
+ for (const Cell* cell = first_cell; cell < last_cell; ++cell) {
+ const auto& col_block = col_blocks[cell->block_id];
+ const int col_block_size = col_block.size;
+ const int cell_size = row_block_size * col_block_size;
+ if (cell->position + cell_size > block_sparse_value_id) {
+ const int pos_in_cell = block_sparse_value_id - cell->position;
+ const int row_in_cell = pos_in_cell / col_block_size;
+ const int col_in_cell = pos_in_cell % col_block_size;
+ const int row = row_in_cell + row_block.position;
+ crs_values[crs_rows[row] + num_cols_before + col_in_cell] =
+ block_sparse_values[value_id];
+ break;
+ }
+ num_cols_before += col_block_size;
+ }
+}
+
+void PermuteToCRS(const int block_sparse_offset,
+ const int num_values,
+ const int num_row_blocks,
+ const int* first_cell_in_row_block,
+ const Cell* cells,
+ const Block* row_blocks,
+ const Block* col_blocks,
+ const int* crs_rows,
+ const double* block_sparse_values,
+ double* crs_values,
+ cudaStream_t stream) {
+ const int num_blocks_valuewise = NumBlocksInGrid(num_values);
+ PermuteToCrsKernel<false>
+ <<<num_blocks_valuewise, kCudaBlockSize, 0, stream>>>(
+ block_sparse_offset,
+ num_values,
+ num_row_blocks,
+ 0,
+ first_cell_in_row_block,
+ nullptr,
+ cells,
+ row_blocks,
+ col_blocks,
+ crs_rows,
+ block_sparse_values,
+ crs_values);
+}
+
+void PermuteToCRSPartitionedF(const int block_sparse_offset,
+ const int num_values,
+ const int num_row_blocks,
+ const int num_row_blocks_e,
+ const int* first_cell_in_row_block,
+ const int* value_offset_row_block_f,
+ const Cell* cells,
+ const Block* row_blocks,
+ const Block* col_blocks,
+ const int* crs_rows,
+ const double* block_sparse_values,
+ double* crs_values,
+ cudaStream_t stream) {
+ const int num_blocks_valuewise = NumBlocksInGrid(num_values);
+ PermuteToCrsKernel<true><<<num_blocks_valuewise, kCudaBlockSize, 0, stream>>>(
+ block_sparse_offset,
+ num_values,
+ num_row_blocks,
+ num_row_blocks_e,
+ first_cell_in_row_block,
+ value_offset_row_block_f,
+ cells,
+ row_blocks,
+ col_blocks,
+ crs_rows,
+ block_sparse_values,
+ crs_values);
+}
+
+} // namespace internal
+} // namespace ceres
diff --git a/internal/ceres/cuda_kernels_bsm_to_crs.h b/internal/ceres/cuda_kernels_bsm_to_crs.h
new file mode 100644
index 0000000..27f4a25
--- /dev/null
+++ b/internal/ceres/cuda_kernels_bsm_to_crs.h
@@ -0,0 +1,113 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#ifndef CERES_INTERNAL_CUDA_KERNELS_BSM_TO_CRS_H_
+#define CERES_INTERNAL_CUDA_KERNELS_BSM_TO_CRS_H_
+
+#include "ceres/internal/config.h"
+
+#ifndef CERES_NO_CUDA
+
+#include "cuda_runtime.h"
+
+namespace ceres {
+namespace internal {
+struct Block;
+struct Cell;
+
+// Compute structure of CRS matrix using block-sparse structure.
+// Arrays corresponding to the CRS matrix are to be allocated by the caller.
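+// (rows must hold num_rows + 1 entries and cols one entry per non-zero, as
+// consumed by the kernels in cuda_kernels_bsm_to_crs.cu.cc.)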
+void FillCRSStructure(const int num_row_blocks,
+ const int num_rows,
+ const int* first_cell_in_row_block,
+ const Cell* cells,
+ const Block* row_blocks,
+ const Block* col_blocks,
+ int* rows,
+ int* cols,
+ cudaStream_t stream,
+ bool memory_pools_supported);
+
+// Compute structure of partitioned CRS matrix using block-sparse structure.
+// Arrays corresponding to the CRS matrices are to be allocated by the caller.
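+// (rows_e and rows_f each need num_rows + 1 entries; cols_e needs
+// num_nonzeros_e entries and cols_f one entry for each remaining non-zero.)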
+void FillCRSStructurePartitioned(const int num_row_blocks,
+ const int num_rows,
+ const int num_row_blocks_e,
+ const int num_col_blocks_e,
+ const int num_nonzeros_e,
+ const int* first_cell_in_row_block,
+ const Cell* cells,
+ const Block* row_blocks,
+ const Block* col_blocks,
+ int* rows_e,
+ int* cols_e,
+ int* rows_f,
+ int* cols_f,
+ cudaStream_t stream,
+ bool memory_pools_supported);
+
+// Permute a segment of values from a block-sparse matrix with sequential
+// layout to CRS order. The segment starts at block_sparse_offset and contains
+// num_values values.
+void PermuteToCRS(const int block_sparse_offset,
+ const int num_values,
+ const int num_row_blocks,
+ const int* first_cell_in_row_block,
+ const Cell* cells,
+ const Block* row_blocks,
+ const Block* col_blocks,
+ const int* crs_rows,
+ const double* block_sparse_values,
+ double* crs_values,
+ cudaStream_t stream);
+
+// Permute a segment of values from the F sub-matrix of a partitioned
+// block-sparse matrix with sequential layout to CRS order. The segment starts
+// at block_sparse_offset (including the offset induced by the values of the E
+// sub-matrix) and contains num_values values.
+void PermuteToCRSPartitionedF(const int block_sparse_offset,
+ const int num_values,
+ const int num_row_blocks,
+ const int num_row_blocks_e,
+ const int* first_cell_in_row_block,
+ const int* value_offset_row_block_f,
+ const Cell* cells,
+ const Block* row_blocks,
+ const Block* col_blocks,
+ const int* crs_rows,
+ const double* block_sparse_values,
+ double* crs_values,
+ cudaStream_t stream);
+
+} // namespace internal
+} // namespace ceres
+
+#endif // CERES_NO_CUDA
+
+#endif // CERES_INTERNAL_CUDA_KERNELS_BSM_TO_CRS_H_
diff --git a/internal/ceres/cuda_kernels_utils.h b/internal/ceres/cuda_kernels_utils.h
new file mode 100644
index 0000000..4a17bac
--- /dev/null
+++ b/internal/ceres/cuda_kernels_utils.h
@@ -0,0 +1,56 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#ifndef CERES_INTERNAL_CUDA_KERNELS_UTILS_H_
+#define CERES_INTERNAL_CUDA_KERNELS_UTILS_H_
+
+namespace ceres {
+namespace internal {
+
+// Parallel execution on a CUDA device requires splitting the job into blocks
+// of a fixed size. We use a block size of kCudaBlockSize for all kernels that
+// do not require a specific block size. As the CUDA Toolkit documentation
+// notes, such a value, "although arbitrary in this case, is a common choice";
+// it is determined by the warp size, maximum block size, and multiprocessor
+// sizes of recent GPUs. For complex kernels with significant register usage
+// and unusual memory patterns, the occupancy calculator API might provide
+// better performance. See "Occupancy Calculator" under the CUDA Toolkit
+// documentation.
+constexpr int kCudaBlockSize = 256;
+
+// Compute the number of blocks of kCudaBlockSize needed to span a 1-d grid of
+// the given size. Note that the 1-d grid dimension is limited to 2^31 - 1 in
+// CUDA, thus a signed int is used as the argument.
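+// For example, with kCudaBlockSize == 256, NumBlocksInGrid(1000) returns 4.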
+inline int NumBlocksInGrid(int size) {
+ return (size + kCudaBlockSize - 1) / kCudaBlockSize;
+}
+} // namespace internal
+} // namespace ceres
+
+#endif // CERES_INTERNAL_CUDA_KERNELS_UTILS_H_
diff --git a/internal/ceres/cuda_kernels_vector_ops.cu.cc b/internal/ceres/cuda_kernels_vector_ops.cu.cc
new file mode 100644
index 0000000..3199ca6
--- /dev/null
+++ b/internal/ceres/cuda_kernels_vector_ops.cu.cc
@@ -0,0 +1,123 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#include "ceres/cuda_kernels_vector_ops.h"
+
+#include <cuda_runtime.h>
+
+#include "ceres/cuda_kernels_utils.h"
+
+namespace ceres {
+namespace internal {
+
+template <typename SrcType, typename DstType>
+__global__ void TypeConversionKernel(const SrcType* __restrict__ input,
+ DstType* __restrict__ output,
+ const int size) {
+ const int i = blockIdx.x * blockDim.x + threadIdx.x;
+ if (i < size) {
+ output[i] = static_cast<DstType>(input[i]);
+ }
+}
+
+void CudaFP64ToFP32(const double* input,
+ float* output,
+ const int size,
+ cudaStream_t stream) {
+ const int num_blocks = NumBlocksInGrid(size);
+ TypeConversionKernel<double, float>
+ <<<num_blocks, kCudaBlockSize, 0, stream>>>(input, output, size);
+}
+
+void CudaFP32ToFP64(const float* input,
+ double* output,
+ const int size,
+ cudaStream_t stream) {
+ const int num_blocks = NumBlocksInGrid(size);
+ TypeConversionKernel<float, double>
+ <<<num_blocks, kCudaBlockSize, 0, stream>>>(input, output, size);
+}
+
+template <typename T>
+__global__ void SetZeroKernel(T* __restrict__ output, const int size) {
+ const int i = blockIdx.x * blockDim.x + threadIdx.x;
+ if (i < size) {
+ output[i] = T(0.0);
+ }
+}
+
+void CudaSetZeroFP32(float* output, const int size, cudaStream_t stream) {
+ const int num_blocks = NumBlocksInGrid(size);
+ SetZeroKernel<float><<<num_blocks, kCudaBlockSize, 0, stream>>>(output, size);
+}
+
+void CudaSetZeroFP64(double* output, const int size, cudaStream_t stream) {
+ const int num_blocks = NumBlocksInGrid(size);
+ SetZeroKernel<double>
+ <<<num_blocks, kCudaBlockSize, 0, stream>>>(output, size);
+}
+
+template <typename SrcType, typename DstType>
+__global__ void XPlusEqualsYKernel(DstType* __restrict__ x,
+ const SrcType* __restrict__ y,
+ const int size) {
+ const int i = blockIdx.x * blockDim.x + threadIdx.x;
+ if (i < size) {
+ x[i] = x[i] + DstType(y[i]);
+ }
+}
+
+void CudaDsxpy(double* x, float* y, const int size, cudaStream_t stream) {
+ const int num_blocks = NumBlocksInGrid(size);
+ XPlusEqualsYKernel<float, double>
+ <<<num_blocks, kCudaBlockSize, 0, stream>>>(x, y, size);
+}
+
+__global__ void CudaDtDxpyKernel(double* __restrict__ y,
+ const double* D,
+ const double* __restrict__ x,
+ const int size) {
+ const int i = blockIdx.x * blockDim.x + threadIdx.x;
+ if (i < size) {
+ y[i] = y[i] + D[i] * D[i] * x[i];
+ }
+}
+
+void CudaDtDxpy(double* y,
+ const double* D,
+ const double* x,
+ const int size,
+ cudaStream_t stream) {
+ const int num_blocks = NumBlocksInGrid(size);
+ CudaDtDxpyKernel<<<num_blocks, kCudaBlockSize, 0, stream>>>(y, D, x, size);
+}
+
+} // namespace internal
+} // namespace ceres
diff --git a/internal/ceres/cuda_kernels_vector_ops.h b/internal/ceres/cuda_kernels_vector_ops.h
new file mode 100644
index 0000000..9905657
--- /dev/null
+++ b/internal/ceres/cuda_kernels_vector_ops.h
@@ -0,0 +1,83 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#ifndef CERES_INTERNAL_CUDA_KERNELS_VECTOR_OPS_H_
+#define CERES_INTERNAL_CUDA_KERNELS_VECTOR_OPS_H_
+
+#include "ceres/internal/config.h"
+
+#ifndef CERES_NO_CUDA
+
+#include "cuda_runtime.h"
+
+namespace ceres {
+namespace internal {
+struct Block;
+struct Cell;
+
+// Convert an array of double (FP64) values to float (FP32). Both arrays must
+// already be on GPU memory.
+void CudaFP64ToFP32(const double* input,
+ float* output,
+ const int size,
+ cudaStream_t stream);
+
+// Convert an array of float (FP32) values to double (FP64). Both arrays must
+// already be on GPU memory.
+void CudaFP32ToFP64(const float* input,
+ double* output,
+ const int size,
+ cudaStream_t stream);
+
+// Set all elements of the array to the FP32 value 0. The array must be in GPU
+// memory.
+void CudaSetZeroFP32(float* output, const int size, cudaStream_t stream);
+
+// Set all elements of the array to the FP64 value 0. The array must be in GPU
+// memory.
+void CudaSetZeroFP64(double* output, const int size, cudaStream_t stream);
+
+// Compute x = x + double(y). Input array is float (FP32), output array is
+// double (FP64). Both arrays must already be on GPU memory.
+void CudaDsxpy(double* x, float* y, const int size, cudaStream_t stream);
+
+// Compute y[i] = y[i] + d[i]^2 x[i]. All arrays must already be on GPU memory.
+void CudaDtDxpy(double* y,
+ const double* D,
+ const double* x,
+ const int size,
+ cudaStream_t stream);
+
+} // namespace internal
+} // namespace ceres
+
+#endif // CERES_NO_CUDA
+
+#endif // CERES_INTERNAL_CUDA_KERNELS_VECTOR_OPS_H_
diff --git a/internal/ceres/cuda_kernels_vector_ops_test.cc b/internal/ceres/cuda_kernels_vector_ops_test.cc
new file mode 100644
index 0000000..e6116f7
--- /dev/null
+++ b/internal/ceres/cuda_kernels_vector_ops_test.cc
@@ -0,0 +1,198 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#include "ceres/cuda_kernels_vector_ops.h"
+
+#include <math.h>
+
+#include <limits>
+#include <string>
+#include <vector>
+
+#include "ceres/context_impl.h"
+#include "ceres/cuda_buffer.h"
+#include "ceres/internal/config.h"
+#include "ceres/internal/eigen.h"
+#include "glog/logging.h"
+#include "gtest/gtest.h"
+
+namespace ceres {
+namespace internal {
+
+#ifndef CERES_NO_CUDA
+
+TEST(CudaFP64ToFP32, SimpleConversions) {
+ ContextImpl context;
+ std::string cuda_error;
+ EXPECT_TRUE(context.InitCuda(&cuda_error)) << cuda_error;
+ std::vector<double> fp64_cpu = {1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0};
+ CudaBuffer<double> fp64_gpu(&context);
+ fp64_gpu.CopyFromCpuVector(fp64_cpu);
+ CudaBuffer<float> fp32_gpu(&context);
+ fp32_gpu.Reserve(fp64_cpu.size());
+ CudaFP64ToFP32(fp64_gpu.data(),
+ fp32_gpu.data(),
+ fp64_cpu.size(),
+ context.DefaultStream());
+ std::vector<float> fp32_cpu(fp64_cpu.size());
+ fp32_gpu.CopyToCpu(fp32_cpu.data(), fp32_cpu.size());
+ for (int i = 0; i < fp32_cpu.size(); ++i) {
+ EXPECT_EQ(fp32_cpu[i], static_cast<float>(fp64_cpu[i]));
+ }
+}
+
+TEST(CudaFP64ToFP32, NumericallyExtremeValues) {
+ ContextImpl context;
+ std::string cuda_error;
+ EXPECT_TRUE(context.InitCuda(&cuda_error)) << cuda_error;
+ std::vector<double> fp64_cpu = {
+ DBL_MIN, 10.0 * DBL_MIN, DBL_MAX, 0.1 * DBL_MAX};
+ // First just make sure that the compiler has represented these values
+ // accurately as fp64.
+ EXPECT_GT(fp64_cpu[0], 0.0);
+ EXPECT_GT(fp64_cpu[1], 0.0);
+ EXPECT_TRUE(std::isfinite(fp64_cpu[2]));
+ EXPECT_TRUE(std::isfinite(fp64_cpu[3]));
+ CudaBuffer<double> fp64_gpu(&context);
+ fp64_gpu.CopyFromCpuVector(fp64_cpu);
+ CudaBuffer<float> fp32_gpu(&context);
+ fp32_gpu.Reserve(fp64_cpu.size());
+ CudaFP64ToFP32(fp64_gpu.data(),
+ fp32_gpu.data(),
+ fp64_cpu.size(),
+ context.DefaultStream());
+ std::vector<float> fp32_cpu(fp64_cpu.size());
+ fp32_gpu.CopyToCpu(fp32_cpu.data(), fp32_cpu.size());
+ EXPECT_EQ(fp32_cpu[0], 0.0f);
+ EXPECT_EQ(fp32_cpu[1], 0.0f);
+ EXPECT_EQ(fp32_cpu[2], std::numeric_limits<float>::infinity());
+ EXPECT_EQ(fp32_cpu[3], std::numeric_limits<float>::infinity());
+}
+
+TEST(CudaFP32ToFP64, SimpleConversions) {
+ ContextImpl context;
+ std::string cuda_error;
+ EXPECT_TRUE(context.InitCuda(&cuda_error)) << cuda_error;
+ std::vector<float> fp32_cpu = {1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0};
+ CudaBuffer<float> fp32_gpu(&context);
+ fp32_gpu.CopyFromCpuVector(fp32_cpu);
+ CudaBuffer<double> fp64_gpu(&context);
+ fp64_gpu.Reserve(fp32_cpu.size());
+ CudaFP32ToFP64(fp32_gpu.data(),
+ fp64_gpu.data(),
+ fp32_cpu.size(),
+ context.DefaultStream());
+ std::vector<double> fp64_cpu(fp32_cpu.size());
+ fp64_gpu.CopyToCpu(fp64_cpu.data(), fp64_cpu.size());
+ for (int i = 0; i < fp64_cpu.size(); ++i) {
+ EXPECT_EQ(fp64_cpu[i], static_cast<double>(fp32_cpu[i]));
+ }
+}
+
+TEST(CudaSetZeroFP32, NonZeroInput) {
+ ContextImpl context;
+ std::string cuda_error;
+ EXPECT_TRUE(context.InitCuda(&cuda_error)) << cuda_error;
+ std::vector<float> fp32_cpu = {1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0};
+ CudaBuffer<float> fp32_gpu(&context);
+ fp32_gpu.CopyFromCpuVector(fp32_cpu);
+ CudaSetZeroFP32(fp32_gpu.data(), fp32_cpu.size(), context.DefaultStream());
+ std::vector<float> fp32_cpu_zero(fp32_cpu.size());
+ fp32_gpu.CopyToCpu(fp32_cpu_zero.data(), fp32_cpu_zero.size());
+ for (int i = 0; i < fp32_cpu_zero.size(); ++i) {
+ EXPECT_EQ(fp32_cpu_zero[i], 0.0f);
+ }
+}
+
+TEST(CudaSetZeroFP64, NonZeroInput) {
+ ContextImpl context;
+ std::string cuda_error;
+ EXPECT_TRUE(context.InitCuda(&cuda_error)) << cuda_error;
+ std::vector<double> fp64_cpu = {1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0};
+ CudaBuffer<double> fp64_gpu(&context);
+ fp64_gpu.CopyFromCpuVector(fp64_cpu);
+ CudaSetZeroFP64(fp64_gpu.data(), fp64_cpu.size(), context.DefaultStream());
+ std::vector<double> fp64_cpu_zero(fp64_cpu.size());
+ fp64_gpu.CopyToCpu(fp64_cpu_zero.data(), fp64_cpu_zero.size());
+ for (int i = 0; i < fp64_cpu_zero.size(); ++i) {
+ EXPECT_EQ(fp64_cpu_zero[i], 0.0);
+ }
+}
+
+TEST(CudaDsxpy, DoubleValues) {
+ ContextImpl context;
+ std::string cuda_error;
+ EXPECT_TRUE(context.InitCuda(&cuda_error)) << cuda_error;
+ std::vector<float> fp32_cpu_a = {1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0};
+ std::vector<double> fp64_cpu_b = {
+ 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0};
+ CudaBuffer<float> fp32_gpu_a(&context);
+ fp32_gpu_a.CopyFromCpuVector(fp32_cpu_a);
+ CudaBuffer<double> fp64_gpu_b(&context);
+ fp64_gpu_b.CopyFromCpuVector(fp64_cpu_b);
+ CudaDsxpy(fp64_gpu_b.data(),
+ fp32_gpu_a.data(),
+ fp32_gpu_a.size(),
+ context.DefaultStream());
+ fp64_gpu_b.CopyToCpu(fp64_cpu_b.data(), fp64_cpu_b.size());
+ for (int i = 0; i < fp64_cpu_b.size(); ++i) {
+ EXPECT_DOUBLE_EQ(fp64_cpu_b[i], 2.0 * fp32_cpu_a[i]);
+ }
+}
+
+TEST(CudaDtDxpy, ComputeFourItems) {
+ ContextImpl context;
+ std::string cuda_error;
+ EXPECT_TRUE(context.InitCuda(&cuda_error)) << cuda_error;
+ std::vector<double> x_cpu = {1, 2, 3, 4};
+ std::vector<double> y_cpu = {4, 3, 2, 1};
+ std::vector<double> d_cpu = {10, 20, 30, 40};
+ CudaBuffer<double> x_gpu(&context);
+ x_gpu.CopyFromCpuVector(x_cpu);
+ CudaBuffer<double> y_gpu(&context);
+ y_gpu.CopyFromCpuVector(y_cpu);
+ CudaBuffer<double> d_gpu(&context);
+ d_gpu.CopyFromCpuVector(d_cpu);
+ CudaDtDxpy(y_gpu.data(),
+ d_gpu.data(),
+ x_gpu.data(),
+ y_gpu.size(),
+ context.DefaultStream());
+ y_gpu.CopyToCpu(y_cpu.data(), y_cpu.size());
+ EXPECT_DOUBLE_EQ(y_cpu[0], 4.0 + 10.0 * 10.0 * 1.0);
+ EXPECT_DOUBLE_EQ(y_cpu[1], 3.0 + 20.0 * 20.0 * 2.0);
+ EXPECT_DOUBLE_EQ(y_cpu[2], 2.0 + 30.0 * 30.0 * 3.0);
+ EXPECT_DOUBLE_EQ(y_cpu[3], 1.0 + 40.0 * 40.0 * 4.0);
+}
+
+#endif // CERES_NO_CUDA
+
+} // namespace internal
+} // namespace ceres
diff --git a/internal/ceres/cuda_partitioned_block_sparse_crs_view.cc b/internal/ceres/cuda_partitioned_block_sparse_crs_view.cc
new file mode 100644
index 0000000..c0c1dc8
--- /dev/null
+++ b/internal/ceres/cuda_partitioned_block_sparse_crs_view.cc
@@ -0,0 +1,152 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#include "ceres/cuda_partitioned_block_sparse_crs_view.h"
+
+#ifndef CERES_NO_CUDA
+
+#include "ceres/cuda_block_structure.h"
+#include "ceres/cuda_kernels_bsm_to_crs.h"
+
+namespace ceres::internal {
+
+CudaPartitionedBlockSparseCRSView::CudaPartitionedBlockSparseCRSView(
+ const BlockSparseMatrix& bsm,
+ const int num_col_blocks_e,
+ ContextImpl* context)
+ : context_(context) {
+ const auto& bs = *bsm.block_structure();
+ block_structure_ =
+ std::make_unique<CudaBlockSparseStructure>(bs, num_col_blocks_e, context);
+ // Determine number of non-zeros in left submatrix
+ // Row-blocks are at least 1 row high, thus we can use a temporary array of
+ // num_rows for ComputeNonZerosInColumnBlockSubMatrix; and later reuse it for
+ // FillCRSStructurePartitioned
+ const int num_rows = bsm.num_rows();
+ const int num_nonzeros_e = block_structure_->num_nonzeros_e();
+ const int num_nonzeros_f = bsm.num_nonzeros() - num_nonzeros_e;
+
+ const int num_cols_e = num_col_blocks_e < bs.cols.size()
+ ? bs.cols[num_col_blocks_e].position
+ : bsm.num_cols();
+ const int num_cols_f = bsm.num_cols() - num_cols_e;
+
+ CudaBuffer<int32_t> rows_e(context, num_rows + 1);
+ CudaBuffer<int32_t> cols_e(context, num_nonzeros_e);
+ CudaBuffer<int32_t> rows_f(context, num_rows + 1);
+ CudaBuffer<int32_t> cols_f(context, num_nonzeros_f);
+
+ num_row_blocks_e_ = block_structure_->num_row_blocks_e();
+ FillCRSStructurePartitioned(block_structure_->num_row_blocks(),
+ num_rows,
+ num_row_blocks_e_,
+ num_col_blocks_e,
+ num_nonzeros_e,
+ block_structure_->first_cell_in_row_block(),
+ block_structure_->cells(),
+ block_structure_->row_blocks(),
+ block_structure_->col_blocks(),
+ rows_e.data(),
+ cols_e.data(),
+ rows_f.data(),
+ cols_f.data(),
+ context->DefaultStream(),
+ context->is_cuda_memory_pools_supported_);
+ f_is_crs_compatible_ = block_structure_->IsCrsCompatible();
+ if (f_is_crs_compatible_) {
+ block_structure_ = nullptr;
+ } else {
+ streamed_buffer_ = std::make_unique<CudaStreamedBuffer<double>>(
+ context, kMaxTemporaryArraySize);
+ }
+ matrix_e_ = std::make_unique<CudaSparseMatrix>(
+ num_cols_e, std::move(rows_e), std::move(cols_e), context);
+ matrix_f_ = std::make_unique<CudaSparseMatrix>(
+ num_cols_f, std::move(rows_f), std::move(cols_f), context);
+
+ CHECK_EQ(bsm.num_nonzeros(),
+ matrix_e_->num_nonzeros() + matrix_f_->num_nonzeros());
+
+ UpdateValues(bsm);
+}
+
+void CudaPartitionedBlockSparseCRSView::UpdateValues(
+ const BlockSparseMatrix& bsm) {
+ if (f_is_crs_compatible_) {
+ CHECK_EQ(cudaSuccess,
+ cudaMemcpyAsync(matrix_e_->mutable_values(),
+ bsm.values(),
+ matrix_e_->num_nonzeros() * sizeof(double),
+ cudaMemcpyHostToDevice,
+ context_->DefaultStream()));
+
+ CHECK_EQ(cudaSuccess,
+ cudaMemcpyAsync(matrix_f_->mutable_values(),
+ bsm.values() + matrix_e_->num_nonzeros(),
+ matrix_f_->num_nonzeros() * sizeof(double),
+ cudaMemcpyHostToDevice,
+ context_->DefaultStream()));
+ return;
+ }
+ streamed_buffer_->CopyToGpu(
+ bsm.values(),
+ bsm.num_nonzeros(),
+ [block_structure = block_structure_.get(),
+ num_nonzeros_e = matrix_e_->num_nonzeros(),
+ num_row_blocks_e = num_row_blocks_e_,
+ values_f = matrix_f_->mutable_values(),
+ rows_f = matrix_f_->rows()](
+ const double* values, int num_values, int offset, auto stream) {
+ PermuteToCRSPartitionedF(num_nonzeros_e + offset,
+ num_values,
+ block_structure->num_row_blocks(),
+ num_row_blocks_e,
+ block_structure->first_cell_in_row_block(),
+ block_structure->value_offset_row_block_f(),
+ block_structure->cells(),
+ block_structure->row_blocks(),
+ block_structure->col_blocks(),
+ rows_f,
+ values,
+ values_f,
+ stream);
+ });
+ CHECK_EQ(cudaSuccess,
+ cudaMemcpyAsync(matrix_e_->mutable_values(),
+ bsm.values(),
+ matrix_e_->num_nonzeros() * sizeof(double),
+ cudaMemcpyHostToDevice,
+ context_->DefaultStream()));
+}
+
+} // namespace ceres::internal
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/cuda_partitioned_block_sparse_crs_view.h b/internal/ceres/cuda_partitioned_block_sparse_crs_view.h
new file mode 100644
index 0000000..3072dea
--- /dev/null
+++ b/internal/ceres/cuda_partitioned_block_sparse_crs_view.h
@@ -0,0 +1,111 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+//
+
+#ifndef CERES_INTERNAL_CUDA_PARTITIONED_BLOCK_SPARSE_CRS_VIEW_H_
+#define CERES_INTERNAL_CUDA_PARTITIONED_BLOCK_SPARSE_CRS_VIEW_H_
+
+#include "ceres/internal/config.h"
+
+#ifndef CERES_NO_CUDA
+
+#include <memory>
+
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/cuda_block_structure.h"
+#include "ceres/cuda_buffer.h"
+#include "ceres/cuda_sparse_matrix.h"
+#include "ceres/cuda_streamed_buffer.h"
+
+namespace ceres::internal {
+// We use the cuSPARSE library for SpMV operations. However, it supports
+// neither a block-sparse format with varying block sizes nor
+// submatrix-vector products. Thus, we perform the following operations in
+// order to compute products of partitioned block-sparse matrices and dense
+// vectors on the GPU:
+// - Once per block-sparse structure update:
+// - Compute CRS structures of left and right submatrices from block-sparse
+// structure
+// - Check if values of F sub-matrix can be copied without permutation
+// matrices
+// - Once per block-sparse values update:
+// - Copy values of E sub-matrix
+// - Permute or copy values of F sub-matrix
+//
+// It is assumed that the cells of the block-sparse matrix are laid out
+// sequentially in both sub-matrices, that each of the first num_row_blocks_e_
+// row blocks has exactly one cell in the E sub-matrix, and that there are no
+// E cells below the first num_row_blocks_e_ row blocks.
+//
+// This class avoids storing both CRS and block-sparse values in GPU memory.
+// Instead, block-sparse values are transferred to GPU memory as a disjoint set
+// of small contiguous segments while simultaneously permuting the values into
+// the correct order using the block structure.
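+//
+// A typical usage pattern looks roughly as follows (illustrative sketch; the
+// vector names are placeholders, see the accompanying tests for details):
+//   CudaPartitionedBlockSparseCRSView view(bsm, num_col_blocks_e, &context);
+//   view.matrix_e()->RightMultiplyAndAccumulate(x_e, &y);
+//   view.matrix_f()->RightMultiplyAndAccumulate(x_f, &y);
+//   // ... whenever the values of bsm change:
+//   view.UpdateValues(bsm);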
+class CERES_NO_EXPORT CudaPartitionedBlockSparseCRSView {
+ public:
+ // Initializes the internal CRS matrices and the block-sparse structure on
+ // the GPU from the values of the block-sparse matrix. The following objects
+ // are stored in GPU memory for the whole lifetime of the object:
+ // - matrix_e_: left CRS submatrix
+ // - matrix_f_: right CRS submatrix
+ // - block_structure_: copy of block-sparse structure on GPU
+ // - streamed_buffer_: helper for value updating
+ CudaPartitionedBlockSparseCRSView(const BlockSparseMatrix& bsm,
+ const int num_col_blocks_e,
+ ContextImpl* context);
+
+ // Update values of CRS submatrices using values of block-sparse matrix.
+ // Assumes that bsm has the same block-sparse structure as matrix that was
+ // used for construction.
+ void UpdateValues(const BlockSparseMatrix& bsm);
+
+ const CudaSparseMatrix* matrix_e() const { return matrix_e_.get(); }
+ const CudaSparseMatrix* matrix_f() const { return matrix_f_.get(); }
+ CudaSparseMatrix* mutable_matrix_e() { return matrix_e_.get(); }
+ CudaSparseMatrix* mutable_matrix_f() { return matrix_f_.get(); }
+
+ private:
+ // The value permutation kernel performs a single element-wise operation per
+ // thread, so performing the permutation in chunks of 8 megabytes (one million
+ // doubles) of block-sparse values seems reasonable
+ static constexpr int kMaxTemporaryArraySize = 1 * 1024 * 1024;
+ std::unique_ptr<CudaSparseMatrix> matrix_e_;
+ std::unique_ptr<CudaSparseMatrix> matrix_f_;
+ std::unique_ptr<CudaStreamedBuffer<double>> streamed_buffer_;
+ std::unique_ptr<CudaBlockSparseStructure> block_structure_;
+ bool f_is_crs_compatible_;
+ int num_row_blocks_e_;
+ ContextImpl* context_;
+};
+
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
+#endif // CERES_INTERNAL_CUDA_PARTITIONED_BLOCK_SPARSE_CRS_VIEW_H_
diff --git a/internal/ceres/cuda_partitioned_block_sparse_crs_view_test.cc b/internal/ceres/cuda_partitioned_block_sparse_crs_view_test.cc
new file mode 100644
index 0000000..ddfdeef
--- /dev/null
+++ b/internal/ceres/cuda_partitioned_block_sparse_crs_view_test.cc
@@ -0,0 +1,279 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#include "ceres/cuda_partitioned_block_sparse_crs_view.h"
+
+#include <glog/logging.h>
+#include <gtest/gtest.h>
+
+#ifndef CERES_NO_CUDA
+
+namespace ceres::internal {
+
+namespace {
+struct RandomPartitionedMatrixOptions {
+ int num_row_blocks_e;
+ int num_row_blocks_f;
+ int num_col_blocks_e;
+ int num_col_blocks_f;
+ int min_row_block_size;
+ int max_row_block_size;
+ int min_col_block_size;
+ int max_col_block_size;
+ double empty_f_probability;
+ double cell_probability_f;
+ int max_cells_f;
+};
+
+std::unique_ptr<BlockSparseMatrix> CreateRandomPartitionedMatrix(
+ const RandomPartitionedMatrixOptions& options, std::mt19937& rng) {
+ const int num_row_blocks =
+ std::max(options.num_row_blocks_e, options.num_row_blocks_f);
+ const int num_col_blocks =
+ options.num_col_blocks_e + options.num_col_blocks_f;
+
+ CompressedRowBlockStructure* block_structure =
+ new CompressedRowBlockStructure;
+ block_structure->cols.reserve(num_col_blocks);
+ block_structure->rows.reserve(num_row_blocks);
+
+ // Create column blocks
+ std::uniform_int_distribution<int> col_size(options.min_col_block_size,
+ options.max_col_block_size);
+ int num_cols = 0;
+ for (int i = 0; i < num_col_blocks; ++i) {
+ const int size = col_size(rng);
+ block_structure->cols.emplace_back(size, num_cols);
+ num_cols += size;
+ }
+
+ // Prepare column-block indices of E cells
+ std::vector<int> e_col_block_idx;
+ e_col_block_idx.reserve(options.num_row_blocks_e);
+ std::uniform_int_distribution<int> col_e(0, options.num_col_blocks_e - 1);
+ for (int i = 0; i < options.num_row_blocks_e; ++i) {
+ e_col_block_idx.emplace_back(col_e(rng));
+ }
+ std::sort(e_col_block_idx.begin(), e_col_block_idx.end());
+
+ // Prepare cell structure
+ std::uniform_int_distribution<int> row_size(options.min_row_block_size,
+ options.max_row_block_size);
+ std::uniform_real_distribution<double> uniform;
+ int num_rows = 0;
+ for (int i = 0; i < num_row_blocks; ++i) {
+ const int size = row_size(rng);
+ block_structure->rows.emplace_back();
+ auto& row = block_structure->rows.back();
+ row.block.size = size;
+ row.block.position = num_rows;
+ num_rows += size;
+ if (i < options.num_row_blocks_e) {
+ row.cells.emplace_back(e_col_block_idx[i], -1);
+ if (uniform(rng) < options.empty_f_probability) {
+ continue;
+ }
+ }
+ if (i >= options.num_row_blocks_f) continue;
+ const int cells_before = row.cells.size();
+ for (int j = options.num_col_blocks_e; j < num_col_blocks; ++j) {
+ if (uniform(rng) > options.cell_probability_f) {
+ continue;
+ }
+ row.cells.emplace_back(j, -1);
+ }
+ if (row.cells.size() > cells_before + options.max_cells_f) {
+ std::shuffle(row.cells.begin() + cells_before, row.cells.end(), rng);
+ row.cells.resize(cells_before + options.max_cells_f);
+ std::sort(
+ row.cells.begin(), row.cells.end(), [](const auto& a, const auto& b) {
+ return a.block_id < b.block_id;
+ });
+ }
+ }
+
+ // Fill positions in E sub-matrix
+ int num_nonzeros = 0;
+ for (int i = 0; i < options.num_row_blocks_e; ++i) {
+ CHECK_GE(block_structure->rows[i].cells.size(), 1);
+ block_structure->rows[i].cells[0].position = num_nonzeros;
+ const int col_block_size =
+ block_structure->cols[block_structure->rows[i].cells[0].block_id].size;
+ const int row_block_size = block_structure->rows[i].block.size;
+ num_nonzeros += row_block_size * col_block_size;
+ CHECK_GE(num_nonzeros, 0);
+ }
+ // Fill positions in F sub-matrix
+ for (int i = 0; i < options.num_row_blocks_f; ++i) {
+ const int row_block_size = block_structure->rows[i].block.size;
+ for (auto& cell : block_structure->rows[i].cells) {
+ if (cell.position >= 0) continue;
+ cell.position = num_nonzeros;
+ const int col_block_size = block_structure->cols[cell.block_id].size;
+ num_nonzeros += row_block_size * col_block_size;
+ CHECK_GE(num_nonzeros, 0);
+ }
+ }
+ // Populate values
+ auto bsm = std::make_unique<BlockSparseMatrix>(block_structure, true);
+ for (int i = 0; i < num_nonzeros; ++i) {
+ bsm->mutable_values()[i] = i + 1;
+ }
+ return bsm;
+}
+} // namespace
+
+class CudaPartitionedBlockSparseCRSViewTest : public ::testing::Test {
+ static constexpr int kNumColBlocksE = 456;
+
+ protected:
+ void SetUp() final {
+ std::string message;
+ CHECK(context_.InitCuda(&message))
+ << "InitCuda() failed because: " << message;
+
+ RandomPartitionedMatrixOptions options;
+ options.num_row_blocks_f = 123;
+ options.num_row_blocks_e = 456;
+ options.num_col_blocks_f = 123;
+ options.num_col_blocks_e = kNumColBlocksE;
+ options.min_row_block_size = 1;
+ options.max_row_block_size = 4;
+ options.min_col_block_size = 1;
+ options.max_col_block_size = 4;
+ options.empty_f_probability = .1;
+ options.cell_probability_f = .2;
+ options.max_cells_f = options.num_col_blocks_f;
+
+ std::mt19937 rng;
+ short_f_ = CreateRandomPartitionedMatrix(options, rng);
+
+ options.num_row_blocks_e = 123;
+ options.num_row_blocks_f = 456;
+ short_e_ = CreateRandomPartitionedMatrix(options, rng);
+
+ options.max_cells_f = 1;
+ options.num_row_blocks_e = options.num_row_blocks_f;
+ f_crs_compatible_ = CreateRandomPartitionedMatrix(options, rng);
+ }
+
+  void TestMatrix(const BlockSparseMatrix& A_) {
+    CudaPartitionedBlockSparseCRSView view(A_, kNumColBlocksE, &context_);
+
+ const int num_rows = A_.num_rows();
+ const int num_cols = A_.num_cols();
+
+ const auto& bs = *A_.block_structure();
+    const int num_cols_e = bs.cols[kNumColBlocksE].position;
+ const int num_cols_f = num_cols - num_cols_e;
+
+ auto matrix_e = view.matrix_e();
+ auto matrix_f = view.matrix_f();
+ ASSERT_EQ(matrix_e->num_cols(), num_cols_e);
+ ASSERT_EQ(matrix_e->num_rows(), num_rows);
+ ASSERT_EQ(matrix_f->num_cols(), num_cols_f);
+ ASSERT_EQ(matrix_f->num_rows(), num_rows);
+
+ Vector x(num_cols);
+ Vector x_left(num_cols_e);
+ Vector x_right(num_cols_f);
+ Vector y(num_rows);
+ CudaVector x_cuda(&context_, num_cols);
+ CudaVector x_left_cuda(&context_, num_cols_e);
+ CudaVector x_right_cuda(&context_, num_cols_f);
+ CudaVector y_cuda(&context_, num_rows);
+ Vector y_cuda_host(num_rows);
+
+ for (int i = 0; i < num_cols_e; ++i) {
+ x.setZero();
+ x_left.setZero();
+ y.setZero();
+ y_cuda.SetZero();
+ x[i] = 1.;
+ x_left[i] = 1.;
+ x_left_cuda.CopyFromCpu(x_left);
+ A_.RightMultiplyAndAccumulate(
+ x.data(), y.data(), &context_, std::thread::hardware_concurrency());
+ matrix_e->RightMultiplyAndAccumulate(x_left_cuda, &y_cuda);
+ y_cuda.CopyTo(&y_cuda_host);
+      // There is at most one non-zero product per row, thus we expect an
+      // exact match between the CPU and GPU results
+ EXPECT_EQ((y - y_cuda_host).squaredNorm(), 0.);
+ }
+    for (int i = num_cols_e; i < num_cols; ++i) {
+ x.setZero();
+ x_right.setZero();
+ y.setZero();
+ y_cuda.SetZero();
+ x[i] = 1.;
+ x_right[i - num_cols_e] = 1.;
+ x_right_cuda.CopyFromCpu(x_right);
+ A_.RightMultiplyAndAccumulate(
+ x.data(), y.data(), &context_, std::thread::hardware_concurrency());
+ matrix_f->RightMultiplyAndAccumulate(x_right_cuda, &y_cuda);
+ y_cuda.CopyTo(&y_cuda_host);
+      // There is at most one non-zero product per row, thus we expect an
+      // exact match between the CPU and GPU results
+ EXPECT_EQ((y - y_cuda_host).squaredNorm(), 0.);
+ }
+ }
+
+  // The E sub-matrix might have fewer row-blocks with cells than the F
+  // sub-matrix. This test matrix checks whether that case is handled properly.
+ std::unique_ptr<BlockSparseMatrix> short_e_;
+  // In the case of a non-CRS-compatible F matrix, permuting values from block
+  // order to CRS order involves a binary search over the row-blocks of F.
+  // Having many row-blocks with no F cells is an edge case for this algorithm.
+ std::unique_ptr<BlockSparseMatrix> short_f_;
+  // When the F matrix is CRS-compatible, updating the values of the
+  // partitioned matrix view reduces to two host->device memcopies and uses a
+  // separate code path.
+ std::unique_ptr<BlockSparseMatrix> f_crs_compatible_;
+
+ ContextImpl context_;
+};
+
+TEST_F(CudaPartitionedBlockSparseCRSViewTest, CreateUpdateValuesShortE) {
+ TestMatrix(*short_e_);
+}
+
+TEST_F(CudaPartitionedBlockSparseCRSViewTest, CreateUpdateValuesShortF) {
+ TestMatrix(*short_f_);
+}
+
+TEST_F(CudaPartitionedBlockSparseCRSViewTest,
+ CreateUpdateValuesCrsCompatibleF) {
+ TestMatrix(*f_crs_compatible_);
+}
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/cuda_sparse_matrix.cc b/internal/ceres/cuda_sparse_matrix.cc
new file mode 100644
index 0000000..33685a4
--- /dev/null
+++ b/internal/ceres/cuda_sparse_matrix.cc
@@ -0,0 +1,226 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+//
+// A CUDA sparse matrix linear operator.
+
+// This include must come before any #ifndef check on Ceres compile options.
+// clang-format off
+#include "ceres/internal/config.h"
+// clang-format on
+
+#include "ceres/cuda_sparse_matrix.h"
+
+#include <math.h>
+
+#include <memory>
+
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/compressed_row_sparse_matrix.h"
+#include "ceres/context_impl.h"
+#include "ceres/crs_matrix.h"
+#include "ceres/internal/export.h"
+#include "ceres/types.h"
+#include "ceres/wall_time.h"
+
+#ifndef CERES_NO_CUDA
+
+#include "ceres/cuda_buffer.h"
+#include "ceres/cuda_kernels_vector_ops.h"
+#include "ceres/cuda_vector.h"
+#include "cuda_runtime_api.h"
+#include "cusparse.h"
+
+namespace ceres::internal {
+namespace {
+// Starting in CUDA 11.2.1, CUSPARSE_MV_ALG_DEFAULT was deprecated in favor of
+// CUSPARSE_SPMV_ALG_DEFAULT.
+#if CUDART_VERSION >= 11021
+const auto kSpMVAlgorithm = CUSPARSE_SPMV_ALG_DEFAULT;
+#else // CUDART_VERSION >= 11021
+const auto kSpMVAlgorithm = CUSPARSE_MV_ALG_DEFAULT;
+#endif // CUDART_VERSION >= 11021
+size_t GetTempBufferSizeForOp(const cusparseHandle_t& handle,
+ const cusparseOperation_t op,
+ const cusparseDnVecDescr_t& x,
+ const cusparseDnVecDescr_t& y,
+ const cusparseSpMatDescr_t& A) {
+ size_t buffer_size;
+ const double alpha = 1.0;
+ const double beta = 1.0;
+ CHECK_NE(A, nullptr);
+ CHECK_EQ(cusparseSpMV_bufferSize(handle,
+ op,
+ &alpha,
+ A,
+ x,
+ &beta,
+ y,
+ CUDA_R_64F,
+ kSpMVAlgorithm,
+ &buffer_size),
+ CUSPARSE_STATUS_SUCCESS);
+ return buffer_size;
+}
+
+size_t GetTempBufferSize(const cusparseHandle_t& handle,
+ const cusparseDnVecDescr_t& left,
+ const cusparseDnVecDescr_t& right,
+ const cusparseSpMatDescr_t& A) {
+ CHECK_NE(A, nullptr);
+ return std::max(GetTempBufferSizeForOp(
+ handle, CUSPARSE_OPERATION_NON_TRANSPOSE, right, left, A),
+ GetTempBufferSizeForOp(
+ handle, CUSPARSE_OPERATION_TRANSPOSE, left, right, A));
+}
+} // namespace
+
+CudaSparseMatrix::CudaSparseMatrix(int num_cols,
+ CudaBuffer<int32_t>&& rows,
+ CudaBuffer<int32_t>&& cols,
+ ContextImpl* context)
+ : num_rows_(rows.size() - 1),
+ num_cols_(num_cols),
+ num_nonzeros_(cols.size()),
+ context_(context),
+ rows_(std::move(rows)),
+ cols_(std::move(cols)),
+ values_(context, num_nonzeros_),
+ spmv_buffer_(context) {
+ Initialize();
+}
+
+CudaSparseMatrix::CudaSparseMatrix(ContextImpl* context,
+ const CompressedRowSparseMatrix& crs_matrix)
+ : num_rows_(crs_matrix.num_rows()),
+ num_cols_(crs_matrix.num_cols()),
+ num_nonzeros_(crs_matrix.num_nonzeros()),
+ context_(context),
+ rows_(context, num_rows_ + 1),
+ cols_(context, num_nonzeros_),
+ values_(context, num_nonzeros_),
+ spmv_buffer_(context) {
+ rows_.CopyFromCpu(crs_matrix.rows(), num_rows_ + 1);
+ cols_.CopyFromCpu(crs_matrix.cols(), num_nonzeros_);
+ values_.CopyFromCpu(crs_matrix.values(), num_nonzeros_);
+ Initialize();
+}
+
+CudaSparseMatrix::~CudaSparseMatrix() {
+ CHECK_EQ(cusparseDestroySpMat(descr_), CUSPARSE_STATUS_SUCCESS);
+ descr_ = nullptr;
+ CHECK_EQ(CUSPARSE_STATUS_SUCCESS, cusparseDestroyDnVec(descr_vec_left_));
+ CHECK_EQ(CUSPARSE_STATUS_SUCCESS, cusparseDestroyDnVec(descr_vec_right_));
+}
+
+void CudaSparseMatrix::CopyValuesFromCpu(
+ const CompressedRowSparseMatrix& crs_matrix) {
+ // There is no quick and easy way to verify that the structure is unchanged,
+ // but at least we can check that the size of the matrix and the number of
+  // nonzeros are unchanged.
+ CHECK_EQ(num_rows_, crs_matrix.num_rows());
+ CHECK_EQ(num_cols_, crs_matrix.num_cols());
+ CHECK_EQ(num_nonzeros_, crs_matrix.num_nonzeros());
+ values_.CopyFromCpu(crs_matrix.values(), num_nonzeros_);
+}
+
+void CudaSparseMatrix::Initialize() {
+ CHECK(context_->IsCudaInitialized());
+ CHECK_EQ(CUSPARSE_STATUS_SUCCESS,
+ cusparseCreateCsr(&descr_,
+ num_rows_,
+ num_cols_,
+ num_nonzeros_,
+ rows_.data(),
+ cols_.data(),
+ values_.data(),
+ CUSPARSE_INDEX_32I,
+ CUSPARSE_INDEX_32I,
+ CUSPARSE_INDEX_BASE_ZERO,
+ CUDA_R_64F));
+
+  // Note: values_.data() is used as a non-null pointer to device memory.
+  // When there are no non-zero values, the data pointer of the values_ array
+  // will be a nullptr; but in this case left/right products are trivial and
+  // the temporary buffer (and the vector descriptors) are not required.
+ if (!num_nonzeros_) return;
+
+ CHECK_EQ(CUSPARSE_STATUS_SUCCESS,
+ cusparseCreateDnVec(
+ &descr_vec_left_, num_rows_, values_.data(), CUDA_R_64F));
+ CHECK_EQ(CUSPARSE_STATUS_SUCCESS,
+ cusparseCreateDnVec(
+ &descr_vec_right_, num_cols_, values_.data(), CUDA_R_64F));
+ size_t buffer_size = GetTempBufferSize(
+ context_->cusparse_handle_, descr_vec_left_, descr_vec_right_, descr_);
+ spmv_buffer_.Reserve(buffer_size);
+}
+
+void CudaSparseMatrix::SpMv(cusparseOperation_t op,
+ const cusparseDnVecDescr_t& x,
+ const cusparseDnVecDescr_t& y) const {
+ const double alpha = 1.0;
+ const double beta = 1.0;
+
+ CHECK_EQ(cusparseSpMV(context_->cusparse_handle_,
+ op,
+ &alpha,
+ descr_,
+ x,
+ &beta,
+ y,
+ CUDA_R_64F,
+ kSpMVAlgorithm,
+ spmv_buffer_.data()),
+ CUSPARSE_STATUS_SUCCESS);
+}
+
+void CudaSparseMatrix::RightMultiplyAndAccumulate(const CudaVector& x,
+ CudaVector* y) const {
+ DCHECK(GetTempBufferSize(
+ context_->cusparse_handle_, y->descr(), x.descr(), descr_) <=
+ spmv_buffer_.size());
+ SpMv(CUSPARSE_OPERATION_NON_TRANSPOSE, x.descr(), y->descr());
+}
+
+void CudaSparseMatrix::LeftMultiplyAndAccumulate(const CudaVector& x,
+ CudaVector* y) const {
+ // TODO(Joydeep Biswas): We should consider storing a transposed copy of the
+ // matrix by converting CSR to CSC. From the cuSPARSE documentation:
+ // "In general, opA == CUSPARSE_OPERATION_NON_TRANSPOSE is 3x faster than opA
+ // != CUSPARSE_OPERATION_NON_TRANSPOSE"
+ DCHECK(GetTempBufferSize(
+ context_->cusparse_handle_, x.descr(), y->descr(), descr_) <=
+ spmv_buffer_.size());
+ SpMv(CUSPARSE_OPERATION_TRANSPOSE, x.descr(), y->descr());
+}
+
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/cuda_sparse_matrix.h b/internal/ceres/cuda_sparse_matrix.h
new file mode 100644
index 0000000..2940d1d
--- /dev/null
+++ b/internal/ceres/cuda_sparse_matrix.h
@@ -0,0 +1,143 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+//
+// A CUDA sparse matrix linear operator.
+
+#ifndef CERES_INTERNAL_CUDA_SPARSE_MATRIX_H_
+#define CERES_INTERNAL_CUDA_SPARSE_MATRIX_H_
+
+// This include must come before any #ifndef check on Ceres compile options.
+// clang-format off
+#include "ceres/internal/config.h"
+// clang-format on
+
+#include <cstdint>
+#include <memory>
+#include <string>
+
+#include "ceres/compressed_row_sparse_matrix.h"
+#include "ceres/context_impl.h"
+#include "ceres/internal/export.h"
+#include "ceres/types.h"
+
+#ifndef CERES_NO_CUDA
+#include "ceres/cuda_buffer.h"
+#include "ceres/cuda_vector.h"
+#include "cusparse.h"
+
+namespace ceres::internal {
+
+// A sparse matrix hosted on the GPU in compressed row sparse format, with
+// CUDA-accelerated operations.
+// The user of the class must ensure that ContextImpl::InitCuda() has already
+// been successfully called before using this class.
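+//
+// Example usage (a sketch; assumes a ContextImpl named context on which
+// InitCuda() has already succeeded, an existing CompressedRowSparseMatrix
+// A_crs, and an Eigen vector x_cpu of matching size):
+//
+//   CudaSparseMatrix A_gpu(&context, A_crs);
+//   CudaVector x(&context, A_gpu.num_cols());
+//   CudaVector y(&context, A_gpu.num_rows());
+//   x.CopyFromCpu(x_cpu);
+//   y.SetZero();
+//   A_gpu.RightMultiplyAndAccumulate(x, &y);  // y += A * x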
+class CERES_NO_EXPORT CudaSparseMatrix {
+ public:
+ // Create a GPU copy of the matrix provided.
+ CudaSparseMatrix(ContextImpl* context,
+ const CompressedRowSparseMatrix& crs_matrix);
+
+ // Create matrix from existing row and column index buffers.
+ // Values are left uninitialized.
+ CudaSparseMatrix(int num_cols,
+ CudaBuffer<int32_t>&& rows,
+ CudaBuffer<int32_t>&& cols,
+ ContextImpl* context);
+
+ ~CudaSparseMatrix();
+
+  // Left/right products use an internal buffer and are not thread-safe.
+ // y = y + Ax;
+ void RightMultiplyAndAccumulate(const CudaVector& x, CudaVector* y) const;
+ // y = y + A'x;
+ void LeftMultiplyAndAccumulate(const CudaVector& x, CudaVector* y) const;
+
+ int num_rows() const { return num_rows_; }
+ int num_cols() const { return num_cols_; }
+ int num_nonzeros() const { return num_nonzeros_; }
+
+ const int32_t* rows() const { return rows_.data(); }
+ const int32_t* cols() const { return cols_.data(); }
+ const double* values() const { return values_.data(); }
+
+ int32_t* mutable_rows() { return rows_.data(); }
+ int32_t* mutable_cols() { return cols_.data(); }
+ double* mutable_values() { return values_.data(); }
+
+ // If subsequent uses of this matrix involve only numerical changes and no
+ // structural changes, then this method can be used to copy the updated
+ // non-zero values -- the row and column index arrays are kept the same. It
+ // is the caller's responsibility to ensure that the sparsity structure of the
+ // matrix is unchanged.
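+  //
+  // Example (a sketch): reuse the GPU structure across iterations, assuming
+  // the CPU-side matrix A_crs keeps its sparsity pattern:
+  //
+  //   CudaSparseMatrix A_gpu(&context, A_crs);
+  //   // ... the values of A_crs change, its structure does not ...
+  //   A_gpu.CopyValuesFromCpu(A_crs);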
+ void CopyValuesFromCpu(const CompressedRowSparseMatrix& crs_matrix);
+
+ const cusparseSpMatDescr_t& descr() const { return descr_; }
+
+ private:
+ // Disable copy and assignment.
+ CudaSparseMatrix(const CudaSparseMatrix&) = delete;
+ CudaSparseMatrix& operator=(const CudaSparseMatrix&) = delete;
+
+  // Allocate a temporary buffer for left/right products and create cuSPARSE
+  // descriptors.
+ void Initialize();
+
+ // y = y + op(M)x. op must be either CUSPARSE_OPERATION_NON_TRANSPOSE or
+ // CUSPARSE_OPERATION_TRANSPOSE.
+ void SpMv(cusparseOperation_t op,
+ const cusparseDnVecDescr_t& x,
+ const cusparseDnVecDescr_t& y) const;
+
+ int num_rows_ = 0;
+ int num_cols_ = 0;
+ int num_nonzeros_ = 0;
+
+ ContextImpl* context_ = nullptr;
+ // CSR row indices.
+ CudaBuffer<int32_t> rows_;
+ // CSR column indices.
+ CudaBuffer<int32_t> cols_;
+ // CSR values.
+ CudaBuffer<double> values_;
+
+ // CuSparse object that describes this matrix.
+ cusparseSpMatDescr_t descr_ = nullptr;
+
+ // Dense vector descriptors for pointer interface
+ cusparseDnVecDescr_t descr_vec_left_ = nullptr;
+ cusparseDnVecDescr_t descr_vec_right_ = nullptr;
+
+ mutable CudaBuffer<uint8_t> spmv_buffer_;
+};
+
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
+#endif // CERES_INTERNAL_CUDA_SPARSE_MATRIX_H_
diff --git a/internal/ceres/cuda_sparse_matrix_test.cc b/internal/ceres/cuda_sparse_matrix_test.cc
new file mode 100644
index 0000000..774829b
--- /dev/null
+++ b/internal/ceres/cuda_sparse_matrix_test.cc
@@ -0,0 +1,286 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#include "ceres/cuda_sparse_matrix.h"
+
+#include <string>
+
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/casts.h"
+#include "ceres/cuda_vector.h"
+#include "ceres/internal/config.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/linear_least_squares_problems.h"
+#include "ceres/triplet_sparse_matrix.h"
+#include "glog/logging.h"
+#include "gtest/gtest.h"
+
+namespace ceres {
+namespace internal {
+
+#ifndef CERES_NO_CUDA
+
+class CudaSparseMatrixTest : public ::testing::Test {
+ protected:
+ void SetUp() final {
+ std::string message;
+ CHECK(context_.InitCuda(&message))
+ << "InitCuda() failed because: " << message;
+ std::unique_ptr<LinearLeastSquaresProblem> problem =
+ CreateLinearLeastSquaresProblemFromId(2);
+ CHECK(problem != nullptr);
+ A_.reset(down_cast<BlockSparseMatrix*>(problem->A.release()));
+ CHECK(A_ != nullptr);
+ CHECK(problem->b != nullptr);
+ CHECK(problem->x != nullptr);
+ b_.resize(A_->num_rows());
+ for (int i = 0; i < A_->num_rows(); ++i) {
+ b_[i] = problem->b[i];
+ }
+ x_.resize(A_->num_cols());
+ for (int i = 0; i < A_->num_cols(); ++i) {
+ x_[i] = problem->x[i];
+ }
+ CHECK_EQ(A_->num_rows(), b_.rows());
+ CHECK_EQ(A_->num_cols(), x_.rows());
+ }
+
+ std::unique_ptr<BlockSparseMatrix> A_;
+ Vector x_;
+ Vector b_;
+ ContextImpl context_;
+};
+
+TEST_F(CudaSparseMatrixTest, RightMultiplyAndAccumulate) {
+ std::string message;
+ auto A_crs = A_->ToCompressedRowSparseMatrix();
+ CudaSparseMatrix A_gpu(&context_, *A_crs);
+ CudaVector x_gpu(&context_, A_gpu.num_cols());
+ CudaVector res_gpu(&context_, A_gpu.num_rows());
+ x_gpu.CopyFromCpu(x_);
+
+ const Vector minus_b = -b_;
+ // res = -b
+ res_gpu.CopyFromCpu(minus_b);
+ // res += A * x
+ A_gpu.RightMultiplyAndAccumulate(x_gpu, &res_gpu);
+
+ Vector res;
+ res_gpu.CopyTo(&res);
+
+ Vector res_expected = minus_b;
+ A_->RightMultiplyAndAccumulate(x_.data(), res_expected.data());
+
+ EXPECT_LE((res - res_expected).norm(),
+ std::numeric_limits<double>::epsilon() * 1e3);
+}
+
+TEST(CudaSparseMatrix, CopyValuesFromCpu) {
+ // A1:
+ // [ 1 1 0 0
+ // 0 1 1 0]
+ // A2:
+ // [ 1 2 0 0
+ // 0 3 4 0]
+ // b: [1 2 3 4]'
+ // A1 * b = [3 5]'
+ // A2 * b = [5 18]'
+ TripletSparseMatrix A1(2, 4, {0, 0, 1, 1}, {0, 1, 1, 2}, {1, 1, 1, 1});
+ TripletSparseMatrix A2(2, 4, {0, 0, 1, 1}, {0, 1, 1, 2}, {1, 2, 3, 4});
+ Vector b(4);
+ b << 1, 2, 3, 4;
+
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ auto A1_crs = CompressedRowSparseMatrix::FromTripletSparseMatrix(A1);
+ CudaSparseMatrix A_gpu(&context, *A1_crs);
+ CudaVector b_gpu(&context, A1.num_cols());
+ CudaVector x_gpu(&context, A1.num_rows());
+ b_gpu.CopyFromCpu(b);
+ x_gpu.SetZero();
+
+ Vector x_expected(2);
+ x_expected << 3, 5;
+ A_gpu.RightMultiplyAndAccumulate(b_gpu, &x_gpu);
+ Vector x_computed;
+ x_gpu.CopyTo(&x_computed);
+ EXPECT_EQ(x_computed, x_expected);
+
+ auto A2_crs = CompressedRowSparseMatrix::FromTripletSparseMatrix(A2);
+ A_gpu.CopyValuesFromCpu(*A2_crs);
+ x_gpu.SetZero();
+ x_expected << 5, 18;
+ A_gpu.RightMultiplyAndAccumulate(b_gpu, &x_gpu);
+ x_gpu.CopyTo(&x_computed);
+ EXPECT_EQ(x_computed, x_expected);
+}
+
+TEST(CudaSparseMatrix, RightMultiplyAndAccumulate) {
+ // A:
+ // [ 1 2 0 0
+ // 0 3 4 0]
+ // b: [1 2 3 4]'
+ // A * b = [5 18]'
+ TripletSparseMatrix A(2, 4, {0, 0, 1, 1}, {0, 1, 1, 2}, {1, 2, 3, 4});
+ Vector b(4);
+ b << 1, 2, 3, 4;
+ Vector x_expected(2);
+ x_expected << 5, 18;
+
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ auto A_crs = CompressedRowSparseMatrix::FromTripletSparseMatrix(A);
+ CudaSparseMatrix A_gpu(&context, *A_crs);
+ CudaVector b_gpu(&context, A.num_cols());
+ CudaVector x_gpu(&context, A.num_rows());
+ b_gpu.CopyFromCpu(b);
+ x_gpu.SetZero();
+
+ A_gpu.RightMultiplyAndAccumulate(b_gpu, &x_gpu);
+
+ Vector x_computed;
+ x_gpu.CopyTo(&x_computed);
+
+ EXPECT_EQ(x_computed, x_expected);
+}
+
+TEST(CudaSparseMatrix, LeftMultiplyAndAccumulate) {
+ // A:
+ // [ 1 2 0 0
+ // 0 3 4 0]
+ // b: [1 2]'
+ // A'* b = [1 8 8 0]'
+ TripletSparseMatrix A(2, 4, {0, 0, 1, 1}, {0, 1, 1, 2}, {1, 2, 3, 4});
+ Vector b(2);
+ b << 1, 2;
+ Vector x_expected(4);
+ x_expected << 1, 8, 8, 0;
+
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ auto A_crs = CompressedRowSparseMatrix::FromTripletSparseMatrix(A);
+ CudaSparseMatrix A_gpu(&context, *A_crs);
+ CudaVector b_gpu(&context, A.num_rows());
+ CudaVector x_gpu(&context, A.num_cols());
+ b_gpu.CopyFromCpu(b);
+ x_gpu.SetZero();
+
+ A_gpu.LeftMultiplyAndAccumulate(b_gpu, &x_gpu);
+
+ Vector x_computed;
+ x_gpu.CopyTo(&x_computed);
+
+ EXPECT_EQ(x_computed, x_expected);
+}
+
+// If there are numerical errors due to synchronization issues, they will show
+// up when testing with large matrices, since each operation will take a
+// significant amount of time, hopefully revealing any such issue.
+TEST(CudaSparseMatrix, LargeMultiplyAndAccumulate) {
+ // Create a large NxN matrix A that has the following structure:
+ // In row i, only columns i and i+1 are non-zero.
+ // A_{i, i} = A_{i, i+1} = 1.
+ // There will be 2 * N - 1 non-zero elements in A.
+ // X = [1:N]
+ // Right multiply test:
+ // b = A * X
+ // Left multiply test:
+ // b = A' * X
+
+ const int N = 10 * 1000 * 1000;
+ const int num_non_zeros = 2 * N - 1;
+ std::vector<int> row_indices(num_non_zeros);
+ std::vector<int> col_indices(num_non_zeros);
+ std::vector<double> values(num_non_zeros);
+
+ for (int i = 0; i < N; ++i) {
+ row_indices[2 * i] = i;
+ col_indices[2 * i] = i;
+ values[2 * i] = 1.0;
+ if (i + 1 < N) {
+ col_indices[2 * i + 1] = i + 1;
+ row_indices[2 * i + 1] = i;
+ values[2 * i + 1] = 1;
+ }
+ }
+ TripletSparseMatrix A(N, N, row_indices, col_indices, values);
+ Vector x(N);
+ for (int i = 0; i < N; ++i) {
+ x[i] = i + 1;
+ }
+
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ auto A_crs = CompressedRowSparseMatrix::FromTripletSparseMatrix(A);
+ CudaSparseMatrix A_gpu(&context, *A_crs);
+ CudaVector b_gpu(&context, N);
+ CudaVector x_gpu(&context, N);
+ x_gpu.CopyFromCpu(x);
+
+ // First check RightMultiply.
+ {
+ b_gpu.SetZero();
+ A_gpu.RightMultiplyAndAccumulate(x_gpu, &b_gpu);
+ Vector b_computed;
+ b_gpu.CopyTo(&b_computed);
+ for (int i = 0; i < N; ++i) {
+ if (i + 1 < N) {
+ EXPECT_EQ(b_computed[i], 2 * (i + 1) + 1);
+ } else {
+ EXPECT_EQ(b_computed[i], i + 1);
+ }
+ }
+ }
+
+ // Next check LeftMultiply.
+ {
+ b_gpu.SetZero();
+ A_gpu.LeftMultiplyAndAccumulate(x_gpu, &b_gpu);
+ Vector b_computed;
+ b_gpu.CopyTo(&b_computed);
+ for (int i = 0; i < N; ++i) {
+ if (i > 0) {
+ EXPECT_EQ(b_computed[i], 2 * (i + 1) - 1);
+ } else {
+ EXPECT_EQ(b_computed[i], i + 1);
+ }
+ }
+ }
+}
+
+#endif // CERES_NO_CUDA
+
+} // namespace internal
+} // namespace ceres
diff --git a/internal/ceres/cuda_streamed_buffer.h b/internal/ceres/cuda_streamed_buffer.h
new file mode 100644
index 0000000..37bcf4a
--- /dev/null
+++ b/internal/ceres/cuda_streamed_buffer.h
@@ -0,0 +1,338 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#ifndef CERES_INTERNAL_CUDA_STREAMED_BUFFER_H_
+#define CERES_INTERNAL_CUDA_STREAMED_BUFFER_H_
+
+#include "ceres/internal/config.h"
+
+#ifndef CERES_NO_CUDA
+
+#include <algorithm>
+
+#include "ceres/cuda_buffer.h"
+
+namespace ceres::internal {
+
+// Most contemporary CUDA devices are capable of simultaneous code execution
+// and host-to-device transfer. This class copies batches of data to GPU memory
+// and executes processing of the copied data in parallel (asynchronously).
+// Data is copied to a fixed-size buffer on the GPU (containing at most
+// max_buffer_size values), and this memory is re-used once the previous batch
+// of values has been processed by the user-provided callback.
+// The host-to-device copy uses a temporary buffer if required. Each batch of
+// values has a size of kValuesPerBatch, except possibly the last one.
+template <typename T>
+class CERES_NO_EXPORT CudaStreamedBuffer {
+ public:
+  // If the hardware supports only one host-to-device copy, or one
+  // host-to-device copy is able to reach peak bandwidth, two streams are
+  // sufficient to reach maximum efficiency:
+  //  - If transferring a batch of values takes more time than processing it on
+  //  the GPU, then at every moment of time one of the streams will be
+  //  transferring data and the other stream will be either processing data or
+  //  idle; the whole process will be bounded by the host-to-device copy.
+  //  - If transferring a batch of values takes less time than processing it on
+  //  the GPU, then at every moment of time one of the streams will be
+  //  processing data and the other stream will be either performing
+  //  computations or transferring data; the whole process will be bounded by
+  //  the computations.
+ static constexpr int kNumBatches = 2;
+  // max_buffer_size is the maximal size (in elements of type T) of the array
+  // to be pre-allocated in GPU memory. The size of the array determines the
+  // size of the batch of values for simultaneous copying and processing. It
+  // should be large enough to allow highly-parallel execution of user kernels;
+  // making it too large increases latency.
+ CudaStreamedBuffer(ContextImpl* context, const int max_buffer_size)
+ : kValuesPerBatch(max_buffer_size / kNumBatches),
+ context_(context),
+ values_gpu_(context, kValuesPerBatch * kNumBatches) {
+ static_assert(ContextImpl::kNumCudaStreams >= kNumBatches);
+ CHECK_GE(max_buffer_size, kNumBatches);
+    // Pre-allocate a buffer of page-locked memory for transfers from regular
+    // cpu memory. Because we will only be writing into that buffer from the
+    // CPU, the memory is allocated with the cudaHostAllocWriteCombined flag.
+ CHECK_EQ(cudaSuccess,
+ cudaHostAlloc(&values_cpu_pinned_,
+ sizeof(T) * kValuesPerBatch * kNumBatches,
+ cudaHostAllocWriteCombined));
+ for (auto& e : copy_finished_) {
+ CHECK_EQ(cudaSuccess,
+ cudaEventCreateWithFlags(&e, cudaEventDisableTiming));
+ }
+ }
+
+ CudaStreamedBuffer(const CudaStreamedBuffer&) = delete;
+
+ ~CudaStreamedBuffer() {
+ CHECK_EQ(cudaSuccess, cudaFreeHost(values_cpu_pinned_));
+ for (auto& e : copy_finished_) {
+ CHECK_EQ(cudaSuccess, cudaEventDestroy(e));
+ }
+ }
+
+  // Transfer num_values values starting at the host-memory pointer from,
+  // calling
+  // callback(device_pointer, size_of_batch, offset_of_batch, stream_to_use)
+  // after scheduling the transfer of each batch of data. The user-provided
+  // callback should process the data at device_pointer only in the
+  // stream_to_use stream (device_pointer will be re-used in the next callback
+  // invocation with the same stream).
+ //
+  // The two diagrams below describe the operation in two possible scenarios,
+  // depending on whether the input data is stored in page-locked memory. In
+  // this example we will have max_buffer_size = 2 * K, num_values = N * K and
+  // a callback
+ // scheduling a single asynchronous launch of
+ // Kernel<<..., stream_to_use>>(device_pointer,
+ // size_of_batch,
+ // offset_of_batch)
+ //
+ // a. Copying from page-locked memory
+ // In this case no copy on the host-side is necessary, and this method just
+ // schedules a bunch of interleaved memory copies and callback invocations:
+ //
+ // cudaStreamSynchronize(context->DefaultStream());
+ // - Iteration #0:
+ // - cudaMemcpyAsync(values_gpu_, from, K * sizeof(T), H->D, stream_0)
+ // - callback(values_gpu_, K, 0, stream_0)
+ // - Iteration #1:
+ // - cudaMemcpyAsync(values_gpu_ + K, from + K, K * sizeof(T), H->D,
+ // stream_1)
+ // - callback(values_gpu_ + K, K, K, stream_1)
+ // - Iteration #2:
+ // - cudaMemcpyAsync(values_gpu_, from + 2 * K, K * sizeof(T), H->D,
+ // stream_0)
+ // - callback(values_gpu_, K, 2 * K, stream_0)
+ // - Iteration #3:
+ // - cudaMemcpyAsync(values_gpu_ + K, from + 3 * K, K * sizeof(T), H->D,
+ // stream_1)
+ // - callback(values_gpu_ + K, K, 3 * K, stream_1)
+ // ...
+ // - Iteration #i:
+ // - cudaMemcpyAsync(values_gpu_ + (i % 2) * K, from + i * K, K *
+ // sizeof(T), H->D, stream_(i % 2))
+ // - callback(values_gpu_ + (i % 2) * K, K, i * K, stream_(i % 2)
+ // ...
+ // cudaStreamSynchronize(stream_0)
+ // cudaStreamSynchronize(stream_1)
+ //
+  // This sequence of calls results in the following activity on the GPU
+  // (assuming that the kernel invoked by the callback takes less time than the
+  // host-to-device copy):
+ // +-------------------+-------------------+
+ // | Stream #0 | Stream #1 |
+ // +-------------------+-------------------+
+ // | Copy host->device | |
+ // | | |
+ // | | |
+ // +-------------------+-------------------+
+ // | Kernel | Copy host->device |
+ // +-------------------+ |
+ // | | |
+ // +-------------------+-------------------+
+ // | Copy host->device | Kernel |
+ // | +-------------------+
+ // | | |
+ // +-------------------+-------------------+
+ // | Kernel | Copy host->device |
+ // | ... |
+ // +---------------------------------------+
+ //
+ // b. Copying from regular memory
+  // In this case a copy from regular memory to page-locked memory is required
+  // in order to get asynchronous operation. Because the pinned memory on the
+  // host side is reused, additional synchronization is required. On each
+  // iteration the following actions are performed:
+ // - Wait till previous copy operation in stream is completed
+ // - Copy batch of values from input array into pinned memory
+ // - Asynchronously launch host-to-device copy
+ // - Setup event for synchronization on copy completion
+ // - Invoke callback (that launches kernel asynchronously)
+ //
+ // Invocations are performed with the following arguments
+ // cudaStreamSynchronize(context->DefaultStream());
+ // - Iteration #0:
+ // - cudaEventSynchronize(copy_finished_0)
+ // - std::copy_n(from, K, values_cpu_pinned_)
+ // - cudaMemcpyAsync(values_gpu_, values_cpu_pinned_, K * sizeof(T), H->D,
+ // stream_0)
+ // - cudaEventRecord(copy_finished_0, stream_0)
+ // - callback(values_gpu_, K, 0, stream_0)
+ // - Iteration #1:
+ // - cudaEventSynchronize(copy_finished_1)
+ // - std::copy_n(from + K, K, values_cpu_pinned_ + K)
+ // - cudaMemcpyAsync(values_gpu_ + K, values_cpu_pinned_ + K, K *
+ // sizeof(T), H->D, stream_1)
+ // - cudaEventRecord(copy_finished_1, stream_1)
+ // - callback(values_gpu_ + K, K, K, stream_1)
+ // - Iteration #2:
+ // - cudaEventSynchronize(copy_finished_0)
+ // - std::copy_n(from + 2 * K, K, values_cpu_pinned_)
+ // - cudaMemcpyAsync(values_gpu_, values_cpu_pinned_, K * sizeof(T), H->D,
+ // stream_0)
+ // - cudaEventRecord(copy_finished_0, stream_0)
+ // - callback(values_gpu_, K, 2 * K, stream_0)
+ // - Iteration #3:
+ // - cudaEventSynchronize(copy_finished_1)
+ // - std::copy_n(from + 3 * K, K, values_cpu_pinned_ + K)
+ // - cudaMemcpyAsync(values_gpu_ + K, values_cpu_pinned_ + K, K *
+ // sizeof(T), H->D, stream_1)
+ // - cudaEventRecord(copy_finished_1, stream_1)
+ // - callback(values_gpu_ + K, K, 3 * K, stream_1)
+ // ...
+ // - Iteration #i:
+ // - cudaEventSynchronize(copy_finished_(i % 2))
+ // - std::copy_n(from + i * K, K, values_cpu_pinned_ + (i % 2) * K)
+ // - cudaMemcpyAsync(values_gpu_ + (i % 2) * K, values_cpu_pinned_ + (i %
+ // 2) * K, K * sizeof(T), H->D, stream_(i % 2))
+ // - cudaEventRecord(copy_finished_(i % 2), stream_(i % 2))
+ // - callback(values_gpu_ + (i % 2) * K, K, i * K, stream_(i % 2))
+ // ...
+ // cudaStreamSynchronize(stream_0)
+ // cudaStreamSynchronize(stream_1)
+ //
+  // This sequence of calls results in the following activity on the CPU and
+  // GPU (assuming that the kernel invoked by the callback takes less time than
+  // the host-to-device copy and the copy in CPU memory, and that the copy in
+  // CPU memory is faster than the host-to-device copy):
+ // +----------------------------+-------------------+-------------------+
+  // | CPU                        | Stream #0         | Stream #1         |
+ // +----------------------------+-------------------+-------------------+
+ // | Copy to pinned memory | | |
+ // | | | |
+ // +----------------------------+-------------------| |
+ // | Copy to pinned memory | Copy host->device | |
+ // | | | |
+ // +----------------------------+ | |
+ // | Waiting previous h->d copy | | |
+ // +----------------------------+-------------------+-------------------+
+ // | Copy to pinned memory | Kernel | Copy host->device |
+ // | +-------------------+ |
+ // +----------------------------+ | |
+ // | Waiting previous h->d copy | | |
+ // +----------------------------+-------------------+-------------------+
+ // | Copy to pinned memory | Copy host->device | Kernel |
+ // | | +-------------------+
+ // | ... ... |
+ // +----------------------------+---------------------------------------+
+ //
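+  //
+  // Example usage (a sketch; the lambda body is illustrative only and must
+  // restrict its work on device_pointer to the given stream):
+  //
+  //   CudaStreamedBuffer<double> buffer(&context, max_buffer_size);
+  //   buffer.CopyToGpu(host_values, num_values,
+  //                    [](const double* device_pointer,
+  //                       int size,
+  //                       int offset,
+  //                       cudaStream_t stream) {
+  //                      // Launch work in stream that consumes
+  //                      // device_pointer[0 .. size), corresponding to
+  //                      // host_values[offset .. offset + size).
+  //                    });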
+ template <typename Fun>
+ void CopyToGpu(const T* from, const int num_values, Fun&& callback) {
+    // This synchronization is not required in some cases, but we perform it in
+    // order to avoid a situation where the user callback depends on data that
+    // is still being computed in the default stream.
+ CHECK_EQ(cudaSuccess, cudaStreamSynchronize(context_->DefaultStream()));
+
+    // If the pointer to the input data does not correspond to page-locked
+    // memory, the host-to-device memory copy might be executed synchronously
+    // (with a copy to pinned memory happening inside the driver). In that case
+    // we perform a copy to a pre-allocated array of page-locked memory.
+ const bool copy_to_pinned_memory = MemoryTypeResultsInSynchronousCopy(from);
+ T* batch_values_gpu[kNumBatches];
+ T* batch_values_cpu[kNumBatches];
+ auto streams = context_->streams_;
+ for (int i = 0; i < kNumBatches; ++i) {
+ batch_values_gpu[i] = values_gpu_.data() + kValuesPerBatch * i;
+ batch_values_cpu[i] = values_cpu_pinned_ + kValuesPerBatch * i;
+ }
+ int batch_id = 0;
+ for (int offset = 0; offset < num_values; offset += kValuesPerBatch) {
+ const int num_values_batch =
+ std::min(num_values - offset, kValuesPerBatch);
+ const T* batch_from = from + offset;
+ T* batch_to = batch_values_gpu[batch_id];
+ auto stream = streams[batch_id];
+ auto copy_finished = copy_finished_[batch_id];
+
+ if (copy_to_pinned_memory) {
+        // Copying values to a temporary buffer should be started only after
+        // the previous copy from the temporary buffer to the device is
+        // completed.
+ CHECK_EQ(cudaSuccess, cudaEventSynchronize(copy_finished));
+ std::copy_n(batch_from, num_values_batch, batch_values_cpu[batch_id]);
+ batch_from = batch_values_cpu[batch_id];
+ }
+ CHECK_EQ(cudaSuccess,
+ cudaMemcpyAsync(batch_to,
+ batch_from,
+ sizeof(T) * num_values_batch,
+ cudaMemcpyHostToDevice,
+ stream));
+ if (copy_to_pinned_memory) {
+        // The next copy to a temporary buffer can start straight after the
+        // asynchronous copy is completed (and might be started before kernels
+        // asynchronously executed in the stream by the user-supplied callback
+        // are completed).
+        // No explicit synchronization is required when copying data from
+        // page-locked memory, because the memory copy and the user kernel
+        // execution with the corresponding part of the values_gpu_ array are
+        // serialized using the stream.
+ CHECK_EQ(cudaSuccess, cudaEventRecord(copy_finished, stream));
+ }
+ callback(batch_to, num_values_batch, offset, stream);
+ batch_id = (batch_id + 1) % kNumBatches;
+ }
+ // Explicitly synchronize on all CUDA streams that were utilized.
+ for (int i = 0; i < kNumBatches; ++i) {
+ CHECK_EQ(cudaSuccess, cudaStreamSynchronize(streams[i]));
+ }
+ }
+
+ private:
+  // All host-to-device copies must be completely asynchronous. This requires
+  // the source memory to be allocated in page-locked memory.
+ static bool MemoryTypeResultsInSynchronousCopy(const void* ptr) {
+ cudaPointerAttributes attributes;
+ auto status = cudaPointerGetAttributes(&attributes, ptr);
+#if CUDART_VERSION < 11000
+    // In CUDA versions prior to 11, a call to cudaPointerGetAttributes with a
+    // host pointer returns cudaErrorInvalidValue.
+ if (status == cudaErrorInvalidValue) {
+ return true;
+ }
+#endif
+ CHECK_EQ(status, cudaSuccess);
+    // This class only supports CPU memory as a source.
+ CHECK_NE(attributes.type, cudaMemoryTypeDevice);
+    // If the host memory was allocated (or registered) with the CUDA API, or
+    // is managed memory, then the call to cudaMemcpyAsync will be
+    // asynchronous. In the case of managed memory it might be slightly better
+    // to perform a single call of the user-provided callback (and hope that
+    // page migration will provide a similar throughput with zero effort from
+    // our side).
+ return attributes.type == cudaMemoryTypeUnregistered;
+ }
+
+ const int kValuesPerBatch;
+ ContextImpl* context_ = nullptr;
+ CudaBuffer<T> values_gpu_;
+ T* values_cpu_pinned_ = nullptr;
+ cudaEvent_t copy_finished_[kNumBatches] = {nullptr};
+};
+
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
+#endif // CERES_INTERNAL_CUDA_STREAMED_BUFFER_H_
diff --git a/internal/ceres/cuda_streamed_buffer_test.cc b/internal/ceres/cuda_streamed_buffer_test.cc
new file mode 100644
index 0000000..4837005
--- /dev/null
+++ b/internal/ceres/cuda_streamed_buffer_test.cc
@@ -0,0 +1,169 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#include "ceres/internal/config.h"
+
+#ifndef CERES_NO_CUDA
+
+#include <glog/logging.h>
+#include <gtest/gtest.h>
+
+#include <numeric>
+
+#include "ceres/cuda_streamed_buffer.h"
+
+namespace ceres::internal {
+
+TEST(CudaStreamedBufferTest, IntegerCopy) {
+ // Offsets and sizes of batches supplied to callback
+ std::vector<std::pair<int, int>> batches;
+ const int kMaxTemporaryArraySize = 16;
+ const int kInputSize = kMaxTemporaryArraySize * 7 + 3;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+
+ std::vector<int> inputs(kInputSize);
+ std::vector<int> outputs(kInputSize, -1);
+ std::iota(inputs.begin(), inputs.end(), 0);
+
+ CudaStreamedBuffer<int> streamed_buffer(&context, kMaxTemporaryArraySize);
+ streamed_buffer.CopyToGpu(inputs.data(),
+ kInputSize,
+ [&outputs, &batches](const int* device_pointer,
+ int size,
+ int offset,
+ cudaStream_t stream) {
+ batches.emplace_back(offset, size);
+ CHECK_EQ(cudaSuccess,
+ cudaMemcpyAsync(outputs.data() + offset,
+ device_pointer,
+ sizeof(int) * size,
+ cudaMemcpyDeviceToHost,
+ stream));
+ });
+  // All operations in all streams should be completed when CopyToGpu returns
+  // control to the caller.
+ for (int i = 0; i < ContextImpl::kNumCudaStreams; ++i) {
+ CHECK_EQ(cudaSuccess, cudaStreamQuery(context.streams_[i]));
+ }
+
+ // Check if every element was visited
+ for (int i = 0; i < kInputSize; ++i) {
+ CHECK_EQ(outputs[i], i);
+ }
+
+ // Check if there is no overlap between batches
+ std::sort(batches.begin(), batches.end());
+ const int num_batches = batches.size();
+ for (int i = 0; i < num_batches; ++i) {
+ const auto [begin, size] = batches[i];
+ const int end = begin + size;
+ CHECK_GE(begin, 0);
+ CHECK_LT(begin, kInputSize);
+
+ CHECK_GT(size, 0);
+ CHECK_LE(end, kInputSize);
+
+ if (i + 1 == num_batches) continue;
+ CHECK_EQ(end, batches[i + 1].first);
+ }
+}
+
+TEST(CudaStreamedBufferTest, IntegerNoCopy) {
+ // Offsets and sizes of batches supplied to callback
+ std::vector<std::pair<int, int>> batches;
+ const int kMaxTemporaryArraySize = 16;
+ const int kInputSize = kMaxTemporaryArraySize * 7 + 3;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+
+ int* inputs;
+ int* outputs;
+ CHECK_EQ(cudaSuccess,
+ cudaHostAlloc(
+ &inputs, sizeof(int) * kInputSize, cudaHostAllocWriteCombined));
+ CHECK_EQ(
+ cudaSuccess,
+ cudaHostAlloc(&outputs, sizeof(int) * kInputSize, cudaHostAllocDefault));
+
+ std::fill(outputs, outputs + kInputSize, -1);
+ std::iota(inputs, inputs + kInputSize, 0);
+
+ CudaStreamedBuffer<int> streamed_buffer(&context, kMaxTemporaryArraySize);
+ streamed_buffer.CopyToGpu(inputs,
+ kInputSize,
+ [outputs, &batches](const int* device_pointer,
+ int size,
+ int offset,
+ cudaStream_t stream) {
+ batches.emplace_back(offset, size);
+ CHECK_EQ(cudaSuccess,
+ cudaMemcpyAsync(outputs + offset,
+ device_pointer,
+ sizeof(int) * size,
+ cudaMemcpyDeviceToHost,
+ stream));
+ });
+  // All operations in all streams should be completed when CopyToGpu returns
+  // control to the caller.
+ for (int i = 0; i < ContextImpl::kNumCudaStreams; ++i) {
+ CHECK_EQ(cudaSuccess, cudaStreamQuery(context.streams_[i]));
+ }
+
+ // Check if every element was visited
+ for (int i = 0; i < kInputSize; ++i) {
+ CHECK_EQ(outputs[i], i);
+ }
+
+ // Check if there is no overlap between batches
+ std::sort(batches.begin(), batches.end());
+ const int num_batches = batches.size();
+ for (int i = 0; i < num_batches; ++i) {
+ const auto [begin, size] = batches[i];
+ const int end = begin + size;
+ CHECK_GE(begin, 0);
+ CHECK_LT(begin, kInputSize);
+
+ CHECK_GT(size, 0);
+ CHECK_LE(end, kInputSize);
+
+ if (i + 1 == num_batches) continue;
+ CHECK_EQ(end, batches[i + 1].first);
+ }
+
+ CHECK_EQ(cudaSuccess, cudaFreeHost(inputs));
+ CHECK_EQ(cudaSuccess, cudaFreeHost(outputs));
+}
+
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/cuda_vector.cc b/internal/ceres/cuda_vector.cc
new file mode 100644
index 0000000..08217b2
--- /dev/null
+++ b/internal/ceres/cuda_vector.cc
@@ -0,0 +1,185 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+//
+// A simple CUDA vector class.
+
+// This include must come before any #ifndef check on Ceres compile options.
+// clang-format off
+#include "ceres/internal/config.h"
+// clang-format on
+
+#include <math.h>
+
+#include "ceres/context_impl.h"
+#include "ceres/internal/export.h"
+#include "ceres/types.h"
+
+#ifndef CERES_NO_CUDA
+
+#include "ceres/cuda_buffer.h"
+#include "ceres/cuda_kernels_vector_ops.h"
+#include "ceres/cuda_vector.h"
+#include "cublas_v2.h"
+
+namespace ceres::internal {
+
+CudaVector::CudaVector(ContextImpl* context, int size)
+ : context_(context), data_(context, size) {
+ DCHECK_NE(context, nullptr);
+ DCHECK(context->IsCudaInitialized());
+ Resize(size);
+}
+
+CudaVector::CudaVector(CudaVector&& other)
+ : num_rows_(other.num_rows_),
+ context_(other.context_),
+ data_(std::move(other.data_)),
+ descr_(other.descr_) {
+ other.num_rows_ = 0;
+ other.descr_ = nullptr;
+}
+
+CudaVector& CudaVector::operator=(const CudaVector& other) {
+ if (this != &other) {
+ Resize(other.num_rows());
+ data_.CopyFromGPUArray(other.data_.data(), num_rows_);
+ }
+ return *this;
+}
+
+void CudaVector::DestroyDescriptor() {
+ if (descr_ != nullptr) {
+ CHECK_EQ(cusparseDestroyDnVec(descr_), CUSPARSE_STATUS_SUCCESS);
+ descr_ = nullptr;
+ }
+}
+
+CudaVector::~CudaVector() { DestroyDescriptor(); }
+
+void CudaVector::Resize(int size) {
+ data_.Reserve(size);
+ num_rows_ = size;
+ DestroyDescriptor();
+ CHECK_EQ(cusparseCreateDnVec(&descr_, num_rows_, data_.data(), CUDA_R_64F),
+ CUSPARSE_STATUS_SUCCESS);
+}
+
+double CudaVector::Dot(const CudaVector& x) const {
+ double result = 0;
+ CHECK_EQ(cublasDdot(context_->cublas_handle_,
+ num_rows_,
+ data_.data(),
+ 1,
+ x.data(),
+ 1,
+ &result),
+ CUBLAS_STATUS_SUCCESS)
+ << "CuBLAS cublasDdot failed.";
+ return result;
+}
+
+double CudaVector::Norm() const {
+ double result = 0;
+ CHECK_EQ(cublasDnrm2(
+ context_->cublas_handle_, num_rows_, data_.data(), 1, &result),
+ CUBLAS_STATUS_SUCCESS)
+ << "CuBLAS cublasDnrm2 failed.";
+ return result;
+}
+
+void CudaVector::CopyFromCpu(const double* x) {
+ data_.CopyFromCpu(x, num_rows_);
+}
+
+void CudaVector::CopyFromCpu(const Vector& x) {
+ if (x.rows() != num_rows_) {
+ Resize(x.rows());
+ }
+ CopyFromCpu(x.data());
+}
+
+void CudaVector::CopyTo(Vector* x) const {
+ CHECK(x != nullptr);
+ x->resize(num_rows_);
+ data_.CopyToCpu(x->data(), num_rows_);
+}
+
+void CudaVector::CopyTo(double* x) const {
+ CHECK(x != nullptr);
+ data_.CopyToCpu(x, num_rows_);
+}
+
+void CudaVector::SetZero() {
+ // Allow empty vector to be zeroed
+ if (num_rows_ == 0) return;
+ CHECK(data_.data() != nullptr);
+ CudaSetZeroFP64(data_.data(), num_rows_, context_->DefaultStream());
+}
+
+void CudaVector::Axpby(double a, const CudaVector& x, double b) {
+ if (&x == this) {
+ Scale(a + b);
+ return;
+ }
+ CHECK_EQ(num_rows_, x.num_rows_);
+ if (b != 1.0) {
+ // First scale y by b.
+ CHECK_EQ(
+ cublasDscal(context_->cublas_handle_, num_rows_, &b, data_.data(), 1),
+ CUBLAS_STATUS_SUCCESS)
+ << "CuBLAS cublasDscal failed.";
+ }
+ // Then add a * x to y.
+ CHECK_EQ(cublasDaxpy(context_->cublas_handle_,
+ num_rows_,
+ &a,
+ x.data(),
+ 1,
+ data_.data(),
+ 1),
+ CUBLAS_STATUS_SUCCESS)
+ << "CuBLAS cublasDaxpy failed.";
+}
+
+void CudaVector::DtDxpy(const CudaVector& D, const CudaVector& x) {
+ CudaDtDxpy(
+ data_.data(), D.data(), x.data(), num_rows_, context_->DefaultStream());
+}
+
+void CudaVector::Scale(double s) {
+ CHECK_EQ(
+ cublasDscal(context_->cublas_handle_, num_rows_, &s, data_.data(), 1),
+ CUBLAS_STATUS_SUCCESS)
+ << "CuBLAS cublasDscal failed.";
+}
+
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/cuda_vector.h b/internal/ceres/cuda_vector.h
new file mode 100644
index 0000000..8db5649
--- /dev/null
+++ b/internal/ceres/cuda_vector.h
@@ -0,0 +1,193 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+//
+// A simple CUDA vector class.
+
+#ifndef CERES_INTERNAL_CUDA_VECTOR_H_
+#define CERES_INTERNAL_CUDA_VECTOR_H_
+
+// This include must come before any #ifndef check on Ceres compile options.
+// clang-format off
+#include "ceres/internal/config.h"
+// clang-format on
+
+#include <math.h>
+
+#include <memory>
+#include <string>
+
+#include "ceres/context_impl.h"
+#include "ceres/internal/export.h"
+#include "ceres/types.h"
+
+#ifndef CERES_NO_CUDA
+
+#include "ceres/cuda_buffer.h"
+#include "ceres/cuda_kernels_vector_ops.h"
+#include "ceres/internal/eigen.h"
+#include "cublas_v2.h"
+#include "cusparse.h"
+
+namespace ceres::internal {
+
+// An Nx1 vector denoted y, hosted on the GPU, with CUDA-accelerated
+// operations.
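+//
+// Example usage (a sketch; assumes a ContextImpl named context on which
+// InitCuda() has already succeeded and an Eigen Vector x_cpu):
+//
+//   CudaVector x(&context, x_cpu.rows());
+//   x.CopyFromCpu(x_cpu);
+//   x.Scale(2.0);                  // x = 2 * x
+//   const double norm = x.Norm();  // ||x||_2
+//   x.CopyTo(&x_cpu);              // copy the result back to the CPU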
+class CERES_NO_EXPORT CudaVector {
+ public:
+  // Create a pre-allocated vector of size N. The caller must ensure that
+  // InitCuda() has already been successfully called on context before
+  // constructing a CudaVector.
+ CudaVector(ContextImpl* context, int size);
+
+ CudaVector(CudaVector&& other);
+
+ ~CudaVector();
+
+ void Resize(int size);
+
+ // Perform a deep copy of the vector.
+ CudaVector& operator=(const CudaVector&);
+
+ // Return the inner product x' * y.
+ double Dot(const CudaVector& x) const;
+
+ // Return the L2 norm of the vector (||y||_2).
+ double Norm() const;
+
+ // Set all elements to zero.
+ void SetZero();
+
+ // Copy from Eigen vector.
+ void CopyFromCpu(const Vector& x);
+
+ // Copy from CPU memory array.
+ void CopyFromCpu(const double* x);
+
+ // Copy to Eigen vector.
+ void CopyTo(Vector* x) const;
+
+ // Copy to CPU memory array. It is the caller's responsibility to ensure
+ // that the array is large enough.
+ void CopyTo(double* x) const;
+
+ // y = a * x + b * y.
+ void Axpby(double a, const CudaVector& x, double b);
+
+  // y = diag(D)' * diag(D) * x + y.
+ void DtDxpy(const CudaVector& D, const CudaVector& x);
+
+ // y = s * y.
+ void Scale(double s);
+
+ int num_rows() const { return num_rows_; }
+ int num_cols() const { return 1; }
+
+ const double* data() const { return data_.data(); }
+ double* mutable_data() { return data_.data(); }
+
+ const cusparseDnVecDescr_t& descr() const { return descr_; }
+
+ private:
+ CudaVector(const CudaVector&) = delete;
+ void DestroyDescriptor();
+
+ int num_rows_ = 0;
+ ContextImpl* context_ = nullptr;
+ CudaBuffer<double> data_;
+ // CuSparse object that describes this dense vector.
+ cusparseDnVecDescr_t descr_ = nullptr;
+};
+
+// BLAS level-1 operations on CUDA vectors. These functions are needed as an
+// abstraction layer so that we can use different versions of a vector-style
+// object in the conjugate gradients linear solver.
+// The context and num_threads arguments are not used by the CUDA
+// implementation; the context embedded in the CudaVector is used instead.
+inline double Norm(const CudaVector& x,
+ ContextImpl* context = nullptr,
+ int num_threads = 1) {
+ (void)context;
+ (void)num_threads;
+ return x.Norm();
+}
+inline void SetZero(CudaVector& x,
+ ContextImpl* context = nullptr,
+ int num_threads = 1) {
+ (void)context;
+ (void)num_threads;
+ x.SetZero();
+}
+inline void Axpby(double a,
+ const CudaVector& x,
+ double b,
+ const CudaVector& y,
+ CudaVector& z,
+ ContextImpl* context = nullptr,
+ int num_threads = 1) {
+ (void)context;
+ (void)num_threads;
+ if (&x == &y && &y == &z) {
+ // z = (a + b) * z;
+ z.Scale(a + b);
+ } else if (&x == &z) {
+ // x is aliased to z.
+ // z = x
+ // = b * y + a * x;
+ z.Axpby(b, y, a);
+ } else if (&y == &z) {
+ // y is aliased to z.
+ // z = y = a * x + b * y;
+ z.Axpby(a, x, b);
+ } else {
+ // General case: all inputs and outputs are distinct.
+ z = y;
+ z.Axpby(a, x, b);
+ }
+}
+inline double Dot(const CudaVector& x,
+ const CudaVector& y,
+ ContextImpl* context = nullptr,
+ int num_threads = 1) {
+ (void)context;
+ (void)num_threads;
+ return x.Dot(y);
+}
+inline void Copy(const CudaVector& from,
+ CudaVector& to,
+ ContextImpl* context = nullptr,
+ int num_threads = 1) {
+ (void)context;
+ (void)num_threads;
+ to = from;
+}
+
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
+#endif  // CERES_INTERNAL_CUDA_VECTOR_H_
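For reference, a minimal host-side usage sketch of the CudaVector API and the free-function Axpby declared above (illustrative only, not part of the patch; it assumes a ContextImpl on which InitCuda() has already succeeded):

    // Compute z = 2 * x + 3 * y on the GPU and copy the result back.
    ContextImpl context;
    std::string message;
    CHECK(context.InitCuda(&message)) << message;
    CudaVector x(&context, 3), y(&context, 3), z(&context, 3);
    Vector x_cpu(3), y_cpu(3), z_cpu(3);
    x_cpu << 1, 2, 3;
    y_cpu << 4, 5, 6;
    x.CopyFromCpu(x_cpu);
    y.CopyFromCpu(y_cpu);
    Axpby(2.0, x, 3.0, y, z);  // x, y, z are distinct, so the general branch runs.
    z.CopyTo(&z_cpu);          // z_cpu is now (14, 19, 24).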
diff --git a/internal/ceres/cuda_vector_test.cc b/internal/ceres/cuda_vector_test.cc
new file mode 100644
index 0000000..8dcb4b7
--- /dev/null
+++ b/internal/ceres/cuda_vector_test.cc
@@ -0,0 +1,423 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#include "ceres/cuda_vector.h"
+
+#include <string>
+
+#include "ceres/internal/config.h"
+#include "ceres/internal/eigen.h"
+#include "glog/logging.h"
+#include "gtest/gtest.h"
+
+namespace ceres {
+namespace internal {
+
+#ifndef CERES_NO_CUDA
+
+TEST(CudaVector, Creation) {
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x(&context, 1000);
+ EXPECT_EQ(x.num_rows(), 1000);
+ EXPECT_NE(x.data(), nullptr);
+}
+
+TEST(CudaVector, CopyVector) {
+ Vector x(3);
+ x << 1, 2, 3;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector y(&context, 10);
+ y.CopyFromCpu(x);
+ EXPECT_EQ(y.num_rows(), 3);
+
+ Vector z(3);
+ z << 0, 0, 0;
+ y.CopyTo(&z);
+ EXPECT_EQ(x, z);
+}
+
+TEST(CudaVector, Move) {
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector y(&context, 10);
+ const auto y_data = y.data();
+ const auto y_descr = y.descr();
+ EXPECT_EQ(y.num_rows(), 10);
+ CudaVector z(std::move(y));
+ EXPECT_EQ(y.data(), nullptr);
+ EXPECT_EQ(y.descr(), nullptr);
+ EXPECT_EQ(y.num_rows(), 0);
+
+ EXPECT_EQ(z.data(), y_data);
+ EXPECT_EQ(z.descr(), y_descr);
+}
+
+TEST(CudaVector, DeepCopy) {
+ Vector x(3);
+ x << 1, 2, 3;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 3);
+ x_gpu.CopyFromCpu(x);
+
+ CudaVector y_gpu(&context, 3);
+ y_gpu.SetZero();
+ EXPECT_EQ(y_gpu.Norm(), 0.0);
+
+ y_gpu = x_gpu;
+ Vector y(3);
+ y << 0, 0, 0;
+ y_gpu.CopyTo(&y);
+ EXPECT_EQ(x, y);
+}
+
+TEST(CudaVector, Dot) {
+ Vector x(3);
+ Vector y(3);
+ x << 1, 2, 3;
+ y << 100, 10, 1;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 10);
+ CudaVector y_gpu(&context, 10);
+ x_gpu.CopyFromCpu(x);
+ y_gpu.CopyFromCpu(y);
+
+ EXPECT_EQ(x_gpu.Dot(y_gpu), 123.0);
+ EXPECT_EQ(Dot(x_gpu, y_gpu), 123.0);
+}
+
+TEST(CudaVector, Norm) {
+ Vector x(3);
+ x << 1, 2, 3;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 10);
+ x_gpu.CopyFromCpu(x);
+
+ EXPECT_NEAR(x_gpu.Norm(),
+ sqrt(1.0 + 4.0 + 9.0),
+ std::numeric_limits<double>::epsilon());
+
+ EXPECT_NEAR(Norm(x_gpu),
+ sqrt(1.0 + 4.0 + 9.0),
+ std::numeric_limits<double>::epsilon());
+}
+
+TEST(CudaVector, SetZero) {
+ Vector x(4);
+ x << 1, 1, 1, 1;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 10);
+ x_gpu.CopyFromCpu(x);
+
+ EXPECT_NEAR(x_gpu.Norm(), 2.0, std::numeric_limits<double>::epsilon());
+
+ x_gpu.SetZero();
+ EXPECT_NEAR(x_gpu.Norm(), 0.0, std::numeric_limits<double>::epsilon());
+
+ x_gpu.CopyFromCpu(x);
+ EXPECT_NEAR(x_gpu.Norm(), 2.0, std::numeric_limits<double>::epsilon());
+ SetZero(x_gpu);
+ EXPECT_NEAR(x_gpu.Norm(), 0.0, std::numeric_limits<double>::epsilon());
+}
+
+TEST(CudaVector, Resize) {
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 10);
+ EXPECT_EQ(x_gpu.num_rows(), 10);
+ x_gpu.Resize(4);
+ EXPECT_EQ(x_gpu.num_rows(), 4);
+}
+
+TEST(CudaVector, Axpy) {
+ Vector x(4);
+ Vector y(4);
+ x << 1, 1, 1, 1;
+ y << 100, 10, 1, 0;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 4);
+ CudaVector y_gpu(&context, 4);
+ x_gpu.CopyFromCpu(x);
+ y_gpu.CopyFromCpu(y);
+
+ x_gpu.Axpby(2.0, y_gpu, 1.0);
+ Vector result;
+ Vector expected(4);
+ expected << 201, 21, 3, 1;
+ x_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, AxpbyBEquals1) {
+ Vector x(4);
+ Vector y(4);
+ x << 1, 1, 1, 1;
+ y << 100, 10, 1, 0;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 4);
+ CudaVector y_gpu(&context, 4);
+ x_gpu.CopyFromCpu(x);
+ y_gpu.CopyFromCpu(y);
+
+ x_gpu.Axpby(2.0, y_gpu, 1.0);
+ Vector result;
+ Vector expected(4);
+ expected << 201, 21, 3, 1;
+ x_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, AxpbyMemberFunctionBNotEqual1) {
+ Vector x(4);
+ Vector y(4);
+ x << 1, 1, 1, 1;
+ y << 100, 10, 1, 0;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 4);
+ CudaVector y_gpu(&context, 4);
+ x_gpu.CopyFromCpu(x);
+ y_gpu.CopyFromCpu(y);
+
+ x_gpu.Axpby(2.0, y_gpu, 3.0);
+ Vector result;
+ Vector expected(4);
+ expected << 203, 23, 5, 3;
+ x_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, AxpbyMemberFunctionBEqual1) {
+ Vector x(4);
+ Vector y(4);
+ x << 1, 1, 1, 1;
+ y << 100, 10, 1, 0;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 4);
+ CudaVector y_gpu(&context, 4);
+ x_gpu.CopyFromCpu(x);
+ y_gpu.CopyFromCpu(y);
+
+ x_gpu.Axpby(2.0, y_gpu, 1.0);
+ Vector result;
+ Vector expected(4);
+ expected << 201, 21, 3, 1;
+ x_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, AxpbyMemberXAliasesY) {
+ Vector x(4);
+ x << 100, 10, 1, 0;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 4);
+ CudaVector y_gpu(&context, 4);
+ x_gpu.CopyFromCpu(x);
+ y_gpu.SetZero();
+
+ x_gpu.Axpby(2.0, x_gpu, 1.0);
+ Vector result;
+ Vector expected(4);
+ expected << 300, 30, 3, 0;
+ x_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, AxpbyNonMemberMethodNoAliases) {
+ Vector x(4);
+ Vector y(4);
+ x << 1, 1, 1, 1;
+ y << 100, 10, 1, 0;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 4);
+ CudaVector y_gpu(&context, 4);
+ CudaVector z_gpu(&context, 4);
+ x_gpu.CopyFromCpu(x);
+ y_gpu.CopyFromCpu(y);
+ z_gpu.Resize(4);
+ z_gpu.SetZero();
+
+ Axpby(2.0, x_gpu, 3.0, y_gpu, z_gpu);
+ Vector result;
+ Vector expected(4);
+ expected << 302, 32, 5, 2;
+ z_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, AxpbyNonMemberMethodXAliasesY) {
+ Vector x(4);
+ x << 100, 10, 1, 0;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 4);
+ CudaVector z_gpu(&context, 4);
+ x_gpu.CopyFromCpu(x);
+ z_gpu.SetZero();
+
+ Axpby(2.0, x_gpu, 3.0, x_gpu, z_gpu);
+ Vector result;
+ Vector expected(4);
+ expected << 500, 50, 5, 0;
+ z_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, AxpbyNonMemberMethodXAliasesZ) {
+ Vector x(4);
+ Vector y(4);
+ x << 1, 1, 1, 1;
+ y << 100, 10, 1, 0;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 10);
+ CudaVector y_gpu(&context, 10);
+ x_gpu.CopyFromCpu(x);
+ y_gpu.CopyFromCpu(y);
+
+ Axpby(2.0, x_gpu, 3.0, y_gpu, x_gpu);
+ Vector result;
+ Vector expected(4);
+ expected << 302, 32, 5, 2;
+ x_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, AxpbyNonMemberMethodYAliasesZ) {
+ Vector x(4);
+ Vector y(4);
+ x << 1, 1, 1, 1;
+ y << 100, 10, 1, 0;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 4);
+ CudaVector y_gpu(&context, 4);
+ x_gpu.CopyFromCpu(x);
+ y_gpu.CopyFromCpu(y);
+
+ Axpby(2.0, x_gpu, 3.0, y_gpu, y_gpu);
+ Vector result;
+ Vector expected(4);
+ expected << 302, 32, 5, 2;
+ y_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, AxpbyNonMemberMethodXAliasesYAliasesZ) {
+ Vector x(4);
+ x << 100, 10, 1, 0;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 10);
+ x_gpu.CopyFromCpu(x);
+
+ Axpby(2.0, x_gpu, 3.0, x_gpu, x_gpu);
+ Vector result;
+ Vector expected(4);
+ expected << 500, 50, 5, 0;
+ x_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, DtDxpy) {
+ Vector x(4);
+ Vector y(4);
+ Vector D(4);
+ x << 1, 2, 3, 4;
+ y << 100, 10, 1, 0;
+ D << 4, 3, 2, 1;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 4);
+ CudaVector y_gpu(&context, 4);
+ CudaVector D_gpu(&context, 4);
+ x_gpu.CopyFromCpu(x);
+ y_gpu.CopyFromCpu(y);
+ D_gpu.CopyFromCpu(D);
+
+ y_gpu.DtDxpy(D_gpu, x_gpu);
+ Vector result;
+ Vector expected(4);
+ expected << 116, 28, 13, 4;
+ y_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+TEST(CudaVector, Scale) {
+ Vector x(4);
+ x << 1, 2, 3, 4;
+ ContextImpl context;
+ std::string message;
+ CHECK(context.InitCuda(&message)) << "InitCuda() failed because: " << message;
+ CudaVector x_gpu(&context, 4);
+ x_gpu.CopyFromCpu(x);
+
+ x_gpu.Scale(-3.0);
+
+ Vector result;
+ Vector expected(4);
+ expected << -3.0, -6.0, -9.0, -12.0;
+ x_gpu.CopyTo(&result);
+ EXPECT_EQ(result, expected);
+}
+
+#endif // CERES_NO_CUDA
+
+} // namespace internal
+} // namespace ceres
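As a CPU-side cross-check of the DtDxpy expectation used in the test above, note that y = diag(D)' * diag(D) * x + y updates each element as y_i += D_i^2 * x_i; a short Eigen sketch (not part of the patch):

    Vector x(4), y(4), D(4);
    x << 1, 2, 3, 4;
    y << 100, 10, 1, 0;
    D << 4, 3, 2, 1;
    y.array() += D.array().square() * x.array();
    // y is now (116, 28, 13, 4), matching the expected vector in the test.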
diff --git a/internal/ceres/cxsparse.cc b/internal/ceres/cxsparse.cc
deleted file mode 100644
index 0167f98..0000000
--- a/internal/ceres/cxsparse.cc
+++ /dev/null
@@ -1,283 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: strandmark@google.com (Petter Strandmark)
-
-// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
-
-#ifndef CERES_NO_CXSPARSE
-
-#include <string>
-#include <vector>
-
-#include "ceres/compressed_col_sparse_matrix_utils.h"
-#include "ceres/compressed_row_sparse_matrix.h"
-#include "ceres/cxsparse.h"
-#include "ceres/triplet_sparse_matrix.h"
-#include "glog/logging.h"
-
-namespace ceres {
-namespace internal {
-
-using std::vector;
-
-CXSparse::CXSparse() : scratch_(NULL), scratch_size_(0) {}
-
-CXSparse::~CXSparse() {
- if (scratch_size_ > 0) {
- cs_di_free(scratch_);
- }
-}
-
-csn* CXSparse::Cholesky(cs_di* A, cs_dis* symbolic_factor) {
- return cs_di_chol(A, symbolic_factor);
-}
-
-void CXSparse::Solve(cs_dis* symbolic_factor, csn* numeric_factor, double* b) {
- // Make sure we have enough scratch space available.
- const int num_cols = numeric_factor->L->n;
- if (scratch_size_ < num_cols) {
- if (scratch_size_ > 0) {
- cs_di_free(scratch_);
- }
- scratch_ =
- reinterpret_cast<CS_ENTRY*>(cs_di_malloc(num_cols, sizeof(CS_ENTRY)));
- scratch_size_ = num_cols;
- }
-
- // When the Cholesky factor succeeded, these methods are
- // guaranteed to succeeded as well. In the comments below, "x"
- // refers to the scratch space.
- //
- // Set x = P * b.
- CHECK(cs_di_ipvec(symbolic_factor->pinv, b, scratch_, num_cols));
- // Set x = L \ x.
- CHECK(cs_di_lsolve(numeric_factor->L, scratch_));
- // Set x = L' \ x.
- CHECK(cs_di_ltsolve(numeric_factor->L, scratch_));
- // Set b = P' * x.
- CHECK(cs_di_pvec(symbolic_factor->pinv, scratch_, b, num_cols));
-}
-
-bool CXSparse::SolveCholesky(cs_di* lhs, double* rhs_and_solution) {
- return cs_cholsol(1, lhs, rhs_and_solution);
-}
-
-cs_dis* CXSparse::AnalyzeCholesky(cs_di* A) {
- // order = 1 for Cholesky factor.
- return cs_schol(1, A);
-}
-
-cs_dis* CXSparse::AnalyzeCholeskyWithNaturalOrdering(cs_di* A) {
- // order = 0 for Natural ordering.
- return cs_schol(0, A);
-}
-
-cs_dis* CXSparse::BlockAnalyzeCholesky(cs_di* A,
- const vector<int>& row_blocks,
- const vector<int>& col_blocks) {
- const int num_row_blocks = row_blocks.size();
- const int num_col_blocks = col_blocks.size();
-
- vector<int> block_rows;
- vector<int> block_cols;
- CompressedColumnScalarMatrixToBlockMatrix(
- A->i, A->p, row_blocks, col_blocks, &block_rows, &block_cols);
- cs_di block_matrix;
- block_matrix.m = num_row_blocks;
- block_matrix.n = num_col_blocks;
- block_matrix.nz = -1;
- block_matrix.nzmax = block_rows.size();
- block_matrix.p = &block_cols[0];
- block_matrix.i = &block_rows[0];
- block_matrix.x = NULL;
-
- int* ordering = cs_amd(1, &block_matrix);
- vector<int> block_ordering(num_row_blocks, -1);
- std::copy(ordering, ordering + num_row_blocks, &block_ordering[0]);
- cs_free(ordering);
-
- vector<int> scalar_ordering;
- BlockOrderingToScalarOrdering(row_blocks, block_ordering, &scalar_ordering);
-
- cs_dis* symbolic_factor =
- reinterpret_cast<cs_dis*>(cs_calloc(1, sizeof(cs_dis)));
- symbolic_factor->pinv = cs_pinv(&scalar_ordering[0], A->n);
- cs* permuted_A = cs_symperm(A, symbolic_factor->pinv, 0);
-
- symbolic_factor->parent = cs_etree(permuted_A, 0);
- int* postordering = cs_post(symbolic_factor->parent, A->n);
- int* column_counts =
- cs_counts(permuted_A, symbolic_factor->parent, postordering, 0);
- cs_free(postordering);
- cs_spfree(permuted_A);
-
- symbolic_factor->cp = (int*)cs_malloc(A->n + 1, sizeof(int));
- symbolic_factor->lnz = cs_cumsum(symbolic_factor->cp, column_counts, A->n);
- symbolic_factor->unz = symbolic_factor->lnz;
-
- cs_free(column_counts);
-
- if (symbolic_factor->lnz < 0) {
- cs_sfree(symbolic_factor);
- symbolic_factor = NULL;
- }
-
- return symbolic_factor;
-}
-
-cs_di CXSparse::CreateSparseMatrixTransposeView(CompressedRowSparseMatrix* A) {
- cs_di At;
- At.m = A->num_cols();
- At.n = A->num_rows();
- At.nz = -1;
- At.nzmax = A->num_nonzeros();
- At.p = A->mutable_rows();
- At.i = A->mutable_cols();
- At.x = A->mutable_values();
- return At;
-}
-
-cs_di* CXSparse::CreateSparseMatrix(TripletSparseMatrix* tsm) {
- cs_di_sparse tsm_wrapper;
- tsm_wrapper.nzmax = tsm->num_nonzeros();
- tsm_wrapper.nz = tsm->num_nonzeros();
- tsm_wrapper.m = tsm->num_rows();
- tsm_wrapper.n = tsm->num_cols();
- tsm_wrapper.p = tsm->mutable_cols();
- tsm_wrapper.i = tsm->mutable_rows();
- tsm_wrapper.x = tsm->mutable_values();
-
- return cs_compress(&tsm_wrapper);
-}
-
-void CXSparse::ApproximateMinimumDegreeOrdering(cs_di* A, int* ordering) {
- int* cs_ordering = cs_amd(1, A);
- std::copy(cs_ordering, cs_ordering + A->m, ordering);
- cs_free(cs_ordering);
-}
-
-cs_di* CXSparse::TransposeMatrix(cs_di* A) { return cs_di_transpose(A, 1); }
-
-cs_di* CXSparse::MatrixMatrixMultiply(cs_di* A, cs_di* B) {
- return cs_di_multiply(A, B);
-}
-
-void CXSparse::Free(cs_di* sparse_matrix) { cs_di_spfree(sparse_matrix); }
-
-void CXSparse::Free(cs_dis* symbolic_factor) { cs_di_sfree(symbolic_factor); }
-
-void CXSparse::Free(csn* numeric_factor) { cs_di_nfree(numeric_factor); }
-
-std::unique_ptr<SparseCholesky> CXSparseCholesky::Create(
- const OrderingType ordering_type) {
- return std::unique_ptr<SparseCholesky>(new CXSparseCholesky(ordering_type));
-}
-
-CompressedRowSparseMatrix::StorageType CXSparseCholesky::StorageType() const {
- return CompressedRowSparseMatrix::LOWER_TRIANGULAR;
-}
-
-CXSparseCholesky::CXSparseCholesky(const OrderingType ordering_type)
- : ordering_type_(ordering_type),
- symbolic_factor_(NULL),
- numeric_factor_(NULL) {}
-
-CXSparseCholesky::~CXSparseCholesky() {
- FreeSymbolicFactorization();
- FreeNumericFactorization();
-}
-
-LinearSolverTerminationType CXSparseCholesky::Factorize(
- CompressedRowSparseMatrix* lhs, std::string* message) {
- CHECK_EQ(lhs->storage_type(), StorageType());
- if (lhs == NULL) {
- *message = "Failure: Input lhs is NULL.";
- return LINEAR_SOLVER_FATAL_ERROR;
- }
-
- cs_di cs_lhs = cs_.CreateSparseMatrixTransposeView(lhs);
-
- if (symbolic_factor_ == NULL) {
- if (ordering_type_ == NATURAL) {
- symbolic_factor_ = cs_.AnalyzeCholeskyWithNaturalOrdering(&cs_lhs);
- } else {
- if (!lhs->col_blocks().empty() && !(lhs->row_blocks().empty())) {
- symbolic_factor_ = cs_.BlockAnalyzeCholesky(
- &cs_lhs, lhs->col_blocks(), lhs->row_blocks());
- } else {
- symbolic_factor_ = cs_.AnalyzeCholesky(&cs_lhs);
- }
- }
-
- if (symbolic_factor_ == NULL) {
- *message = "CXSparse Failure : Symbolic factorization failed.";
- return LINEAR_SOLVER_FATAL_ERROR;
- }
- }
-
- FreeNumericFactorization();
- numeric_factor_ = cs_.Cholesky(&cs_lhs, symbolic_factor_);
- if (numeric_factor_ == NULL) {
- *message = "CXSparse Failure : Numeric factorization failed.";
- return LINEAR_SOLVER_FAILURE;
- }
-
- return LINEAR_SOLVER_SUCCESS;
-}
-
-LinearSolverTerminationType CXSparseCholesky::Solve(const double* rhs,
- double* solution,
- std::string* message) {
- CHECK(numeric_factor_ != NULL)
- << "Solve called without a call to Factorize first.";
- const int num_cols = numeric_factor_->L->n;
- memcpy(solution, rhs, num_cols * sizeof(*solution));
- cs_.Solve(symbolic_factor_, numeric_factor_, solution);
- return LINEAR_SOLVER_SUCCESS;
-}
-
-void CXSparseCholesky::FreeSymbolicFactorization() {
- if (symbolic_factor_ != NULL) {
- cs_.Free(symbolic_factor_);
- symbolic_factor_ = NULL;
- }
-}
-
-void CXSparseCholesky::FreeNumericFactorization() {
- if (numeric_factor_ != NULL) {
- cs_.Free(numeric_factor_);
- numeric_factor_ = NULL;
- }
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/cxsparse.h b/internal/ceres/cxsparse.h
deleted file mode 100644
index d3f76e0..0000000
--- a/internal/ceres/cxsparse.h
+++ /dev/null
@@ -1,179 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: strandmark@google.com (Petter Strandmark)
-
-#ifndef CERES_INTERNAL_CXSPARSE_H_
-#define CERES_INTERNAL_CXSPARSE_H_
-
-// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
-
-#ifndef CERES_NO_CXSPARSE
-
-#include <memory>
-#include <string>
-#include <vector>
-
-#include "ceres/linear_solver.h"
-#include "ceres/sparse_cholesky.h"
-#include "cs.h"
-
-namespace ceres {
-namespace internal {
-
-class CompressedRowSparseMatrix;
-class TripletSparseMatrix;
-
-// This object provides access to solving linear systems using Cholesky
-// factorization with a known symbolic factorization. This features does not
-// explicitly exist in CXSparse. The methods in the class are nonstatic because
-// the class manages internal scratch space.
-class CXSparse {
- public:
- CXSparse();
- ~CXSparse();
-
- // Solve the system lhs * solution = rhs in place by using an
- // approximate minimum degree fill reducing ordering.
- bool SolveCholesky(cs_di* lhs, double* rhs_and_solution);
-
- // Solves a linear system given its symbolic and numeric factorization.
- void Solve(cs_dis* symbolic_factor,
- csn* numeric_factor,
- double* rhs_and_solution);
-
- // Compute the numeric Cholesky factorization of A, given its
- // symbolic factorization.
- //
- // Caller owns the result.
- csn* Cholesky(cs_di* A, cs_dis* symbolic_factor);
-
- // Creates a sparse matrix from a compressed-column form. No memory is
- // allocated or copied; the structure A is filled out with info from the
- // argument.
- cs_di CreateSparseMatrixTransposeView(CompressedRowSparseMatrix* A);
-
- // Creates a new matrix from a triplet form. Deallocate the returned matrix
- // with Free. May return NULL if the compression or allocation fails.
- cs_di* CreateSparseMatrix(TripletSparseMatrix* A);
-
- // B = A'
- //
- // The returned matrix should be deallocated with Free when not used
- // anymore.
- cs_di* TransposeMatrix(cs_di* A);
-
- // C = A * B
- //
- // The returned matrix should be deallocated with Free when not used
- // anymore.
- cs_di* MatrixMatrixMultiply(cs_di* A, cs_di* B);
-
- // Computes a symbolic factorization of A that can be used in SolveCholesky.
- //
- // The returned matrix should be deallocated with Free when not used anymore.
- cs_dis* AnalyzeCholesky(cs_di* A);
-
- // Computes a symbolic factorization of A that can be used in
- // SolveCholesky, but does not compute a fill-reducing ordering.
- //
- // The returned matrix should be deallocated with Free when not used anymore.
- cs_dis* AnalyzeCholeskyWithNaturalOrdering(cs_di* A);
-
- // Computes a symbolic factorization of A that can be used in
- // SolveCholesky. The difference from AnalyzeCholesky is that this
- // function first detects the block sparsity of the matrix using
- // information about the row and column blocks and uses this block
- // sparse matrix to find a fill-reducing ordering. This ordering is
- // then used to find a symbolic factorization. This can result in a
- // significant performance improvement AnalyzeCholesky on block
- // sparse matrices.
- //
- // The returned matrix should be deallocated with Free when not used
- // anymore.
- cs_dis* BlockAnalyzeCholesky(cs_di* A,
- const std::vector<int>& row_blocks,
- const std::vector<int>& col_blocks);
-
- // Compute an fill-reducing approximate minimum degree ordering of
- // the matrix A. ordering should be non-NULL and should point to
- // enough memory to hold the ordering for the rows of A.
- void ApproximateMinimumDegreeOrdering(cs_di* A, int* ordering);
-
- void Free(cs_di* sparse_matrix);
- void Free(cs_dis* symbolic_factorization);
- void Free(csn* numeric_factorization);
-
- private:
- // Cached scratch space
- CS_ENTRY* scratch_;
- int scratch_size_;
-};
-
-// An implementation of SparseCholesky interface using the CXSparse
-// library.
-class CXSparseCholesky : public SparseCholesky {
- public:
- // Factory
- static std::unique_ptr<SparseCholesky> Create(OrderingType ordering_type);
-
- // SparseCholesky interface.
- virtual ~CXSparseCholesky();
- CompressedRowSparseMatrix::StorageType StorageType() const final;
- LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
- std::string* message) final;
- LinearSolverTerminationType Solve(const double* rhs,
- double* solution,
- std::string* message) final;
-
- private:
- CXSparseCholesky(const OrderingType ordering_type);
- void FreeSymbolicFactorization();
- void FreeNumericFactorization();
-
- const OrderingType ordering_type_;
- CXSparse cs_;
- cs_dis* symbolic_factor_;
- csn* numeric_factor_;
-};
-
-} // namespace internal
-} // namespace ceres
-
-#else
-
-typedef void cs_dis;
-
-class CXSparse {
- public:
- void Free(void* arg) {}
-};
-#endif // CERES_NO_CXSPARSE
-
-#endif // CERES_INTERNAL_CXSPARSE_H_
diff --git a/internal/ceres/dense_cholesky.cc b/internal/ceres/dense_cholesky.cc
new file mode 100644
index 0000000..5a3e7e2
--- /dev/null
+++ b/internal/ceres/dense_cholesky.cc
@@ -0,0 +1,645 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#include "ceres/dense_cholesky.h"
+
+#include <algorithm>
+#include <memory>
+#include <string>
+#include <utility>
+#include <vector>
+
+#include "ceres/internal/config.h"
+#include "ceres/iterative_refiner.h"
+
+#ifndef CERES_NO_CUDA
+#include "ceres/context_impl.h"
+#include "ceres/cuda_kernels_vector_ops.h"
+#include "cuda_runtime.h"
+#include "cusolverDn.h"
+#endif // CERES_NO_CUDA
+
+#ifndef CERES_NO_LAPACK
+
+// C interface to the LAPACK Cholesky factorization and triangular solve.
+extern "C" void dpotrf_(
+ const char* uplo, const int* n, double* a, const int* lda, int* info);
+
+extern "C" void dpotrs_(const char* uplo,
+ const int* n,
+ const int* nrhs,
+ const double* a,
+ const int* lda,
+ double* b,
+ const int* ldb,
+ int* info);
+
+extern "C" void spotrf_(
+ const char* uplo, const int* n, float* a, const int* lda, int* info);
+
+extern "C" void spotrs_(const char* uplo,
+ const int* n,
+ const int* nrhs,
+ const float* a,
+ const int* lda,
+ float* b,
+ const int* ldb,
+ int* info);
+#endif
+
+namespace ceres::internal {
+
+DenseCholesky::~DenseCholesky() = default;
+
+std::unique_ptr<DenseCholesky> DenseCholesky::Create(
+ const LinearSolver::Options& options) {
+ std::unique_ptr<DenseCholesky> dense_cholesky;
+
+ switch (options.dense_linear_algebra_library_type) {
+ case EIGEN:
+ // Eigen mixed precision solver not yet implemented.
+ if (options.use_mixed_precision_solves) {
+ dense_cholesky = std::make_unique<FloatEigenDenseCholesky>();
+ } else {
+ dense_cholesky = std::make_unique<EigenDenseCholesky>();
+ }
+ break;
+
+ case LAPACK:
+#ifndef CERES_NO_LAPACK
+ // LAPACK mixed precision solver not yet implemented.
+ if (options.use_mixed_precision_solves) {
+ dense_cholesky = std::make_unique<FloatLAPACKDenseCholesky>();
+ } else {
+ dense_cholesky = std::make_unique<LAPACKDenseCholesky>();
+ }
+ break;
+#else
+ LOG(FATAL) << "Ceres was compiled without support for LAPACK.";
+#endif
+
+ case CUDA:
+#ifndef CERES_NO_CUDA
+ if (options.use_mixed_precision_solves) {
+ dense_cholesky = CUDADenseCholeskyMixedPrecision::Create(options);
+ } else {
+ dense_cholesky = CUDADenseCholesky::Create(options);
+ }
+ break;
+#else
+ LOG(FATAL) << "Ceres was compiled without support for CUDA.";
+#endif
+
+ default:
+ LOG(FATAL) << "Unknown dense linear algebra library type : "
+ << DenseLinearAlgebraLibraryTypeToString(
+ options.dense_linear_algebra_library_type);
+ }
+
+ if (options.max_num_refinement_iterations > 0) {
+ auto refiner = std::make_unique<DenseIterativeRefiner>(
+ options.max_num_refinement_iterations);
+ dense_cholesky = std::make_unique<RefinedDenseCholesky>(
+ std::move(dense_cholesky), std::move(refiner));
+ }
+
+ return dense_cholesky;
+}
+
+LinearSolverTerminationType DenseCholesky::FactorAndSolve(
+ int num_cols,
+ double* lhs,
+ const double* rhs,
+ double* solution,
+ std::string* message) {
+ LinearSolverTerminationType termination_type =
+ Factorize(num_cols, lhs, message);
+ if (termination_type == LinearSolverTerminationType::SUCCESS) {
+ termination_type = Solve(rhs, solution, message);
+ }
+ return termination_type;
+}
+
+LinearSolverTerminationType EigenDenseCholesky::Factorize(
+ int num_cols, double* lhs, std::string* message) {
+ Eigen::Map<Eigen::MatrixXd> m(lhs, num_cols, num_cols);
+ llt_ = std::make_unique<LLTType>(m);
+ if (llt_->info() != Eigen::Success) {
+ *message = "Eigen failure. Unable to perform dense Cholesky factorization.";
+ return LinearSolverTerminationType::FAILURE;
+ }
+
+ *message = "Success.";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+LinearSolverTerminationType EigenDenseCholesky::Solve(const double* rhs,
+ double* solution,
+ std::string* message) {
+ if (llt_->info() != Eigen::Success) {
+ *message = "Eigen failure. Unable to perform dense Cholesky factorization.";
+ return LinearSolverTerminationType::FAILURE;
+ }
+
+ VectorRef(solution, llt_->cols()) =
+ llt_->solve(ConstVectorRef(rhs, llt_->cols()));
+ *message = "Success.";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+LinearSolverTerminationType FloatEigenDenseCholesky::Factorize(
+ int num_cols, double* lhs, std::string* message) {
+ // TODO(sameeragarwal): Check if this causes a double allocation.
+ lhs_ = Eigen::Map<Eigen::MatrixXd>(lhs, num_cols, num_cols).cast<float>();
+ llt_ = std::make_unique<LLTType>(lhs_);
+ if (llt_->info() != Eigen::Success) {
+ *message = "Eigen failure. Unable to perform dense Cholesky factorization.";
+ return LinearSolverTerminationType::FAILURE;
+ }
+
+ *message = "Success.";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+LinearSolverTerminationType FloatEigenDenseCholesky::Solve(
+ const double* rhs, double* solution, std::string* message) {
+ if (llt_->info() != Eigen::Success) {
+ *message = "Eigen failure. Unable to perform dense Cholesky factorization.";
+ return LinearSolverTerminationType::FAILURE;
+ }
+
+ rhs_ = ConstVectorRef(rhs, llt_->cols()).cast<float>();
+ solution_ = llt_->solve(rhs_);
+ VectorRef(solution, llt_->cols()) = solution_.cast<double>();
+ *message = "Success.";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+#ifndef CERES_NO_LAPACK
+LinearSolverTerminationType LAPACKDenseCholesky::Factorize(
+ int num_cols, double* lhs, std::string* message) {
+ lhs_ = lhs;
+ num_cols_ = num_cols;
+
+ const char uplo = 'L';
+ int info = 0;
+ dpotrf_(&uplo, &num_cols_, lhs_, &num_cols_, &info);
+
+ if (info < 0) {
+ termination_type_ = LinearSolverTerminationType::FATAL_ERROR;
+ LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+ << "Please report it. "
+ << "LAPACK::dpotrf fatal error. "
+ << "Argument: " << -info << " is invalid.";
+ } else if (info > 0) {
+ termination_type_ = LinearSolverTerminationType::FAILURE;
+ *message = StringPrintf(
+ "LAPACK::dpotrf numerical failure. "
+ "The leading minor of order %d is not positive definite.",
+ info);
+ } else {
+ termination_type_ = LinearSolverTerminationType::SUCCESS;
+ *message = "Success.";
+ }
+ return termination_type_;
+}
+
+LinearSolverTerminationType LAPACKDenseCholesky::Solve(const double* rhs,
+ double* solution,
+ std::string* message) {
+ const char uplo = 'L';
+ const int nrhs = 1;
+ int info = 0;
+
+ VectorRef(solution, num_cols_) = ConstVectorRef(rhs, num_cols_);
+ dpotrs_(
+ &uplo, &num_cols_, &nrhs, lhs_, &num_cols_, solution, &num_cols_, &info);
+
+ if (info < 0) {
+ termination_type_ = LinearSolverTerminationType::FATAL_ERROR;
+ LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+ << "Please report it. "
+ << "LAPACK::dpotrs fatal error. "
+ << "Argument: " << -info << " is invalid.";
+ }
+
+ *message = "Success";
+ termination_type_ = LinearSolverTerminationType::SUCCESS;
+
+ return termination_type_;
+}
+
+LinearSolverTerminationType FloatLAPACKDenseCholesky::Factorize(
+ int num_cols, double* lhs, std::string* message) {
+ num_cols_ = num_cols;
+ lhs_ = Eigen::Map<Eigen::MatrixXd>(lhs, num_cols, num_cols).cast<float>();
+
+ const char uplo = 'L';
+ int info = 0;
+ spotrf_(&uplo, &num_cols_, lhs_.data(), &num_cols_, &info);
+
+ if (info < 0) {
+ termination_type_ = LinearSolverTerminationType::FATAL_ERROR;
+ LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+ << "Please report it. "
+ << "LAPACK::spotrf fatal error. "
+ << "Argument: " << -info << " is invalid.";
+ } else if (info > 0) {
+ termination_type_ = LinearSolverTerminationType::FAILURE;
+ *message = StringPrintf(
+ "LAPACK::spotrf numerical failure. "
+ "The leading minor of order %d is not positive definite.",
+ info);
+ } else {
+ termination_type_ = LinearSolverTerminationType::SUCCESS;
+ *message = "Success.";
+ }
+ return termination_type_;
+}
+
+LinearSolverTerminationType FloatLAPACKDenseCholesky::Solve(
+ const double* rhs, double* solution, std::string* message) {
+ const char uplo = 'L';
+ const int nrhs = 1;
+ int info = 0;
+ rhs_and_solution_ = ConstVectorRef(rhs, num_cols_).cast<float>();
+ spotrs_(&uplo,
+ &num_cols_,
+ &nrhs,
+ lhs_.data(),
+ &num_cols_,
+ rhs_and_solution_.data(),
+ &num_cols_,
+ &info);
+
+ if (info < 0) {
+ termination_type_ = LinearSolverTerminationType::FATAL_ERROR;
+    LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+               << "Please report it. "
+               << "LAPACK::spotrs fatal error. "
+               << "Argument: " << -info << " is invalid.";
+ }
+
+ *message = "Success";
+ termination_type_ = LinearSolverTerminationType::SUCCESS;
+ VectorRef(solution, num_cols_) =
+ rhs_and_solution_.head(num_cols_).cast<double>();
+ return termination_type_;
+}
+
+#endif // CERES_NO_LAPACK
+
+RefinedDenseCholesky::RefinedDenseCholesky(
+ std::unique_ptr<DenseCholesky> dense_cholesky,
+ std::unique_ptr<DenseIterativeRefiner> iterative_refiner)
+ : dense_cholesky_(std::move(dense_cholesky)),
+ iterative_refiner_(std::move(iterative_refiner)) {}
+
+RefinedDenseCholesky::~RefinedDenseCholesky() = default;
+
+LinearSolverTerminationType RefinedDenseCholesky::Factorize(
+ const int num_cols, double* lhs, std::string* message) {
+ lhs_ = lhs;
+ num_cols_ = num_cols;
+ return dense_cholesky_->Factorize(num_cols, lhs, message);
+}
+
+LinearSolverTerminationType RefinedDenseCholesky::Solve(const double* rhs,
+ double* solution,
+ std::string* message) {
+ CHECK(lhs_ != nullptr);
+ auto termination_type = dense_cholesky_->Solve(rhs, solution, message);
+ if (termination_type != LinearSolverTerminationType::SUCCESS) {
+ return termination_type;
+ }
+
+ iterative_refiner_->Refine(
+ num_cols_, lhs_, rhs, dense_cholesky_.get(), solution);
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+#ifndef CERES_NO_CUDA
+
+CUDADenseCholesky::CUDADenseCholesky(ContextImpl* context)
+ : context_(context),
+ lhs_{context},
+ rhs_{context},
+ device_workspace_{context},
+ error_(context, 1) {}
+
+LinearSolverTerminationType CUDADenseCholesky::Factorize(int num_cols,
+ double* lhs,
+ std::string* message) {
+ factorize_result_ = LinearSolverTerminationType::FATAL_ERROR;
+ lhs_.Reserve(num_cols * num_cols);
+ num_cols_ = num_cols;
+ lhs_.CopyFromCpu(lhs, num_cols * num_cols);
+ int device_workspace_size = 0;
+ if (cusolverDnDpotrf_bufferSize(context_->cusolver_handle_,
+ CUBLAS_FILL_MODE_LOWER,
+ num_cols,
+ lhs_.data(),
+ num_cols,
+ &device_workspace_size) !=
+ CUSOLVER_STATUS_SUCCESS) {
+ *message = "cuSolverDN::cusolverDnDpotrf_bufferSize failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ device_workspace_.Reserve(device_workspace_size);
+ if (cusolverDnDpotrf(context_->cusolver_handle_,
+ CUBLAS_FILL_MODE_LOWER,
+ num_cols,
+ lhs_.data(),
+ num_cols,
+ reinterpret_cast<double*>(device_workspace_.data()),
+ device_workspace_.size(),
+ error_.data()) != CUSOLVER_STATUS_SUCCESS) {
+ *message = "cuSolverDN::cusolverDnDpotrf failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ int error = 0;
+ error_.CopyToCpu(&error, 1);
+ if (error < 0) {
+ LOG(FATAL) << "Congratulations, you found a bug in Ceres - "
+ << "please report it. "
+ << "cuSolverDN::cusolverDnXpotrf fatal error. "
+ << "Argument: " << -error << " is invalid.";
+ // The following line is unreachable, but return failure just to be
+ // pedantic, since the compiler does not know that.
+ return LinearSolverTerminationType::FATAL_ERROR;
+ } else if (error > 0) {
+ *message = StringPrintf(
+ "cuSolverDN::cusolverDnDpotrf numerical failure. "
+ "The leading minor of order %d is not positive definite.",
+ error);
+ factorize_result_ = LinearSolverTerminationType::FAILURE;
+ return LinearSolverTerminationType::FAILURE;
+ }
+ *message = "Success";
+ factorize_result_ = LinearSolverTerminationType::SUCCESS;
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+LinearSolverTerminationType CUDADenseCholesky::Solve(const double* rhs,
+ double* solution,
+ std::string* message) {
+ if (factorize_result_ != LinearSolverTerminationType::SUCCESS) {
+ *message = "Factorize did not complete successfully previously.";
+ return factorize_result_;
+ }
+ rhs_.CopyFromCpu(rhs, num_cols_);
+ if (cusolverDnDpotrs(context_->cusolver_handle_,
+ CUBLAS_FILL_MODE_LOWER,
+ num_cols_,
+ 1,
+ lhs_.data(),
+ num_cols_,
+ rhs_.data(),
+ num_cols_,
+ error_.data()) != CUSOLVER_STATUS_SUCCESS) {
+ *message = "cuSolverDN::cusolverDnDpotrs failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ int error = 0;
+ error_.CopyToCpu(&error, 1);
+ if (error != 0) {
+    LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+               << "Please report it. "
+               << "cuSolverDN::cusolverDnDpotrs fatal error. "
+               << "Argument: " << -error << " is invalid.";
+ }
+ rhs_.CopyToCpu(solution, num_cols_);
+ *message = "Success";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+std::unique_ptr<CUDADenseCholesky> CUDADenseCholesky::Create(
+ const LinearSolver::Options& options) {
+ if (options.dense_linear_algebra_library_type != CUDA ||
+ options.context == nullptr || !options.context->IsCudaInitialized()) {
+ return nullptr;
+ }
+ return std::unique_ptr<CUDADenseCholesky>(
+ new CUDADenseCholesky(options.context));
+}
+
+std::unique_ptr<CUDADenseCholeskyMixedPrecision>
+CUDADenseCholeskyMixedPrecision::Create(const LinearSolver::Options& options) {
+ if (options.dense_linear_algebra_library_type != CUDA ||
+ !options.use_mixed_precision_solves || options.context == nullptr ||
+ !options.context->IsCudaInitialized()) {
+ return nullptr;
+ }
+ return std::unique_ptr<CUDADenseCholeskyMixedPrecision>(
+ new CUDADenseCholeskyMixedPrecision(
+ options.context, options.max_num_refinement_iterations));
+}
+
+LinearSolverTerminationType
+CUDADenseCholeskyMixedPrecision::CudaCholeskyFactorize(std::string* message) {
+ int device_workspace_size = 0;
+ if (cusolverDnSpotrf_bufferSize(context_->cusolver_handle_,
+ CUBLAS_FILL_MODE_LOWER,
+ num_cols_,
+ lhs_fp32_.data(),
+ num_cols_,
+ &device_workspace_size) !=
+ CUSOLVER_STATUS_SUCCESS) {
+ *message = "cuSolverDN::cusolverDnSpotrf_bufferSize failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ device_workspace_.Reserve(device_workspace_size);
+ if (cusolverDnSpotrf(context_->cusolver_handle_,
+ CUBLAS_FILL_MODE_LOWER,
+ num_cols_,
+ lhs_fp32_.data(),
+ num_cols_,
+ device_workspace_.data(),
+ device_workspace_.size(),
+ error_.data()) != CUSOLVER_STATUS_SUCCESS) {
+ *message = "cuSolverDN::cusolverDnSpotrf failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ int error = 0;
+ error_.CopyToCpu(&error, 1);
+ if (error < 0) {
+ LOG(FATAL) << "Congratulations, you found a bug in Ceres - "
+ << "please report it. "
+ << "cuSolverDN::cusolverDnSpotrf fatal error. "
+ << "Argument: " << -error << " is invalid.";
+ // The following line is unreachable, but return failure just to be
+ // pedantic, since the compiler does not know that.
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ if (error > 0) {
+ *message = StringPrintf(
+ "cuSolverDN::cusolverDnSpotrf numerical failure. "
+ "The leading minor of order %d is not positive definite.",
+ error);
+ factorize_result_ = LinearSolverTerminationType::FAILURE;
+ return LinearSolverTerminationType::FAILURE;
+ }
+ *message = "Success";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+LinearSolverTerminationType CUDADenseCholeskyMixedPrecision::CudaCholeskySolve(
+ std::string* message) {
+ CHECK_EQ(cudaMemcpyAsync(correction_fp32_.data(),
+ residual_fp32_.data(),
+ num_cols_ * sizeof(float),
+ cudaMemcpyDeviceToDevice,
+ context_->DefaultStream()),
+ cudaSuccess);
+ if (cusolverDnSpotrs(context_->cusolver_handle_,
+ CUBLAS_FILL_MODE_LOWER,
+ num_cols_,
+ 1,
+ lhs_fp32_.data(),
+ num_cols_,
+ correction_fp32_.data(),
+ num_cols_,
+ error_.data()) != CUSOLVER_STATUS_SUCCESS) {
+    *message = "cuSolverDN::cusolverDnSpotrs failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ int error = 0;
+ error_.CopyToCpu(&error, 1);
+ if (error != 0) {
+    LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+               << "Please report it. "
+               << "cuSolverDN::cusolverDnSpotrs fatal error. "
+               << "Argument: " << -error << " is invalid.";
+ }
+ *message = "Success";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+CUDADenseCholeskyMixedPrecision::CUDADenseCholeskyMixedPrecision(
+ ContextImpl* context, int max_num_refinement_iterations)
+ : context_(context),
+ lhs_fp64_{context},
+ rhs_fp64_{context},
+ lhs_fp32_{context},
+ device_workspace_{context},
+ error_(context, 1),
+ x_fp64_{context},
+ correction_fp32_{context},
+ residual_fp32_{context},
+ residual_fp64_{context},
+ max_num_refinement_iterations_(max_num_refinement_iterations) {}
+
+LinearSolverTerminationType CUDADenseCholeskyMixedPrecision::Factorize(
+ int num_cols, double* lhs, std::string* message) {
+ num_cols_ = num_cols;
+
+ // Copy fp64 version of lhs to GPU.
+ lhs_fp64_.Reserve(num_cols * num_cols);
+ lhs_fp64_.CopyFromCpu(lhs, num_cols * num_cols);
+
+ // Create an fp32 copy of lhs, lhs_fp32.
+ lhs_fp32_.Reserve(num_cols * num_cols);
+ CudaFP64ToFP32(lhs_fp64_.data(),
+ lhs_fp32_.data(),
+ num_cols * num_cols,
+ context_->DefaultStream());
+
+ // Factorize lhs_fp32.
+ factorize_result_ = CudaCholeskyFactorize(message);
+ return factorize_result_;
+}
+
+LinearSolverTerminationType CUDADenseCholeskyMixedPrecision::Solve(
+ const double* rhs, double* solution, std::string* message) {
+ // If factorization failed, return failure.
+ if (factorize_result_ != LinearSolverTerminationType::SUCCESS) {
+ *message = "Factorize did not complete successfully previously.";
+ return factorize_result_;
+ }
+
+ // Reserve memory for all arrays.
+ rhs_fp64_.Reserve(num_cols_);
+ x_fp64_.Reserve(num_cols_);
+ correction_fp32_.Reserve(num_cols_);
+ residual_fp32_.Reserve(num_cols_);
+ residual_fp64_.Reserve(num_cols_);
+
+ // Initialize x = 0.
+ CudaSetZeroFP64(x_fp64_.data(), num_cols_, context_->DefaultStream());
+
+ // Initialize residual = rhs.
+ rhs_fp64_.CopyFromCpu(rhs, num_cols_);
+ residual_fp64_.CopyFromGPUArray(rhs_fp64_.data(), num_cols_);
+
+ for (int i = 0; i <= max_num_refinement_iterations_; ++i) {
+ // Cast residual from fp64 to fp32.
+ CudaFP64ToFP32(residual_fp64_.data(),
+ residual_fp32_.data(),
+ num_cols_,
+ context_->DefaultStream());
+ // [fp32] c = lhs^-1 * residual.
+ auto result = CudaCholeskySolve(message);
+ if (result != LinearSolverTerminationType::SUCCESS) {
+ return result;
+ }
+ // [fp64] x += c.
+ CudaDsxpy(x_fp64_.data(),
+ correction_fp32_.data(),
+ num_cols_,
+ context_->DefaultStream());
+ if (i < max_num_refinement_iterations_) {
+ // [fp64] residual = rhs - lhs * x
+ // This is done in two steps:
+ // 1. [fp64] residual = rhs
+ residual_fp64_.CopyFromGPUArray(rhs_fp64_.data(), num_cols_);
+ // 2. [fp64] residual = residual - lhs * x
+ double alpha = -1.0;
+ double beta = 1.0;
+ cublasDsymv(context_->cublas_handle_,
+ CUBLAS_FILL_MODE_LOWER,
+ num_cols_,
+ &alpha,
+ lhs_fp64_.data(),
+ num_cols_,
+ x_fp64_.data(),
+ 1,
+ &beta,
+ residual_fp64_.data(),
+ 1);
+ }
+ }
+ x_fp64_.CopyToCpu(solution, num_cols_);
+ *message = "Success.";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+#endif // CERES_NO_CUDA
+
+} // namespace ceres::internal
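For reference, a minimal usage sketch of the DenseCholesky interface implemented above (illustrative only, not part of the patch; the Eigen backend and the 2x2 system are arbitrary choices for the example):

    // Solve the SPD system [[4, 1], [1, 3]] * x = [1, 2]; lhs is column major.
    LinearSolver::Options options;
    options.dense_linear_algebra_library_type = EIGEN;
    std::unique_ptr<DenseCholesky> cholesky = DenseCholesky::Create(options);
    double lhs[4] = {4.0, 1.0, 1.0, 3.0};
    double rhs[2] = {1.0, 2.0};
    double solution[2];
    std::string message;
    const auto status =
        cholesky->FactorAndSolve(2, lhs, rhs, solution, &message);
    // On LinearSolverTerminationType::SUCCESS, solution is approximately
    // (1/11, 7/11) = (0.0909, 0.6364).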
diff --git a/internal/ceres/dense_cholesky.h b/internal/ceres/dense_cholesky.h
new file mode 100644
index 0000000..04a5dd5
--- /dev/null
+++ b/internal/ceres/dense_cholesky.h
@@ -0,0 +1,308 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#ifndef CERES_INTERNAL_DENSE_CHOLESKY_H_
+#define CERES_INTERNAL_DENSE_CHOLESKY_H_
+
+// This include must come before any #ifndef check on Ceres compile options.
+// clang-format off
+#include "ceres/internal/config.h"
+// clang-format on
+
+#include <memory>
+#include <vector>
+
+#include "Eigen/Dense"
+#include "ceres/context_impl.h"
+#include "ceres/cuda_buffer.h"
+#include "ceres/linear_solver.h"
+#include "glog/logging.h"
+#ifndef CERES_NO_CUDA
+#include "ceres/context_impl.h"
+#include "cuda_runtime.h"
+#include "cusolverDn.h"
+#endif // CERES_NO_CUDA
+
+namespace ceres::internal {
+
+// An interface that abstracts away the internal details of various dense linear
+// algebra libraries and offers a simple API for solving dense symmetric
+// positive definite linear systems using a Cholesky factorization.
+class CERES_NO_EXPORT DenseCholesky {
+ public:
+ static std::unique_ptr<DenseCholesky> Create(
+ const LinearSolver::Options& options);
+
+ virtual ~DenseCholesky();
+
+ // Computes the Cholesky factorization of the given matrix.
+ //
+ // The input matrix lhs is assumed to be a column-major num_cols x num_cols
+  // matrix that is symmetric positive definite, with its lower triangular part
+  // containing the left-hand side of the linear system being solved.
+ //
+ // The input matrix lhs may be modified by the implementation to store the
+ // factorization, irrespective of whether the factorization succeeds or not.
+ // As a result it is the user's responsibility to ensure that lhs is valid
+ // when Solve is called.
+ virtual LinearSolverTerminationType Factorize(int num_cols,
+ double* lhs,
+ std::string* message) = 0;
+
+ // Computes the solution to the equation
+ //
+ // lhs * solution = rhs
+ //
+ // Calling Solve without calling Factorize is undefined behaviour. It is the
+ // user's responsibility to ensure that the input matrix lhs passed to
+ // Factorize has not been freed/modified when Solve is called.
+ virtual LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) = 0;
+
+ // Convenience method which combines a call to Factorize and Solve. Solve is
+ // only called if Factorize returns LinearSolverTerminationType::SUCCESS.
+ //
+ // The input matrix lhs may be modified by the implementation to store the
+ // factorization, irrespective of whether the method succeeds or not. It is
+ // the user's responsibility to ensure that lhs is valid if and when Solve is
+ // called again after this call.
+ LinearSolverTerminationType FactorAndSolve(int num_cols,
+ double* lhs,
+ const double* rhs,
+ double* solution,
+ std::string* message);
+};
+
+class CERES_NO_EXPORT EigenDenseCholesky final : public DenseCholesky {
+ public:
+ LinearSolverTerminationType Factorize(int num_cols,
+ double* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
+
+ private:
+ using LLTType = Eigen::LLT<Eigen::Ref<Eigen::MatrixXd>, Eigen::Lower>;
+ std::unique_ptr<LLTType> llt_;
+};
+
+class CERES_NO_EXPORT FloatEigenDenseCholesky final : public DenseCholesky {
+ public:
+ LinearSolverTerminationType Factorize(int num_cols,
+ double* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
+
+ private:
+ Eigen::MatrixXf lhs_;
+ Eigen::VectorXf rhs_;
+ Eigen::VectorXf solution_;
+ using LLTType = Eigen::LLT<Eigen::MatrixXf, Eigen::Lower>;
+ std::unique_ptr<LLTType> llt_;
+};
+
+#ifndef CERES_NO_LAPACK
+class CERES_NO_EXPORT LAPACKDenseCholesky final : public DenseCholesky {
+ public:
+ LinearSolverTerminationType Factorize(int num_cols,
+ double* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
+
+ private:
+ double* lhs_ = nullptr;
+ int num_cols_ = -1;
+ LinearSolverTerminationType termination_type_ =
+ LinearSolverTerminationType::FATAL_ERROR;
+};
+
+class CERES_NO_EXPORT FloatLAPACKDenseCholesky final : public DenseCholesky {
+ public:
+ LinearSolverTerminationType Factorize(int num_cols,
+ double* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
+
+ private:
+ Eigen::MatrixXf lhs_;
+ Eigen::VectorXf rhs_and_solution_;
+ int num_cols_ = -1;
+ LinearSolverTerminationType termination_type_ =
+ LinearSolverTerminationType::FATAL_ERROR;
+};
+#endif // CERES_NO_LAPACK
+
+class DenseIterativeRefiner;
+
+// Computes an initial solution using the given instance of
+// DenseCholesky, and then refines it using the DenseIterativeRefiner.
+class CERES_NO_EXPORT RefinedDenseCholesky final : public DenseCholesky {
+ public:
+ RefinedDenseCholesky(
+ std::unique_ptr<DenseCholesky> dense_cholesky,
+ std::unique_ptr<DenseIterativeRefiner> iterative_refiner);
+ ~RefinedDenseCholesky() override;
+
+ LinearSolverTerminationType Factorize(int num_cols,
+ double* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
+
+ private:
+ std::unique_ptr<DenseCholesky> dense_cholesky_;
+ std::unique_ptr<DenseIterativeRefiner> iterative_refiner_;
+ double* lhs_ = nullptr;
+ int num_cols_;
+};
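+//
+// A minimal construction sketch (the DenseIterativeRefiner constructor takes
+// the number of refinement iterations, as in the tests; names are assumed):
+//
+//   RefinedDenseCholesky refined_cholesky(
+//       std::make_unique<EigenDenseCholesky>(),
+//       std::make_unique<DenseIterativeRefiner>(/*max_num_iterations=*/3));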
+
+#ifndef CERES_NO_CUDA
+// CUDA implementation of DenseCholesky using the cuSolverDN library via its
+// 32-bit legacy interface for maximum compatibility.
+class CERES_NO_EXPORT CUDADenseCholesky final : public DenseCholesky {
+ public:
+ static std::unique_ptr<CUDADenseCholesky> Create(
+ const LinearSolver::Options& options);
+ CUDADenseCholesky(const CUDADenseCholesky&) = delete;
+ CUDADenseCholesky& operator=(const CUDADenseCholesky&) = delete;
+ LinearSolverTerminationType Factorize(int num_cols,
+ double* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
+
+ private:
+ explicit CUDADenseCholesky(ContextImpl* context);
+
+ ContextImpl* context_ = nullptr;
+ // Number of columns in the A matrix, to be cached between calls to *Factorize
+ // and *Solve.
+ size_t num_cols_ = 0;
+ // GPU memory allocated for the A matrix (lhs matrix).
+ CudaBuffer<double> lhs_;
+ // GPU memory allocated for the B matrix (rhs vector).
+ CudaBuffer<double> rhs_;
+ // Scratch space for cuSOLVER on the GPU.
+ CudaBuffer<double> device_workspace_;
+ // Required for error handling with cuSOLVER.
+ CudaBuffer<int> error_;
+ // Cache the result of Factorize to ensure that when Solve is called, the
+ // factorization of lhs is valid.
+ LinearSolverTerminationType factorize_result_ =
+ LinearSolverTerminationType::FATAL_ERROR;
+};
+
+// A mixed-precision iterative refinement dense Cholesky solver using FP32 CUDA
+// Dense Cholesky for inner iterations, and FP64 outer refinements.
+// This class implements a modified version of the "Classical iterative
+// refinement" (Algorithm 4.1) from the following paper:
+// Haidar, Azzam, Harun Bayraktar, Stanimire Tomov, Jack Dongarra, and Nicholas
+// J. Higham. "Mixed-precision iterative refinement using tensor cores on GPUs
+// to accelerate solution of linear systems." Proceedings of the Royal Society A
+// 476, no. 2243 (2020): 20200110.
+//
+// The two key modifications from Algorithm 4.1 in the paper are:
+// 1. We use Cholesky factorization instead of LU factorization since our A is
+// symmetric positive definite.
+// 2. During the solution update, the up-cast and accumulation is performed in
+// one step with a custom kernel.
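+//
+// In outline, after a single FP32 Cholesky factorization of lhs, each
+// refinement iteration performs (a sketch of the scheme, not the exact
+// kernels used):
+//
+//   residual   = rhs - lhs * x           (FP64)
+//   correction = solve(lhs, residual)    (FP32, reusing the factorization)
+//   x          = x + correction          (up-cast and accumulate in FP64)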
+class CERES_NO_EXPORT CUDADenseCholeskyMixedPrecision final
+ : public DenseCholesky {
+ public:
+ static std::unique_ptr<CUDADenseCholeskyMixedPrecision> Create(
+ const LinearSolver::Options& options);
+ CUDADenseCholeskyMixedPrecision(const CUDADenseCholeskyMixedPrecision&) =
+ delete;
+ CUDADenseCholeskyMixedPrecision& operator=(
+ const CUDADenseCholeskyMixedPrecision&) = delete;
+ LinearSolverTerminationType Factorize(int num_cols,
+ double* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
+
+ private:
+ CUDADenseCholeskyMixedPrecision(ContextImpl* context,
+ int max_num_refinement_iterations);
+
+ // Helper function to wrap Cuda boilerplate needed to call Spotrf.
+ LinearSolverTerminationType CudaCholeskyFactorize(std::string* message);
+ // Helper function to wrap Cuda boilerplate needed to call Spotrs.
+ LinearSolverTerminationType CudaCholeskySolve(std::string* message);
+  // Picks up the cuSolverDN handle and CUDA stream from the context in the
+  // options, as well as the number of refinement iterations. If the context
+  // is unable to initialize CUDA, returns false with a human-readable
+  // message indicating the reason.
+ bool Init(const LinearSolver::Options& options, std::string* message);
+
+ ContextImpl* context_ = nullptr;
+ // Number of columns in the A matrix, to be cached between calls to *Factorize
+ // and *Solve.
+ size_t num_cols_ = 0;
+ CudaBuffer<double> lhs_fp64_;
+ CudaBuffer<double> rhs_fp64_;
+ CudaBuffer<float> lhs_fp32_;
+ // Scratch space for cuSOLVER on the GPU.
+ CudaBuffer<float> device_workspace_;
+ // Required for error handling with cuSOLVER.
+ CudaBuffer<int> error_;
+
+ // Solution to lhs * x = rhs.
+ CudaBuffer<double> x_fp64_;
+ // Incremental correction to x.
+ CudaBuffer<float> correction_fp32_;
+  // Residual used for iterative refinement.
+ CudaBuffer<float> residual_fp32_;
+ CudaBuffer<double> residual_fp64_;
+
+ // Number of inner refinement iterations to perform.
+ int max_num_refinement_iterations_ = 0;
+ // Cache the result of Factorize to ensure that when Solve is called, the
+ // factorization of lhs is valid.
+ LinearSolverTerminationType factorize_result_ =
+ LinearSolverTerminationType::FATAL_ERROR;
+};
+
+#endif // CERES_NO_CUDA
+
+} // namespace ceres::internal
+
+#endif // CERES_INTERNAL_DENSE_CHOLESKY_H_
diff --git a/internal/ceres/dense_cholesky_test.cc b/internal/ceres/dense_cholesky_test.cc
new file mode 100644
index 0000000..1b2e42d
--- /dev/null
+++ b/internal/ceres/dense_cholesky_test.cc
@@ -0,0 +1,221 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#include "ceres/dense_cholesky.h"
+
+#include <limits>
+#include <memory>
+#include <numeric>
+#include <sstream>
+#include <string>
+#include <utility>
+#include <vector>
+
+#include "Eigen/Dense"
+#include "ceres/internal/config.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/iterative_refiner.h"
+#include "ceres/linear_solver.h"
+#include "glog/logging.h"
+#include "gmock/gmock.h"
+#include "gtest/gtest.h"
+
+namespace ceres::internal {
+
+using Param = ::testing::tuple<DenseLinearAlgebraLibraryType, bool>;
+constexpr bool kMixedPrecision = true;
+constexpr bool kFullPrecision = false;
+
+namespace {
+
+std::string ParamInfoToString(testing::TestParamInfo<Param> info) {
+ Param param = info.param;
+ std::stringstream ss;
+ ss << DenseLinearAlgebraLibraryTypeToString(::testing::get<0>(param)) << "_"
+ << (::testing::get<1>(param) ? "MixedPrecision" : "FullPrecision");
+ return ss.str();
+}
+} // namespace
+
+class DenseCholeskyTest : public ::testing::TestWithParam<Param> {};
+
+TEST_P(DenseCholeskyTest, FactorAndSolve) {
+ // TODO(sameeragarwal): Convert these tests into type parameterized tests so
+ // that we can test the single and double precision solvers.
+
+ using Scalar = double;
+ using MatrixType = Eigen::Matrix<Scalar, Eigen::Dynamic, Eigen::Dynamic>;
+ using VectorType = Eigen::Matrix<Scalar, Eigen::Dynamic, 1>;
+
+ LinearSolver::Options options;
+ ContextImpl context;
+#ifndef CERES_NO_CUDA
+ options.context = &context;
+ std::string error;
+ CHECK(context.InitCuda(&error)) << error;
+#endif // CERES_NO_CUDA
+ options.dense_linear_algebra_library_type = ::testing::get<0>(GetParam());
+ options.use_mixed_precision_solves = ::testing::get<1>(GetParam());
+ const int kNumRefinementSteps = 4;
+ if (options.use_mixed_precision_solves) {
+ options.max_num_refinement_iterations = kNumRefinementSteps;
+ }
+ auto dense_cholesky = DenseCholesky::Create(options);
+
+ const int kNumTrials = 10;
+ const int kMinNumCols = 1;
+ const int kMaxNumCols = 10;
+ for (int num_cols = kMinNumCols; num_cols < kMaxNumCols; ++num_cols) {
+ for (int trial = 0; trial < kNumTrials; ++trial) {
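+      // Build a symmetric positive definite lhs: a' * a is positive
+      // semidefinite, and adding the identity pushes it safely away from
+      // singularity.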
+ const MatrixType a = MatrixType::Random(num_cols, num_cols);
+ MatrixType lhs = a.transpose() * a;
+ lhs += VectorType::Ones(num_cols).asDiagonal();
+ Vector x = VectorType::Random(num_cols);
+ Vector rhs = lhs * x;
+ Vector actual = Vector::Random(num_cols);
+
+ LinearSolver::Summary summary;
+ summary.termination_type = dense_cholesky->FactorAndSolve(
+ num_cols, lhs.data(), rhs.data(), actual.data(), &summary.message);
+ EXPECT_EQ(summary.termination_type, LinearSolverTerminationType::SUCCESS);
+ EXPECT_NEAR((x - actual).norm() / x.norm(),
+ 0.0,
+ std::numeric_limits<double>::epsilon() * 10)
+ << "\nexpected: " << x.transpose()
+ << "\nactual : " << actual.transpose();
+ }
+ }
+}
+
+INSTANTIATE_TEST_SUITE_P(EigenCholesky,
+ DenseCholeskyTest,
+ ::testing::Combine(::testing::Values(EIGEN),
+ ::testing::Values(kMixedPrecision,
+ kFullPrecision)),
+ ParamInfoToString);
+#ifndef CERES_NO_LAPACK
+INSTANTIATE_TEST_SUITE_P(LapackCholesky,
+ DenseCholeskyTest,
+ ::testing::Combine(::testing::Values(LAPACK),
+ ::testing::Values(kMixedPrecision,
+ kFullPrecision)),
+ ParamInfoToString);
+#endif
+#ifndef CERES_NO_CUDA
+INSTANTIATE_TEST_SUITE_P(CudaCholesky,
+ DenseCholeskyTest,
+ ::testing::Combine(::testing::Values(CUDA),
+ ::testing::Values(kMixedPrecision,
+ kFullPrecision)),
+ ParamInfoToString);
+#endif
+
+class MockDenseCholesky : public DenseCholesky {
+ public:
+ MOCK_METHOD3(Factorize,
+ LinearSolverTerminationType(int num_cols,
+ double* lhs,
+ std::string* message));
+ MOCK_METHOD3(Solve,
+ LinearSolverTerminationType(const double* rhs,
+ double* solution,
+ std::string* message));
+};
+
+class MockDenseIterativeRefiner : public DenseIterativeRefiner {
+ public:
+ MockDenseIterativeRefiner() : DenseIterativeRefiner(1) {}
+ MOCK_METHOD5(Refine,
+ void(int num_cols,
+ const double* lhs,
+ const double* rhs,
+ DenseCholesky* dense_cholesky,
+ double* solution));
+};
+
+using testing::_;
+using testing::Return;
+
+TEST(RefinedDenseCholesky, Factorize) {
+ auto dense_cholesky = std::make_unique<MockDenseCholesky>();
+ auto iterative_refiner = std::make_unique<MockDenseIterativeRefiner>();
+ EXPECT_CALL(*dense_cholesky, Factorize(_, _, _))
+ .Times(1)
+ .WillRepeatedly(Return(LinearSolverTerminationType::SUCCESS));
+ EXPECT_CALL(*iterative_refiner, Refine(_, _, _, _, _)).Times(0);
+ RefinedDenseCholesky refined_dense_cholesky(std::move(dense_cholesky),
+ std::move(iterative_refiner));
+ double lhs;
+ std::string message;
+ EXPECT_EQ(refined_dense_cholesky.Factorize(1, &lhs, &message),
+ LinearSolverTerminationType::SUCCESS);
+};
+
+TEST(RefinedDenseCholesky, FactorAndSolveWithUnsuccessfulFactorization) {
+ auto dense_cholesky = std::make_unique<MockDenseCholesky>();
+ auto iterative_refiner = std::make_unique<MockDenseIterativeRefiner>();
+ EXPECT_CALL(*dense_cholesky, Factorize(_, _, _))
+ .Times(1)
+ .WillRepeatedly(Return(LinearSolverTerminationType::FAILURE));
+ EXPECT_CALL(*dense_cholesky, Solve(_, _, _)).Times(0);
+ EXPECT_CALL(*iterative_refiner, Refine(_, _, _, _, _)).Times(0);
+ RefinedDenseCholesky refined_dense_cholesky(std::move(dense_cholesky),
+ std::move(iterative_refiner));
+ double lhs;
+ std::string message;
+ double rhs;
+ double solution;
+ EXPECT_EQ(
+ refined_dense_cholesky.FactorAndSolve(1, &lhs, &rhs, &solution, &message),
+ LinearSolverTerminationType::FAILURE);
+};
+
+TEST(RefinedDenseCholesky, FactorAndSolveWithSuccess) {
+ auto dense_cholesky = std::make_unique<MockDenseCholesky>();
+ auto iterative_refiner = std::make_unique<MockDenseIterativeRefiner>();
+ EXPECT_CALL(*dense_cholesky, Factorize(_, _, _))
+ .Times(1)
+ .WillRepeatedly(Return(LinearSolverTerminationType::SUCCESS));
+ EXPECT_CALL(*dense_cholesky, Solve(_, _, _))
+ .Times(1)
+ .WillRepeatedly(Return(LinearSolverTerminationType::SUCCESS));
+ EXPECT_CALL(*iterative_refiner, Refine(_, _, _, _, _)).Times(1);
+
+ RefinedDenseCholesky refined_dense_cholesky(std::move(dense_cholesky),
+ std::move(iterative_refiner));
+ double lhs;
+ std::string message;
+ double rhs;
+ double solution;
+ EXPECT_EQ(
+ refined_dense_cholesky.FactorAndSolve(1, &lhs, &rhs, &solution, &message),
+ LinearSolverTerminationType::SUCCESS);
+};
+
+} // namespace ceres::internal
diff --git a/internal/ceres/dense_jacobian_writer.h b/internal/ceres/dense_jacobian_writer.h
index 28c60e2..d0f2c89 100644
--- a/internal/ceres/dense_jacobian_writer.h
+++ b/internal/ceres/dense_jacobian_writer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,9 +33,13 @@
#ifndef CERES_INTERNAL_DENSE_JACOBIAN_WRITER_H_
#define CERES_INTERNAL_DENSE_JACOBIAN_WRITER_H_
+#include <memory>
+
#include "ceres/casts.h"
#include "ceres/dense_sparse_matrix.h"
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
#include "ceres/parameter_block.h"
#include "ceres/program.h"
#include "ceres/residual_block.h"
@@ -44,7 +48,7 @@
namespace ceres {
namespace internal {
-class DenseJacobianWriter {
+class CERES_NO_EXPORT DenseJacobianWriter {
public:
DenseJacobianWriter(Evaluator::Options /* ignored */, Program* program)
: program_(program) {}
@@ -54,13 +58,14 @@
// Since the dense matrix has different layout than that assumed by the cost
// functions, use scratch space to store the jacobians temporarily then copy
// them over to the larger jacobian later.
- ScratchEvaluatePreparer* CreateEvaluatePreparers(int num_threads) {
+ std::unique_ptr<ScratchEvaluatePreparer[]> CreateEvaluatePreparers(
+ int num_threads) {
return ScratchEvaluatePreparer::Create(*program_, num_threads);
}
- SparseMatrix* CreateJacobian() const {
- return new DenseSparseMatrix(
- program_->NumResiduals(), program_->NumEffectiveParameters(), true);
+ std::unique_ptr<SparseMatrix> CreateJacobian() const {
+ return std::make_unique<DenseSparseMatrix>(
+ program_->NumResiduals(), program_->NumEffectiveParameters());
}
void Write(int residual_id,
@@ -70,8 +75,8 @@
DenseSparseMatrix* dense_jacobian = down_cast<DenseSparseMatrix*>(jacobian);
const ResidualBlock* residual_block =
program_->residual_blocks()[residual_id];
- int num_parameter_blocks = residual_block->NumParameterBlocks();
- int num_residuals = residual_block->NumResiduals();
+ const int num_parameter_blocks = residual_block->NumParameterBlocks();
+ const int num_residuals = residual_block->NumResiduals();
// Now copy the jacobians for each parameter into the dense jacobian matrix.
for (int j = 0; j < num_parameter_blocks; ++j) {
@@ -82,14 +87,14 @@
continue;
}
- const int parameter_block_size = parameter_block->LocalSize();
+ const int parameter_block_size = parameter_block->TangentSize();
ConstMatrixRef parameter_jacobian(
jacobians[j], num_residuals, parameter_block_size);
- dense_jacobian->mutable_matrix().block(residual_offset,
- parameter_block->delta_offset(),
- num_residuals,
- parameter_block_size) =
+ dense_jacobian->mutable_matrix()->block(residual_offset,
+ parameter_block->delta_offset(),
+ num_residuals,
+ parameter_block_size) =
parameter_jacobian;
}
}
@@ -101,4 +106,6 @@
} // namespace internal
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_INTERNAL_DENSE_JACOBIAN_WRITER_H_
diff --git a/internal/ceres/dense_linear_solver_benchmark.cc b/internal/ceres/dense_linear_solver_benchmark.cc
new file mode 100644
index 0000000..0930b7b
--- /dev/null
+++ b/internal/ceres/dense_linear_solver_benchmark.cc
@@ -0,0 +1,108 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: sameeragarwal@google.com (Sameer Agarwal)
+
+#include "Eigen/Dense"
+#include "benchmark/benchmark.h"
+#include "ceres/context_impl.h"
+#include "ceres/dense_sparse_matrix.h"
+#include "ceres/internal/config.h"
+#include "ceres/linear_solver.h"
+
+namespace ceres::internal {
+
+template <ceres::DenseLinearAlgebraLibraryType kLibraryType,
+ ceres::LinearSolverType kSolverType>
+static void BM_DenseSolver(benchmark::State& state) {
+ const int num_rows = static_cast<int>(state.range(0));
+ const int num_cols = static_cast<int>(state.range(1));
+ DenseSparseMatrix jacobian(num_rows, num_cols);
+ *jacobian.mutable_matrix() = Eigen::MatrixXd::Random(num_rows, num_cols);
+ Eigen::VectorXd rhs = Eigen::VectorXd::Random(num_rows, 1);
+
+ Eigen::VectorXd solution(num_cols);
+
+ LinearSolver::Options options;
+ options.type = kSolverType;
+ options.dense_linear_algebra_library_type = kLibraryType;
+ ContextImpl context;
+ options.context = &context;
+ auto solver = LinearSolver::Create(options);
+
+ LinearSolver::PerSolveOptions per_solve_options;
+ Eigen::VectorXd diagonal = Eigen::VectorXd::Ones(num_cols) * 100;
+ per_solve_options.D = diagonal.data();
+ for (auto _ : state) {
+ solver->Solve(&jacobian, rhs.data(), per_solve_options, solution.data());
+ }
+}
+
+// Some reasonable matrix sizes. I picked them out of thin air.
+static void MatrixSizes(benchmark::internal::Benchmark* b) {
+ // {num_rows, num_cols}
+ b->Args({1, 1});
+ b->Args({2, 1});
+ b->Args({3, 1});
+ b->Args({6, 2});
+ b->Args({10, 3});
+ b->Args({12, 4});
+ b->Args({20, 5});
+ b->Args({40, 5});
+ b->Args({100, 10});
+ b->Args({150, 15});
+ b->Args({200, 16});
+ b->Args({225, 18});
+ b->Args({300, 20});
+ b->Args({400, 20});
+ b->Args({600, 22});
+ b->Args({800, 25});
+}
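+
+// A subset of these benchmarks can be selected at run time using the standard
+// Google Benchmark flags, e.g. --benchmark_filter=BM_DenseSolver.*EIGEN.*
+// (the registered benchmark names include the template arguments and the
+// matrix sizes).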
+
+BENCHMARK_TEMPLATE2(BM_DenseSolver, ceres::EIGEN, ceres::DENSE_QR)
+ ->Apply(MatrixSizes);
+BENCHMARK_TEMPLATE2(BM_DenseSolver, ceres::EIGEN, ceres::DENSE_NORMAL_CHOLESKY)
+ ->Apply(MatrixSizes);
+
+#ifndef CERES_NO_LAPACK
+BENCHMARK_TEMPLATE2(BM_DenseSolver, ceres::LAPACK, ceres::DENSE_QR)
+ ->Apply(MatrixSizes);
+BENCHMARK_TEMPLATE2(BM_DenseSolver, ceres::LAPACK, ceres::DENSE_NORMAL_CHOLESKY)
+ ->Apply(MatrixSizes);
+#endif // CERES_NO_LAPACK
+
+#ifndef CERES_NO_CUDA
+BENCHMARK_TEMPLATE2(BM_DenseSolver, ceres::CUDA, ceres::DENSE_NORMAL_CHOLESKY)
+ ->Apply(MatrixSizes);
+BENCHMARK_TEMPLATE2(BM_DenseSolver, ceres::CUDA, ceres::DENSE_QR)
+ ->Apply(MatrixSizes);
+#endif // CERES_NO_CUDA
+
+} // namespace ceres::internal
+
+BENCHMARK_MAIN();
diff --git a/internal/ceres/dense_linear_solver_test.cc b/internal/ceres/dense_linear_solver_test.cc
index 3929a6f..79d2543 100644
--- a/internal/ceres/dense_linear_solver_test.cc
+++ b/internal/ceres/dense_linear_solver_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,6 +32,7 @@
#include "ceres/casts.h"
#include "ceres/context_impl.h"
+#include "ceres/internal/config.h"
#include "ceres/linear_least_squares_problems.h"
#include "ceres/linear_solver.h"
#include "ceres/triplet_sparse_matrix.h"
@@ -39,12 +40,10 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-typedef ::testing::
- tuple<LinearSolverType, DenseLinearAlgebraLibraryType, bool, int>
- Param;
+using Param = ::testing::
+ tuple<LinearSolverType, DenseLinearAlgebraLibraryType, bool, int>;
static std::string ParamInfoToString(testing::TestParamInfo<Param> info) {
Param param = info.param;
@@ -62,8 +61,8 @@
Param param = GetParam();
const bool regularized = testing::get<2>(param);
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(testing::get<3>(param)));
+ std::unique_ptr<LinearLeastSquaresProblem> problem =
+ CreateLinearLeastSquaresProblemFromId(testing::get<3>(param));
DenseSparseMatrix lhs(*down_cast<TripletSparseMatrix*>(problem->A.get()));
const int num_cols = lhs.num_cols();
@@ -87,25 +86,25 @@
Vector solution(num_cols);
LinearSolver::Summary summary =
solver->Solve(&lhs, rhs.data(), per_solve_options, solution.data());
- EXPECT_EQ(summary.termination_type, LINEAR_SOLVER_SUCCESS);
+ EXPECT_EQ(summary.termination_type, LinearSolverTerminationType::SUCCESS);
- // If solving for the regularized solution, add the diagonal to the
- // matrix. This makes subsequent computations simpler.
- if (testing::get<2>(param)) {
- lhs.AppendDiagonal(problem->D.get());
- };
+ Vector normal_rhs = lhs.matrix().transpose() * rhs.head(num_rows);
+ Matrix normal_lhs = lhs.matrix().transpose() * lhs.matrix();
- Vector tmp = Vector::Zero(num_rows + num_cols);
- lhs.RightMultiply(solution.data(), tmp.data());
- Vector actual_normal_rhs = Vector::Zero(num_cols);
- lhs.LeftMultiply(tmp.data(), actual_normal_rhs.data());
+ if (regularized) {
+ ConstVectorRef diagonal(problem->D.get(), num_cols);
+ normal_lhs += diagonal.array().square().matrix().asDiagonal();
+ }
- Vector expected_normal_rhs = Vector::Zero(num_cols);
- lhs.LeftMultiply(rhs.data(), expected_normal_rhs.data());
- const double residual = (expected_normal_rhs - actual_normal_rhs).norm() /
- expected_normal_rhs.norm();
+ Vector actual_normal_rhs = normal_lhs * solution;
- EXPECT_NEAR(residual, 0.0, 10 * std::numeric_limits<double>::epsilon());
+ const double normalized_residual =
+ (normal_rhs - actual_normal_rhs).norm() / normal_rhs.norm();
+
+ EXPECT_NEAR(
+ normalized_residual, 0.0, 10 * std::numeric_limits<double>::epsilon())
+ << "\nexpected: " << normal_rhs.transpose()
+ << "\nactual: " << actual_normal_rhs.transpose();
}
namespace {
@@ -136,5 +135,4 @@
#endif
} // namespace
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/dense_normal_cholesky_solver.cc b/internal/ceres/dense_normal_cholesky_solver.cc
index 51c6390..f6d5e5a 100644
--- a/internal/ceres/dense_normal_cholesky_solver.cc
+++ b/internal/ceres/dense_normal_cholesky_solver.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,47 +30,32 @@
#include "ceres/dense_normal_cholesky_solver.h"
-#include <cstddef>
+#include <utility>
#include "Eigen/Dense"
-#include "ceres/blas.h"
#include "ceres/dense_sparse_matrix.h"
#include "ceres/internal/eigen.h"
-#include "ceres/lapack.h"
#include "ceres/linear_solver.h"
#include "ceres/types.h"
#include "ceres/wall_time.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
DenseNormalCholeskySolver::DenseNormalCholeskySolver(
- const LinearSolver::Options& options)
- : options_(options) {}
+ LinearSolver::Options options)
+ : options_(std::move(options)),
+ cholesky_(DenseCholesky::Create(options_)) {}
LinearSolver::Summary DenseNormalCholeskySolver::SolveImpl(
DenseSparseMatrix* A,
const double* b,
const LinearSolver::PerSolveOptions& per_solve_options,
double* x) {
- if (options_.dense_linear_algebra_library_type == EIGEN) {
- return SolveUsingEigen(A, b, per_solve_options, x);
- } else {
- return SolveUsingLAPACK(A, b, per_solve_options, x);
- }
-}
-
-LinearSolver::Summary DenseNormalCholeskySolver::SolveUsingEigen(
- DenseSparseMatrix* A,
- const double* b,
- const LinearSolver::PerSolveOptions& per_solve_options,
- double* x) {
EventLogger event_logger("DenseNormalCholeskySolver::Solve");
const int num_rows = A->num_rows();
const int num_cols = A->num_cols();
- ConstColMajorMatrixRef Aref = A->matrix();
Matrix lhs(num_cols, num_cols);
lhs.setZero();
@@ -81,12 +66,12 @@
   // Using rankUpdate instead of GEMM exposes the fact that it's the
// same matrix being multiplied with itself and that the product is
// symmetric.
- lhs.selfadjointView<Eigen::Upper>().rankUpdate(Aref.transpose());
+ lhs.selfadjointView<Eigen::Upper>().rankUpdate(A->matrix().transpose());
// rhs = A'b
- Vector rhs = Aref.transpose() * ConstVectorRef(b, num_rows);
+ Vector rhs = A->matrix().transpose() * ConstVectorRef(b, num_rows);
- if (per_solve_options.D != NULL) {
+ if (per_solve_options.D != nullptr) {
ConstVectorRef D(per_solve_options.D, num_cols);
lhs += D.array().square().matrix().asDiagonal();
}
@@ -94,64 +79,11 @@
LinearSolver::Summary summary;
summary.num_iterations = 1;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
- Eigen::LLT<Matrix, Eigen::Upper> llt =
- lhs.selfadjointView<Eigen::Upper>().llt();
+ summary.termination_type = cholesky_->FactorAndSolve(
+ num_cols, lhs.data(), rhs.data(), x, &summary.message);
+ event_logger.AddEvent("FactorAndSolve");
- if (llt.info() != Eigen::Success) {
- summary.termination_type = LINEAR_SOLVER_FAILURE;
- summary.message = "Eigen LLT decomposition failed.";
- } else {
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
- summary.message = "Success.";
- }
-
- VectorRef(x, num_cols) = llt.solve(rhs);
- event_logger.AddEvent("Solve");
return summary;
}
-LinearSolver::Summary DenseNormalCholeskySolver::SolveUsingLAPACK(
- DenseSparseMatrix* A,
- const double* b,
- const LinearSolver::PerSolveOptions& per_solve_options,
- double* x) {
- EventLogger event_logger("DenseNormalCholeskySolver::Solve");
-
- if (per_solve_options.D != NULL) {
- // Temporarily append a diagonal block to the A matrix, but undo
- // it before returning the matrix to the user.
- A->AppendDiagonal(per_solve_options.D);
- }
-
- const int num_cols = A->num_cols();
- Matrix lhs(num_cols, num_cols);
- event_logger.AddEvent("Setup");
-
- // lhs = A'A
- //
- // Note: This is a bit delicate, it assumes that the stride on this
- // matrix is the same as the number of rows.
- BLAS::SymmetricRankKUpdate(
- A->num_rows(), num_cols, A->values(), true, 1.0, 0.0, lhs.data());
-
- if (per_solve_options.D != NULL) {
- // Undo the modifications to the matrix A.
- A->RemoveDiagonal();
- }
-
- // TODO(sameeragarwal): Replace this with a gemv call for true blasness.
- // rhs = A'b
- VectorRef(x, num_cols) =
- A->matrix().transpose() * ConstVectorRef(b, A->num_rows());
- event_logger.AddEvent("Product");
-
- LinearSolver::Summary summary;
- summary.num_iterations = 1;
- summary.termination_type = LAPACK::SolveInPlaceUsingCholesky(
- num_cols, lhs.data(), x, &summary.message);
- event_logger.AddEvent("Solve");
- return summary;
-}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/dense_normal_cholesky_solver.h b/internal/ceres/dense_normal_cholesky_solver.h
index 68ea611..c6aa2af 100644
--- a/internal/ceres/dense_normal_cholesky_solver.h
+++ b/internal/ceres/dense_normal_cholesky_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,10 +34,14 @@
#ifndef CERES_INTERNAL_DENSE_NORMAL_CHOLESKY_SOLVER_H_
#define CERES_INTERNAL_DENSE_NORMAL_CHOLESKY_SOLVER_H_
+#include <memory>
+
+#include "ceres/dense_cholesky.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class DenseSparseMatrix;
@@ -73,9 +77,10 @@
// library. This solver always returns a solution, it is the user's
// responsibility to judge if the solution is good enough for their
// purposes.
-class DenseNormalCholeskySolver : public DenseSparseMatrixSolver {
+class CERES_NO_EXPORT DenseNormalCholeskySolver
+ : public DenseSparseMatrixSolver {
public:
- explicit DenseNormalCholeskySolver(const LinearSolver::Options& options);
+ explicit DenseNormalCholeskySolver(LinearSolver::Options options);
private:
LinearSolver::Summary SolveImpl(
@@ -84,22 +89,12 @@
const LinearSolver::PerSolveOptions& per_solve_options,
double* x) final;
- LinearSolver::Summary SolveUsingLAPACK(
- DenseSparseMatrix* A,
- const double* b,
- const LinearSolver::PerSolveOptions& per_solve_options,
- double* x);
-
- LinearSolver::Summary SolveUsingEigen(
- DenseSparseMatrix* A,
- const double* b,
- const LinearSolver::PerSolveOptions& per_solve_options,
- double* x);
-
const LinearSolver::Options options_;
+ std::unique_ptr<DenseCholesky> cholesky_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_DENSE_NORMAL_CHOLESKY_SOLVER_H_
diff --git a/internal/ceres/dense_qr.cc b/internal/ceres/dense_qr.cc
new file mode 100644
index 0000000..fbbcadc
--- /dev/null
+++ b/internal/ceres/dense_qr.cc
@@ -0,0 +1,456 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#include "ceres/dense_qr.h"
+
+#include <algorithm>
+#include <memory>
+#include <string>
+
+#ifndef CERES_NO_CUDA
+#include "ceres/context_impl.h"
+#include "cublas_v2.h"
+#include "cusolverDn.h"
+#endif // CERES_NO_CUDA
+
+#ifndef CERES_NO_LAPACK
+
+// LAPACK routines for solving a linear least squares problem using QR
+// factorization. This is done in three stages:
+//
+// A * x = b
+// Q * R * x = b (dgeqrf)
+// R * x = Q' * b (dormqr)
+// x = R^{-1} * Q'* b (dtrtrs)
+
+// clang-format off
+
+// Compute the QR factorization of a.
+//
+// a is an m x n column major matrix (Denoted by "A" in the above description)
+// lda is the leading dimension of a. lda >= max(1, num_rows)
+// tau is an array of size min(m,n). It contains the scalar factors of the
+// elementary reflectors.
+// work is an array of size max(1, lwork). On exit, if info = 0, work[0]
+// contains the optimal value of lwork.
+//
+// If lwork >= 1, it is the size of work. If lwork = -1, a workspace query is
+// assumed: dgeqrf computes the optimal size of the work array and returns it
+// as work[0].
+//
+// info = 0, successful exit.
+// info < 0, if info = -i, then the i^th argument had illegal value.
+extern "C" void dgeqrf_(const int* m, const int* n, double* a, const int* lda,
+ double* tau, double* work, const int* lwork, int* info);
+
+// Apply Q or Q' to b.
+//
+// b is a m times n column major matrix.
+// side = 'L' applies Q or Q' on the left, side = 'R' applies Q or Q' on the right.
+// trans = 'N', applies Q, trans = 'T', applies Q'.
+// k is the number of elementary reflectors whose product defines the matrix Q.
+// If side = 'L', m >= k >= 0 and if side = 'R', n >= k >= 0.
+// a is an lda x k column major matrix containing the reflectors as returned by dgeqrf.
+// ldb is the leading dimension of b.
+// work is an array of size max(1, lwork)
+// lwork if positive is the size of work. If lwork = -1, then a
+// workspace query is assumed.
+//
+// info = 0, successful exit.
+// info < 0, if info = -i, then the i^th argument had illegal value.
+extern "C" void dormqr_(const char* side, const char* trans, const int* m,
+                        const int* n, const int* k, double* a, const int* lda,
+ double* tau, double* b, const int* ldb, double* work,
+ const int* lwork, int* info);
+
+// Solve a triangular system of the form A * x = b
+//
+// uplo = 'U', A is upper triangular. uplo = 'L' is lower triangular.
+// trans = 'N', 'T', 'C' specifies the form - A, A^T, A^H.
+// diag = 'N', A is not unit triangular. 'U' is unit triangular.
+// n is the order of the matrix A.
+// nrhs number of columns of b.
+// a is a column major lda x n.
+// b is a column major matrix of ldb x nrhs
+//
+// info = 0 successful.
+// = -i < 0 i^th argument is an illegal value.
+// = i > 0, i^th diagonal element of A is zero.
+extern "C" void dtrtrs_(const char* uplo, const char* trans, const char* diag,
+ const int* n, const int* nrhs, double* a, const int* lda,
+ double* b, const int* ldb, int* info);
+// clang-format on
+
+#endif
+
+namespace ceres::internal {
+
+DenseQR::~DenseQR() = default;
+
+std::unique_ptr<DenseQR> DenseQR::Create(const LinearSolver::Options& options) {
+ std::unique_ptr<DenseQR> dense_qr;
+
+ switch (options.dense_linear_algebra_library_type) {
+ case EIGEN:
+ dense_qr = std::make_unique<EigenDenseQR>();
+ break;
+
+ case LAPACK:
+#ifndef CERES_NO_LAPACK
+ dense_qr = std::make_unique<LAPACKDenseQR>();
+ break;
+#else
+ LOG(FATAL) << "Ceres was compiled without support for LAPACK.";
+#endif
+
+ case CUDA:
+#ifndef CERES_NO_CUDA
+ dense_qr = CUDADenseQR::Create(options);
+ break;
+#else
+ LOG(FATAL) << "Ceres was compiled without support for CUDA.";
+#endif
+
+ default:
+ LOG(FATAL) << "Unknown dense linear algebra library type : "
+ << DenseLinearAlgebraLibraryTypeToString(
+ options.dense_linear_algebra_library_type);
+ }
+ return dense_qr;
+}
+
+LinearSolverTerminationType DenseQR::FactorAndSolve(int num_rows,
+ int num_cols,
+ double* lhs,
+ const double* rhs,
+ double* solution,
+ std::string* message) {
+ LinearSolverTerminationType termination_type =
+ Factorize(num_rows, num_cols, lhs, message);
+ if (termination_type == LinearSolverTerminationType::SUCCESS) {
+ termination_type = Solve(rhs, solution, message);
+ }
+ return termination_type;
+}
+
+LinearSolverTerminationType EigenDenseQR::Factorize(int num_rows,
+ int num_cols,
+ double* lhs,
+ std::string* message) {
+ Eigen::Map<ColMajorMatrix> m(lhs, num_rows, num_cols);
+ qr_ = std::make_unique<QRType>(m);
+ *message = "Success.";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+LinearSolverTerminationType EigenDenseQR::Solve(const double* rhs,
+ double* solution,
+ std::string* message) {
+ VectorRef(solution, qr_->cols()) =
+ qr_->solve(ConstVectorRef(rhs, qr_->rows()));
+ *message = "Success.";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+#ifndef CERES_NO_LAPACK
+LinearSolverTerminationType LAPACKDenseQR::Factorize(int num_rows,
+ int num_cols,
+ double* lhs,
+ std::string* message) {
+ int lwork = -1;
+ double work_size;
+ int info = 0;
+
+ // Compute the size of the temporary workspace needed to compute the QR
+ // factorization in the dgeqrf call below.
+ dgeqrf_(&num_rows,
+ &num_cols,
+ lhs_,
+ &num_rows,
+ tau_.data(),
+ &work_size,
+ &lwork,
+ &info);
+ if (info < 0) {
+    LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+               << "Please report it. "
+               << "LAPACK::dgeqrf fatal error. "
+               << "Argument: " << -info << " is invalid.";
+ }
+
+ lhs_ = lhs;
+ num_rows_ = num_rows;
+ num_cols_ = num_cols;
+
+ lwork = static_cast<int>(work_size);
+
+ if (work_.size() < lwork) {
+ work_.resize(lwork);
+ }
+ if (tau_.size() < num_cols) {
+ tau_.resize(num_cols);
+ }
+
+ if (q_transpose_rhs_.size() < num_rows) {
+ q_transpose_rhs_.resize(num_rows);
+ }
+
+ // Factorize the lhs_ using the workspace that we just constructed above.
+ dgeqrf_(&num_rows,
+ &num_cols,
+ lhs_,
+ &num_rows,
+ tau_.data(),
+ work_.data(),
+ &lwork,
+ &info);
+
+ if (info < 0) {
+    LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+               << "Please report it. dgeqrf fatal error. "
+               << "Argument: " << -info << " is invalid.";
+ }
+
+ termination_type_ = LinearSolverTerminationType::SUCCESS;
+ *message = "Success.";
+ return termination_type_;
+}
+
+LinearSolverTerminationType LAPACKDenseQR::Solve(const double* rhs,
+ double* solution,
+ std::string* message) {
+ if (termination_type_ != LinearSolverTerminationType::SUCCESS) {
+ *message = "QR factorization failed and solve called.";
+ return termination_type_;
+ }
+
+ std::copy_n(rhs, num_rows_, q_transpose_rhs_.data());
+
+ const char side = 'L';
+ char trans = 'T';
+ const int num_c_cols = 1;
+ const int lwork = work_.size();
+ int info = 0;
+ dormqr_(&side,
+ &trans,
+ &num_rows_,
+ &num_c_cols,
+ &num_cols_,
+ lhs_,
+ &num_rows_,
+ tau_.data(),
+ q_transpose_rhs_.data(),
+ &num_rows_,
+ work_.data(),
+ &lwork,
+ &info);
+ if (info < 0) {
+    LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+               << "Please report it. dormqr fatal error. "
+               << "Argument: " << -info << " is invalid.";
+ }
+
+ const char uplo = 'U';
+ trans = 'N';
+ const char diag = 'N';
+ dtrtrs_(&uplo,
+ &trans,
+ &diag,
+ &num_cols_,
+ &num_c_cols,
+ lhs_,
+ &num_rows_,
+ q_transpose_rhs_.data(),
+ &num_rows_,
+ &info);
+
+ if (info < 0) {
+    LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+               << "Please report it. dtrtrs fatal error. "
+               << "Argument: " << -info << " is invalid.";
+ } else if (info > 0) {
+ *message =
+ "QR factorization failure. The factorization is not full rank. R has "
+ "zeros on the diagonal.";
+ termination_type_ = LinearSolverTerminationType::FAILURE;
+ } else {
+ std::copy_n(q_transpose_rhs_.data(), num_cols_, solution);
+ termination_type_ = LinearSolverTerminationType::SUCCESS;
+ }
+
+ return termination_type_;
+}
+
+#endif // CERES_NO_LAPACK
+
+#ifndef CERES_NO_CUDA
+
+CUDADenseQR::CUDADenseQR(ContextImpl* context)
+ : context_(context),
+ lhs_{context},
+ rhs_{context},
+ tau_{context},
+ device_workspace_{context},
+ error_(context, 1) {}
+
+LinearSolverTerminationType CUDADenseQR::Factorize(int num_rows,
+ int num_cols,
+ double* lhs,
+ std::string* message) {
+ factorize_result_ = LinearSolverTerminationType::FATAL_ERROR;
+ lhs_.Reserve(num_rows * num_cols);
+ tau_.Reserve(std::min(num_rows, num_cols));
+ num_rows_ = num_rows;
+ num_cols_ = num_cols;
+ lhs_.CopyFromCpu(lhs, num_rows * num_cols);
+ int device_workspace_size = 0;
+ if (cusolverDnDgeqrf_bufferSize(context_->cusolver_handle_,
+ num_rows,
+ num_cols,
+ lhs_.data(),
+ num_rows,
+ &device_workspace_size) !=
+ CUSOLVER_STATUS_SUCCESS) {
+ *message = "cuSolverDN::cusolverDnDgeqrf_bufferSize failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ device_workspace_.Reserve(device_workspace_size);
+ if (cusolverDnDgeqrf(context_->cusolver_handle_,
+ num_rows,
+ num_cols,
+ lhs_.data(),
+ num_rows,
+ tau_.data(),
+ reinterpret_cast<double*>(device_workspace_.data()),
+ device_workspace_.size(),
+ error_.data()) != CUSOLVER_STATUS_SUCCESS) {
+ *message = "cuSolverDN::cusolverDnDgeqrf failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ int error = 0;
+ error_.CopyToCpu(&error, 1);
+ if (error < 0) {
+ LOG(FATAL) << "Congratulations, you found a bug in Ceres - "
+ << "please report it. "
+ << "cuSolverDN::cusolverDnDgeqrf fatal error. "
+ << "Argument: " << -error << " is invalid.";
+ // The following line is unreachable, but return failure just to be
+ // pedantic, since the compiler does not know that.
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+
+ *message = "Success";
+ factorize_result_ = LinearSolverTerminationType::SUCCESS;
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+LinearSolverTerminationType CUDADenseQR::Solve(const double* rhs,
+ double* solution,
+ std::string* message) {
+ if (factorize_result_ != LinearSolverTerminationType::SUCCESS) {
+ *message = "Factorize did not complete successfully previously.";
+ return factorize_result_;
+ }
+ rhs_.CopyFromCpu(rhs, num_rows_);
+ int device_workspace_size = 0;
+ if (cusolverDnDormqr_bufferSize(context_->cusolver_handle_,
+ CUBLAS_SIDE_LEFT,
+ CUBLAS_OP_T,
+ num_rows_,
+ 1,
+ num_cols_,
+ lhs_.data(),
+ num_rows_,
+ tau_.data(),
+ rhs_.data(),
+ num_rows_,
+ &device_workspace_size) !=
+ CUSOLVER_STATUS_SUCCESS) {
+ *message = "cuSolverDN::cusolverDnDormqr_bufferSize failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ device_workspace_.Reserve(device_workspace_size);
+ // Compute rhs = Q^T * rhs, assuming that lhs has already been factorized.
+ // The result of factorization would have stored Q in a packed form in lhs_.
+ if (cusolverDnDormqr(context_->cusolver_handle_,
+ CUBLAS_SIDE_LEFT,
+ CUBLAS_OP_T,
+ num_rows_,
+ 1,
+ num_cols_,
+ lhs_.data(),
+ num_rows_,
+ tau_.data(),
+ rhs_.data(),
+ num_rows_,
+ reinterpret_cast<double*>(device_workspace_.data()),
+ device_workspace_.size(),
+ error_.data()) != CUSOLVER_STATUS_SUCCESS) {
+ *message = "cuSolverDN::cusolverDnDormqr failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ int error = 0;
+ error_.CopyToCpu(&error, 1);
+ if (error < 0) {
+ LOG(FATAL) << "Congratulations, you found a bug in Ceres. "
+               << "Please report it. "
+ << "cuSolverDN::cusolverDnDormqr fatal error. "
+ << "Argument: " << -error << " is invalid.";
+ }
+ // Compute the solution vector as x = R \ (Q^T * rhs). Since the previous step
+ // replaced rhs by (Q^T * rhs), this is just x = R \ rhs.
+ if (cublasDtrsv(context_->cublas_handle_,
+ CUBLAS_FILL_MODE_UPPER,
+ CUBLAS_OP_N,
+ CUBLAS_DIAG_NON_UNIT,
+ num_cols_,
+ lhs_.data(),
+ num_rows_,
+ rhs_.data(),
+ 1) != CUBLAS_STATUS_SUCCESS) {
+ *message = "cuBLAS::cublasDtrsv failed.";
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+ rhs_.CopyToCpu(solution, num_cols_);
+ *message = "Success";
+ return LinearSolverTerminationType::SUCCESS;
+}
+
+std::unique_ptr<CUDADenseQR> CUDADenseQR::Create(
+ const LinearSolver::Options& options) {
+ if (options.dense_linear_algebra_library_type != CUDA ||
+ options.context == nullptr || !options.context->IsCudaInitialized()) {
+ return nullptr;
+ }
+ return std::unique_ptr<CUDADenseQR>(new CUDADenseQR(options.context));
+}
+
+#endif // CERES_NO_CUDA
+
+} // namespace ceres::internal
diff --git a/internal/ceres/dense_qr.h b/internal/ceres/dense_qr.h
new file mode 100644
index 0000000..0ba17c4
--- /dev/null
+++ b/internal/ceres/dense_qr.h
@@ -0,0 +1,199 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#ifndef CERES_INTERNAL_DENSE_QR_H_
+#define CERES_INTERNAL_DENSE_QR_H_
+
+// This include must come before any #ifndef check on Ceres compile options.
+// clang-format off
+#include "ceres/internal/config.h"
+// clang-format on
+
+#include <memory>
+#include <vector>
+
+#include "Eigen/Dense"
+#include "ceres/context_impl.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
+#include "ceres/linear_solver.h"
+#include "glog/logging.h"
+
+#ifndef CERES_NO_CUDA
+#include "ceres/context_impl.h"
+#include "ceres/cuda_buffer.h"
+#include "cublas_v2.h"
+#include "cuda_runtime.h"
+#include "cusolverDn.h"
+#endif // CERES_NO_CUDA
+
+namespace ceres::internal {
+
+// An interface that abstracts away the internal details of various dense linear
+// algebra libraries and offers a simple API for solving dense linear systems
+// using a QR factorization.
+class CERES_NO_EXPORT DenseQR {
+ public:
+ static std::unique_ptr<DenseQR> Create(const LinearSolver::Options& options);
+
+ virtual ~DenseQR();
+
+ // Computes the QR factorization of the given matrix.
+ //
+ // The input matrix lhs is assumed to be a column-major num_rows x num_cols
+ // matrix.
+ //
+ // The input matrix lhs may be modified by the implementation to store the
+ // factorization, irrespective of whether the factorization succeeds or not.
+ // As a result it is the user's responsibility to ensure that lhs is valid
+ // when Solve is called.
+ virtual LinearSolverTerminationType Factorize(int num_rows,
+ int num_cols,
+ double* lhs,
+ std::string* message) = 0;
+
+ // Computes the solution to the equation
+ //
+ // lhs * solution = rhs
+ //
+ // Calling Solve without calling Factorize is undefined behaviour. It is the
+ // user's responsibility to ensure that the input matrix lhs passed to
+ // Factorize has not been freed/modified when Solve is called.
+ virtual LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) = 0;
+
+ // Convenience method which combines a call to Factorize and Solve. Solve is
+ // only called if Factorize returns LinearSolverTerminationType::SUCCESS.
+ //
+ // The input matrix lhs may be modified by the implementation to store the
+ // factorization, irrespective of whether the method succeeds or not. It is
+ // the user's responsibility to ensure that lhs is valid if and when Solve is
+ // called again after this call.
+ LinearSolverTerminationType FactorAndSolve(int num_rows,
+ int num_cols,
+ double* lhs,
+ const double* rhs,
+ double* solution,
+ std::string* message);
+};
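+//
+// A minimal usage sketch (error handling elided; options, num_rows, num_cols,
+// lhs, rhs and solution are caller-provided):
+//
+//   auto dense_qr = DenseQR::Create(options);
+//   std::string message;
+//   LinearSolverTerminationType status =
+//       dense_qr->FactorAndSolve(num_rows, num_cols, lhs.data(), rhs.data(),
+//                                solution.data(), &message);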
+
+class CERES_NO_EXPORT EigenDenseQR final : public DenseQR {
+ public:
+ LinearSolverTerminationType Factorize(int num_rows,
+ int num_cols,
+ double* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
+
+ private:
+ using QRType = Eigen::HouseholderQR<Eigen::Ref<ColMajorMatrix>>;
+ std::unique_ptr<QRType> qr_;
+};
+
+#ifndef CERES_NO_LAPACK
+class CERES_NO_EXPORT LAPACKDenseQR final : public DenseQR {
+ public:
+ LinearSolverTerminationType Factorize(int num_rows,
+ int num_cols,
+ double* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
+
+ private:
+ double* lhs_ = nullptr;
+ int num_rows_;
+ int num_cols_;
+ LinearSolverTerminationType termination_type_ =
+ LinearSolverTerminationType::FATAL_ERROR;
+ Vector work_;
+ Vector tau_;
+ Vector q_transpose_rhs_;
+};
+#endif // CERES_NO_LAPACK
+
+#ifndef CERES_NO_CUDA
+// Implementation of DenseQR using the 32-bit cuSolverDn interface. A
+// requirement for using this solver is that the lhs must not be rank deficient.
+// This is because cuSolverDn does not implement the singularity-checking
+// wrapper trtrs, hence this solver directly uses trsv from CUBLAS for the
+// backsubstitution.
+class CERES_NO_EXPORT CUDADenseQR final : public DenseQR {
+ public:
+ static std::unique_ptr<CUDADenseQR> Create(
+ const LinearSolver::Options& options);
+ CUDADenseQR(const CUDADenseQR&) = delete;
+ CUDADenseQR& operator=(const CUDADenseQR&) = delete;
+ LinearSolverTerminationType Factorize(int num_rows,
+ int num_cols,
+ double* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
+
+ private:
+ explicit CUDADenseQR(ContextImpl* context);
+
+ ContextImpl* context_ = nullptr;
+  // Number of rows in the A matrix, to be cached between calls to *Factorize
+ // and *Solve.
+ size_t num_rows_ = 0;
+ // Number of columns in the A matrix, to be cached between calls to *Factorize
+ // and *Solve.
+ size_t num_cols_ = 0;
+ // GPU memory allocated for the A matrix (lhs matrix).
+ CudaBuffer<double> lhs_;
+ // GPU memory allocated for the B matrix (rhs vector).
+ CudaBuffer<double> rhs_;
+  // GPU memory allocated for the tau vector (Householder reflector scales).
+ CudaBuffer<double> tau_;
+ // Scratch space for cuSOLVER on the GPU.
+ CudaBuffer<double> device_workspace_;
+ // Required for error handling with cuSOLVER.
+ CudaBuffer<int> error_;
+ // Cache the result of Factorize to ensure that when Solve is called, the
+  // factorization of lhs is valid.
+ LinearSolverTerminationType factorize_result_ =
+ LinearSolverTerminationType::FATAL_ERROR;
+};
+
+#endif // CERES_NO_CUDA
+
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
+
+#endif // CERES_INTERNAL_DENSE_QR_H_
diff --git a/internal/ceres/dense_qr_solver.cc b/internal/ceres/dense_qr_solver.cc
index 44388f3..92652b4 100644
--- a/internal/ceres/dense_qr_solver.cc
+++ b/internal/ceres/dense_qr_solver.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,137 +33,51 @@
#include <cstddef>
#include "Eigen/Dense"
+#include "ceres/dense_qr.h"
#include "ceres/dense_sparse_matrix.h"
#include "ceres/internal/eigen.h"
-#include "ceres/lapack.h"
#include "ceres/linear_solver.h"
#include "ceres/types.h"
#include "ceres/wall_time.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
DenseQRSolver::DenseQRSolver(const LinearSolver::Options& options)
- : options_(options) {
- work_.resize(1);
-}
+ : options_(options), dense_qr_(DenseQR::Create(options)) {}
LinearSolver::Summary DenseQRSolver::SolveImpl(
DenseSparseMatrix* A,
const double* b,
const LinearSolver::PerSolveOptions& per_solve_options,
double* x) {
- if (options_.dense_linear_algebra_library_type == EIGEN) {
- return SolveUsingEigen(A, b, per_solve_options, x);
- } else {
- return SolveUsingLAPACK(A, b, per_solve_options, x);
- }
-}
-
-LinearSolver::Summary DenseQRSolver::SolveUsingLAPACK(
- DenseSparseMatrix* A,
- const double* b,
- const LinearSolver::PerSolveOptions& per_solve_options,
- double* x) {
EventLogger event_logger("DenseQRSolver::Solve");
const int num_rows = A->num_rows();
const int num_cols = A->num_cols();
+ const int num_augmented_rows =
+ num_rows + ((per_solve_options.D != nullptr) ? num_cols : 0);
- if (per_solve_options.D != NULL) {
- // Temporarily append a diagonal block to the A matrix, but undo
- // it before returning the matrix to the user.
- A->AppendDiagonal(per_solve_options.D);
+ if (lhs_.rows() != num_augmented_rows || lhs_.cols() != num_cols) {
+ lhs_.resize(num_augmented_rows, num_cols);
+ rhs_.resize(num_augmented_rows);
}
- // TODO(sameeragarwal): Since we are copying anyways, the diagonal
- // can be appended to the matrix instead of doing it on A.
- lhs_ = A->matrix();
-
- if (per_solve_options.D != NULL) {
- // Undo the modifications to the matrix A.
- A->RemoveDiagonal();
- }
-
- // rhs = [b;0] to account for the additional rows in the lhs.
- if (rhs_.rows() != lhs_.rows()) {
- rhs_.resize(lhs_.rows());
- }
- rhs_.setZero();
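+  // When a diagonal regularizer D is present, solve the augmented least
+  // squares problem min || [A; D] x - [b; 0] ||^2 by stacking D below A and
+  // padding the rhs with zeros.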
+ lhs_.topRows(num_rows) = A->matrix();
rhs_.head(num_rows) = ConstVectorRef(b, num_rows);
- if (work_.rows() == 1) {
- const int work_size =
- LAPACK::EstimateWorkSizeForQR(lhs_.rows(), lhs_.cols());
- VLOG(3) << "Working memory for Dense QR factorization: "
- << work_size * sizeof(double);
- work_.resize(work_size);
+ if (num_rows != num_augmented_rows) {
+ lhs_.bottomRows(num_cols) =
+ ConstVectorRef(per_solve_options.D, num_cols).asDiagonal();
+ rhs_.tail(num_cols).setZero();
}
LinearSolver::Summary summary;
+ summary.termination_type = dense_qr_->FactorAndSolve(
+ lhs_.rows(), lhs_.cols(), lhs_.data(), rhs_.data(), x, &summary.message);
summary.num_iterations = 1;
- summary.termination_type = LAPACK::SolveInPlaceUsingQR(lhs_.rows(),
- lhs_.cols(),
- lhs_.data(),
- work_.rows(),
- work_.data(),
- rhs_.data(),
- &summary.message);
event_logger.AddEvent("Solve");
- if (summary.termination_type == LINEAR_SOLVER_SUCCESS) {
- VectorRef(x, num_cols) = rhs_.head(num_cols);
- }
- event_logger.AddEvent("TearDown");
return summary;
}
-LinearSolver::Summary DenseQRSolver::SolveUsingEigen(
- DenseSparseMatrix* A,
- const double* b,
- const LinearSolver::PerSolveOptions& per_solve_options,
- double* x) {
- EventLogger event_logger("DenseQRSolver::Solve");
-
- const int num_rows = A->num_rows();
- const int num_cols = A->num_cols();
-
- if (per_solve_options.D != NULL) {
- // Temporarily append a diagonal block to the A matrix, but undo
- // it before returning the matrix to the user.
- A->AppendDiagonal(per_solve_options.D);
- }
-
- // rhs = [b;0] to account for the additional rows in the lhs.
- const int augmented_num_rows =
- num_rows + ((per_solve_options.D != NULL) ? num_cols : 0);
- if (rhs_.rows() != augmented_num_rows) {
- rhs_.resize(augmented_num_rows);
- rhs_.setZero();
- }
- rhs_.head(num_rows) = ConstVectorRef(b, num_rows);
- event_logger.AddEvent("Setup");
-
- // Solve the system.
- VectorRef(x, num_cols) = A->matrix().householderQr().solve(rhs_);
- event_logger.AddEvent("Solve");
-
- if (per_solve_options.D != NULL) {
- // Undo the modifications to the matrix A.
- A->RemoveDiagonal();
- }
-
- // We always succeed, since the QR solver returns the best solution
- // it can. It is the job of the caller to determine if the solution
- // is good enough or not.
- LinearSolver::Summary summary;
- summary.num_iterations = 1;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
- summary.message = "Success.";
-
- event_logger.AddEvent("TearDown");
- return summary;
-}
-
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
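The rewritten SolveImpl forms the Tikhonov-augmented system explicitly, stacking the diagonal D under A and padding b with zeros, so a single FactorAndSolve call covers both the plain and the regularized case. A self-contained sketch of that augmentation in plain Eigen (the helper name is illustrative; lhs_ and rhs_ above are the real member names):

    #include "Eigen/Dense"

    // Build [A; diag(D)] and [b; 0] for min ||A x - b||^2 + ||D x||^2.
    void BuildAugmentedSystem(const Eigen::MatrixXd& A,
                              const Eigen::VectorXd& b,
                              const Eigen::VectorXd& D,  // empty => no regularizer
                              Eigen::MatrixXd* lhs,
                              Eigen::VectorXd* rhs) {
      const int num_rows = static_cast<int>(A.rows());
      const int num_cols = static_cast<int>(A.cols());
      const int extra = (D.size() != 0) ? num_cols : 0;
      lhs->resize(num_rows + extra, num_cols);
      rhs->resize(num_rows + extra);
      lhs->topRows(num_rows) = A;
      rhs->head(num_rows) = b;
      if (extra != 0) {
        lhs->bottomRows(num_cols) = D.asDiagonal();
        rhs->tail(num_cols).setZero();
      }
    }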
diff --git a/internal/ceres/dense_qr_solver.h b/internal/ceres/dense_qr_solver.h
index 980243b..12db52f 100644
--- a/internal/ceres/dense_qr_solver.h
+++ b/internal/ceres/dense_qr_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,12 +32,15 @@
#ifndef CERES_INTERNAL_DENSE_QR_SOLVER_H_
#define CERES_INTERNAL_DENSE_QR_SOLVER_H_
+#include <memory>
+
+#include "ceres/dense_qr.h"
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class DenseSparseMatrix;
@@ -79,7 +82,7 @@
// library. This solver always returns a solution; it is the user's
// responsibility to judge if the solution is good enough for their
// purposes.
-class CERES_EXPORT_INTERNAL DenseQRSolver : public DenseSparseMatrixSolver {
+class CERES_NO_EXPORT DenseQRSolver final : public DenseSparseMatrixSolver {
public:
explicit DenseQRSolver(const LinearSolver::Options& options);
@@ -105,10 +108,11 @@
const LinearSolver::Options options_;
ColMajorMatrix lhs_;
Vector rhs_;
- Vector work_;
+ std::unique_ptr<DenseQR> dense_qr_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_DENSE_QR_SOLVER_H_
diff --git a/internal/ceres/dense_qr_test.cc b/internal/ceres/dense_qr_test.cc
new file mode 100644
index 0000000..155570c
--- /dev/null
+++ b/internal/ceres/dense_qr_test.cc
@@ -0,0 +1,130 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#include "ceres/dense_qr.h"
+
+#include <memory>
+#include <numeric>
+#include <string>
+#include <tuple>
+#include <vector>
+
+#include "Eigen/Dense"
+#include "ceres/internal/eigen.h"
+#include "ceres/linear_solver.h"
+#include "glog/logging.h"
+#include "gmock/gmock.h"
+#include "gtest/gtest.h"
+
+namespace ceres::internal {
+
+using Param = DenseLinearAlgebraLibraryType;
+
+namespace {
+
+std::string ParamInfoToString(testing::TestParamInfo<Param> info) {
+ return DenseLinearAlgebraLibraryTypeToString(info.param);
+}
+
+} // namespace
+
+class DenseQRTest : public ::testing::TestWithParam<Param> {};
+
+TEST_P(DenseQRTest, FactorAndSolve) {
+ // TODO(sameeragarwal): Convert these tests into type parameterized tests so
+ // that we can test the single and double precision solvers.
+
+ using Scalar = double;
+ using MatrixType = Eigen::Matrix<Scalar, Eigen::Dynamic, Eigen::Dynamic>;
+ using VectorType = Eigen::Matrix<Scalar, Eigen::Dynamic, 1>;
+
+ LinearSolver::Options options;
+ ContextImpl context;
+#ifndef CERES_NO_CUDA
+ options.context = &context;
+ std::string error;
+ CHECK(context.InitCuda(&error)) << error;
+#endif // CERES_NO_CUDA
+ options.dense_linear_algebra_library_type = GetParam();
+ const double kEpsilon = std::numeric_limits<double>::epsilon() * 1.5e4;
+ std::unique_ptr<DenseQR> dense_qr = DenseQR::Create(options);
+
+ const int kNumTrials = 10;
+ const int kMinNumCols = 1;
+ const int kMaxNumCols = 10;
+ const int kMinRowsFactor = 1;
+ const int kMaxRowsFactor = 3;
+ for (int num_cols = kMinNumCols; num_cols < kMaxNumCols; ++num_cols) {
+ for (int num_rows = kMinRowsFactor * num_cols;
+ num_rows < kMaxRowsFactor * num_cols;
+ ++num_rows) {
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ MatrixType lhs = MatrixType::Random(num_rows, num_cols);
+ Vector x = VectorType::Random(num_cols);
+ Vector rhs = lhs * x;
+ Vector actual = Vector::Random(num_cols);
+ LinearSolver::Summary summary;
+ summary.termination_type = dense_qr->FactorAndSolve(num_rows,
+ num_cols,
+ lhs.data(),
+ rhs.data(),
+ actual.data(),
+ &summary.message);
+ ASSERT_EQ(summary.termination_type,
+ LinearSolverTerminationType::SUCCESS);
+ ASSERT_NEAR((x - actual).norm() / x.norm(), 0.0, kEpsilon)
+ << "\nexpected: " << x.transpose()
+ << "\nactual : " << actual.transpose();
+ }
+ }
+ }
+}
+
+namespace {
+
+// NOTE: preprocessor directives in a macro are not standard conforming
+decltype(auto) MakeValues() {
+ return ::testing::Values(EIGEN
+#ifndef CERES_NO_LAPACK
+ ,
+ LAPACK
+#endif
+#ifndef CERES_NO_CUDA
+ ,
+ CUDA
+#endif
+ );
+}
+
+} // namespace
+
+INSTANTIATE_TEST_SUITE_P(_, DenseQRTest, MakeValues(), ParamInfoToString);
+
+} // namespace ceres::internal
diff --git a/internal/ceres/dense_sparse_matrix.cc b/internal/ceres/dense_sparse_matrix.cc
index 53207fe..e0c917c 100644
--- a/internal/ceres/dense_sparse_matrix.cc
+++ b/internal/ceres/dense_sparse_matrix.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,38 +31,20 @@
#include "ceres/dense_sparse_matrix.h"
#include <algorithm>
+#include <utility>
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
DenseSparseMatrix::DenseSparseMatrix(int num_rows, int num_cols)
- : has_diagonal_appended_(false), has_diagonal_reserved_(false) {
- m_.resize(num_rows, num_cols);
- m_.setZero();
-}
-
-DenseSparseMatrix::DenseSparseMatrix(int num_rows,
- int num_cols,
- bool reserve_diagonal)
- : has_diagonal_appended_(false), has_diagonal_reserved_(reserve_diagonal) {
- if (reserve_diagonal) {
- // Allocate enough space for the diagonal.
- m_.resize(num_rows + num_cols, num_cols);
- } else {
- m_.resize(num_rows, num_cols);
- }
- m_.setZero();
-}
+ : m_(Matrix(num_rows, num_cols)) {}
DenseSparseMatrix::DenseSparseMatrix(const TripletSparseMatrix& m)
- : m_(Eigen::MatrixXd::Zero(m.num_rows(), m.num_cols())),
- has_diagonal_appended_(false),
- has_diagonal_reserved_(false) {
+ : m_(Matrix::Zero(m.num_rows(), m.num_cols())) {
const double* values = m.values();
const int* rows = m.rows();
const int* cols = m.cols();
@@ -73,22 +55,35 @@
}
}
-DenseSparseMatrix::DenseSparseMatrix(const ColMajorMatrix& m)
- : m_(m), has_diagonal_appended_(false), has_diagonal_reserved_(false) {}
+DenseSparseMatrix::DenseSparseMatrix(Matrix m) : m_(std::move(m)) {}
void DenseSparseMatrix::SetZero() { m_.setZero(); }
-void DenseSparseMatrix::RightMultiply(const double* x, double* y) const {
- VectorRef(y, num_rows()) += matrix() * ConstVectorRef(x, num_cols());
+void DenseSparseMatrix::RightMultiplyAndAccumulate(const double* x,
+ double* y) const {
+ VectorRef(y, num_rows()).noalias() += m_ * ConstVectorRef(x, num_cols());
}
-void DenseSparseMatrix::LeftMultiply(const double* x, double* y) const {
- VectorRef(y, num_cols()) +=
- matrix().transpose() * ConstVectorRef(x, num_rows());
+void DenseSparseMatrix::LeftMultiplyAndAccumulate(const double* x,
+ double* y) const {
+ VectorRef(y, num_cols()).noalias() +=
+ m_.transpose() * ConstVectorRef(x, num_rows());
}
void DenseSparseMatrix::SquaredColumnNorm(double* x) const {
- VectorRef(x, num_cols()) = m_.colwise().squaredNorm();
+ // This implementation is 3x faster than the naive version
+ // x = m_.colwise().square().sum(), likely because m_
+ // is a row major matrix.
+
+ const int num_rows = m_.rows();
+ const int num_cols = m_.cols();
+ std::fill_n(x, num_cols, 0.0);
+ const double* m = m_.data();
+ for (int i = 0; i < num_rows; ++i) {
+ for (int j = 0; j < num_cols; ++j, ++m) {
+ x[j] += (*m) * (*m);
+ }
+ }
}
void DenseSparseMatrix::ScaleColumns(const double* scale) {
@@ -96,77 +91,26 @@
}
void DenseSparseMatrix::ToDenseMatrix(Matrix* dense_matrix) const {
- *dense_matrix = m_.block(0, 0, num_rows(), num_cols());
+ *dense_matrix = m_;
}
-void DenseSparseMatrix::AppendDiagonal(double* d) {
- CHECK(!has_diagonal_appended_);
- if (!has_diagonal_reserved_) {
- ColMajorMatrix tmp = m_;
- m_.resize(m_.rows() + m_.cols(), m_.cols());
- m_.setZero();
- m_.block(0, 0, tmp.rows(), tmp.cols()) = tmp;
- has_diagonal_reserved_ = true;
- }
-
- m_.bottomLeftCorner(m_.cols(), m_.cols()) =
- ConstVectorRef(d, m_.cols()).asDiagonal();
- has_diagonal_appended_ = true;
-}
-
-void DenseSparseMatrix::RemoveDiagonal() {
- CHECK(has_diagonal_appended_);
- has_diagonal_appended_ = false;
- // Leave the diagonal reserved.
-}
-
-int DenseSparseMatrix::num_rows() const {
- if (has_diagonal_reserved_ && !has_diagonal_appended_) {
- return m_.rows() - m_.cols();
- }
- return m_.rows();
-}
+int DenseSparseMatrix::num_rows() const { return m_.rows(); }
int DenseSparseMatrix::num_cols() const { return m_.cols(); }
-int DenseSparseMatrix::num_nonzeros() const {
- if (has_diagonal_reserved_ && !has_diagonal_appended_) {
- return (m_.rows() - m_.cols()) * m_.cols();
- }
- return m_.rows() * m_.cols();
-}
+int DenseSparseMatrix::num_nonzeros() const { return m_.rows() * m_.cols(); }
-ConstColMajorMatrixRef DenseSparseMatrix::matrix() const {
- return ConstColMajorMatrixRef(
- m_.data(),
- ((has_diagonal_reserved_ && !has_diagonal_appended_)
- ? m_.rows() - m_.cols()
- : m_.rows()),
- m_.cols(),
- Eigen::Stride<Eigen::Dynamic, 1>(m_.rows(), 1));
-}
+const Matrix& DenseSparseMatrix::matrix() const { return m_; }
-ColMajorMatrixRef DenseSparseMatrix::mutable_matrix() {
- return ColMajorMatrixRef(m_.data(),
- ((has_diagonal_reserved_ && !has_diagonal_appended_)
- ? m_.rows() - m_.cols()
- : m_.rows()),
- m_.cols(),
- Eigen::Stride<Eigen::Dynamic, 1>(m_.rows(), 1));
-}
+Matrix* DenseSparseMatrix::mutable_matrix() { return &m_; }
void DenseSparseMatrix::ToTextFile(FILE* file) const {
CHECK(file != nullptr);
- const int active_rows = (has_diagonal_reserved_ && !has_diagonal_appended_)
- ? (m_.rows() - m_.cols())
- : m_.rows();
-
- for (int r = 0; r < active_rows; ++r) {
+ for (int r = 0; r < m_.rows(); ++r) {
for (int c = 0; c < m_.cols(); ++c) {
fprintf(file, "% 10d % 10d %17f\n", r, c, m_(r, c));
}
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
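The new SquaredColumnNorm walks the row-major storage contiguously instead of using a column-wise reduction. For reference, a small check, assuming a row-major Eigen matrix, that the loop computes the same per-column sums of squares as the reduction it replaces:

    #include <vector>

    #include "Eigen/Core"

    using RowMajorMatrix =
        Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;

    // Both variants compute sum_i m(i, j)^2 for every column j; the explicit
    // loop touches memory in storage order, which is what the member function
    // above relies on for speed.
    void SquaredColumnNormCheck(const RowMajorMatrix& m) {
      const Eigen::VectorXd reduction = m.colwise().squaredNorm().transpose();
      std::vector<double> looped(m.cols(), 0.0);
      const double* p = m.data();
      for (Eigen::Index i = 0; i < m.rows(); ++i) {
        for (Eigen::Index j = 0; j < m.cols(); ++j, ++p) {
          looped[j] += (*p) * (*p);
        }
      }
      // looped[j] and reduction[j] agree up to floating-point rounding.
    }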
diff --git a/internal/ceres/dense_sparse_matrix.h b/internal/ceres/dense_sparse_matrix.h
index 94064b3..dc066d5 100644
--- a/internal/ceres/dense_sparse_matrix.h
+++ b/internal/ceres/dense_sparse_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,32 +33,28 @@
#ifndef CERES_INTERNAL_DENSE_SPARSE_MATRIX_H_
#define CERES_INTERNAL_DENSE_SPARSE_MATRIX_H_
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/sparse_matrix.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class TripletSparseMatrix;
-class CERES_EXPORT_INTERNAL DenseSparseMatrix : public SparseMatrix {
+class CERES_NO_EXPORT DenseSparseMatrix final : public SparseMatrix {
public:
// Build a matrix with the same content as the TripletSparseMatrix
// m. This assumes that m does not have any repeated entries.
explicit DenseSparseMatrix(const TripletSparseMatrix& m);
- explicit DenseSparseMatrix(const ColMajorMatrix& m);
-
+ explicit DenseSparseMatrix(Matrix m);
DenseSparseMatrix(int num_rows, int num_cols);
- DenseSparseMatrix(int num_rows, int num_cols, bool reserve_diagonal);
-
- virtual ~DenseSparseMatrix() {}
// SparseMatrix interface.
void SetZero() final;
- void RightMultiply(const double* x, double* y) const final;
- void LeftMultiply(const double* x, double* y) const final;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final;
+ void LeftMultiplyAndAccumulate(const double* x, double* y) const final;
void SquaredColumnNorm(double* x) const final;
void ScaleColumns(const double* scale) final;
void ToDenseMatrix(Matrix* dense_matrix) const final;
@@ -69,40 +65,15 @@
const double* values() const final { return m_.data(); }
double* mutable_values() final { return m_.data(); }
- ConstColMajorMatrixRef matrix() const;
- ColMajorMatrixRef mutable_matrix();
-
- // Only one diagonal can be appended at a time. The diagonal is appended to
- // as a new set of rows, e.g.
- //
- // Original matrix:
- //
- // x x x
- // x x x
- // x x x
- //
- // After append diagonal (1, 2, 3):
- //
- // x x x
- // x x x
- // x x x
- // 1 0 0
- // 0 2 0
- // 0 0 3
- //
- // Calling RemoveDiagonal removes the block. It is a fatal error to append a
- // diagonal to a matrix that already has an appended diagonal, and it is also
- // a fatal error to remove a diagonal from a matrix that has none.
- void AppendDiagonal(double* d);
- void RemoveDiagonal();
+ const Matrix& matrix() const;
+ Matrix* mutable_matrix();
private:
- ColMajorMatrix m_;
- bool has_diagonal_appended_;
- bool has_diagonal_reserved_;
+ Matrix m_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_DENSE_SPARSE_MATRIX_H_
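With the appended-diagonal machinery removed, DenseSparseMatrix is now a thin wrapper over a single dense matrix. A short usage sketch of the remaining interface, assuming only the constructor and accessors declared above (the function itself is illustrative, not part of the patch):

    #include "ceres/dense_sparse_matrix.h"
    #include "ceres/internal/eigen.h"

    namespace ceres::internal {

    void DenseSparseMatrixExample() {
      Matrix m(3, 2);
      m << 1, 2,
           3, 4,
           5, 6;
      DenseSparseMatrix A(m);  // wraps the dense matrix, no reserved diagonal
      Vector x(2), y(3);
      x << 1, 1;
      y.setZero();
      A.RightMultiplyAndAccumulate(x.data(), y.data());  // y += A * x
      A.LeftMultiplyAndAccumulate(y.data(), x.data());   // x += A^T * y
    }

    }  // namespace ceres::internal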
diff --git a/internal/ceres/dense_sparse_matrix_test.cc b/internal/ceres/dense_sparse_matrix_test.cc
index 2fa7216..0e50b62 100644
--- a/internal/ceres/dense_sparse_matrix_test.cc
+++ b/internal/ceres/dense_sparse_matrix_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -43,8 +43,7 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
static void CompareMatrices(const SparseMatrix* a, const SparseMatrix* b) {
EXPECT_EQ(a->num_rows(), b->num_rows());
@@ -60,8 +59,8 @@
Vector y_a = Vector::Zero(num_rows);
Vector y_b = Vector::Zero(num_rows);
- a->RightMultiply(x.data(), y_a.data());
- b->RightMultiply(x.data(), y_b.data());
+ a->RightMultiplyAndAccumulate(x.data(), y_a.data());
+ b->RightMultiplyAndAccumulate(x.data(), y_b.data());
EXPECT_EQ((y_a - y_b).norm(), 0);
}
@@ -70,13 +69,13 @@
class DenseSparseMatrixTest : public ::testing::Test {
protected:
void SetUp() final {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(1));
+ std::unique_ptr<LinearLeastSquaresProblem> problem =
+ CreateLinearLeastSquaresProblemFromId(1);
CHECK(problem != nullptr);
tsm.reset(down_cast<TripletSparseMatrix*>(problem->A.release()));
- dsm.reset(new DenseSparseMatrix(*tsm));
+ dsm = std::make_unique<DenseSparseMatrix>(*tsm);
num_rows = tsm->num_rows();
num_cols = tsm->num_cols();
@@ -89,7 +88,7 @@
std::unique_ptr<DenseSparseMatrix> dsm;
};
-TEST_F(DenseSparseMatrixTest, RightMultiply) {
+TEST_F(DenseSparseMatrixTest, RightMultiplyAndAccumulate) {
CompareMatrices(tsm.get(), dsm.get());
// Try with a not entirely zero vector to verify column interactions, which
@@ -101,13 +100,13 @@
Vector b1 = Vector::Zero(num_rows);
Vector b2 = Vector::Zero(num_rows);
- tsm->RightMultiply(a.data(), b1.data());
- dsm->RightMultiply(a.data(), b2.data());
+ tsm->RightMultiplyAndAccumulate(a.data(), b1.data());
+ dsm->RightMultiplyAndAccumulate(a.data(), b2.data());
EXPECT_EQ((b1 - b2).norm(), 0);
}
-TEST_F(DenseSparseMatrixTest, LeftMultiply) {
+TEST_F(DenseSparseMatrixTest, LeftMultiplyAndAccumulate) {
for (int i = 0; i < num_rows; ++i) {
Vector a = Vector::Zero(num_rows);
a(i) = 1.0;
@@ -115,8 +114,8 @@
Vector b1 = Vector::Zero(num_cols);
Vector b2 = Vector::Zero(num_cols);
- tsm->LeftMultiply(a.data(), b1.data());
- dsm->LeftMultiply(a.data(), b2.data());
+ tsm->LeftMultiplyAndAccumulate(a.data(), b1.data());
+ dsm->LeftMultiplyAndAccumulate(a.data(), b2.data());
EXPECT_EQ((b1 - b2).norm(), 0);
}
@@ -130,8 +129,8 @@
Vector b1 = Vector::Zero(num_cols);
Vector b2 = Vector::Zero(num_cols);
- tsm->LeftMultiply(a.data(), b1.data());
- dsm->LeftMultiply(a.data(), b2.data());
+ tsm->LeftMultiplyAndAccumulate(a.data(), b1.data());
+ dsm->LeftMultiplyAndAccumulate(a.data(), b2.data());
EXPECT_EQ((b1 - b2).norm(), 0);
}
@@ -166,5 +165,4 @@
EXPECT_EQ((tsm_dense - dsm_dense).norm(), 0.0);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/detect_structure.cc b/internal/ceres/detect_structure.cc
index 4aac445..e82d70f 100644
--- a/internal/ceres/detect_structure.cc
+++ b/internal/ceres/detect_structure.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,8 +33,7 @@
#include "ceres/internal/eigen.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
void DetectStructure(const CompressedRowBlockStructure& bs,
const int num_eliminate_blocks,
@@ -119,5 +118,4 @@
// clang-format on
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/detect_structure.h b/internal/ceres/detect_structure.h
index 0624230..3237d10 100644
--- a/internal/ceres/detect_structure.h
+++ b/internal/ceres/detect_structure.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,10 +32,10 @@
#define CERES_INTERNAL_DETECT_STRUCTURE_H_
#include "ceres/block_structure.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Detect static blocks in the problem sparsity. For rows containing
// e_blocks, we are interested in detecting if the size of the row
@@ -56,13 +56,14 @@
// Note: The structure of rows without any e-blocks has no effect on
// the values returned by this function. It is entirely possible that
// the f_block_size and row_block_size are not constant in such rows.
-void CERES_EXPORT DetectStructure(const CompressedRowBlockStructure& bs,
- const int num_eliminate_blocks,
- int* row_block_size,
- int* e_block_size,
- int* f_block_size);
+void CERES_NO_EXPORT DetectStructure(const CompressedRowBlockStructure& bs,
+ const int num_eliminate_blocks,
+ int* row_block_size,
+ int* e_block_size,
+ int* f_block_size);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_DETECT_STRUCTURE_H_
diff --git a/internal/ceres/detect_structure_test.cc b/internal/ceres/detect_structure_test.cc
index 8f9c5ed..e4e3f1d 100644
--- a/internal/ceres/detect_structure_test.cc
+++ b/internal/ceres/detect_structure_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -45,34 +45,34 @@
CompressedRowBlockStructure bs;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 3;
bs.cols.back().position = 0;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 4;
bs.cols.back().position = 3;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 4;
bs.cols.back().position = 7;
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 0;
- row.cells.push_back(Cell(0, 0));
- row.cells.push_back(Cell(1, 0));
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(1, 0);
}
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 2;
- row.cells.push_back(Cell(0, 0));
- row.cells.push_back(Cell(2, 0));
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(2, 0);
}
int row_block_size = 0;
@@ -94,34 +94,34 @@
CompressedRowBlockStructure bs;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 3;
bs.cols.back().position = 0;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 4;
bs.cols.back().position = 3;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 4;
bs.cols.back().position = 7;
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 0;
- row.cells.push_back(Cell(0, 0));
- row.cells.push_back(Cell(1, 0));
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(1, 0);
}
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 1;
row.block.position = 2;
- row.cells.push_back(Cell(0, 0));
- row.cells.push_back(Cell(2, 0));
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(2, 0);
}
int row_block_size = 0;
@@ -143,34 +143,34 @@
CompressedRowBlockStructure bs;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 3;
bs.cols.back().position = 0;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 4;
bs.cols.back().position = 3;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 3;
bs.cols.back().position = 7;
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 0;
- row.cells.push_back(Cell(0, 0));
- row.cells.push_back(Cell(1, 0));
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(1, 0);
}
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 2;
- row.cells.push_back(Cell(0, 0));
- row.cells.push_back(Cell(2, 0));
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(2, 0);
}
int row_block_size = 0;
@@ -192,34 +192,34 @@
CompressedRowBlockStructure bs;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 3;
bs.cols.back().position = 0;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 4;
bs.cols.back().position = 3;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 3;
bs.cols.back().position = 7;
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 0;
- row.cells.push_back(Cell(0, 0));
- row.cells.push_back(Cell(2, 0));
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(2, 0);
}
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 2;
- row.cells.push_back(Cell(1, 0));
- row.cells.push_back(Cell(2, 0));
+ row.cells.emplace_back(1, 0);
+ row.cells.emplace_back(2, 0);
}
int row_block_size = 0;
@@ -241,26 +241,26 @@
CompressedRowBlockStructure bs;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 3;
bs.cols.back().position = 0;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 4;
bs.cols.back().position = 3;
- bs.cols.push_back(Block());
+ bs.cols.emplace_back();
bs.cols.back().size = 3;
bs.cols.back().position = 7;
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 0;
- row.cells.push_back(Cell(0, 0));
- row.cells.push_back(Cell(1, 0));
- row.cells.push_back(Cell(2, 0));
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(1, 0);
+ row.cells.emplace_back(2, 0);
}
int row_block_size = 0;
diff --git a/internal/ceres/dogleg_strategy.cc b/internal/ceres/dogleg_strategy.cc
index 03ae22f..877d8d9 100644
--- a/internal/ceres/dogleg_strategy.cc
+++ b/internal/ceres/dogleg_strategy.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -44,8 +44,7 @@
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
namespace {
const double kMaxMu = 1.0;
const double kMinMu = 1e-8;
@@ -101,7 +100,7 @@
}
TrustRegionStrategy::Summary summary;
summary.num_iterations = 0;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
return summary;
}
@@ -138,11 +137,13 @@
summary.num_iterations = linear_solver_summary.num_iterations;
summary.termination_type = linear_solver_summary.termination_type;
- if (linear_solver_summary.termination_type == LINEAR_SOLVER_FATAL_ERROR) {
+ if (linear_solver_summary.termination_type ==
+ LinearSolverTerminationType::FATAL_ERROR) {
return summary;
}
- if (linear_solver_summary.termination_type != LINEAR_SOLVER_FAILURE) {
+ if (linear_solver_summary.termination_type !=
+ LinearSolverTerminationType::FAILURE) {
switch (dogleg_type_) {
// Interpolate the Cauchy point and the Gauss-Newton step.
case TRADITIONAL_DOGLEG:
@@ -153,7 +154,7 @@
// Cauchy point and the (Gauss-)Newton step.
case SUBSPACE_DOGLEG:
if (!ComputeSubspaceModel(jacobian)) {
- summary.termination_type = LINEAR_SOLVER_FAILURE;
+ summary.termination_type = LinearSolverTerminationType::FAILURE;
break;
}
ComputeSubspaceDoglegStep(step);
@@ -174,7 +175,7 @@
void DoglegStrategy::ComputeGradient(SparseMatrix* jacobian,
const double* residuals) {
gradient_.setZero();
- jacobian->LeftMultiply(residuals, gradient_.data());
+ jacobian->LeftMultiplyAndAccumulate(residuals, gradient_.data());
gradient_.array() /= diagonal_.array();
}
@@ -187,7 +188,7 @@
// The Jacobian is scaled implicitly by computing J * (D^-1 * (D^-1 * g))
// instead of (J * D^-1) * (D^-1 * g).
Vector scaled_gradient = (gradient_.array() / diagonal_.array()).matrix();
- jacobian->RightMultiply(scaled_gradient.data(), Jg.data());
+ jacobian->RightMultiplyAndAccumulate(scaled_gradient.data(), Jg.data());
alpha_ = gradient_.squaredNorm() / Jg.squaredNorm();
}
@@ -480,7 +481,7 @@
// Find the real parts y_i of its roots (not only the real roots).
Vector roots_real;
- if (!FindPolynomialRoots(polynomial, &roots_real, NULL)) {
+ if (!FindPolynomialRoots(polynomial, &roots_real, nullptr)) {
// Failed to find the roots of the polynomial, i.e. the candidate
// solutions of the constrained problem. Report this back to the caller.
return false;
@@ -518,7 +519,7 @@
const double* residuals) {
const int n = jacobian->num_cols();
LinearSolver::Summary linear_solver_summary;
- linear_solver_summary.termination_type = LINEAR_SOLVER_FAILURE;
+ linear_solver_summary.termination_type = LinearSolverTerminationType::FAILURE;
// The Jacobian matrix is often quite poorly conditioned. Thus it is
// necessary to add a diagonal matrix at the bottom to prevent the
@@ -531,7 +532,7 @@
// If the solve fails, the multiplier to the diagonal is increased
// up to max_mu_ by a factor of mu_increase_factor_ every time. If
// the linear solver is still not successful, the strategy returns
- // with LINEAR_SOLVER_FAILURE.
+ // with LinearSolverTerminationType::FAILURE.
//
// Next time when a new Gauss-Newton step is requested, the
// multiplier starts out from the last successful solve.
@@ -582,21 +583,25 @@
}
}
- if (linear_solver_summary.termination_type == LINEAR_SOLVER_FATAL_ERROR) {
+ if (linear_solver_summary.termination_type ==
+ LinearSolverTerminationType::FATAL_ERROR) {
return linear_solver_summary;
}
- if (linear_solver_summary.termination_type == LINEAR_SOLVER_FAILURE ||
+ if (linear_solver_summary.termination_type ==
+ LinearSolverTerminationType::FAILURE ||
!IsArrayValid(n, gauss_newton_step_.data())) {
mu_ *= mu_increase_factor_;
VLOG(2) << "Increasing mu " << mu_;
- linear_solver_summary.termination_type = LINEAR_SOLVER_FAILURE;
+ linear_solver_summary.termination_type =
+ LinearSolverTerminationType::FAILURE;
continue;
}
break;
}
- if (linear_solver_summary.termination_type != LINEAR_SOLVER_FAILURE) {
+ if (linear_solver_summary.termination_type !=
+ LinearSolverTerminationType::FAILURE) {
// The scaled Gauss-Newton step is D * GN:
//
// - (D^-1 J^T J D^-1)^-1 (D^-1 g)
@@ -627,7 +632,7 @@
reuse_ = false;
}
-void DoglegStrategy::StepRejected(double step_quality) {
+void DoglegStrategy::StepRejected(double /*step_quality*/) {
radius_ *= 0.5;
reuse_ = true;
}
@@ -701,14 +706,13 @@
Vector tmp;
tmp = (subspace_basis_.col(0).array() / diagonal_.array()).matrix();
- jacobian->RightMultiply(tmp.data(), Jb.row(0).data());
+ jacobian->RightMultiplyAndAccumulate(tmp.data(), Jb.row(0).data());
tmp = (subspace_basis_.col(1).array() / diagonal_.array()).matrix();
- jacobian->RightMultiply(tmp.data(), Jb.row(1).data());
+ jacobian->RightMultiplyAndAccumulate(tmp.data(), Jb.row(1).data());
subspace_B_ = Jb * Jb.transpose();
return true;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
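The updated ComputeCauchyPoint keeps the classic step length alpha = ||g||^2 / ||J g||^2, the minimizer of the quadratic model along the steepest-descent direction, and the traditional dogleg then blends the Cauchy point with the Gauss-Newton step inside the trust region. A compact sketch of that selection rule in plain Eigen, assuming the gradient, Gauss-Newton step, alpha and radius are already available (illustrative; the member functions above work in the scaled space they describe):

    #include <cmath>

    #include "Eigen/Core"

    // Traditional dogleg: choose a step within the trust region from the
    // Cauchy point p_c = -alpha * g and the Gauss-Newton step p_gn.
    Eigen::VectorXd TraditionalDoglegStep(const Eigen::VectorXd& gradient,
                                          const Eigen::VectorXd& gauss_newton,
                                          double alpha,
                                          double radius) {
      if (gauss_newton.norm() <= radius) {
        return gauss_newton;  // the full Gauss-Newton step fits
      }
      const Eigen::VectorXd cauchy = -alpha * gradient;
      if (cauchy.norm() >= radius) {
        return -(radius / gradient.norm()) * gradient;  // truncated descent step
      }
      // Walk from the Cauchy point towards the Gauss-Newton step and stop at
      // the boundary: find beta in [0, 1] with ||p_c + beta (p_gn - p_c)|| = radius.
      const Eigen::VectorXd d = gauss_newton - cauchy;
      const double a = d.squaredNorm();
      const double b = 2.0 * cauchy.dot(d);
      const double c = cauchy.squaredNorm() - radius * radius;
      const double beta = (-b + std::sqrt(b * b - 4.0 * a * c)) / (2.0 * a);
      return cauchy + beta * d;
    }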
diff --git a/internal/ceres/dogleg_strategy.h b/internal/ceres/dogleg_strategy.h
index cc3778e..b4d29c9 100644
--- a/internal/ceres/dogleg_strategy.h
+++ b/internal/ceres/dogleg_strategy.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,12 +31,12 @@
#ifndef CERES_INTERNAL_DOGLEG_STRATEGY_H_
#define CERES_INTERNAL_DOGLEG_STRATEGY_H_
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
#include "ceres/trust_region_strategy.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Dogleg step computation and trust region sizing strategy based on
// "Methods for Nonlinear Least Squares" by K. Madsen, H.B. Nielsen
@@ -53,10 +53,9 @@
// DoglegStrategy follows the approach by Shultz, Schnabel, Byrd.
// This finds the exact optimum over the two-dimensional subspace
// spanned by the two Dogleg vectors.
-class CERES_EXPORT_INTERNAL DoglegStrategy : public TrustRegionStrategy {
+class CERES_NO_EXPORT DoglegStrategy final : public TrustRegionStrategy {
public:
explicit DoglegStrategy(const TrustRegionStrategy::Options& options);
- virtual ~DoglegStrategy() {}
// TrustRegionStrategy interface
Summary ComputeStep(const PerSolveOptions& per_solve_options,
@@ -65,7 +64,7 @@
double* step) final;
void StepAccepted(double step_quality) final;
void StepRejected(double step_quality) final;
- void StepIsInvalid();
+ void StepIsInvalid() override;
double Radius() const final;
// These functions are predominantly for testing.
@@ -76,8 +75,8 @@
Matrix subspace_B() const { return subspace_B_; }
private:
- typedef Eigen::Matrix<double, 2, 1, Eigen::DontAlign> Vector2d;
- typedef Eigen::Matrix<double, 2, 2, Eigen::DontAlign> Matrix2d;
+ using Vector2d = Eigen::Matrix<double, 2, 1, Eigen::DontAlign>;
+ using Matrix2d = Eigen::Matrix<double, 2, 2, Eigen::DontAlign>;
LinearSolver::Summary ComputeGaussNewtonStep(
const PerSolveOptions& per_solve_options,
@@ -159,7 +158,8 @@
Matrix2d subspace_B_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_DOGLEG_STRATEGY_H_
diff --git a/internal/ceres/dogleg_strategy_test.cc b/internal/ceres/dogleg_strategy_test.cc
index 0c20f25..b256f3e 100644
--- a/internal/ceres/dogleg_strategy_test.cc
+++ b/internal/ceres/dogleg_strategy_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,8 +40,7 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
namespace {
class Fixture : public testing::Test {
@@ -79,7 +78,7 @@
Matrix sqrtD = Ddiag.array().sqrt().matrix().asDiagonal();
Matrix jacobian = sqrtD * basis;
- jacobian_.reset(new DenseSparseMatrix(jacobian));
+ jacobian_ = std::make_unique<DenseSparseMatrix>(jacobian);
Vector minimum(6);
minimum << 1.0, 1.0, 1.0, 1.0, 1.0, 1.0;
@@ -107,7 +106,7 @@
Ddiag << 1.0, 2.0, 4.0, 8.0, 16.0, 32.0;
Matrix jacobian = Ddiag.asDiagonal();
- jacobian_.reset(new DenseSparseMatrix(jacobian));
+ jacobian_ = std::make_unique<DenseSparseMatrix>(jacobian);
Vector minimum(6);
minimum << 0.0, 0.0, 1.0, 0.0, 0.0, 0.0;
@@ -146,7 +145,7 @@
TrustRegionStrategy::Summary summary =
strategy.ComputeStep(pso, jacobian_.get(), residual_.data(), x_.data());
- EXPECT_NE(summary.termination_type, LINEAR_SOLVER_FAILURE);
+ EXPECT_NE(summary.termination_type, LinearSolverTerminationType::FAILURE);
EXPECT_LE(x_.norm(), options_.initial_radius * (1.0 + 4.0 * kEpsilon));
}
@@ -164,7 +163,7 @@
TrustRegionStrategy::Summary summary =
strategy.ComputeStep(pso, jacobian_.get(), residual_.data(), x_.data());
- EXPECT_NE(summary.termination_type, LINEAR_SOLVER_FAILURE);
+ EXPECT_NE(summary.termination_type, LinearSolverTerminationType::FAILURE);
EXPECT_LE(x_.norm(), options_.initial_radius * (1.0 + 4.0 * kEpsilon));
}
@@ -182,7 +181,7 @@
TrustRegionStrategy::Summary summary =
strategy.ComputeStep(pso, jacobian_.get(), residual_.data(), x_.data());
- EXPECT_NE(summary.termination_type, LINEAR_SOLVER_FAILURE);
+ EXPECT_NE(summary.termination_type, LinearSolverTerminationType::FAILURE);
EXPECT_NEAR(x_(0), 1.0, kToleranceLoose);
EXPECT_NEAR(x_(1), 1.0, kToleranceLoose);
EXPECT_NEAR(x_(2), 1.0, kToleranceLoose);
@@ -240,7 +239,7 @@
TrustRegionStrategy::Summary summary =
strategy.ComputeStep(pso, jacobian_.get(), residual_.data(), x_.data());
- EXPECT_NE(summary.termination_type, LINEAR_SOLVER_FAILURE);
+ EXPECT_NE(summary.termination_type, LinearSolverTerminationType::FAILURE);
EXPECT_NEAR(x_(0), 0.0, kToleranceLoose);
EXPECT_NEAR(x_(1), 0.0, kToleranceLoose);
EXPECT_NEAR(x_(2), options_.initial_radius, kToleranceLoose);
@@ -266,7 +265,7 @@
TrustRegionStrategy::Summary summary =
strategy.ComputeStep(pso, jacobian_.get(), residual_.data(), x_.data());
- EXPECT_NE(summary.termination_type, LINEAR_SOLVER_FAILURE);
+ EXPECT_NE(summary.termination_type, LinearSolverTerminationType::FAILURE);
EXPECT_NEAR(x_(0), 0.0, kToleranceLoose);
EXPECT_NEAR(x_(1), 0.0, kToleranceLoose);
EXPECT_NEAR(x_(2), 1.0, kToleranceLoose);
@@ -275,5 +274,4 @@
EXPECT_NEAR(x_(5), 0.0, kToleranceLoose);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/dynamic_autodiff_cost_function_test.cc b/internal/ceres/dynamic_autodiff_cost_function_test.cc
index 55d3fe1..51366c6 100644
--- a/internal/ceres/dynamic_autodiff_cost_function_test.cc
+++ b/internal/ceres/dynamic_autodiff_cost_function_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,13 +34,12 @@
#include <cstddef>
#include <memory>
+#include <vector>
+#include "ceres/types.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
// Takes 2 parameter blocks:
// parameters[0] is size 10.
@@ -75,8 +74,8 @@
};
TEST(DynamicAutodiffCostFunctionTest, TestResiduals) {
- vector<double> param_block_0(10, 0.0);
- vector<double> param_block_1(5, 0.0);
+ std::vector<double> param_block_0(10, 0.0);
+ std::vector<double> param_block_1(5, 0.0);
DynamicAutoDiffCostFunction<MyCostFunctor, 3> cost_function(
new MyCostFunctor());
cost_function.AddParameterBlock(param_block_0.size());
@@ -84,12 +83,12 @@
cost_function.SetNumResiduals(21);
// Test residual computation.
- vector<double> residuals(21, -100000);
- vector<double*> parameter_blocks(2);
+ std::vector<double> residuals(21, -100000);
+ std::vector<double*> parameter_blocks(2);
  parameter_blocks[0] = &param_block_0[0];
  parameter_blocks[1] = &param_block_1[0];
EXPECT_TRUE(
-      cost_function.Evaluate(&parameter_blocks[0], residuals.data(), NULL));
+      cost_function.Evaluate(&parameter_blocks[0], residuals.data(), nullptr));
for (int r = 0; r < 10; ++r) {
EXPECT_EQ(1.0 * r, residuals.at(r * 2));
EXPECT_EQ(-1.0 * r, residuals.at(r * 2 + 1));
@@ -99,11 +98,11 @@
TEST(DynamicAutodiffCostFunctionTest, TestJacobian) {
// Test the residual counting.
- vector<double> param_block_0(10, 0.0);
+ std::vector<double> param_block_0(10, 0.0);
for (int i = 0; i < 10; ++i) {
param_block_0[i] = 2 * i;
}
- vector<double> param_block_1(5, 0.0);
+ std::vector<double> param_block_1(5, 0.0);
DynamicAutoDiffCostFunction<MyCostFunctor, 3> cost_function(
new MyCostFunctor());
cost_function.AddParameterBlock(param_block_0.size());
@@ -111,18 +110,18 @@
cost_function.SetNumResiduals(21);
// Prepare the residuals.
- vector<double> residuals(21, -100000);
+ std::vector<double> residuals(21, -100000);
// Prepare the parameters.
- vector<double*> parameter_blocks(2);
+ std::vector<double*> parameter_blocks(2);
  parameter_blocks[0] = &param_block_0[0];
  parameter_blocks[1] = &param_block_1[0];
// Prepare the jacobian.
- vector<vector<double>> jacobian_vect(2);
+ std::vector<std::vector<double>> jacobian_vect(2);
jacobian_vect[0].resize(21 * 10, -100000);
jacobian_vect[1].resize(21 * 5, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect[0].data());
jacobian.push_back(jacobian_vect[1].data());
@@ -149,8 +148,8 @@
EXPECT_EQ(4 * p - 8, jacobian_vect[0][20 * 10 + p]);
jacobian_vect[0][20 * 10 + p] = 0.0;
}
- for (int i = 0; i < jacobian_vect[0].size(); ++i) {
- EXPECT_EQ(0.0, jacobian_vect[0][i]);
+ for (double entry : jacobian_vect[0]) {
+ EXPECT_EQ(0.0, entry);
}
// Check "C" Jacobian for second parameter block.
@@ -158,18 +157,18 @@
EXPECT_EQ(1.0, jacobian_vect[1][20 * 5 + p]);
jacobian_vect[1][20 * 5 + p] = 0.0;
}
- for (int i = 0; i < jacobian_vect[1].size(); ++i) {
- EXPECT_EQ(0.0, jacobian_vect[1][i]);
+ for (double entry : jacobian_vect[1]) {
+ EXPECT_EQ(0.0, entry);
}
}
TEST(DynamicAutodiffCostFunctionTest, JacobianWithFirstParameterBlockConstant) {
// Test the residual counting.
- vector<double> param_block_0(10, 0.0);
+ std::vector<double> param_block_0(10, 0.0);
for (int i = 0; i < 10; ++i) {
param_block_0[i] = 2 * i;
}
- vector<double> param_block_1(5, 0.0);
+ std::vector<double> param_block_1(5, 0.0);
DynamicAutoDiffCostFunction<MyCostFunctor, 3> cost_function(
new MyCostFunctor());
cost_function.AddParameterBlock(param_block_0.size());
@@ -177,19 +176,19 @@
cost_function.SetNumResiduals(21);
// Prepare the residuals.
- vector<double> residuals(21, -100000);
+ std::vector<double> residuals(21, -100000);
// Prepare the parameters.
- vector<double*> parameter_blocks(2);
+ std::vector<double*> parameter_blocks(2);
  parameter_blocks[0] = &param_block_0[0];
  parameter_blocks[1] = &param_block_1[0];
// Prepare the jacobian.
- vector<vector<double>> jacobian_vect(2);
+ std::vector<std::vector<double>> jacobian_vect(2);
jacobian_vect[0].resize(21 * 10, -100000);
jacobian_vect[1].resize(21 * 5, -100000);
- vector<double*> jacobian;
- jacobian.push_back(NULL);
+ std::vector<double*> jacobian;
+ jacobian.push_back(nullptr);
jacobian.push_back(jacobian_vect[1].data());
// Test jacobian computation.
@@ -207,19 +206,19 @@
EXPECT_EQ(1.0, jacobian_vect[1][20 * 5 + p]);
jacobian_vect[1][20 * 5 + p] = 0.0;
}
- for (int i = 0; i < jacobian_vect[1].size(); ++i) {
- EXPECT_EQ(0.0, jacobian_vect[1][i]);
+ for (double& i : jacobian_vect[1]) {
+ EXPECT_EQ(0.0, i);
}
}
TEST(DynamicAutodiffCostFunctionTest,
JacobianWithSecondParameterBlockConstant) { // NOLINT
// Test the residual counting.
- vector<double> param_block_0(10, 0.0);
+ std::vector<double> param_block_0(10, 0.0);
for (int i = 0; i < 10; ++i) {
param_block_0[i] = 2 * i;
}
- vector<double> param_block_1(5, 0.0);
+ std::vector<double> param_block_1(5, 0.0);
DynamicAutoDiffCostFunction<MyCostFunctor, 3> cost_function(
new MyCostFunctor());
cost_function.AddParameterBlock(param_block_0.size());
@@ -227,20 +226,20 @@
cost_function.SetNumResiduals(21);
// Prepare the residuals.
- vector<double> residuals(21, -100000);
+ std::vector<double> residuals(21, -100000);
// Prepare the parameters.
- vector<double*> parameter_blocks(2);
+ std::vector<double*> parameter_blocks(2);
  parameter_blocks[0] = &param_block_0[0];
  parameter_blocks[1] = &param_block_1[0];
// Prepare the jacobian.
- vector<vector<double>> jacobian_vect(2);
+ std::vector<std::vector<double>> jacobian_vect(2);
jacobian_vect[0].resize(21 * 10, -100000);
jacobian_vect[1].resize(21 * 5, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect[0].data());
- jacobian.push_back(NULL);
+ jacobian.push_back(nullptr);
// Test jacobian computation.
EXPECT_TRUE(cost_function.Evaluate(
@@ -265,8 +264,8 @@
EXPECT_EQ(4 * p - 8, jacobian_vect[0][20 * 10 + p]);
jacobian_vect[0][20 * 10 + p] = 0.0;
}
- for (int i = 0; i < jacobian_vect[0].size(); ++i) {
- EXPECT_EQ(0.0, jacobian_vect[0][i]);
+ for (double& i : jacobian_vect[0]) {
+ EXPECT_EQ(0.0, i);
}
}
@@ -327,17 +326,16 @@
parameter_blocks_[2] = &z_[0];
// Prepare the cost function.
- typedef DynamicAutoDiffCostFunction<MyThreeParameterCostFunctor, 3>
- DynamicMyThreeParameterCostFunction;
- DynamicMyThreeParameterCostFunction* cost_function =
- new DynamicMyThreeParameterCostFunction(
- new MyThreeParameterCostFunctor());
+ using DynamicMyThreeParameterCostFunction =
+ DynamicAutoDiffCostFunction<MyThreeParameterCostFunctor, 3>;
+ auto cost_function = std::make_unique<DynamicMyThreeParameterCostFunction>(
+ new MyThreeParameterCostFunctor());
cost_function->AddParameterBlock(1);
cost_function->AddParameterBlock(2);
cost_function->AddParameterBlock(3);
cost_function->SetNumResiduals(7);
- cost_function_.reset(cost_function);
+ cost_function_ = std::move(cost_function);
// Setup jacobian data.
jacobian_vect_.resize(3);
@@ -410,36 +408,36 @@
}
protected:
- vector<double> x_;
- vector<double> y_;
- vector<double> z_;
+ std::vector<double> x_;
+ std::vector<double> y_;
+ std::vector<double> z_;
- vector<double*> parameter_blocks_;
+ std::vector<double*> parameter_blocks_;
std::unique_ptr<CostFunction> cost_function_;
- vector<vector<double>> jacobian_vect_;
+ std::vector<std::vector<double>> jacobian_vect_;
- vector<double> expected_residuals_;
+ std::vector<double> expected_residuals_;
- vector<double> expected_jacobian_x_;
- vector<double> expected_jacobian_y_;
- vector<double> expected_jacobian_z_;
+ std::vector<double> expected_jacobian_x_;
+ std::vector<double> expected_jacobian_y_;
+ std::vector<double> expected_jacobian_z_;
};
TEST_F(ThreeParameterCostFunctorTest, TestThreeParameterResiduals) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
EXPECT_TRUE(cost_function_->Evaluate(
- parameter_blocks_.data(), residuals.data(), NULL));
+ parameter_blocks_.data(), residuals.data(), nullptr));
for (int i = 0; i < 7; ++i) {
EXPECT_EQ(expected_residuals_[i], residuals[i]);
}
}
TEST_F(ThreeParameterCostFunctorTest, TestThreeParameterJacobian) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect_[0].data());
jacobian.push_back(jacobian_vect_[1].data());
jacobian.push_back(jacobian_vect_[2].data());
@@ -466,12 +464,12 @@
TEST_F(ThreeParameterCostFunctorTest,
ThreeParameterJacobianWithFirstAndLastParameterBlockConstant) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
- vector<double*> jacobian;
- jacobian.push_back(NULL);
+ std::vector<double*> jacobian;
+ jacobian.push_back(nullptr);
jacobian.push_back(jacobian_vect_[1].data());
- jacobian.push_back(NULL);
+ jacobian.push_back(nullptr);
EXPECT_TRUE(cost_function_->Evaluate(
parameter_blocks_.data(), residuals.data(), jacobian.data()));
@@ -487,11 +485,11 @@
TEST_F(ThreeParameterCostFunctorTest,
ThreeParameterJacobianWithSecondParameterBlockConstant) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect_[0].data());
- jacobian.push_back(NULL);
+ jacobian.push_back(nullptr);
jacobian.push_back(jacobian_vect_[2].data());
EXPECT_TRUE(cost_function_->Evaluate(
@@ -560,16 +558,16 @@
parameter_blocks_[5] = &z2_;
// Prepare the cost function.
- typedef DynamicAutoDiffCostFunction<MySixParameterCostFunctor, 3>
- DynamicMySixParameterCostFunction;
- DynamicMySixParameterCostFunction* cost_function =
- new DynamicMySixParameterCostFunction(new MySixParameterCostFunctor());
+ using DynamicMySixParameterCostFunction =
+ DynamicAutoDiffCostFunction<MySixParameterCostFunctor, 3>;
+ auto cost_function = std::make_unique<DynamicMySixParameterCostFunction>(
+ new MySixParameterCostFunctor());
for (int i = 0; i < 6; ++i) {
cost_function->AddParameterBlock(1);
}
cost_function->SetNumResiduals(7);
- cost_function_.reset(cost_function);
+ cost_function_ = std::move(cost_function);
// Setup jacobian data.
jacobian_vect_.resize(6);
@@ -656,29 +654,29 @@
double z1_;
double z2_;
- vector<double*> parameter_blocks_;
+ std::vector<double*> parameter_blocks_;
std::unique_ptr<CostFunction> cost_function_;
- vector<vector<double>> jacobian_vect_;
+ std::vector<std::vector<double>> jacobian_vect_;
- vector<double> expected_residuals_;
- vector<vector<double>> expected_jacobians_;
+ std::vector<double> expected_residuals_;
+ std::vector<std::vector<double>> expected_jacobians_;
};
TEST_F(SixParameterCostFunctorTest, TestSixParameterResiduals) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
EXPECT_TRUE(cost_function_->Evaluate(
- parameter_blocks_.data(), residuals.data(), NULL));
+ parameter_blocks_.data(), residuals.data(), nullptr));
for (int i = 0; i < 7; ++i) {
EXPECT_EQ(expected_residuals_[i], residuals[i]);
}
}
TEST_F(SixParameterCostFunctorTest, TestSixParameterJacobian) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect_[0].data());
jacobian.push_back(jacobian_vect_[1].data());
jacobian.push_back(jacobian_vect_[2].data());
@@ -701,15 +699,15 @@
}
TEST_F(SixParameterCostFunctorTest, TestSixParameterJacobianVVCVVC) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect_[0].data());
jacobian.push_back(jacobian_vect_[1].data());
- jacobian.push_back(NULL);
+ jacobian.push_back(nullptr);
jacobian.push_back(jacobian_vect_[3].data());
jacobian.push_back(jacobian_vect_[4].data());
- jacobian.push_back(NULL);
+ jacobian.push_back(nullptr);
EXPECT_TRUE(cost_function_->Evaluate(
parameter_blocks_.data(), residuals.data(), jacobian.data()));
@@ -731,14 +729,14 @@
}
TEST_F(SixParameterCostFunctorTest, TestSixParameterJacobianVCCVCV) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect_[0].data());
- jacobian.push_back(NULL);
- jacobian.push_back(NULL);
+ jacobian.push_back(nullptr);
+ jacobian.push_back(nullptr);
jacobian.push_back(jacobian_vect_[3].data());
- jacobian.push_back(NULL);
+ jacobian.push_back(nullptr);
jacobian.push_back(jacobian_vect_[5].data());
EXPECT_TRUE(cost_function_->Evaluate(
@@ -806,5 +804,19 @@
EXPECT_EQ(residual, target_value);
}
-} // namespace internal
-} // namespace ceres
+TEST(DynamicAutoDiffCostFunctionTest, DeductionTemplateCompilationTest) {
+  // Ensure that the deduction guides work.
+ (void)DynamicAutoDiffCostFunction(new MyCostFunctor());
+ (void)DynamicAutoDiffCostFunction(new MyCostFunctor(), TAKE_OWNERSHIP);
+ (void)DynamicAutoDiffCostFunction(std::make_unique<MyCostFunctor>());
+}
+
+TEST(DynamicAutoDiffCostFunctionTest, ArgumentForwarding) {
+ (void)DynamicAutoDiffCostFunction<MyCostFunctor>();
+}
+
+TEST(DynamicAutoDiffCostFunctionTest, UniquePtr) {
+ (void)DynamicAutoDiffCostFunction(std::make_unique<MyCostFunctor>());
+}
+
+} // namespace ceres::internal
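These tests always configure the cost function at runtime through AddParameterBlock and SetNumResiduals before calling Evaluate. A minimal sketch of the same pattern wired into a ceres::Problem, assuming an illustrative functor and block size that are not part of this test file:

    #include <vector>

    #include "ceres/ceres.h"

    // Illustrative functor: five residuals, r_i = x_i - 1, over one block.
    struct IllustrativeFunctor {
      template <typename T>
      bool operator()(T const* const* parameters, T* residuals) const {
        for (int i = 0; i < 5; ++i) {
          residuals[i] = parameters[0][i] - T(1.0);
        }
        return true;
      }
    };

    // block must hold at least five parameters for the functor above.
    void AddToProblem(ceres::Problem* problem, std::vector<double>* block) {
      auto* cost_function =
          new ceres::DynamicAutoDiffCostFunction<IllustrativeFunctor>(
              new IllustrativeFunctor());
      cost_function->AddParameterBlock(static_cast<int>(block->size()));
      cost_function->SetNumResiduals(5);
      problem->AddResidualBlock(cost_function, nullptr, block->data());
    }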
diff --git a/internal/ceres/dynamic_compressed_row_finalizer.h b/internal/ceres/dynamic_compressed_row_finalizer.h
index 30c98d8..9da73c0 100644
--- a/internal/ceres/dynamic_compressed_row_finalizer.h
+++ b/internal/ceres/dynamic_compressed_row_finalizer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,24 +28,23 @@
//
// Author: richie.stebbing@gmail.com (Richard Stebbing)
-#ifndef CERES_INTERNAL_DYNAMIC_COMPRESED_ROW_FINALIZER_H_
-#define CERES_INTERNAL_DYNAMIC_COMPRESED_ROW_FINALIZER_H_
+#ifndef CERES_INTERNAL_DYNAMIC_COMPRESSED_ROW_FINALIZER_H_
+#define CERES_INTERNAL_DYNAMIC_COMPRESSED_ROW_FINALIZER_H_
#include "ceres/casts.h"
#include "ceres/dynamic_compressed_row_sparse_matrix.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-struct DynamicCompressedRowJacobianFinalizer {
+struct CERES_NO_EXPORT DynamicCompressedRowJacobianFinalizer {
void operator()(SparseMatrix* base_jacobian, int num_parameters) {
- DynamicCompressedRowSparseMatrix* jacobian =
+ auto* jacobian =
down_cast<DynamicCompressedRowSparseMatrix*>(base_jacobian);
jacobian->Finalize(num_parameters);
}
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_INTERNAL_DYNAMIC_COMPRESED_ROW_FINALISER_H_
+#endif  // CERES_INTERNAL_DYNAMIC_COMPRESSED_ROW_FINALIZER_H_
diff --git a/internal/ceres/dynamic_compressed_row_jacobian_writer.cc b/internal/ceres/dynamic_compressed_row_jacobian_writer.cc
index 1749449..790a5fb 100644
--- a/internal/ceres/dynamic_compressed_row_jacobian_writer.cc
+++ b/internal/ceres/dynamic_compressed_row_jacobian_writer.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,6 +30,10 @@
#include "ceres/dynamic_compressed_row_jacobian_writer.h"
+#include <memory>
+#include <utility>
+#include <vector>
+
#include "ceres/casts.h"
#include "ceres/compressed_row_jacobian_writer.h"
#include "ceres/dynamic_compressed_row_sparse_matrix.h"
@@ -37,38 +41,33 @@
#include "ceres/program.h"
#include "ceres/residual_block.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::pair;
-using std::vector;
-
-ScratchEvaluatePreparer*
+std::unique_ptr<ScratchEvaluatePreparer[]>
DynamicCompressedRowJacobianWriter::CreateEvaluatePreparers(int num_threads) {
return ScratchEvaluatePreparer::Create(*program_, num_threads);
}
-SparseMatrix* DynamicCompressedRowJacobianWriter::CreateJacobian() const {
- DynamicCompressedRowSparseMatrix* jacobian =
- new DynamicCompressedRowSparseMatrix(program_->NumResiduals(),
- program_->NumEffectiveParameters(),
- 0 /* max_num_nonzeros */);
- return jacobian;
+std::unique_ptr<SparseMatrix>
+DynamicCompressedRowJacobianWriter::CreateJacobian() const {
+ return std::make_unique<DynamicCompressedRowSparseMatrix>(
+ program_->NumResiduals(),
+ program_->NumEffectiveParameters(),
+ 0 /* max_num_nonzeros */);
}
void DynamicCompressedRowJacobianWriter::Write(int residual_id,
int residual_offset,
double** jacobians,
SparseMatrix* base_jacobian) {
- DynamicCompressedRowSparseMatrix* jacobian =
- down_cast<DynamicCompressedRowSparseMatrix*>(base_jacobian);
+ auto* jacobian = down_cast<DynamicCompressedRowSparseMatrix*>(base_jacobian);
// Get the `residual_block` of interest.
const ResidualBlock* residual_block =
program_->residual_blocks()[residual_id];
const int num_residuals = residual_block->NumResiduals();
- vector<pair<int, int>> evaluated_jacobian_blocks;
+ std::vector<std::pair<int, int>> evaluated_jacobian_blocks;
CompressedRowJacobianWriter::GetOrderedParameterBlocks(
program_, residual_id, &evaluated_jacobian_blocks);
@@ -77,12 +76,11 @@
jacobian->ClearRows(residual_offset, num_residuals);
// Iterate over each parameter block.
- for (int i = 0; i < evaluated_jacobian_blocks.size(); ++i) {
+ for (const auto& evaluated_jacobian_block : evaluated_jacobian_blocks) {
const ParameterBlock* parameter_block =
- program_->parameter_blocks()[evaluated_jacobian_blocks[i].first];
- const int parameter_block_jacobian_index =
- evaluated_jacobian_blocks[i].second;
- const int parameter_block_size = parameter_block->LocalSize();
+ program_->parameter_blocks()[evaluated_jacobian_block.first];
+ const int parameter_block_jacobian_index = evaluated_jacobian_block.second;
+ const int parameter_block_size = parameter_block->TangentSize();
const double* parameter_jacobian =
jacobians[parameter_block_jacobian_index];
@@ -100,5 +98,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/dynamic_compressed_row_jacobian_writer.h b/internal/ceres/dynamic_compressed_row_jacobian_writer.h
index ef8fa25..489197f 100644
--- a/internal/ceres/dynamic_compressed_row_jacobian_writer.h
+++ b/internal/ceres/dynamic_compressed_row_jacobian_writer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,16 +34,18 @@
#ifndef CERES_INTERNAL_DYNAMIC_COMPRESSED_ROW_JACOBIAN_WRITER_H_
#define CERES_INTERNAL_DYNAMIC_COMPRESSED_ROW_JACOBIAN_WRITER_H_
+#include <memory>
+
#include "ceres/evaluator.h"
+#include "ceres/internal/export.h"
#include "ceres/scratch_evaluate_preparer.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class Program;
class SparseMatrix;
-class DynamicCompressedRowJacobianWriter {
+class CERES_NO_EXPORT DynamicCompressedRowJacobianWriter {
public:
DynamicCompressedRowJacobianWriter(Evaluator::Options /* ignored */,
Program* program)
@@ -55,16 +57,17 @@
// the cost functions. The scratch space is therefore used to store
// the jacobians (including zeros) temporarily before only the non-zero
// entries are copied over to the larger jacobian in `Write`.
- ScratchEvaluatePreparer* CreateEvaluatePreparers(int num_threads);
+ std::unique_ptr<ScratchEvaluatePreparer[]> CreateEvaluatePreparers(
+ int num_threads);
// Return a `DynamicCompressedRowSparseMatrix` which is filled by
// `Write`. Note that `Finalize` must be called to make the
// `CompressedRowSparseMatrix` interface valid.
- SparseMatrix* CreateJacobian() const;
+ std::unique_ptr<SparseMatrix> CreateJacobian() const;
// Write only the non-zero jacobian entries for a residual block
// (specified by `residual_id`) into `base_jacobian`, starting at the row
- // specifed by `residual_offset`.
+ // specified by `residual_offset`.
//
// This method is thread-safe over residual blocks (each `residual_id`).
void Write(int residual_id,
@@ -76,7 +79,6 @@
Program* program_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_DYNAMIC_COMPRESSED_ROW_JACOBIAN_WRITER_H_
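// The comments above describe the writer's pattern: because residual blocks
// cannot pre-announce their sparsity, each thread evaluates into a dense
// per-thread scratch buffer and only the non-zero entries are copied into
// the sparse jacobian afterwards. A minimal standalone sketch of that data
// flow, using plain std::vector stand-ins rather than the internal Ceres
// classes:

#include <cstdio>
#include <utility>
#include <vector>

int main() {
  const int num_parameters = 6;
  // Dense per-thread scratch row (the role played by ScratchEvaluatePreparer).
  std::vector<double> scratch(num_parameters, 0.0);
  // Pretend a cost function wrote a mostly-zero dense jacobian row into it.
  scratch[1] = 2.5;
  scratch[4] = -1.0;
  // Copy only the non-zeros into a sparse (column, value) row, which is the
  // step performed by Write() for each residual block.
  std::vector<std::pair<int, double>> sparse_row;
  for (int col = 0; col < num_parameters; ++col) {
    if (scratch[col] != 0.0) sparse_row.emplace_back(col, scratch[col]);
  }
  for (const auto& [col, value] : sparse_row) {
    std::printf("col=%d value=%f\n", col, value);
  }
  return 0;
}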
diff --git a/internal/ceres/dynamic_compressed_row_sparse_matrix.cc b/internal/ceres/dynamic_compressed_row_sparse_matrix.cc
index 936e682..4081c9c 100644
--- a/internal/ceres/dynamic_compressed_row_sparse_matrix.cc
+++ b/internal/ceres/dynamic_compressed_row_sparse_matrix.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,8 +32,7 @@
#include <cstring>
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
DynamicCompressedRowSparseMatrix::DynamicCompressedRowSparseMatrix(
int num_rows, int num_cols, int initial_max_num_nonzeros)
@@ -70,8 +69,8 @@
// Count the number of non-zeros and resize `cols_` and `values_`.
int num_jacobian_nonzeros = 0;
- for (int i = 0; i < dynamic_cols_.size(); ++i) {
- num_jacobian_nonzeros += dynamic_cols_[i].size();
+ for (const auto& dynamic_col : dynamic_cols_) {
+ num_jacobian_nonzeros += dynamic_col.size();
}
SetMaxNumNonZeros(num_jacobian_nonzeros + num_additional_elements);
@@ -99,5 +98,4 @@
<< "the number of jacobian nonzeros. Please contact the developers!";
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/dynamic_compressed_row_sparse_matrix.h b/internal/ceres/dynamic_compressed_row_sparse_matrix.h
index d06c36e..6dafe59 100644
--- a/internal/ceres/dynamic_compressed_row_sparse_matrix.h
+++ b/internal/ceres/dynamic_compressed_row_sparse_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -44,15 +44,15 @@
#include <vector>
#include "ceres/compressed_row_sparse_matrix.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-class CERES_EXPORT_INTERNAL DynamicCompressedRowSparseMatrix
+class CERES_NO_EXPORT DynamicCompressedRowSparseMatrix final
: public CompressedRowSparseMatrix {
public:
- // Set the number of rows and columns for the underlyig
+ // Set the number of rows and columns for the underlying
// `CompressedRowSparseMatrix` and set the initial number of maximum non-zero
// entries. Note that following the insertion of entries, when `Finalize`
// is called the number of non-zeros is determined and all internal
@@ -73,7 +73,7 @@
// Insert an entry at a given row and column position. This method is
// thread-safe across rows i.e. different threads can insert values
- // simultaneously into different rows. It should be emphasised that this
+ // simultaneously into different rows. It should be emphasized that this
// method always inserts a new entry and does not check for existing
// entries at the specified row and column position. Duplicate entries
// for a given row and column position will result in undefined
@@ -97,7 +97,8 @@
std::vector<std::vector<double>> dynamic_values_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_DYNAMIC_COMPRESSED_ROW_SPARSE_MATRIX_H_
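// DynamicCompressedRowSparseMatrix buffers insertions per row and only builds
// the compressed-row arrays when Finalize() is called, which is what makes
// InsertEntry() safe to call concurrently from different rows. A minimal
// standalone sketch of that two-phase layout, using only the standard library
// (the type below is illustrative, not the Ceres class itself):

#include <cstdio>
#include <vector>

struct ToyDynamicCrs {
  explicit ToyDynamicCrs(int num_rows)
      : dynamic_cols(num_rows), dynamic_values(num_rows) {}

  // Safe across rows: each row owns its own buffers.
  void InsertEntry(int row, int col, double value) {
    dynamic_cols[row].push_back(col);
    dynamic_values[row].push_back(value);
  }

  // Flatten the per-row buffers into CRS rows/cols/values arrays.
  void Finalize() {
    rows.assign(1, 0);
    cols.clear();
    values.clear();
    for (size_t r = 0; r < dynamic_cols.size(); ++r) {
      cols.insert(cols.end(), dynamic_cols[r].begin(), dynamic_cols[r].end());
      values.insert(
          values.end(), dynamic_values[r].begin(), dynamic_values[r].end());
      rows.push_back(static_cast<int>(cols.size()));
    }
  }

  std::vector<std::vector<int>> dynamic_cols;
  std::vector<std::vector<double>> dynamic_values;
  std::vector<int> rows, cols;
  std::vector<double> values;
};

int main() {
  ToyDynamicCrs m(3);
  m.InsertEntry(0, 2, 1.0);
  m.InsertEntry(2, 0, -3.0);
  m.InsertEntry(2, 1, 4.0);
  m.Finalize();
  for (size_t r = 0; r + 1 < m.rows.size(); ++r) {
    std::printf("row %zu has %d nonzeros\n", r, m.rows[r + 1] - m.rows[r]);
  }
  return 0;
}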
diff --git a/internal/ceres/dynamic_compressed_row_sparse_matrix_test.cc b/internal/ceres/dynamic_compressed_row_sparse_matrix_test.cc
index 95dc807..47d86f2 100644
--- a/internal/ceres/dynamic_compressed_row_sparse_matrix_test.cc
+++ b/internal/ceres/dynamic_compressed_row_sparse_matrix_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,6 +31,7 @@
#include "ceres/dynamic_compressed_row_sparse_matrix.h"
#include <memory>
+#include <vector>
#include "ceres/casts.h"
#include "ceres/compressed_row_sparse_matrix.h"
@@ -42,9 +43,6 @@
namespace ceres {
namespace internal {
-using std::copy;
-using std::vector;
-
class DynamicCompressedRowSparseMatrixTest : public ::testing::Test {
protected:
void SetUp() final {
@@ -61,7 +59,8 @@
InitialiseDenseReference();
InitialiseSparseMatrixReferences();
- dcrsm.reset(new DynamicCompressedRowSparseMatrix(num_rows, num_cols, 0));
+ dcrsm = std::make_unique<DynamicCompressedRowSparseMatrix>(
+ num_rows, num_cols, 0);
}
void Finalize() { dcrsm->Finalize(num_additional_elements); }
@@ -81,8 +80,8 @@
}
void InitialiseSparseMatrixReferences() {
- vector<int> rows, cols;
- vector<double> values;
+ std::vector<int> rows, cols;
+ std::vector<double> values;
for (int i = 0; i < (num_rows * num_cols); ++i) {
const int r = i / num_cols, c = i % num_cols;
if (r != c) {
@@ -93,18 +92,18 @@
}
ASSERT_EQ(values.size(), expected_num_nonzeros);
- tsm.reset(
- new TripletSparseMatrix(num_rows, num_cols, expected_num_nonzeros));
- copy(rows.begin(), rows.end(), tsm->mutable_rows());
- copy(cols.begin(), cols.end(), tsm->mutable_cols());
- copy(values.begin(), values.end(), tsm->mutable_values());
+ tsm = std::make_unique<TripletSparseMatrix>(
+ num_rows, num_cols, expected_num_nonzeros);
+ std::copy(rows.begin(), rows.end(), tsm->mutable_rows());
+ std::copy(cols.begin(), cols.end(), tsm->mutable_cols());
+ std::copy(values.begin(), values.end(), tsm->mutable_values());
tsm->set_num_nonzeros(values.size());
Matrix dense_from_tsm;
tsm->ToDenseMatrix(&dense_from_tsm);
ASSERT_TRUE((dense.array() == dense_from_tsm.array()).all());
- crsm.reset(CompressedRowSparseMatrix::FromTripletSparseMatrix(*tsm));
+ crsm = CompressedRowSparseMatrix::FromTripletSparseMatrix(*tsm);
Matrix dense_from_crsm;
crsm->ToDenseMatrix(&dense_from_crsm);
ASSERT_TRUE((dense.array() == dense_from_crsm.array()).all());
@@ -140,7 +139,7 @@
}
void ExpectEqualToCompressedRowSparseMatrixReference() {
- typedef Eigen::Map<const Eigen::VectorXi> ConstIntVectorRef;
+ using ConstIntVectorRef = Eigen::Map<const Eigen::VectorXi>;
ConstIntVectorRef crsm_rows(crsm->rows(), crsm->num_rows() + 1);
ConstIntVectorRef dcrsm_rows(dcrsm->rows(), dcrsm->num_rows() + 1);
diff --git a/internal/ceres/dynamic_numeric_diff_cost_function_test.cc b/internal/ceres/dynamic_numeric_diff_cost_function_test.cc
index 0150f5e..aec7819 100644
--- a/internal/ceres/dynamic_numeric_diff_cost_function_test.cc
+++ b/internal/ceres/dynamic_numeric_diff_cost_function_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,13 +33,13 @@
#include <cstddef>
#include <memory>
+#include <vector>
+#include "ceres/numeric_diff_options.h"
+#include "ceres/types.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
const double kTolerance = 1e-6;
@@ -75,8 +75,8 @@
};
TEST(DynamicNumericdiffCostFunctionTest, TestResiduals) {
- vector<double> param_block_0(10, 0.0);
- vector<double> param_block_1(5, 0.0);
+ std::vector<double> param_block_0(10, 0.0);
+ std::vector<double> param_block_1(5, 0.0);
DynamicNumericDiffCostFunction<MyCostFunctor> cost_function(
new MyCostFunctor());
cost_function.AddParameterBlock(param_block_0.size());
@@ -84,12 +84,12 @@
cost_function.SetNumResiduals(21);
// Test residual computation.
- vector<double> residuals(21, -100000);
- vector<double*> parameter_blocks(2);
+ std::vector<double> residuals(21, -100000);
+ std::vector<double*> parameter_blocks(2);
parameter_blocks[0] = &param_block_0[0];
parameter_blocks[1] = &param_block_1[0];
EXPECT_TRUE(
- cost_function.Evaluate(&parameter_blocks[0], residuals.data(), NULL));
+ cost_function.Evaluate(&parameter_blocks[0], residuals.data(), nullptr));
for (int r = 0; r < 10; ++r) {
EXPECT_EQ(1.0 * r, residuals.at(r * 2));
EXPECT_EQ(-1.0 * r, residuals.at(r * 2 + 1));
@@ -99,11 +99,11 @@
TEST(DynamicNumericdiffCostFunctionTest, TestJacobian) {
// Test the residual counting.
- vector<double> param_block_0(10, 0.0);
+ std::vector<double> param_block_0(10, 0.0);
for (int i = 0; i < 10; ++i) {
param_block_0[i] = 2 * i;
}
- vector<double> param_block_1(5, 0.0);
+ std::vector<double> param_block_1(5, 0.0);
DynamicNumericDiffCostFunction<MyCostFunctor> cost_function(
new MyCostFunctor());
cost_function.AddParameterBlock(param_block_0.size());
@@ -111,18 +111,18 @@
cost_function.SetNumResiduals(21);
// Prepare the residuals.
- vector<double> residuals(21, -100000);
+ std::vector<double> residuals(21, -100000);
// Prepare the parameters.
- vector<double*> parameter_blocks(2);
+ std::vector<double*> parameter_blocks(2);
parameter_blocks[0] = &param_block_0[0];
parameter_blocks[1] = &param_block_1[0];
// Prepare the jacobian.
- vector<vector<double>> jacobian_vect(2);
+ std::vector<std::vector<double>> jacobian_vect(2);
jacobian_vect[0].resize(21 * 10, -100000);
jacobian_vect[1].resize(21 * 5, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect[0].data());
jacobian.push_back(jacobian_vect[1].data());
@@ -149,8 +149,8 @@
EXPECT_NEAR(4 * p - 8, jacobian_vect[0][20 * 10 + p], kTolerance);
jacobian_vect[0][20 * 10 + p] = 0.0;
}
- for (int i = 0; i < jacobian_vect[0].size(); ++i) {
- EXPECT_NEAR(0.0, jacobian_vect[0][i], kTolerance);
+ for (double entry : jacobian_vect[0]) {
+ EXPECT_NEAR(0.0, entry, kTolerance);
}
// Check "C" Jacobian for second parameter block.
@@ -158,19 +158,19 @@
EXPECT_NEAR(1.0, jacobian_vect[1][20 * 5 + p], kTolerance);
jacobian_vect[1][20 * 5 + p] = 0.0;
}
- for (int i = 0; i < jacobian_vect[1].size(); ++i) {
- EXPECT_NEAR(0.0, jacobian_vect[1][i], kTolerance);
+ for (double entry : jacobian_vect[1]) {
+ EXPECT_NEAR(0.0, entry, kTolerance);
}
}
TEST(DynamicNumericdiffCostFunctionTest,
JacobianWithFirstParameterBlockConstant) { // NOLINT
// Test the residual counting.
- vector<double> param_block_0(10, 0.0);
+ std::vector<double> param_block_0(10, 0.0);
for (int i = 0; i < 10; ++i) {
param_block_0[i] = 2 * i;
}
- vector<double> param_block_1(5, 0.0);
+ std::vector<double> param_block_1(5, 0.0);
DynamicNumericDiffCostFunction<MyCostFunctor> cost_function(
new MyCostFunctor());
cost_function.AddParameterBlock(param_block_0.size());
@@ -178,19 +178,19 @@
cost_function.SetNumResiduals(21);
// Prepare the residuals.
- vector<double> residuals(21, -100000);
+ std::vector<double> residuals(21, -100000);
// Prepare the parameters.
- vector<double*> parameter_blocks(2);
+ std::vector<double*> parameter_blocks(2);
parameter_blocks[0] = &param_block_0[0];
parameter_blocks[1] = &param_block_1[0];
// Prepare the jacobian.
- vector<vector<double>> jacobian_vect(2);
+ std::vector<std::vector<double>> jacobian_vect(2);
jacobian_vect[0].resize(21 * 10, -100000);
jacobian_vect[1].resize(21 * 5, -100000);
- vector<double*> jacobian;
- jacobian.push_back(NULL);
+ std::vector<double*> jacobian;
+ jacobian.push_back(nullptr);
jacobian.push_back(jacobian_vect[1].data());
// Test jacobian computation.
@@ -208,19 +208,19 @@
EXPECT_NEAR(1.0, jacobian_vect[1][20 * 5 + p], kTolerance);
jacobian_vect[1][20 * 5 + p] = 0.0;
}
- for (int i = 0; i < jacobian_vect[1].size(); ++i) {
- EXPECT_EQ(0.0, jacobian_vect[1][i]);
+ for (double& i : jacobian_vect[1]) {
+ EXPECT_EQ(0.0, i);
}
}
TEST(DynamicNumericdiffCostFunctionTest,
JacobianWithSecondParameterBlockConstant) { // NOLINT
// Test the residual counting.
- vector<double> param_block_0(10, 0.0);
+ std::vector<double> param_block_0(10, 0.0);
for (int i = 0; i < 10; ++i) {
param_block_0[i] = 2 * i;
}
- vector<double> param_block_1(5, 0.0);
+ std::vector<double> param_block_1(5, 0.0);
DynamicNumericDiffCostFunction<MyCostFunctor> cost_function(
new MyCostFunctor());
cost_function.AddParameterBlock(param_block_0.size());
@@ -228,20 +228,20 @@
cost_function.SetNumResiduals(21);
// Prepare the residuals.
- vector<double> residuals(21, -100000);
+ std::vector<double> residuals(21, -100000);
// Prepare the parameters.
- vector<double*> parameter_blocks(2);
+ std::vector<double*> parameter_blocks(2);
parameter_blocks[0] = &param_block_0[0];
parameter_blocks[1] = &param_block_1[0];
// Prepare the jacobian.
- vector<vector<double>> jacobian_vect(2);
+ std::vector<std::vector<double>> jacobian_vect(2);
jacobian_vect[0].resize(21 * 10, -100000);
jacobian_vect[1].resize(21 * 5, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect[0].data());
- jacobian.push_back(NULL);
+ jacobian.push_back(nullptr);
// Test jacobian computation.
EXPECT_TRUE(cost_function.Evaluate(
@@ -266,8 +266,8 @@
EXPECT_NEAR(4 * p - 8, jacobian_vect[0][20 * 10 + p], kTolerance);
jacobian_vect[0][20 * 10 + p] = 0.0;
}
- for (int i = 0; i < jacobian_vect[0].size(); ++i) {
- EXPECT_EQ(0.0, jacobian_vect[0][i]);
+ for (double& i : jacobian_vect[0]) {
+ EXPECT_EQ(0.0, i);
}
}
@@ -328,17 +328,16 @@
parameter_blocks_[2] = &z_[0];
// Prepare the cost function.
- typedef DynamicNumericDiffCostFunction<MyThreeParameterCostFunctor>
- DynamicMyThreeParameterCostFunction;
- DynamicMyThreeParameterCostFunction* cost_function =
- new DynamicMyThreeParameterCostFunction(
- new MyThreeParameterCostFunctor());
+ using DynamicMyThreeParameterCostFunction =
+ DynamicNumericDiffCostFunction<MyThreeParameterCostFunctor>;
+ auto cost_function = std::make_unique<DynamicMyThreeParameterCostFunction>(
+ new MyThreeParameterCostFunctor());
cost_function->AddParameterBlock(1);
cost_function->AddParameterBlock(2);
cost_function->AddParameterBlock(3);
cost_function->SetNumResiduals(7);
- cost_function_.reset(cost_function);
+ cost_function_ = std::move(cost_function);
// Setup jacobian data.
jacobian_vect_.resize(3);
@@ -411,36 +410,36 @@
}
protected:
- vector<double> x_;
- vector<double> y_;
- vector<double> z_;
+ std::vector<double> x_;
+ std::vector<double> y_;
+ std::vector<double> z_;
- vector<double*> parameter_blocks_;
+ std::vector<double*> parameter_blocks_;
std::unique_ptr<CostFunction> cost_function_;
- vector<vector<double>> jacobian_vect_;
+ std::vector<std::vector<double>> jacobian_vect_;
- vector<double> expected_residuals_;
+ std::vector<double> expected_residuals_;
- vector<double> expected_jacobian_x_;
- vector<double> expected_jacobian_y_;
- vector<double> expected_jacobian_z_;
+ std::vector<double> expected_jacobian_x_;
+ std::vector<double> expected_jacobian_y_;
+ std::vector<double> expected_jacobian_z_;
};
TEST_F(ThreeParameterCostFunctorTest, TestThreeParameterResiduals) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
EXPECT_TRUE(cost_function_->Evaluate(
- parameter_blocks_.data(), residuals.data(), NULL));
+ parameter_blocks_.data(), residuals.data(), nullptr));
for (int i = 0; i < 7; ++i) {
EXPECT_EQ(expected_residuals_[i], residuals[i]);
}
}
TEST_F(ThreeParameterCostFunctorTest, TestThreeParameterJacobian) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect_[0].data());
jacobian.push_back(jacobian_vect_[1].data());
jacobian.push_back(jacobian_vect_[2].data());
@@ -467,12 +466,12 @@
TEST_F(ThreeParameterCostFunctorTest,
ThreeParameterJacobianWithFirstAndLastParameterBlockConstant) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
- vector<double*> jacobian;
- jacobian.push_back(NULL);
+ std::vector<double*> jacobian;
+ jacobian.push_back(nullptr);
jacobian.push_back(jacobian_vect_[1].data());
- jacobian.push_back(NULL);
+ jacobian.push_back(nullptr);
EXPECT_TRUE(cost_function_->Evaluate(
parameter_blocks_.data(), residuals.data(), jacobian.data()));
@@ -488,11 +487,11 @@
TEST_F(ThreeParameterCostFunctorTest,
ThreeParameterJacobianWithSecondParameterBlockConstant) {
- vector<double> residuals(7, -100000);
+ std::vector<double> residuals(7, -100000);
- vector<double*> jacobian;
+ std::vector<double*> jacobian;
jacobian.push_back(jacobian_vect_[0].data());
- jacobian.push_back(NULL);
+ jacobian.push_back(nullptr);
jacobian.push_back(jacobian_vect_[2].data());
EXPECT_TRUE(cost_function_->Evaluate(
@@ -511,5 +510,24 @@
}
}
-} // namespace internal
-} // namespace ceres
+TEST(DynamicNumericdiffCostFunctionTest, DeductionTemplateCompilationTest) {
+ // Ensure deduction guide to be working
+ // Ensure the deduction guides are working.
+ (void)DynamicNumericDiffCostFunction{std::make_unique<MyCostFunctor>(),
+ NumericDiffOptions{}};
+ (void)DynamicNumericDiffCostFunction{new MyCostFunctor};
+ (void)DynamicNumericDiffCostFunction{new MyCostFunctor, TAKE_OWNERSHIP};
+ (void)DynamicNumericDiffCostFunction{
+ new MyCostFunctor, TAKE_OWNERSHIP, NumericDiffOptions{}};
+}
+
+TEST(DynamicNumericdiffCostFunctionTest, ArgumentForwarding) {
+ (void)DynamicNumericDiffCostFunction<MyCostFunctor>();
+}
+
+TEST(DynamicAutoDiffCostFunctionTest, UniquePtr) {
+ (void)DynamicNumericDiffCostFunction<MyCostFunctor>(
+ std::make_unique<MyCostFunctor>());
+}
+
+} // namespace ceres::internal
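// The tests above drive DynamicNumericDiffCostFunction through internal
// fixtures. For reference, a minimal sketch of the public-API usage pattern
// they exercise; the functor and sizes here are illustrative, not part of
// Ceres:

#include <vector>

#include "ceres/dynamic_numeric_diff_cost_function.h"

struct IllustrativeFunctor {
  bool operator()(double const* const* parameters, double* residuals) const {
    // One residual per entry of the single, dynamically sized block.
    for (int i = 0; i < 3; ++i) residuals[i] = 2.0 * parameters[0][i];
    return true;
  }
};

int main() {
  ceres::DynamicNumericDiffCostFunction<IllustrativeFunctor> cost_function(
      new IllustrativeFunctor);
  cost_function.AddParameterBlock(3);  // Block sizes are chosen at runtime.
  cost_function.SetNumResiduals(3);

  std::vector<double> block(3, 1.0);
  std::vector<double*> parameter_blocks = {block.data()};
  std::vector<double> residuals(3, 0.0);
  // Passing nullptr for the jacobians evaluates residuals only.
  cost_function.Evaluate(parameter_blocks.data(), residuals.data(), nullptr);
  return 0;
}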
diff --git a/internal/ceres/dynamic_sparse_normal_cholesky_solver.cc b/internal/ceres/dynamic_sparse_normal_cholesky_solver.cc
index d31c422..d77d7f7 100644
--- a/internal/ceres/dynamic_sparse_normal_cholesky_solver.cc
+++ b/internal/ceres/dynamic_sparse_normal_cholesky_solver.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,10 +35,11 @@
#include <ctime>
#include <memory>
#include <sstream>
+#include <utility>
#include "Eigen/SparseCore"
#include "ceres/compressed_row_sparse_matrix.h"
-#include "ceres/cxsparse.h"
+#include "ceres/internal/config.h"
#include "ceres/internal/eigen.h"
#include "ceres/linear_solver.h"
#include "ceres/suitesparse.h"
@@ -50,12 +51,11 @@
#include "Eigen/SparseCholesky"
#endif
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
DynamicSparseNormalCholeskySolver::DynamicSparseNormalCholeskySolver(
- const LinearSolver::Options& options)
- : options_(options) {}
+ LinearSolver::Options options)
+ : options_(std::move(options)) {}
LinearSolver::Summary DynamicSparseNormalCholeskySolver::SolveImpl(
CompressedRowSparseMatrix* A,
@@ -64,18 +64,18 @@
double* x) {
const int num_cols = A->num_cols();
VectorRef(x, num_cols).setZero();
- A->LeftMultiply(b, x);
+ A->LeftMultiplyAndAccumulate(b, x);
if (per_solve_options.D != nullptr) {
// Temporarily append a diagonal block to the A matrix, but undo
// it before returning the matrix to the user.
std::unique_ptr<CompressedRowSparseMatrix> regularizer;
if (!A->col_blocks().empty()) {
- regularizer.reset(CompressedRowSparseMatrix::CreateBlockDiagonalMatrix(
- per_solve_options.D, A->col_blocks()));
+ regularizer = CompressedRowSparseMatrix::CreateBlockDiagonalMatrix(
+ per_solve_options.D, A->col_blocks());
} else {
- regularizer.reset(
- new CompressedRowSparseMatrix(per_solve_options.D, num_cols));
+ regularizer = std::make_unique<CompressedRowSparseMatrix>(
+ per_solve_options.D, num_cols);
}
A->AppendRows(*regularizer);
}
@@ -85,9 +85,6 @@
case SUITE_SPARSE:
summary = SolveImplUsingSuiteSparse(A, x);
break;
- case CX_SPARSE:
- summary = SolveImplUsingCXSparse(A, x);
- break;
case EIGEN_SPARSE:
summary = SolveImplUsingEigen(A, x);
break;
@@ -111,7 +108,7 @@
LinearSolver::Summary summary;
summary.num_iterations = 0;
- summary.termination_type = LINEAR_SOLVER_FATAL_ERROR;
+ summary.termination_type = LinearSolverTerminationType::FATAL_ERROR;
summary.message =
"SPARSE_NORMAL_CHOLESKY cannot be used with EIGEN_SPARSE "
"because Ceres was not built with support for "
@@ -123,19 +120,20 @@
EventLogger event_logger("DynamicSparseNormalCholeskySolver::Eigen::Solve");
- Eigen::MappedSparseMatrix<double, Eigen::RowMajor> a(A->num_rows(),
- A->num_cols(),
- A->num_nonzeros(),
- A->mutable_rows(),
- A->mutable_cols(),
- A->mutable_values());
+ Eigen::Map<Eigen::SparseMatrix<double, Eigen::RowMajor>> a(
+ A->num_rows(),
+ A->num_cols(),
+ A->num_nonzeros(),
+ A->mutable_rows(),
+ A->mutable_cols(),
+ A->mutable_values());
Eigen::SparseMatrix<double> lhs = a.transpose() * a;
Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver;
LinearSolver::Summary summary;
summary.num_iterations = 1;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
summary.message = "Success.";
solver.analyzePattern(lhs);
@@ -147,7 +145,7 @@
event_logger.AddEvent("Analyze");
if (solver.info() != Eigen::Success) {
- summary.termination_type = LINEAR_SOLVER_FATAL_ERROR;
+ summary.termination_type = LinearSolverTerminationType::FATAL_ERROR;
summary.message = "Eigen failure. Unable to find symbolic factorization.";
return summary;
}
@@ -155,7 +153,7 @@
solver.factorize(lhs);
event_logger.AddEvent("Factorize");
if (solver.info() != Eigen::Success) {
- summary.termination_type = LINEAR_SOLVER_FAILURE;
+ summary.termination_type = LinearSolverTerminationType::FAILURE;
summary.message = "Eigen failure. Unable to find numeric factorization.";
return summary;
}
@@ -164,7 +162,7 @@
VectorRef(rhs_and_solution, lhs.cols()) = solver.solve(rhs);
event_logger.AddEvent("Solve");
if (solver.info() != Eigen::Success) {
- summary.termination_type = LINEAR_SOLVER_FAILURE;
+ summary.termination_type = LinearSolverTerminationType::FAILURE;
summary.message = "Eigen failure. Unable to do triangular solve.";
return summary;
}
@@ -173,66 +171,16 @@
#endif // CERES_USE_EIGEN_SPARSE
}
-LinearSolver::Summary DynamicSparseNormalCholeskySolver::SolveImplUsingCXSparse(
- CompressedRowSparseMatrix* A, double* rhs_and_solution) {
-#ifdef CERES_NO_CXSPARSE
-
- LinearSolver::Summary summary;
- summary.num_iterations = 0;
- summary.termination_type = LINEAR_SOLVER_FATAL_ERROR;
- summary.message =
- "SPARSE_NORMAL_CHOLESKY cannot be used with CX_SPARSE "
- "because Ceres was not built with support for CXSparse. "
- "This requires enabling building with -DCXSPARSE=ON.";
-
- return summary;
-
-#else
- EventLogger event_logger(
- "DynamicSparseNormalCholeskySolver::CXSparse::Solve");
-
- LinearSolver::Summary summary;
- summary.num_iterations = 1;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
- summary.message = "Success.";
-
- CXSparse cxsparse;
-
- // Wrap the augmented Jacobian in a compressed sparse column matrix.
- cs_di a_transpose = cxsparse.CreateSparseMatrixTransposeView(A);
-
- // Compute the normal equations. J'J delta = J'f and solve them
- // using a sparse Cholesky factorization. Notice that when compared
- // to SuiteSparse we have to explicitly compute the transpose of Jt,
- // and then the normal equations before they can be
- // factorized. CHOLMOD/SuiteSparse on the other hand can just work
- // off of Jt to compute the Cholesky factorization of the normal
- // equations.
- cs_di* a = cxsparse.TransposeMatrix(&a_transpose);
- cs_di* lhs = cxsparse.MatrixMatrixMultiply(&a_transpose, a);
- cxsparse.Free(a);
- event_logger.AddEvent("NormalEquations");
-
- if (!cxsparse.SolveCholesky(lhs, rhs_and_solution)) {
- summary.termination_type = LINEAR_SOLVER_FAILURE;
- summary.message = "CXSparse::SolveCholesky failed";
- }
- event_logger.AddEvent("Solve");
-
- cxsparse.Free(lhs);
- event_logger.AddEvent("TearDown");
- return summary;
-#endif
-}
-
LinearSolver::Summary
DynamicSparseNormalCholeskySolver::SolveImplUsingSuiteSparse(
CompressedRowSparseMatrix* A, double* rhs_and_solution) {
#ifdef CERES_NO_SUITESPARSE
+ (void)A;
+ (void)rhs_and_solution;
LinearSolver::Summary summary;
summary.num_iterations = 0;
- summary.termination_type = LINEAR_SOLVER_FATAL_ERROR;
+ summary.termination_type = LinearSolverTerminationType::FATAL_ERROR;
summary.message =
"SPARSE_NORMAL_CHOLESKY cannot be used with SUITE_SPARSE "
"because Ceres was not built with support for SuiteSparse. "
@@ -244,7 +192,7 @@
EventLogger event_logger(
"DynamicSparseNormalCholeskySolver::SuiteSparse::Solve");
LinearSolver::Summary summary;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
summary.num_iterations = 1;
summary.message = "Success.";
@@ -252,16 +200,17 @@
const int num_cols = A->num_cols();
cholmod_sparse lhs = ss.CreateSparseMatrixTransposeView(A);
event_logger.AddEvent("Setup");
- cholmod_factor* factor = ss.AnalyzeCholesky(&lhs, &summary.message);
+ cholmod_factor* factor =
+ ss.AnalyzeCholesky(&lhs, options_.ordering_type, &summary.message);
event_logger.AddEvent("Analysis");
if (factor == nullptr) {
- summary.termination_type = LINEAR_SOLVER_FATAL_ERROR;
+ summary.termination_type = LinearSolverTerminationType::FATAL_ERROR;
return summary;
}
summary.termination_type = ss.Cholesky(&lhs, factor, &summary.message);
- if (summary.termination_type == LINEAR_SOLVER_SUCCESS) {
+ if (summary.termination_type == LinearSolverTerminationType::SUCCESS) {
cholmod_dense cholmod_rhs =
ss.CreateDenseVectorView(rhs_and_solution, num_cols);
cholmod_dense* solution = ss.Solve(factor, &cholmod_rhs, &summary.message);
@@ -271,7 +220,7 @@
rhs_and_solution, solution->x, num_cols * sizeof(*rhs_and_solution));
ss.Free(solution);
} else {
- summary.termination_type = LINEAR_SOLVER_FAILURE;
+ summary.termination_type = LinearSolverTerminationType::FAILURE;
}
}
@@ -282,5 +231,4 @@
#endif
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
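// SolveImplUsingEigen above re-forms and re-factorizes the normal equations
// J^T J x = J^T b on every call, since with dynamic sparsity the pattern of J
// can change between iterations. A standalone Eigen sketch of that same
// computation on a made-up 3x2 jacobian:

#include <iostream>
#include <vector>

#include "Eigen/SparseCholesky"
#include "Eigen/SparseCore"

int main() {
  std::vector<Eigen::Triplet<double>> triplets = {
      {0, 0, 1.0}, {1, 1, 2.0}, {2, 0, 1.0}, {2, 1, 1.0}};
  Eigen::SparseMatrix<double> J(3, 2);
  J.setFromTriplets(triplets.begin(), triplets.end());
  Eigen::VectorXd b(3);
  b << 1.0, 2.0, 3.0;

  // Normal equations: lhs = J^T J, rhs = J^T b.
  Eigen::SparseMatrix<double> lhs = J.transpose() * J;
  Eigen::VectorXd rhs = J.transpose() * b;

  // Analyze, factorize and solve, mirroring the solver structure above.
  Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver;
  solver.analyzePattern(lhs);
  solver.factorize(lhs);
  if (solver.info() != Eigen::Success) return 1;
  Eigen::VectorXd x = solver.solve(rhs);
  std::cout << x.transpose() << "\n";
  return 0;
}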
diff --git a/internal/ceres/dynamic_sparse_normal_cholesky_solver.h b/internal/ceres/dynamic_sparse_normal_cholesky_solver.h
index 36118ba..394ba2a 100644
--- a/internal/ceres/dynamic_sparse_normal_cholesky_solver.h
+++ b/internal/ceres/dynamic_sparse_normal_cholesky_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,13 +36,13 @@
// This include must come before any #ifndef check on Ceres compile options.
// clang-format off
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
// clang-format on
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class CompressedRowSparseMatrix;
@@ -53,12 +53,10 @@
//
// TODO(alex): Add support for Accelerate sparse solvers:
// https://github.com/ceres-solver/ceres-solver/issues/397
-class DynamicSparseNormalCholeskySolver
+class CERES_NO_EXPORT DynamicSparseNormalCholeskySolver
: public CompressedRowSparseMatrixSolver {
public:
- explicit DynamicSparseNormalCholeskySolver(
- const LinearSolver::Options& options);
- virtual ~DynamicSparseNormalCholeskySolver() {}
+ explicit DynamicSparseNormalCholeskySolver(LinearSolver::Options options);
private:
LinearSolver::Summary SolveImpl(CompressedRowSparseMatrix* A,
@@ -69,16 +67,12 @@
LinearSolver::Summary SolveImplUsingSuiteSparse(CompressedRowSparseMatrix* A,
double* rhs_and_solution);
- LinearSolver::Summary SolveImplUsingCXSparse(CompressedRowSparseMatrix* A,
- double* rhs_and_solution);
-
LinearSolver::Summary SolveImplUsingEigen(CompressedRowSparseMatrix* A,
double* rhs_and_solution);
const LinearSolver::Options options_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_DYNAMIC_SPARSE_NORMAL_CHOLESKY_SOLVER_H_
diff --git a/internal/ceres/dynamic_sparse_normal_cholesky_solver_test.cc b/internal/ceres/dynamic_sparse_normal_cholesky_solver_test.cc
index 8bf609e..4afd372 100644
--- a/internal/ceres/dynamic_sparse_normal_cholesky_solver_test.cc
+++ b/internal/ceres/dynamic_sparse_normal_cholesky_solver_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,6 +34,7 @@
#include "ceres/casts.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/context_impl.h"
+#include "ceres/internal/config.h"
#include "ceres/linear_least_squares_problems.h"
#include "ceres/linear_solver.h"
#include "ceres/triplet_sparse_matrix.h"
@@ -50,19 +51,19 @@
class DynamicSparseNormalCholeskySolverTest : public ::testing::Test {
protected:
void SetUp() final {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(1));
- A_.reset(CompressedRowSparseMatrix::FromTripletSparseMatrix(
- *down_cast<TripletSparseMatrix*>(problem->A.get())));
- b_.reset(problem->b.release());
- D_.reset(problem->D.release());
+ std::unique_ptr<LinearLeastSquaresProblem> problem =
+ CreateLinearLeastSquaresProblemFromId(1);
+ A_ = CompressedRowSparseMatrix::FromTripletSparseMatrix(
+ *down_cast<TripletSparseMatrix*>(problem->A.get()));
+ b_ = std::move(problem->b);
+ D_ = std::move(problem->D);
}
void TestSolver(const LinearSolver::Options& options, double* D) {
Matrix dense_A;
A_->ToDenseMatrix(&dense_A);
Matrix lhs = dense_A.transpose() * dense_A;
- if (D != NULL) {
+ if (D != nullptr) {
lhs += (ConstVectorRef(D, A_->num_cols()).array() *
ConstVectorRef(D, A_->num_cols()).array())
.matrix()
@@ -71,7 +72,7 @@
Vector rhs(A_->num_cols());
rhs.setZero();
- A_->LeftMultiply(b_.get(), rhs.data());
+ A_->LeftMultiplyAndAccumulate(b_.get(), rhs.data());
Vector expected_solution = lhs.llt().solve(rhs);
std::unique_ptr<LinearSolver> solver(LinearSolver::Create(options));
@@ -82,7 +83,7 @@
summary = solver->Solve(
A_.get(), b_.get(), per_solve_options, actual_solution.data());
- EXPECT_EQ(summary.termination_type, LINEAR_SOLVER_SUCCESS);
+ EXPECT_EQ(summary.termination_type, LinearSolverTerminationType::SUCCESS);
for (int i = 0; i < A_->num_cols(); ++i) {
EXPECT_NEAR(expected_solution(i), actual_solution(i), 1e-8)
@@ -92,15 +93,17 @@
}
void TestSolver(
- const SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type) {
+ const SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type,
+ const OrderingType ordering_type) {
LinearSolver::Options options;
options.type = SPARSE_NORMAL_CHOLESKY;
options.dynamic_sparsity = true;
options.sparse_linear_algebra_library_type =
sparse_linear_algebra_library_type;
+ options.ordering_type = ordering_type;
ContextImpl context;
options.context = &context;
- TestSolver(options, NULL);
+ TestSolver(options, nullptr);
TestSolver(options, D_.get());
}
@@ -110,21 +113,27 @@
};
#ifndef CERES_NO_SUITESPARSE
-TEST_F(DynamicSparseNormalCholeskySolverTest, SuiteSparse) {
- TestSolver(SUITE_SPARSE);
+TEST_F(DynamicSparseNormalCholeskySolverTest, SuiteSparseAMD) {
+ TestSolver(SUITE_SPARSE, OrderingType::AMD);
+}
+
+#ifndef CERES_NO_CHOLMOD_PARTITION
+TEST_F(DynamicSparseNormalCholeskySolverTest, SuiteSparseNESDIS) {
+ TestSolver(SUITE_SPARSE, OrderingType::NESDIS);
}
#endif
-
-#ifndef CERES_NO_CXSPARSE
-TEST_F(DynamicSparseNormalCholeskySolverTest, CXSparse) {
- TestSolver(CX_SPARSE);
-}
#endif
#ifdef CERES_USE_EIGEN_SPARSE
-TEST_F(DynamicSparseNormalCholeskySolverTest, Eigen) {
- TestSolver(EIGEN_SPARSE);
+TEST_F(DynamicSparseNormalCholeskySolverTest, EigenAMD) {
+ TestSolver(EIGEN_SPARSE, OrderingType::AMD);
}
+
+#ifndef CERES_NO_EIGEN_METIS
+TEST_F(DynamicSparseNormalCholeskySolverTest, EigenNESDIS) {
+ TestSolver(EIGEN_SPARSE, OrderingType::NESDIS);
+}
+#endif
#endif // CERES_USE_EIGEN_SPARSE
} // namespace internal
diff --git a/internal/ceres/dynamic_sparsity_test.cc b/internal/ceres/dynamic_sparsity_test.cc
index 12e62ef..0c29595 100644
--- a/internal/ceres/dynamic_sparsity_test.cc
+++ b/internal/ceres/dynamic_sparsity_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,14 +32,14 @@
// Based on examples/ellipse_approximation.cc
#include <cmath>
+#include <utility>
#include <vector>
#include "ceres/ceres.h"
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Data generated with the following Python code.
// import numpy as np
@@ -280,8 +280,8 @@
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
PointToLineSegmentContourCostFunction(const int num_segments,
- const Eigen::Vector2d& y)
- : num_segments_(num_segments), y_(y) {
+ Eigen::Vector2d y)
+ : num_segments_(num_segments), y_(std::move(y)) {
// The first parameter is the preimage position.
mutable_parameter_block_sizes()->push_back(1);
// The next parameters are the control points for the line segment contour.
@@ -307,16 +307,16 @@
residuals[0] = y_[0] - ((1.0 - u) * x[1 + i0][0] + u * x[1 + i1][0]);
residuals[1] = y_[1] - ((1.0 - u) * x[1 + i0][1] + u * x[1 + i1][1]);
- if (jacobians == NULL) {
+ if (jacobians == nullptr) {
return true;
}
- if (jacobians[0] != NULL) {
+ if (jacobians[0] != nullptr) {
jacobians[0][0] = x[1 + i0][0] - x[1 + i1][0];
jacobians[0][1] = x[1 + i0][1] - x[1 + i1][1];
}
for (int i = 0; i < num_segments_; ++i) {
- if (jacobians[i + 1] != NULL) {
+ if (jacobians[i + 1] != nullptr) {
MatrixRef(jacobians[i + 1], 2, 2).setZero();
if (i == i0) {
jacobians[i + 1][0] = -(1.0 - u);
@@ -366,9 +366,9 @@
};
TEST(DynamicSparsity, StaticAndDynamicSparsityProduceSameSolution) {
- // Skip test if there is no sparse linear algebra library.
+ // Skip test if there is no sparse linear algebra library that
+ // supports dynamic sparsity.
if (!IsSparseLinearAlgebraLibraryTypeAvailable(SUITE_SPARSE) &&
- !IsSparseLinearAlgebraLibraryTypeAvailable(CX_SPARSE) &&
!IsSparseLinearAlgebraLibraryTypeAvailable(EIGEN_SPARSE)) {
return;
}
@@ -383,7 +383,7 @@
//
// Initialize `X` to points on the unit circle.
Vector w(num_segments + 1);
- w.setLinSpaced(num_segments + 1, 0.0, 2.0 * M_PI);
+ w.setLinSpaced(num_segments + 1, 0.0, 2.0 * constants::pi);
w.conservativeResize(num_segments);
Matrix X(num_segments, 2);
X.col(0) = w.array().cos();
@@ -403,7 +403,7 @@
// For each data point add a residual which measures its distance to its
// corresponding position on the line segment contour.
std::vector<double*> parameter_blocks(1 + num_segments);
- parameter_blocks[0] = NULL;
+ parameter_blocks[0] = nullptr;
for (int i = 0; i < num_segments; ++i) {
parameter_blocks[i + 1] = X.data() + 2 * i;
}
@@ -411,7 +411,7 @@
parameter_blocks[0] = &t[i];
problem.AddResidualBlock(
PointToLineSegmentContourCostFunction::Create(num_segments, kY.row(i)),
- NULL,
+ nullptr,
parameter_blocks);
}
@@ -419,7 +419,7 @@
for (int i = 0; i < num_segments; ++i) {
problem.AddResidualBlock(
EuclideanDistanceFunctor::Create(sqrt(regularization_weight)),
- NULL,
+ nullptr,
X.data() + 2 * i,
X.data() + 2 * ((i + 1) % num_segments));
}
@@ -427,6 +427,13 @@
Solver::Options options;
options.max_num_iterations = 100;
options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ // Only SuiteSparse & EigenSparse currently support dynamic sparsity.
+ options.sparse_linear_algebra_library_type =
+#if !defined(CERES_NO_SUITESPARSE)
+ ceres::SUITE_SPARSE;
+#elif defined(CERES_USE_EIGEN_SPARSE)
+ ceres::EIGEN_SPARSE;
+#endif
// First, solve `X` and `t` jointly with dynamic_sparsity = true.
Matrix X0 = X;
@@ -453,5 +460,4 @@
<< dynamic_summary.FullReport();
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
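// The test above solves the same problem twice, with and without dynamic
// sparsity, and checks that the final costs agree. A minimal sketch of the
// public solver options it toggles (the problem construction is omitted;
// only fields used in the test above appear here):

#include "ceres/ceres.h"

inline ceres::Solver::Options DynamicSparsityOptions() {
  ceres::Solver::Options options;
  options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
  // Re-detect the jacobian sparsity pattern at every iteration instead of
  // assuming it is fixed. As noted above, only the SuiteSparse and Eigen
  // sparse backends currently support this.
  options.dynamic_sparsity = true;
  return options;
}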
diff --git a/internal/ceres/eigen_vector_ops.h b/internal/ceres/eigen_vector_ops.h
new file mode 100644
index 0000000..6ebff88
--- /dev/null
+++ b/internal/ceres/eigen_vector_ops.h
@@ -0,0 +1,105 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#ifndef CERES_INTERNAL_EIGEN_VECTOR_OPS_H_
+#define CERES_INTERNAL_EIGEN_VECTOR_OPS_H_
+
+#include <numeric>
+
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/fixed_array.h"
+#include "ceres/parallel_for.h"
+#include "ceres/parallel_vector_ops.h"
+
+namespace ceres::internal {
+
+// Blas1 operations on Eigen vectors. These functions are needed as an
+// abstraction layer so that we can use different versions of a vector style
+// object in the conjugate gradients linear solver.
+template <typename Derived>
+inline double Norm(const Eigen::DenseBase<Derived>& x,
+ ContextImpl* context,
+ int num_threads) {
+ FixedArray<double> norms(num_threads, 0.);
+ ParallelFor(
+ context,
+ 0,
+ x.rows(),
+ num_threads,
+ [&x, &norms](int thread_id, std::tuple<int, int> range) {
+ auto [start, end] = range;
+ norms[thread_id] += x.segment(start, end - start).squaredNorm();
+ },
+ kMinBlockSizeParallelVectorOps);
+ return std::sqrt(std::accumulate(norms.begin(), norms.end(), 0.));
+}
+inline void SetZero(Vector& x, ContextImpl* context, int num_threads) {
+ ParallelSetZero(context, num_threads, x);
+}
+inline void Axpby(double a,
+ const Vector& x,
+ double b,
+ const Vector& y,
+ Vector& z,
+ ContextImpl* context,
+ int num_threads) {
+ ParallelAssign(context, num_threads, z, a * x + b * y);
+}
+template <typename VectorLikeX, typename VectorLikeY>
+inline double Dot(const VectorLikeX& x,
+ const VectorLikeY& y,
+ ContextImpl* context,
+ int num_threads) {
+ FixedArray<double> dots(num_threads, 0.);
+ ParallelFor(
+ context,
+ 0,
+ x.rows(),
+ num_threads,
+ [&x, &y, &dots](int thread_id, std::tuple<int, int> range) {
+ auto [start, end] = range;
+ const int block_size = end - start;
+ const auto& x_block = x.segment(start, block_size);
+ const auto& y_block = y.segment(start, block_size);
+ dots[thread_id] += x_block.dot(y_block);
+ },
+ kMinBlockSizeParallelVectorOps);
+ return std::accumulate(dots.begin(), dots.end(), 0.);
+}
+inline void Copy(const Vector& from,
+ Vector& to,
+ ContextImpl* context,
+ int num_threads) {
+ ParallelAssign(context, num_threads, to, from);
+}
+
+} // namespace ceres::internal
+
+#endif // CERES_INTERNAL_EIGEN_VECTOR_OPS_H_
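// Norm() and Dot() above avoid a shared accumulator by giving every thread
// its own slot and summing the slots at the end. A standalone sketch of the
// same reduction pattern using only std::thread (the chunking and thread
// management here are illustrative; Ceres uses its own ParallelFor):

#include <cmath>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
  const int num_threads = 4;
  std::vector<double> x(1000, 0.5);

  // One partial sum per thread, so no locking is needed.
  std::vector<double> partial(num_threads, 0.0);
  std::vector<std::thread> workers;
  const int n = static_cast<int>(x.size());
  const int block = n / num_threads;
  for (int t = 0; t < num_threads; ++t) {
    const int start = t * block;
    const int end = (t + 1 == num_threads) ? n : start + block;
    workers.emplace_back([&x, &partial, t, start, end] {
      for (int i = start; i < end; ++i) partial[t] += x[i] * x[i];
    });
  }
  for (auto& w : workers) w.join();

  // Combine the per-thread sums, as Norm() does with std::accumulate.
  const double norm =
      std::sqrt(std::accumulate(partial.begin(), partial.end(), 0.0));
  std::printf("norm = %f\n", norm);
  return 0;
}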
diff --git a/internal/ceres/eigensparse.cc b/internal/ceres/eigensparse.cc
index 22ed2c4..7ed401d 100644
--- a/internal/ceres/eigensparse.cc
+++ b/internal/ceres/eigensparse.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,27 +30,31 @@
#include "ceres/eigensparse.h"
+#include <memory>
+
#ifdef CERES_USE_EIGEN_SPARSE
#include <sstream>
+#ifndef CERES_NO_EIGEN_METIS
+#include <iostream> // This is needed because MetisSupport depends on iostream.
+
+#include "Eigen/MetisSupport"
+#endif
+
#include "Eigen/SparseCholesky"
#include "Eigen/SparseCore"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/linear_solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-// TODO(sameeragarwal): Use enable_if to clean up the implementations
-// for when Scalar == double.
template <typename Solver>
-class EigenSparseCholeskyTemplate : public SparseCholesky {
+class EigenSparseCholeskyTemplate final : public SparseCholesky {
public:
- EigenSparseCholeskyTemplate() : analyzed_(false) {}
- virtual ~EigenSparseCholeskyTemplate() {}
+ EigenSparseCholeskyTemplate() = default;
CompressedRowSparseMatrix::StorageType StorageType() const final {
- return CompressedRowSparseMatrix::LOWER_TRIANGULAR;
+ return CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR;
}
LinearSolverTerminationType Factorize(
@@ -67,7 +71,7 @@
if (solver_.info() != Eigen::Success) {
*message = "Eigen failure. Unable to find symbolic factorization.";
- return LINEAR_SOLVER_FATAL_ERROR;
+ return LinearSolverTerminationType::FATAL_ERROR;
}
analyzed_ = true;
@@ -76,43 +80,42 @@
solver_.factorize(lhs);
if (solver_.info() != Eigen::Success) {
*message = "Eigen failure. Unable to find numeric factorization.";
- return LINEAR_SOLVER_FAILURE;
+ return LinearSolverTerminationType::FAILURE;
}
- return LINEAR_SOLVER_SUCCESS;
+ return LinearSolverTerminationType::SUCCESS;
}
LinearSolverTerminationType Solve(const double* rhs_ptr,
double* solution_ptr,
- std::string* message) {
+ std::string* message) override {
CHECK(analyzed_) << "Solve called without a call to Factorize first.";
- scalar_rhs_ = ConstVectorRef(rhs_ptr, solver_.cols())
- .template cast<typename Solver::Scalar>();
-
- // The two casts are needed if the Scalar in this class is not
- // double. For code simplicity we are going to assume that Eigen
- // is smart enough to figure out that casting a double Vector to a
- // double Vector is a straight copy. If this turns into a
- // performance bottleneck (unlikely), we can revisit this.
- scalar_solution_ = solver_.solve(scalar_rhs_);
- VectorRef(solution_ptr, solver_.cols()) =
- scalar_solution_.template cast<double>();
+ // Avoid copying when the scalar type is double
+ if constexpr (std::is_same_v<typename Solver::Scalar, double>) {
+ ConstVectorRef scalar_rhs(rhs_ptr, solver_.cols());
+ VectorRef(solution_ptr, solver_.cols()) = solver_.solve(scalar_rhs);
+ } else {
+ auto scalar_rhs = ConstVectorRef(rhs_ptr, solver_.cols())
+ .template cast<typename Solver::Scalar>();
+ auto scalar_solution = solver_.solve(scalar_rhs);
+ VectorRef(solution_ptr, solver_.cols()) =
+ scalar_solution.template cast<double>();
+ }
if (solver_.info() != Eigen::Success) {
*message = "Eigen failure. Unable to do triangular solve.";
- return LINEAR_SOLVER_FAILURE;
+ return LinearSolverTerminationType::FAILURE;
}
- return LINEAR_SOLVER_SUCCESS;
+ return LinearSolverTerminationType::SUCCESS;
}
LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
std::string* message) final {
CHECK_EQ(lhs->storage_type(), StorageType());
- typename Solver::Scalar* values_ptr = NULL;
- if (std::is_same<typename Solver::Scalar, double>::value) {
- values_ptr =
- reinterpret_cast<typename Solver::Scalar*>(lhs->mutable_values());
+ typename Solver::Scalar* values_ptr = nullptr;
+ if constexpr (std::is_same_v<typename Solver::Scalar, double>) {
+ values_ptr = lhs->mutable_values();
} else {
// In the case where the scalar used in this class is not
// double. In that case, make a copy of the values array in the
@@ -122,69 +125,83 @@
values_ptr = values_.data();
}
- Eigen::MappedSparseMatrix<typename Solver::Scalar, Eigen::ColMajor>
+ Eigen::Map<
+ const Eigen::SparseMatrix<typename Solver::Scalar, Eigen::ColMajor>>
eigen_lhs(lhs->num_rows(),
lhs->num_rows(),
lhs->num_nonzeros(),
- lhs->mutable_rows(),
- lhs->mutable_cols(),
+ lhs->rows(),
+ lhs->cols(),
values_ptr);
return Factorize(eigen_lhs, message);
}
private:
- Eigen::Matrix<typename Solver::Scalar, Eigen::Dynamic, 1> values_,
- scalar_rhs_, scalar_solution_;
- bool analyzed_;
+ Eigen::Matrix<typename Solver::Scalar, Eigen::Dynamic, 1> values_;
+
+ bool analyzed_{false};
Solver solver_;
};
std::unique_ptr<SparseCholesky> EigenSparseCholesky::Create(
const OrderingType ordering_type) {
- std::unique_ptr<SparseCholesky> sparse_cholesky;
+ using WithAMDOrdering = Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>,
+ Eigen::Upper,
+ Eigen::AMDOrdering<int>>;
+ using WithNaturalOrdering =
+ Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>,
+ Eigen::Upper,
+ Eigen::NaturalOrdering<int>>;
- typedef Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>,
- Eigen::Upper,
- Eigen::AMDOrdering<int>>
- WithAMDOrdering;
- typedef Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>,
- Eigen::Upper,
- Eigen::NaturalOrdering<int>>
- WithNaturalOrdering;
- if (ordering_type == AMD) {
- sparse_cholesky.reset(new EigenSparseCholeskyTemplate<WithAMDOrdering>());
- } else {
- sparse_cholesky.reset(
- new EigenSparseCholeskyTemplate<WithNaturalOrdering>());
+ if (ordering_type == OrderingType::AMD) {
+ return std::make_unique<EigenSparseCholeskyTemplate<WithAMDOrdering>>();
+ } else if (ordering_type == OrderingType::NESDIS) {
+#ifndef CERES_NO_EIGEN_METIS
+ using WithMetisOrdering = Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>,
+ Eigen::Upper,
+ Eigen::MetisOrdering<int>>;
+ return std::make_unique<EigenSparseCholeskyTemplate<WithMetisOrdering>>();
+#else
+ LOG(FATAL)
+ << "Congratulations you have found a bug in Ceres Solver. Please "
+ "report it to the Ceres Solver developers.";
+ return nullptr;
+#endif // CERES_NO_EIGEN_METIS
}
- return sparse_cholesky;
+ return std::make_unique<EigenSparseCholeskyTemplate<WithNaturalOrdering>>();
}
-EigenSparseCholesky::~EigenSparseCholesky() {}
+EigenSparseCholesky::~EigenSparseCholesky() = default;
std::unique_ptr<SparseCholesky> FloatEigenSparseCholesky::Create(
const OrderingType ordering_type) {
- std::unique_ptr<SparseCholesky> sparse_cholesky;
- typedef Eigen::SimplicialLDLT<Eigen::SparseMatrix<float>,
- Eigen::Upper,
- Eigen::AMDOrdering<int>>
- WithAMDOrdering;
- typedef Eigen::SimplicialLDLT<Eigen::SparseMatrix<float>,
- Eigen::Upper,
- Eigen::NaturalOrdering<int>>
- WithNaturalOrdering;
- if (ordering_type == AMD) {
- sparse_cholesky.reset(new EigenSparseCholeskyTemplate<WithAMDOrdering>());
- } else {
- sparse_cholesky.reset(
- new EigenSparseCholeskyTemplate<WithNaturalOrdering>());
+ using WithAMDOrdering = Eigen::SimplicialLDLT<Eigen::SparseMatrix<float>,
+ Eigen::Upper,
+ Eigen::AMDOrdering<int>>;
+ using WithNaturalOrdering =
+ Eigen::SimplicialLDLT<Eigen::SparseMatrix<float>,
+ Eigen::Upper,
+ Eigen::NaturalOrdering<int>>;
+ if (ordering_type == OrderingType::AMD) {
+ return std::make_unique<EigenSparseCholeskyTemplate<WithAMDOrdering>>();
+ } else if (ordering_type == OrderingType::NESDIS) {
+#ifndef CERES_NO_EIGEN_METIS
+ using WithMetisOrdering = Eigen::SimplicialLDLT<Eigen::SparseMatrix<float>,
+ Eigen::Upper,
+ Eigen::MetisOrdering<int>>;
+ return std::make_unique<EigenSparseCholeskyTemplate<WithMetisOrdering>>();
+#else
+ LOG(FATAL)
+ << "Congratulations you have found a bug in Ceres Solver. Please "
+ "report it to the Ceres Solver developers.";
+ return nullptr;
+#endif // CERES_NO_EIGEN_METIS
}
- return sparse_cholesky;
+ return std::make_unique<EigenSparseCholeskyTemplate<WithNaturalOrdering>>();
}
-FloatEigenSparseCholesky::~FloatEigenSparseCholesky() {}
+FloatEigenSparseCholesky::~FloatEigenSparseCholesky() = default;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/eigensparse.h b/internal/ceres/eigensparse.h
index bb89c2c..f16e8f2 100644
--- a/internal/ceres/eigensparse.h
+++ b/internal/ceres/eigensparse.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,7 +34,7 @@
#define CERES_INTERNAL_EIGENSPARSE_H_
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifdef CERES_USE_EIGEN_SPARSE
@@ -42,48 +42,69 @@
#include <string>
#include "Eigen/SparseCore"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
#include "ceres/sparse_cholesky.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-class EigenSparseCholesky : public SparseCholesky {
+class EigenSparse {
+ public:
+ static constexpr bool IsNestedDissectionAvailable() noexcept {
+#ifdef CERES_NO_EIGEN_METIS
+ return false;
+#else
+ return true;
+#endif
+ }
+};
+
+class CERES_NO_EXPORT EigenSparseCholesky : public SparseCholesky {
public:
// Factory
static std::unique_ptr<SparseCholesky> Create(
const OrderingType ordering_type);
// SparseCholesky interface.
- virtual ~EigenSparseCholesky();
- virtual LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
- std::string* message) = 0;
- virtual CompressedRowSparseMatrix::StorageType StorageType() const = 0;
- virtual LinearSolverTerminationType Solve(const double* rhs,
- double* solution,
- std::string* message) = 0;
+ ~EigenSparseCholesky() override;
+ LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
+ std::string* message) override = 0;
+ CompressedRowSparseMatrix::StorageType StorageType() const override = 0;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override = 0;
};
// Even though the input is double precision linear system, this class
// solves it by computing a single precision Cholesky factorization.
-class FloatEigenSparseCholesky : public SparseCholesky {
+class CERES_NO_EXPORT FloatEigenSparseCholesky : public SparseCholesky {
public:
// Factory
static std::unique_ptr<SparseCholesky> Create(
const OrderingType ordering_type);
// SparseCholesky interface.
- virtual ~FloatEigenSparseCholesky();
- virtual LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
- std::string* message) = 0;
- virtual CompressedRowSparseMatrix::StorageType StorageType() const = 0;
- virtual LinearSolverTerminationType Solve(const double* rhs,
- double* solution,
- std::string* message) = 0;
+ ~FloatEigenSparseCholesky() override;
+ LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
+ std::string* message) override = 0;
+ CompressedRowSparseMatrix::StorageType StorageType() const override = 0;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override = 0;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#else
+
+namespace ceres::internal {
+
+class EigenSparse {
+ public:
+ static constexpr bool IsNestedDissectionAvailable() noexcept { return false; }
+};
+
+} // namespace ceres::internal
#endif // CERES_USE_EIGEN_SPARSE
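// FloatEigenSparseCholesky solves a double-precision system by factorizing a
// single-precision copy and casting the right-hand side and solution, trading
// some accuracy for speed and memory. A standalone Eigen sketch of that cast
// pattern on a made-up 2x2 SPD system:

#include <iostream>
#include <vector>

#include "Eigen/SparseCholesky"
#include "Eigen/SparseCore"

int main() {
  std::vector<Eigen::Triplet<double>> triplets = {
      {0, 0, 4.0}, {1, 1, 3.0}, {0, 1, 1.0}, {1, 0, 1.0}};
  Eigen::SparseMatrix<double> lhs(2, 2);
  lhs.setFromTriplets(triplets.begin(), triplets.end());
  Eigen::VectorXd rhs(2);
  rhs << 1.0, 2.0;

  // Factorize a float copy of the double system.
  Eigen::SparseMatrix<float> lhs_f = lhs.cast<float>();
  Eigen::SimplicialLDLT<Eigen::SparseMatrix<float>> solver;
  solver.compute(lhs_f);
  if (solver.info() != Eigen::Success) return 1;

  // Cast the rhs down, solve in single precision, cast the solution back up.
  Eigen::VectorXf rhs_f = rhs.cast<float>();
  Eigen::VectorXf x_f = solver.solve(rhs_f);
  Eigen::VectorXd x = x_f.cast<double>();
  std::cout << x.transpose() << "\n";
  return 0;
}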
diff --git a/internal/ceres/evaluation_benchmark.cc b/internal/ceres/evaluation_benchmark.cc
new file mode 100644
index 0000000..c679885
--- /dev/null
+++ b/internal/ceres/evaluation_benchmark.cc
@@ -0,0 +1,1094 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#include <memory>
+#include <random>
+#include <string>
+#include <vector>
+
+#include "benchmark/benchmark.h"
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/cuda_block_sparse_crs_view.h"
+#include "ceres/cuda_partitioned_block_sparse_crs_view.h"
+#include "ceres/cuda_sparse_matrix.h"
+#include "ceres/cuda_vector.h"
+#include "ceres/evaluator.h"
+#include "ceres/implicit_schur_complement.h"
+#include "ceres/partitioned_matrix_view.h"
+#include "ceres/power_series_expansion_preconditioner.h"
+#include "ceres/preprocessor.h"
+#include "ceres/problem.h"
+#include "ceres/problem_impl.h"
+#include "ceres/program.h"
+#include "ceres/sparse_matrix.h"
+
+namespace ceres::internal {
+
+template <typename Derived, typename Base>
+std::unique_ptr<Derived> downcast_unique_ptr(std::unique_ptr<Base>& base) {
+ return std::unique_ptr<Derived>(dynamic_cast<Derived*>(base.release()));
+}
+
+// The benchmark library might invoke a benchmark function multiple times.
+// In order to save the time required to parse BAL data, we ensure that
+// each dataset is loaded at most once.
+// Each type of jacobian is also cached after its first creation.
+struct BALData {
+ using PartitionedView = PartitionedMatrixView<2, 3, 9>;
+ explicit BALData(const std::string& path) {
+ bal_problem = std::make_unique<BundleAdjustmentProblem>(path);
+ CHECK(bal_problem != nullptr);
+
+ auto problem_impl = bal_problem->mutable_problem()->mutable_impl();
+ auto preprocessor = Preprocessor::Create(MinimizerType::TRUST_REGION);
+
+ preprocessed_problem = std::make_unique<PreprocessedProblem>();
+ Solver::Options options = bal_problem->options();
+ options.linear_solver_type = ITERATIVE_SCHUR;
+ CHECK(preprocessor->Preprocess(
+ options, problem_impl, preprocessed_problem.get()));
+
+ auto program = preprocessed_problem->reduced_program.get();
+
+ parameters.resize(program->NumParameters());
+ program->ParameterBlocksToStateVector(parameters.data());
+
+ const int num_residuals = program->NumResiduals();
+ b.resize(num_residuals);
+
+ std::mt19937 rng;
+ std::normal_distribution<double> rnorm;
+ for (int i = 0; i < num_residuals; ++i) {
+ b[i] = rnorm(rng);
+ }
+
+ const int num_parameters = program->NumParameters();
+ D.resize(num_parameters);
+ for (int i = 0; i < num_parameters; ++i) {
+ D[i] = rnorm(rng);
+ }
+ }
+
+ std::unique_ptr<BlockSparseMatrix> CreateBlockSparseJacobian(
+ ContextImpl* context, bool sequential) {
+ auto problem = bal_problem->mutable_problem();
+ auto problem_impl = problem->mutable_impl();
+ CHECK(problem_impl != nullptr);
+
+ Evaluator::Options options;
+ options.linear_solver_type = ITERATIVE_SCHUR;
+ options.num_threads = 1;
+ options.context = context;
+ options.num_eliminate_blocks = bal_problem->num_points();
+
+ std::string error;
+ auto program = preprocessed_problem->reduced_program.get();
+ auto evaluator = Evaluator::Create(options, program, &error);
+ CHECK(evaluator != nullptr);
+
+ auto jacobian = evaluator->CreateJacobian();
+ auto block_sparse = downcast_unique_ptr<BlockSparseMatrix>(jacobian);
+ CHECK(block_sparse != nullptr);
+
+ if (sequential) {
+ auto block_structure_sequential =
+ std::make_unique<CompressedRowBlockStructure>(
+ *block_sparse->block_structure());
+ int num_nonzeros = 0;
+ for (auto& row_block : block_structure_sequential->rows) {
+ const int row_block_size = row_block.block.size;
+ for (auto& cell : row_block.cells) {
+ const int col_block_size =
+ block_structure_sequential->cols[cell.block_id].size;
+ cell.position = num_nonzeros;
+ num_nonzeros += col_block_size * row_block_size;
+ }
+ }
+ block_sparse = std::make_unique<BlockSparseMatrix>(
+ block_structure_sequential.release(),
+#ifndef CERES_NO_CUDA
+ true
+#else
+ false
+#endif
+ );
+ }
+
+ std::mt19937 rng;
+ std::normal_distribution<double> rnorm;
+ const int nnz = block_sparse->num_nonzeros();
+ auto values = block_sparse->mutable_values();
+ for (int i = 0; i < nnz; ++i) {
+ values[i] = rnorm(rng);
+ }
+
+ return block_sparse;
+ }
+
+ std::unique_ptr<CompressedRowSparseMatrix> CreateCompressedRowSparseJacobian(
+ ContextImpl* context) {
+ auto block_sparse = BlockSparseJacobian(context);
+ return block_sparse->ToCompressedRowSparseMatrix();
+ }
+
+ const BlockSparseMatrix* BlockSparseJacobian(ContextImpl* context) {
+ if (!block_sparse_jacobian) {
+ block_sparse_jacobian = CreateBlockSparseJacobian(context, true);
+ }
+ return block_sparse_jacobian.get();
+ }
+
+ const BlockSparseMatrix* BlockSparseJacobianPartitioned(
+ ContextImpl* context) {
+ if (!block_sparse_jacobian_partitioned) {
+ block_sparse_jacobian_partitioned =
+ CreateBlockSparseJacobian(context, false);
+ }
+ return block_sparse_jacobian_partitioned.get();
+ }
+
+ const CompressedRowSparseMatrix* CompressedRowSparseJacobian(
+ ContextImpl* context) {
+ if (!crs_jacobian) {
+ crs_jacobian = CreateCompressedRowSparseJacobian(context);
+ }
+ return crs_jacobian.get();
+ }
+
+ std::unique_ptr<PartitionedView> PartitionedMatrixViewJacobian(
+ const LinearSolver::Options& options) {
+ auto block_sparse = BlockSparseJacobianPartitioned(options.context);
+ return std::make_unique<PartitionedView>(options, *block_sparse);
+ }
+
+ BlockSparseMatrix* BlockDiagonalEtE(const LinearSolver::Options& options) {
+ if (!block_diagonal_ete) {
+ auto partitioned_view = PartitionedMatrixViewJacobian(options);
+ block_diagonal_ete = partitioned_view->CreateBlockDiagonalEtE();
+ }
+ return block_diagonal_ete.get();
+ }
+
+ BlockSparseMatrix* BlockDiagonalFtF(const LinearSolver::Options& options) {
+ if (!block_diagonal_ftf) {
+ auto partitioned_view = PartitionedMatrixViewJacobian(options);
+ block_diagonal_ftf = partitioned_view->CreateBlockDiagonalFtF();
+ }
+ return block_diagonal_ftf.get();
+ }
+
+ const ImplicitSchurComplement* ImplicitSchurComplementWithoutDiagonal(
+ const LinearSolver::Options& options) {
+ auto block_sparse = BlockSparseJacobianPartitioned(options.context);
+ implicit_schur_complement =
+ std::make_unique<ImplicitSchurComplement>(options);
+ implicit_schur_complement->Init(*block_sparse, nullptr, b.data());
+ return implicit_schur_complement.get();
+ }
+
+ const ImplicitSchurComplement* ImplicitSchurComplementWithDiagonal(
+ const LinearSolver::Options& options) {
+ auto block_sparse = BlockSparseJacobianPartitioned(options.context);
+ implicit_schur_complement_diag =
+ std::make_unique<ImplicitSchurComplement>(options);
+ implicit_schur_complement_diag->Init(*block_sparse, D.data(), b.data());
+ return implicit_schur_complement_diag.get();
+ }
+
+ Vector parameters;
+ Vector D;
+ Vector b;
+ std::unique_ptr<BundleAdjustmentProblem> bal_problem;
+ std::unique_ptr<PreprocessedProblem> preprocessed_problem;
+ std::unique_ptr<BlockSparseMatrix> block_sparse_jacobian_partitioned;
+ std::unique_ptr<BlockSparseMatrix> block_sparse_jacobian;
+ std::unique_ptr<CompressedRowSparseMatrix> crs_jacobian;
+ std::unique_ptr<BlockSparseMatrix> block_diagonal_ete;
+ std::unique_ptr<BlockSparseMatrix> block_diagonal_ftf;
+ std::unique_ptr<ImplicitSchurComplement> implicit_schur_complement;
+ std::unique_ptr<ImplicitSchurComplement> implicit_schur_complement_diag;
+};
+
+static void Residuals(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ const int num_threads = static_cast<int>(state.range(0));
+
+ Evaluator::Options options;
+ options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options.num_threads = num_threads;
+ options.context = context;
+ options.num_eliminate_blocks = 0;
+
+ std::string error;
+ CHECK(data->preprocessed_problem != nullptr);
+ auto program = data->preprocessed_problem->reduced_program.get();
+ CHECK(program != nullptr);
+ auto evaluator = Evaluator::Create(options, program, &error);
+ CHECK(evaluator != nullptr);
+
+ double cost = 0.;
+ Vector residuals = Vector::Zero(program->NumResiduals());
+
+ Evaluator::EvaluateOptions eval_options;
+ for (auto _ : state) {
+ CHECK(evaluator->Evaluate(eval_options,
+ data->parameters.data(),
+ &cost,
+ residuals.data(),
+ nullptr,
+ nullptr));
+ }
+}
+
+static void ResidualsAndJacobian(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ const int num_threads = static_cast<int>(state.range(0));
+
+ Evaluator::Options options;
+ options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options.num_threads = num_threads;
+ options.context = context;
+ options.num_eliminate_blocks = 0;
+
+ std::string error;
+ CHECK(data->preprocessed_problem != nullptr);
+ auto program = data->preprocessed_problem->reduced_program.get();
+ CHECK(program != nullptr);
+ auto evaluator = Evaluator::Create(options, program, &error);
+ CHECK(evaluator != nullptr);
+
+ double cost = 0.;
+ Vector residuals = Vector::Zero(program->NumResiduals());
+ auto jacobian = evaluator->CreateJacobian();
+
+ Evaluator::EvaluateOptions eval_options;
+ for (auto _ : state) {
+ CHECK(evaluator->Evaluate(eval_options,
+ data->parameters.data(),
+ &cost,
+ residuals.data(),
+ nullptr,
+ jacobian.get()));
+ }
+}
+
+static void Plus(benchmark::State& state, BALData* data, ContextImpl* context) {
+ const int num_threads = static_cast<int>(state.range(0));
+
+ Evaluator::Options options;
+ options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options.num_threads = num_threads;
+ options.context = context;
+ options.num_eliminate_blocks = 0;
+
+ std::string error;
+ CHECK(data->preprocessed_problem != nullptr);
+ auto program = data->preprocessed_problem->reduced_program.get();
+ CHECK(program != nullptr);
+ auto evaluator = Evaluator::Create(options, program, &error);
+ CHECK(evaluator != nullptr);
+
+ Vector state_plus_delta = Vector::Zero(program->NumParameters());
+ Vector delta = Vector::Random(program->NumEffectiveParameters());
+
+ for (auto _ : state) {
+ CHECK(evaluator->Plus(
+ data->parameters.data(), delta.data(), state_plus_delta.data()));
+ }
+ CHECK_GT(state_plus_delta.squaredNorm(), 0.);
+}
+
+static void PSEPreconditioner(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.num_threads = static_cast<int>(state.range(0));
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+
+ auto jacobian = data->ImplicitSchurComplementWithDiagonal(options);
+ Preconditioner::Options preconditioner_options(options);
+
+ PowerSeriesExpansionPreconditioner preconditioner(
+ jacobian, 10, 0, preconditioner_options);
+
+ Vector y = Vector::Zero(jacobian->num_cols());
+ Vector x = Vector::Random(jacobian->num_cols());
+
+ for (auto _ : state) {
+ preconditioner.RightMultiplyAndAccumulate(x.data(), y.data());
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+
+static void PMVRightMultiplyAndAccumulateF(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.num_threads = static_cast<int>(state.range(0));
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ auto jacobian = data->PartitionedMatrixViewJacobian(options);
+
+ Vector y = Vector::Zero(jacobian->num_rows());
+ Vector x = Vector::Random(jacobian->num_cols_f());
+
+ for (auto _ : state) {
+ jacobian->RightMultiplyAndAccumulateF(x.data(), y.data());
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+
+static void PMVLeftMultiplyAndAccumulateF(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.num_threads = static_cast<int>(state.range(0));
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ auto jacobian = data->PartitionedMatrixViewJacobian(options);
+
+ Vector y = Vector::Zero(jacobian->num_cols_f());
+ Vector x = Vector::Random(jacobian->num_rows());
+
+ for (auto _ : state) {
+ jacobian->LeftMultiplyAndAccumulateF(x.data(), y.data());
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+
+static void PMVRightMultiplyAndAccumulateE(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.num_threads = static_cast<int>(state.range(0));
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ auto jacobian = data->PartitionedMatrixViewJacobian(options);
+
+ Vector y = Vector::Zero(jacobian->num_rows());
+ Vector x = Vector::Random(jacobian->num_cols_e());
+
+ for (auto _ : state) {
+ jacobian->RightMultiplyAndAccumulateE(x.data(), y.data());
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+
+static void PMVLeftMultiplyAndAccumulateE(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.num_threads = static_cast<int>(state.range(0));
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ auto jacobian = data->PartitionedMatrixViewJacobian(options);
+
+ Vector y = Vector::Zero(jacobian->num_cols_e());
+ Vector x = Vector::Random(jacobian->num_rows());
+
+ for (auto _ : state) {
+ jacobian->LeftMultiplyAndAccumulateE(x.data(), y.data());
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+
+static void PMVUpdateBlockDiagonalEtE(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.num_threads = static_cast<int>(state.range(0));
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ auto jacobian = data->PartitionedMatrixViewJacobian(options);
+ auto block_diagonal_ete = data->BlockDiagonalEtE(options);
+
+ for (auto _ : state) {
+ jacobian->UpdateBlockDiagonalEtE(block_diagonal_ete);
+ }
+}
+
+static void PMVUpdateBlockDiagonalFtF(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.num_threads = static_cast<int>(state.range(0));
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ auto jacobian = data->PartitionedMatrixViewJacobian(options);
+ auto block_diagonal_ftf = data->BlockDiagonalFtF(options);
+
+ for (auto _ : state) {
+ jacobian->UpdateBlockDiagonalFtF(block_diagonal_ftf);
+ }
+}
+
+static void ISCRightMultiplyNoDiag(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.num_threads = static_cast<int>(state.range(0));
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ auto jacobian = data->ImplicitSchurComplementWithoutDiagonal(options);
+
+ Vector y = Vector::Zero(jacobian->num_rows());
+ Vector x = Vector::Random(jacobian->num_cols());
+ for (auto _ : state) {
+ jacobian->RightMultiplyAndAccumulate(x.data(), y.data());
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+
+static void ISCRightMultiplyDiag(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.num_threads = static_cast<int>(state.range(0));
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+
+ auto jacobian = data->ImplicitSchurComplementWithDiagonal(options);
+
+ Vector y = Vector::Zero(jacobian->num_rows());
+ Vector x = Vector::Random(jacobian->num_cols());
+ for (auto _ : state) {
+ jacobian->RightMultiplyAndAccumulate(x.data(), y.data());
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+
+static void JacobianToCRS(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ auto jacobian = data->BlockSparseJacobian(context);
+
+ std::unique_ptr<CompressedRowSparseMatrix> matrix;
+ for (auto _ : state) {
+ matrix = jacobian->ToCompressedRowSparseMatrix();
+ }
+ CHECK(matrix != nullptr);
+}
+
+#ifndef CERES_NO_CUDA
+static void PMVRightMultiplyAndAccumulateFCuda(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ options.num_threads = 1;
+ auto jacobian = data->PartitionedMatrixViewJacobian(options);
+ auto underlying_matrix = data->BlockSparseJacobianPartitioned(context);
+ CudaPartitionedBlockSparseCRSView view(
+ *underlying_matrix, jacobian->num_col_blocks_e(), context);
+
+ Vector x = Vector::Random(jacobian->num_cols_f());
+ CudaVector cuda_x(context, x.size());
+ CudaVector cuda_y(context, jacobian->num_rows());
+
+ cuda_x.CopyFromCpu(x);
+ cuda_y.SetZero();
+
+ auto matrix = view.matrix_f();
+ for (auto _ : state) {
+ matrix->RightMultiplyAndAccumulate(cuda_x, &cuda_y);
+ }
+ CHECK_GT(cuda_y.Norm(), 0.);
+}
+
+static void PMVLeftMultiplyAndAccumulateFCuda(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ options.num_threads = 1;
+ auto jacobian = data->PartitionedMatrixViewJacobian(options);
+ auto underlying_matrix = data->BlockSparseJacobianPartitioned(context);
+ CudaPartitionedBlockSparseCRSView view(
+ *underlying_matrix, jacobian->num_col_blocks_e(), context);
+
+ Vector x = Vector::Random(jacobian->num_rows());
+ CudaVector cuda_x(context, x.size());
+ CudaVector cuda_y(context, jacobian->num_cols_f());
+
+ cuda_x.CopyFromCpu(x);
+ cuda_y.SetZero();
+
+ auto matrix = view.matrix_f();
+ for (auto _ : state) {
+ matrix->LeftMultiplyAndAccumulate(cuda_x, &cuda_y);
+ }
+ CHECK_GT(cuda_y.Norm(), 0.);
+}
+
+static void PMVRightMultiplyAndAccumulateECuda(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ options.num_threads = 1;
+ auto jacobian = data->PartitionedMatrixViewJacobian(options);
+ auto underlying_matrix = data->BlockSparseJacobianPartitioned(context);
+ CudaPartitionedBlockSparseCRSView view(
+ *underlying_matrix, jacobian->num_col_blocks_e(), context);
+
+ Vector x = Vector::Random(jacobian->num_cols_e());
+ CudaVector cuda_x(context, x.size());
+ CudaVector cuda_y(context, jacobian->num_rows());
+
+ cuda_x.CopyFromCpu(x);
+ cuda_y.SetZero();
+
+ auto matrix = view.matrix_e();
+ for (auto _ : state) {
+ matrix->RightMultiplyAndAccumulate(cuda_x, &cuda_y);
+ }
+ CHECK_GT(cuda_y.Norm(), 0.);
+}
+
+static void PMVLeftMultiplyAndAccumulateECuda(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ LinearSolver::Options options;
+ options.elimination_groups.push_back(data->bal_problem->num_points());
+ options.context = context;
+ options.num_threads = 1;
+ auto jacobian = data->PartitionedMatrixViewJacobian(options);
+ auto underlying_matrix = data->BlockSparseJacobianPartitioned(context);
+ CudaPartitionedBlockSparseCRSView view(
+ *underlying_matrix, jacobian->num_col_blocks_e(), context);
+
+ Vector x = Vector::Random(jacobian->num_rows());
+ CudaVector cuda_x(context, x.size());
+ CudaVector cuda_y(context, jacobian->num_cols_e());
+
+ cuda_x.CopyFromCpu(x);
+ cuda_y.SetZero();
+
+ auto matrix = view.matrix_e();
+ for (auto _ : state) {
+ matrix->LeftMultiplyAndAccumulate(cuda_x, &cuda_y);
+ }
+ CHECK_GT(cuda_y.Norm(), 0.);
+}
+
+// We want CudaBlockSparseCRSView to be no slower than explicit conversion to
+// CRS on the CPU.
+static void JacobianToCRSView(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ auto jacobian = data->BlockSparseJacobian(context);
+
+ std::unique_ptr<CudaBlockSparseCRSView> matrix;
+ for (auto _ : state) {
+ matrix = std::make_unique<CudaBlockSparseCRSView>(*jacobian, context);
+ }
+ CHECK(matrix != nullptr);
+}
+static void JacobianToCRSMatrix(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ auto jacobian = data->BlockSparseJacobian(context);
+
+ std::unique_ptr<CudaSparseMatrix> matrix;
+ std::unique_ptr<CompressedRowSparseMatrix> matrix_cpu;
+ for (auto _ : state) {
+ matrix_cpu = jacobian->ToCompressedRowSparseMatrix();
+ matrix = std::make_unique<CudaSparseMatrix>(context, *matrix_cpu);
+ }
+ CHECK(matrix != nullptr);
+}
+// Updating values in CudaBlockSparseCRSView should be roughly as fast as just
+// copying the values (the time spent on value permutation has to be hidden by
+// the PCIe transfer).
+static void JacobianToCRSViewUpdate(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ auto jacobian = data->BlockSparseJacobian(context);
+
+ auto matrix = CudaBlockSparseCRSView(*jacobian, context);
+ for (auto _ : state) {
+ matrix.UpdateValues(*jacobian);
+ }
+}
+static void JacobianToCRSMatrixUpdate(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ auto jacobian = data->BlockSparseJacobian(context);
+
+ auto matrix_cpu = jacobian->ToCompressedRowSparseMatrix();
+ auto matrix = std::make_unique<CudaSparseMatrix>(context, *matrix_cpu);
+ for (auto _ : state) {
+ CHECK_EQ(cudaSuccess,
+ cudaMemcpy(matrix->mutable_values(),
+ matrix_cpu->values(),
+ matrix->num_nonzeros() * sizeof(double),
+ cudaMemcpyHostToDevice));
+ }
+}
+#endif
+
+static void JacobianSquaredColumnNorm(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ const int num_threads = static_cast<int>(state.range(0));
+
+ auto jacobian = data->BlockSparseJacobian(context);
+
+ Vector x = Vector::Zero(jacobian->num_cols());
+
+ for (auto _ : state) {
+ jacobian->SquaredColumnNorm(x.data(), context, num_threads);
+ }
+ CHECK_GT(x.squaredNorm(), 0.);
+}
+
+static void JacobianScaleColumns(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ const int num_threads = static_cast<int>(state.range(0));
+
+ auto jacobian_const = data->BlockSparseJacobian(context);
+ auto jacobian = const_cast<BlockSparseMatrix*>(jacobian_const);
+
+ Vector x = Vector::Ones(jacobian->num_cols());
+
+ for (auto _ : state) {
+ jacobian->ScaleColumns(x.data(), context, num_threads);
+ }
+}
+
+static void JacobianRightMultiplyAndAccumulate(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ const int num_threads = static_cast<int>(state.range(0));
+
+ auto jacobian = data->BlockSparseJacobian(context);
+
+ Vector y = Vector::Zero(jacobian->num_rows());
+ Vector x = Vector::Random(jacobian->num_cols());
+
+ for (auto _ : state) {
+ jacobian->RightMultiplyAndAccumulate(
+ x.data(), y.data(), context, num_threads);
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+
+static void JacobianLeftMultiplyAndAccumulate(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ const int num_threads = static_cast<int>(state.range(0));
+
+ auto jacobian = data->BlockSparseJacobian(context);
+
+ Vector y = Vector::Zero(jacobian->num_cols());
+ Vector x = Vector::Random(jacobian->num_rows());
+
+ for (auto _ : state) {
+ jacobian->LeftMultiplyAndAccumulate(
+ x.data(), y.data(), context, num_threads);
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+
+#ifndef CERES_NO_CUDA
+static void JacobianRightMultiplyAndAccumulateCuda(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ auto crs_jacobian = data->CompressedRowSparseJacobian(context);
+ CudaSparseMatrix cuda_jacobian(context, *crs_jacobian);
+ CudaVector cuda_x(context, 0);
+ CudaVector cuda_y(context, 0);
+
+ Vector x(crs_jacobian->num_cols());
+ Vector y(crs_jacobian->num_rows());
+ x.setRandom();
+ y.setRandom();
+
+ cuda_x.CopyFromCpu(x);
+ cuda_y.CopyFromCpu(y);
+ double sum = 0;
+ for (auto _ : state) {
+ cuda_jacobian.RightMultiplyAndAccumulate(cuda_x, &cuda_y);
+ sum += cuda_y.Norm();
+ CHECK_EQ(cudaDeviceSynchronize(), cudaSuccess);
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+static void JacobianLeftMultiplyAndAccumulateCuda(benchmark::State& state,
+ BALData* data,
+ ContextImpl* context) {
+ auto crs_jacobian = data->CompressedRowSparseJacobian(context);
+ CudaSparseMatrix cuda_jacobian(context, *crs_jacobian);
+ CudaVector cuda_x(context, 0);
+ CudaVector cuda_y(context, 0);
+
+ Vector x(crs_jacobian->num_rows());
+ Vector y(crs_jacobian->num_cols());
+ x.setRandom();
+ y.setRandom();
+
+ cuda_x.CopyFromCpu(x);
+ cuda_y.CopyFromCpu(y);
+ double sum = 0;
+ for (auto _ : state) {
+ cuda_jacobian.LeftMultiplyAndAccumulate(cuda_x, &cuda_y);
+ sum += cuda_y.Norm();
+ CHECK_EQ(cudaDeviceSynchronize(), cudaSuccess);
+ }
+ CHECK_NE(sum, 0.0);
+}
+#endif
+
+} // namespace ceres::internal
+
+// Older versions of the benchmark library might come without the
+// ::benchmark::Shutdown function. We provide an empty fallback variant of
+// Shutdown in order to support both older and newer versions.
+namespace benchmark_shutdown_fallback {
+template <typename... Args>
+void Shutdown(Args... args) {}
+} // namespace benchmark_shutdown_fallback
+
+int main(int argc, char** argv) {
+ ::benchmark::Initialize(&argc, argv);
+
+ std::vector<std::unique_ptr<ceres::internal::BALData>> benchmark_data;
+ if (argc == 1) {
+ LOG(FATAL) << "No input datasets specified. Usage: " << argv[0]
+ << " [benchmark flags] path_to_BAL_data_1.txt ... "
+ "path_to_BAL_data_N.txt";
+ return -1;
+ }
+
+ ceres::internal::ContextImpl context;
+ context.EnsureMinimumThreads(16);
+#ifndef CERES_NO_CUDA
+ std::string message;
+ context.InitCuda(&message);
+#endif
+
+ for (int i = 1; i < argc; ++i) {
+ const std::string path(argv[i]);
+ const std::string name_residuals = "Residuals<" + path + ">";
+ benchmark_data.emplace_back(
+ std::make_unique<ceres::internal::BALData>(path));
+ auto data = benchmark_data.back().get();
+ ::benchmark::RegisterBenchmark(
+ name_residuals.c_str(), ceres::internal::Residuals, data, &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+ const std::string name_jacobians = "ResidualsAndJacobian<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_jacobians.c_str(),
+ ceres::internal::ResidualsAndJacobian,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+ const std::string name_plus = "Plus<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_plus.c_str(), ceres::internal::Plus, data, &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+ const std::string name_right_product =
+ "JacobianRightMultiplyAndAccumulate<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_right_product.c_str(),
+ ceres::internal::JacobianRightMultiplyAndAccumulate,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+ const std::string name_right_product_partitioned_f =
+ "PMVRightMultiplyAndAccumulateF<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_right_product_partitioned_f.c_str(),
+ ceres::internal::PMVRightMultiplyAndAccumulateF,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+#ifndef CERES_NO_CUDA
+ const std::string name_right_product_partitioned_f_cuda =
+ "PMVRightMultiplyAndAccumulateFCuda<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_right_product_partitioned_f_cuda.c_str(),
+ ceres::internal::PMVRightMultiplyAndAccumulateFCuda,
+ data,
+ &context);
+#endif
+
+ const std::string name_right_product_partitioned_e =
+ "PMVRightMultiplyAndAccumulateE<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_right_product_partitioned_e.c_str(),
+ ceres::internal::PMVRightMultiplyAndAccumulateE,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+#ifndef CERES_NO_CUDA
+ const std::string name_right_product_partitioned_e_cuda =
+ "PMVRightMultiplyAndAccumulateECuda<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_right_product_partitioned_e_cuda.c_str(),
+ ceres::internal::PMVRightMultiplyAndAccumulateECuda,
+ data,
+ &context);
+#endif
+
+ const std::string name_update_block_diagonal_ftf =
+ "PMVUpdateBlockDiagonalFtF<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_update_block_diagonal_ftf.c_str(),
+ ceres::internal::PMVUpdateBlockDiagonalFtF,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+ const std::string name_pse =
+ "PSEPreconditionerRightMultiplyAndAccumulate<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_pse.c_str(), ceres::internal::PSEPreconditioner, data, &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+ const std::string name_isc_no_diag =
+ "ISCRightMultiplyAndAccumulate<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_isc_no_diag.c_str(),
+ ceres::internal::ISCRightMultiplyNoDiag,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+ const std::string name_update_block_diagonal_ete =
+ "PMVUpdateBlockDiagonalEtE<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_update_block_diagonal_ete.c_str(),
+ ceres::internal::PMVUpdateBlockDiagonalEtE,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+ const std::string name_isc_diag =
+ "ISCRightMultiplyAndAccumulateDiag<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_isc_diag.c_str(),
+ ceres::internal::ISCRightMultiplyDiag,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+#ifndef CERES_NO_CUDA
+ const std::string name_right_product_cuda =
+ "JacobianRightMultiplyAndAccumulateCuda<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_right_product_cuda.c_str(),
+ ceres::internal::JacobianRightMultiplyAndAccumulateCuda,
+ data,
+ &context)
+ ->Arg(1);
+#endif
+
+ const std::string name_left_product =
+ "JacobianLeftMultiplyAndAccumulate<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_left_product.c_str(),
+ ceres::internal::JacobianLeftMultiplyAndAccumulate,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+ const std::string name_left_product_partitioned_f =
+ "PMVLeftMultiplyAndAccumulateF<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_left_product_partitioned_f.c_str(),
+ ceres::internal::PMVLeftMultiplyAndAccumulateF,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+#ifndef CERES_NO_CUDA
+ const std::string name_left_product_partitioned_f_cuda =
+ "PMVLeftMultiplyAndAccumulateFCuda<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_left_product_partitioned_f_cuda.c_str(),
+ ceres::internal::PMVLeftMultiplyAndAccumulateFCuda,
+ data,
+ &context);
+#endif
+
+ const std::string name_left_product_partitioned_e =
+ "PMVLeftMultiplyAndAccumulateE<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_left_product_partitioned_e.c_str(),
+ ceres::internal::PMVLeftMultiplyAndAccumulateE,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+#ifndef CERES_NO_CUDA
+ const std::string name_left_product_partitioned_e_cuda =
+ "PMVLeftMultiplyAndAccumulateECuda<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_left_product_partitioned_e_cuda.c_str(),
+ ceres::internal::PMVLeftMultiplyAndAccumulateECuda,
+ data,
+ &context);
+#endif
+
+#ifndef CERES_NO_CUDA
+ const std::string name_left_product_cuda =
+ "JacobianLeftMultiplyAndAccumulateCuda<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_left_product_cuda.c_str(),
+ ceres::internal::JacobianLeftMultiplyAndAccumulateCuda,
+ data,
+ &context)
+ ->Arg(1);
+#endif
+
+ const std::string name_squared_column_norm =
+ "JacobianSquaredColumnNorm<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_squared_column_norm.c_str(),
+ ceres::internal::JacobianSquaredColumnNorm,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+ const std::string name_scale_columns = "JacobianScaleColumns<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_scale_columns.c_str(),
+ ceres::internal::JacobianScaleColumns,
+ data,
+ &context)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+ const std::string name_to_crs = "JacobianToCRS<" + path + ">";
+ ::benchmark::RegisterBenchmark(
+ name_to_crs.c_str(), ceres::internal::JacobianToCRS, data, &context);
+#ifndef CERES_NO_CUDA
+ const std::string name_to_crs_view = "JacobianToCRSView<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_to_crs_view.c_str(),
+ ceres::internal::JacobianToCRSView,
+ data,
+ &context);
+ const std::string name_to_crs_matrix = "JacobianToCRSMatrix<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_to_crs_matrix.c_str(),
+ ceres::internal::JacobianToCRSMatrix,
+ data,
+ &context);
+ const std::string name_to_crs_view_update =
+ "JacobianToCRSViewUpdate<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_to_crs_view_update.c_str(),
+ ceres::internal::JacobianToCRSViewUpdate,
+ data,
+ &context);
+ const std::string name_to_crs_matrix_update =
+ "JacobianToCRSMatrixUpdate<" + path + ">";
+ ::benchmark::RegisterBenchmark(name_to_crs_matrix_update.c_str(),
+ ceres::internal::JacobianToCRSMatrixUpdate,
+ data,
+ &context);
+#endif
+ }
+ ::benchmark::RunSpecifiedBenchmarks();
+
+ using namespace ::benchmark;
+ using namespace benchmark_shutdown_fallback;
+ Shutdown();
+ return 0;
+}
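The main() above registers the multi-threaded benchmarks for 1, 2, 4, 8, and 16 threads (the CUDA and conversion benchmarks take no thread argument) and requires at least one BAL dataset path on the command line, aborting with the usage message otherwise. A hypothetical invocation, assuming the binary is named evaluation_benchmark and using a standard Google Benchmark filter flag, might look like:

    ./evaluation_benchmark --benchmark_filter='Residuals.*' path_to_BAL_data_1.txt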
diff --git a/internal/ceres/float_cxsparse.cc b/internal/ceres/evaluation_callback.cc
similarity index 77%
copy from internal/ceres/float_cxsparse.cc
copy to internal/ceres/evaluation_callback.cc
index 6c68830..5ac6645 100644
--- a/internal/ceres/float_cxsparse.cc
+++ b/internal/ceres/evaluation_callback.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -26,22 +26,12 @@
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
+// Author: mierle@gmail.com (Keir Mierle)
-#include "ceres/float_cxsparse.h"
-
-#if !defined(CERES_NO_CXSPARSE)
+#include "ceres/evaluation_callback.h"
namespace ceres {
-namespace internal {
-std::unique_ptr<SparseCholesky> FloatCXSparseCholesky::Create(
- OrderingType ordering_type) {
- LOG(FATAL) << "FloatCXSparseCholesky is not available.";
- return std::unique_ptr<SparseCholesky>();
-}
+EvaluationCallback::~EvaluationCallback() = default;
-} // namespace internal
} // namespace ceres
-
-#endif // !defined(CERES_NO_CXSPARSE)
diff --git a/internal/ceres/evaluation_callback_test.cc b/internal/ceres/evaluation_callback_test.cc
index 0ca2625..7ce110c 100644
--- a/internal/ceres/evaluation_callback_test.cc
+++ b/internal/ceres/evaluation_callback_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,6 +32,7 @@
#include <cmath>
#include <limits>
+#include <memory>
#include <vector>
#include "ceres/autodiff_cost_function.h"
@@ -41,15 +42,14 @@
#include "ceres/solver.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Use an inline hash function to avoid portability wrangling. Algorithm from
// Daniel Bernstein, known as the "djb2" hash.
template <typename T>
uint64_t Djb2Hash(const T* data, const int size) {
uint64_t hash = 5381;
- const uint8_t* data_as_bytes = reinterpret_cast<const uint8_t*>(data);
+ const auto* data_as_bytes = reinterpret_cast<const uint8_t*>(data);
for (int i = 0; i < sizeof(*data) * size; ++i) {
hash = hash * 33 + data_as_bytes[i];
}
@@ -72,8 +72,6 @@
evaluate_num_calls(0),
evaluate_last_parameter_hash(kUninitialized) {}
- virtual ~WigglyBowlCostFunctionAndEvaluationCallback() {}
-
// Evaluation callback interface. This checks that all the preconditions are
// met at the point that Ceres calls into it.
void PrepareForEvaluation(bool evaluate_jacobians,
@@ -132,7 +130,7 @@
double y = (*parameters)[1];
residuals[0] = y - a * sin(x);
residuals[1] = x;
- if (jacobians != NULL) {
+ if (jacobians != nullptr) {
(*jacobians)[2 * 0 + 0] = -a * cos(x); // df1/dx
(*jacobians)[2 * 0 + 1] = 1.0; // df1/dy
(*jacobians)[2 * 1 + 0] = 1.0; // df2/dx
@@ -157,7 +155,7 @@
EXPECT_EQ(prepare_parameter_hash, incoming_parameter_hash);
// Check: jacobians are requested if they were in PrepareForEvaluation().
- EXPECT_EQ(prepare_requested_jacobians, jacobians != NULL);
+ EXPECT_EQ(prepare_requested_jacobians, jacobians != nullptr);
evaluate_num_calls++;
evaluate_last_parameter_hash = incoming_parameter_hash;
@@ -196,7 +194,7 @@
problem_options.evaluation_callback = &cost_function;
problem_options.cost_function_ownership = DO_NOT_TAKE_OWNERSHIP;
Problem problem(problem_options);
- problem.AddResidualBlock(&cost_function, NULL, parameters);
+ problem.AddResidualBlock(&cost_function, nullptr, parameters);
Solver::Options options;
options.linear_solver_type = DENSE_QR;
@@ -254,7 +252,7 @@
counter_ += 1.0;
}
- const double counter() const { return counter_; }
+ double counter() const { return counter_; }
private:
double counter_ = -1;
@@ -322,7 +320,7 @@
problem_options.evaluation_callback = &cost_function;
problem_options.cost_function_ownership = DO_NOT_TAKE_OWNERSHIP;
Problem problem(problem_options);
- problem.AddResidualBlock(&cost_function, NULL, parameters);
+ problem.AddResidualBlock(&cost_function, nullptr, parameters);
Solver::Options options;
options.linear_solver_type = DENSE_QR;
@@ -387,5 +385,4 @@
WithLineSearchMinimizerImpl(ARMIJO, NONLINEAR_CONJUGATE_GRADIENT, QUADRATIC);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/evaluator.cc b/internal/ceres/evaluator.cc
index 5168741..64eb4c5 100644
--- a/internal/ceres/evaluator.cc
+++ b/internal/ceres/evaluator.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,6 +30,7 @@
#include "ceres/evaluator.h"
+#include <memory>
#include <vector>
#include "ceres/block_evaluate_preparer.h"
@@ -40,48 +41,57 @@
#include "ceres/dense_jacobian_writer.h"
#include "ceres/dynamic_compressed_row_finalizer.h"
#include "ceres/dynamic_compressed_row_jacobian_writer.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/program_evaluator.h"
#include "ceres/scratch_evaluate_preparer.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-Evaluator::~Evaluator() {}
+Evaluator::~Evaluator() = default;
-Evaluator* Evaluator::Create(const Evaluator::Options& options,
- Program* program,
- std::string* error) {
- CHECK(options.context != NULL);
+std::unique_ptr<Evaluator> Evaluator::Create(const Evaluator::Options& options,
+ Program* program,
+ std::string* error) {
+ CHECK(options.context != nullptr);
switch (options.linear_solver_type) {
case DENSE_QR:
case DENSE_NORMAL_CHOLESKY:
- return new ProgramEvaluator<ScratchEvaluatePreparer, DenseJacobianWriter>(
+ return std::make_unique<
+ ProgramEvaluator<ScratchEvaluatePreparer, DenseJacobianWriter>>(
options, program);
case DENSE_SCHUR:
case SPARSE_SCHUR:
case ITERATIVE_SCHUR:
- case CGNR:
- return new ProgramEvaluator<BlockEvaluatePreparer, BlockJacobianWriter>(
- options, program);
- case SPARSE_NORMAL_CHOLESKY:
- if (options.dynamic_sparsity) {
- return new ProgramEvaluator<ScratchEvaluatePreparer,
- DynamicCompressedRowJacobianWriter,
- DynamicCompressedRowJacobianFinalizer>(
+ case CGNR: {
+ if (options.sparse_linear_algebra_library_type == CUDA_SPARSE) {
+ return std::make_unique<ProgramEvaluator<ScratchEvaluatePreparer,
+ CompressedRowJacobianWriter>>(
options, program);
} else {
- return new ProgramEvaluator<BlockEvaluatePreparer, BlockJacobianWriter>(
+ return std::make_unique<
+ ProgramEvaluator<BlockEvaluatePreparer, BlockJacobianWriter>>(
+ options, program);
+ }
+ }
+ case SPARSE_NORMAL_CHOLESKY:
+ if (options.dynamic_sparsity) {
+ return std::make_unique<
+ ProgramEvaluator<ScratchEvaluatePreparer,
+ DynamicCompressedRowJacobianWriter,
+ DynamicCompressedRowJacobianFinalizer>>(options,
+ program);
+ } else {
+ return std::make_unique<
+ ProgramEvaluator<BlockEvaluatePreparer, BlockJacobianWriter>>(
options, program);
}
default:
*error = "Invalid Linear Solver Type. Unable to create evaluator.";
- return NULL;
+ return nullptr;
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
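Evaluator::Create now returns a std::unique_ptr<Evaluator>, and the CGNR case dispatches to a CompressedRowJacobianWriter-based evaluator when the sparse backend is CUDA_SPARSE. A caller-side sketch of the new convention, mirroring the tests added to evaluator_test.cc further below (the surrounding ProblemImpl setup is assumed):

    // Sketch only: ownership of the evaluator and its jacobian now flows
    // through std::unique_ptr instead of raw pointers.
    Evaluator::Options options;
    options.linear_solver_type = CGNR;
    options.sparse_linear_algebra_library_type = CUDA_SPARSE;
    options.context = problem.context();  // `problem` is an existing ProblemImpl
    std::string error;
    auto program = problem.mutable_program();
    program->SetParameterOffsetsAndIndex();
    std::unique_ptr<Evaluator> evaluator =
        Evaluator::Create(options, program, &error);
    if (evaluator == nullptr) {
      // `error` describes the failure (e.g. an invalid linear solver type).
    }
    std::unique_ptr<SparseMatrix> jacobian = evaluator->CreateJacobian();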
diff --git a/internal/ceres/evaluator.h b/internal/ceres/evaluator.h
index 9cf4259..dcb3cf6 100644
--- a/internal/ceres/evaluator.h
+++ b/internal/ceres/evaluator.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,12 +33,14 @@
#define CERES_INTERNAL_EVALUATOR_H_
#include <map>
+#include <memory>
#include <string>
#include <vector>
#include "ceres/context_impl.h"
#include "ceres/execution_summary.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/types.h"
namespace ceres {
@@ -54,8 +56,8 @@
// The Evaluator interface offers a way to interact with a least squares cost
// function that is useful for an optimizer that wants to minimize the least
// squares objective. This insulates the optimizer from issues like Jacobian
-// storage, parameterization, etc.
-class CERES_EXPORT_INTERNAL Evaluator {
+// storage, manifolds, etc.
+class CERES_NO_EXPORT Evaluator {
public:
virtual ~Evaluator();
@@ -63,14 +65,16 @@
int num_threads = 1;
int num_eliminate_blocks = -1;
LinearSolverType linear_solver_type = DENSE_QR;
+ SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type =
+ NO_SPARSE;
bool dynamic_sparsity = false;
ContextImpl* context = nullptr;
EvaluationCallback* evaluation_callback = nullptr;
};
- static Evaluator* Create(const Options& options,
- Program* program,
- std::string* error);
+ static std::unique_ptr<Evaluator> Create(const Options& options,
+ Program* program,
+ std::string* error);
// Build and return a sparse matrix for storing and working with the Jacobian
// of the objective function. The jacobian has dimensions
@@ -88,7 +92,7 @@
// the jacobian for use with CHOLMOD, whereas BlockOptimizationProblem
// creates a BlockSparseMatrix representation of the jacobian for use in the
// Schur complement based methods.
- virtual SparseMatrix* CreateJacobian() const = 0;
+ virtual std::unique_ptr<SparseMatrix> CreateJacobian() const = 0;
// Options struct to control Evaluator::Evaluate.
struct EvaluateOptions {
@@ -102,10 +106,10 @@
// Evaluate the cost function for the given state. Returns the cost,
// residuals, and jacobian in the corresponding arguments. Both residuals and
- // jacobian are optional; to avoid computing them, pass NULL.
+ // jacobian are optional; to avoid computing them, pass nullptr.
//
- // If non-NULL, the Jacobian must have a suitable sparsity pattern; only the
- // values array of the jacobian is modified.
+ // If non-nullptr, the Jacobian must have a suitable sparsity pattern; only
+ // the values array of the jacobian is modified.
//
// state is an array of size NumParameters(), cost is a pointer to a single
// double, and residuals is an array of doubles of size NumResiduals().
@@ -131,13 +135,13 @@
// Make a change delta (of size NumEffectiveParameters()) to state (of size
// NumParameters()) and store the result in state_plus_delta.
//
- // In the case that there are no parameterizations used, this is equivalent to
+ // In the case that there are no manifolds used, this is equivalent to
//
// state_plus_delta[i] = state[i] + delta[i] ;
//
- // however, the mapping is more complicated in the case of parameterizations
+ // however, the mapping is more complicated in the case of manifolds
// like quaternions. This is the same as the "Plus()" operation in
- // local_parameterization.h, but operating over the entire state vector for a
+ // manifold.h, but operating over the entire state vector for a
// problem.
virtual bool Plus(const double* state,
const double* delta,
@@ -147,7 +151,7 @@
virtual int NumParameters() const = 0;
// This is the effective number of parameters that the optimizer may adjust.
- // This applies when there are parameterizations on some of the parameters.
+ // This applies when there are manifolds on some of the parameters.
virtual int NumEffectiveParameters() const = 0;
// The number of residuals in the optimization problem.
@@ -158,11 +162,13 @@
// lifetime issues. Further, these calls are not expected to be
// frequent or performance sensitive.
virtual std::map<std::string, CallStatistics> Statistics() const {
- return std::map<std::string, CallStatistics>();
+ return {};
}
};
} // namespace internal
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_INTERNAL_EVALUATOR_H_
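The Plus() documentation above notes that the update is plain element-wise addition only when no manifolds are involved. A small illustration of the non-trivial case using the public QuaternionManifold; this snippet is an assumption-level sketch, not code from this change (Ceres quaternions are stored w, x, y, z and the tangent-space delta has size 3):

    #include "ceres/manifold.h"

    void QuaternionPlusExample() {
      const double q[4] = {1.0, 0.0, 0.0, 0.0};  // identity rotation (w, x, y, z)
      const double delta[3] = {0.1, 0.0, 0.0};   // 3-dimensional tangent update
      double q_plus_delta[4];
      ceres::QuaternionManifold manifold;
      // Unlike the Euclidean case, this is not element-wise addition.
      manifold.Plus(q, delta, q_plus_delta);
    }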
diff --git a/internal/ceres/evaluator_test.cc b/internal/ceres/evaluator_test.cc
index 5ddb733..43e8872 100644
--- a/internal/ceres/evaluator_test.cc
+++ b/internal/ceres/evaluator_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,13 +34,15 @@
#include "ceres/evaluator.h"
#include <memory>
+#include <string>
+#include <vector>
#include "ceres/casts.h"
#include "ceres/cost_function.h"
#include "ceres/crs_matrix.h"
#include "ceres/evaluator_test_utils.h"
#include "ceres/internal/eigen.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
#include "ceres/problem_impl.h"
#include "ceres/program.h"
#include "ceres/sized_cost_function.h"
@@ -52,14 +54,11 @@
namespace ceres {
namespace internal {
-using std::string;
-using std::vector;
-
// TODO(keir): Consider pushing this into a common test utils file.
template <int kFactor, int kNumResiduals, int... Ns>
class ParameterIgnoringCostFunction
: public SizedCostFunction<kNumResiduals, Ns...> {
- typedef SizedCostFunction<kNumResiduals, Ns...> Base;
+ using Base = SizedCostFunction<kNumResiduals, Ns...>;
public:
explicit ParameterIgnoringCostFunction(bool succeeds = true)
@@ -75,9 +74,9 @@
for (int k = 0; k < Base::parameter_block_sizes().size(); ++k) {
// The jacobians here are full sized, but they are transformed in the
// evaluator into the "local" jacobian. In the tests, the "subset
- // constant" parameterization is used, which should pick out columns
- // from these jacobians. Put values in the jacobian that make this
- // obvious; in particular, make the jacobians like this:
+ // constant" manifold is used, which should pick out columns from these
+ // jacobians. Put values in the jacobian that make this obvious; in
+ // particular, make the jacobians like this:
//
// 1 2 3 4 ...
// 1 2 3 4 ... .* kFactor
@@ -116,13 +115,13 @@
};
struct EvaluatorTest : public ::testing::TestWithParam<EvaluatorTestOptions> {
- Evaluator* CreateEvaluator(Program* program) {
+ std::unique_ptr<Evaluator> CreateEvaluator(Program* program) {
// This program is straight from the ProblemImpl, and so has no index/offset
// yet; compute it here as required by the evaluator implementations.
program->SetParameterOffsetsAndIndex();
if (VLOG_IS_ON(1)) {
- string report;
+ std::string report;
StringAppendF(&report,
"Creating evaluator with type: %d",
GetParam().linear_solver_type);
@@ -140,7 +139,7 @@
options.num_eliminate_blocks = GetParam().num_eliminate_blocks;
options.dynamic_sparsity = GetParam().dynamic_sparsity;
options.context = problem.context();
- string error;
+ std::string error;
return Evaluator::Create(options, program, &error);
}
@@ -151,8 +150,8 @@
const double* expected_residuals,
const double* expected_gradient,
const double* expected_jacobian) {
- std::unique_ptr<Evaluator> evaluator(
- CreateEvaluator(problem->mutable_program()));
+ std::unique_ptr<Evaluator> evaluator =
+ CreateEvaluator(problem->mutable_program());
int num_residuals = expected_num_rows;
int num_parameters = expected_num_cols;
@@ -171,7 +170,7 @@
ASSERT_EQ(expected_num_rows, jacobian->num_rows());
ASSERT_EQ(expected_num_cols, jacobian->num_cols());
- vector<double> state(evaluator->NumParameters());
+ std::vector<double> state(evaluator->NumParameters());
// clang-format off
ASSERT_TRUE(evaluator->Evaluate(
@@ -394,19 +393,19 @@
CheckAllEvaluationCombinations(expected);
}
-TEST_P(EvaluatorTest, MultipleResidualsWithLocalParameterizations) {
+TEST_P(EvaluatorTest, MultipleResidualsWithManifolds) {
// Add the parameters in explicit order to force the ordering in the program.
problem.AddParameterBlock(x, 2);
// Fix y's first dimension.
- vector<int> y_fixed;
+ std::vector<int> y_fixed;
y_fixed.push_back(0);
- problem.AddParameterBlock(y, 3, new SubsetParameterization(3, y_fixed));
+ problem.AddParameterBlock(y, 3, new SubsetManifold(3, y_fixed));
// Fix z's second dimension.
- vector<int> z_fixed;
+ std::vector<int> z_fixed;
z_fixed.push_back(1);
- problem.AddParameterBlock(z, 4, new SubsetParameterization(4, z_fixed));
+ problem.AddParameterBlock(z, 4, new SubsetManifold(4, z_fixed));
// f(x, y) in R^2
problem.AddResidualBlock(
@@ -486,7 +485,7 @@
// Normally, the preprocessing of the program that happens in solver_impl
// takes care of this, but we don't want to invoke the solver here.
Program reduced_program;
- vector<ParameterBlock*>* parameter_blocks =
+ std::vector<ParameterBlock*>* parameter_blocks =
problem.mutable_program()->mutable_parameter_blocks();
// "z" is the last parameter; save it for later and pop it off temporarily.
@@ -545,8 +544,8 @@
// The values are ignored.
double state[9];
- std::unique_ptr<Evaluator> evaluator(
- CreateEvaluator(problem.mutable_program()));
+ std::unique_ptr<Evaluator> evaluator =
+ CreateEvaluator(problem.mutable_program());
std::unique_ptr<SparseMatrix> jacobian(evaluator->CreateJacobian());
double cost;
EXPECT_FALSE(evaluator->Evaluate(state, &cost, nullptr, nullptr, nullptr));
@@ -620,7 +619,7 @@
options.linear_solver_type = DENSE_QR;
options.num_eliminate_blocks = 0;
options.context = problem.context();
- string error;
+ std::string error;
std::unique_ptr<Evaluator> evaluator(
Evaluator::Create(options, program, &error));
std::unique_ptr<SparseMatrix> jacobian(evaluator->CreateJacobian());
@@ -677,5 +676,51 @@
}
}
+class HugeCostFunction : public SizedCostFunction<46341, 46345> {
+ bool Evaluate(double const* const* parameters,
+ double* residuals,
+ double** jacobians) const override {
+ return true;
+ }
+};
+
+TEST(Evaluator, LargeProblemDoesNotCauseCrashBlockJacobianWriter) {
+ ProblemImpl problem;
+ std::vector<double> x(46345);
+
+ problem.AddResidualBlock(new HugeCostFunction, nullptr, x.data());
+ Evaluator::Options options;
+ options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options.context = problem.context();
+ options.num_eliminate_blocks = 0;
+ options.dynamic_sparsity = false;
+ std::string error;
+ auto program = problem.mutable_program();
+ program->SetParameterOffsetsAndIndex();
+ auto evaluator = Evaluator::Create(options, program, &error);
+ auto jacobian = evaluator->CreateJacobian();
+ EXPECT_EQ(jacobian, nullptr);
+}
+
+TEST(Evaluator, LargeProblemDoesNotCauseCrashCompressedRowJacobianWriter) {
+ ProblemImpl problem;
+ std::vector<double> x(46345);
+
+ problem.AddResidualBlock(new HugeCostFunction, nullptr, x.data());
+ Evaluator::Options options;
+ // CGNR on CUDA_SPARSE is the only combination that triggers a
+ // CompressedRowJacobianWriter.
+ options.linear_solver_type = CGNR;
+ options.sparse_linear_algebra_library_type = CUDA_SPARSE;
+ options.context = problem.context();
+ options.num_eliminate_blocks = 0;
+ std::string error;
+ auto program = problem.mutable_program();
+ program->SetParameterOffsetsAndIndex();
+ auto evaluator = Evaluator::Create(options, program, &error);
+ auto jacobian = evaluator->CreateJacobian();
+ EXPECT_EQ(jacobian, nullptr);
+}
+
} // namespace internal
} // namespace ceres
diff --git a/internal/ceres/evaluator_test_utils.cc b/internal/ceres/evaluator_test_utils.cc
index 25801db..904635b 100644
--- a/internal/ceres/evaluator_test_utils.cc
+++ b/internal/ceres/evaluator_test_utils.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,8 +34,7 @@
#include "ceres/internal/eigen.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
void CompareEvaluations(int expected_num_rows,
int expected_num_cols,
@@ -49,7 +48,7 @@
const double* actual_jacobian) {
EXPECT_EQ(expected_cost, actual_cost);
- if (expected_residuals != NULL) {
+ if (expected_residuals != nullptr) {
ConstVectorRef expected_residuals_vector(expected_residuals,
expected_num_rows);
ConstVectorRef actual_residuals_vector(actual_residuals, expected_num_rows);
@@ -61,7 +60,7 @@
<< expected_residuals_vector;
}
- if (expected_gradient != NULL) {
+ if (expected_gradient != nullptr) {
ConstVectorRef expected_gradient_vector(expected_gradient,
expected_num_cols);
ConstVectorRef actual_gradient_vector(actual_gradient, expected_num_cols);
@@ -74,7 +73,7 @@
<< expected_gradient_vector.transpose();
}
- if (expected_jacobian != NULL) {
+ if (expected_jacobian != nullptr) {
ConstMatrixRef expected_jacobian_matrix(
expected_jacobian, expected_num_rows, expected_num_cols);
ConstMatrixRef actual_jacobian_matrix(
@@ -88,5 +87,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/evaluator_test_utils.h b/internal/ceres/evaluator_test_utils.h
index d47b6fa..e98dfb6 100644
--- a/internal/ceres/evaluator_test_utils.h
+++ b/internal/ceres/evaluator_test_utils.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,13 +31,12 @@
//
// Test utils used for evaluation testing.
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Fixed sized struct for storing an evaluation.
-struct ExpectedEvaluation {
+struct CERES_NO_EXPORT ExpectedEvaluation {
int num_rows;
int num_cols;
double cost;
@@ -47,16 +46,15 @@
};
// Compare two evaluations.
-CERES_EXPORT_INTERNAL void CompareEvaluations(int expected_num_rows,
- int expected_num_cols,
- double expected_cost,
- const double* expected_residuals,
- const double* expected_gradient,
- const double* expected_jacobian,
- const double actual_cost,
- const double* actual_residuals,
- const double* actual_gradient,
- const double* actual_jacobian);
+CERES_NO_EXPORT void CompareEvaluations(int expected_num_rows,
+ int expected_num_cols,
+ double expected_cost,
+ const double* expected_residuals,
+ const double* expected_gradient,
+ const double* expected_jacobian,
+ const double actual_cost,
+ const double* actual_residuals,
+ const double* actual_gradient,
+ const double* actual_jacobian);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/execution_summary.h b/internal/ceres/execution_summary.h
index 17fd882..accc5e4 100644
--- a/internal/ceres/execution_summary.h
+++ b/internal/ceres/execution_summary.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,17 +34,17 @@
#include <map>
#include <mutex>
#include <string>
+#include <utility>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/wall_time.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
struct CallStatistics {
- CallStatistics() : time(0.), calls(0) {}
- double time;
- int calls;
+ CallStatistics() = default;
+ double time{0.};
+ int calls{0};
};
// Struct used by various objects to report statistics about their
@@ -69,8 +69,10 @@
class ScopedExecutionTimer {
public:
- ScopedExecutionTimer(const std::string& name, ExecutionSummary* summary)
- : start_time_(WallTimeInSeconds()), name_(name), summary_(summary) {}
+ ScopedExecutionTimer(std::string name, ExecutionSummary* summary)
+ : start_time_(WallTimeInSeconds()),
+ name_(std::move(name)),
+ summary_(summary) {}
~ScopedExecutionTimer() {
summary_->IncrementTimeBy(name_, WallTimeInSeconds() - start_time_);
@@ -82,7 +84,6 @@
ExecutionSummary* summary_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_EXECUTION_SUMMARY_H_
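
ScopedExecutionTimer is an RAII wrapper: the constructor records the start wall time and the destructor reports the elapsed time via IncrementTimeBy. A minimal sketch of the intended usage (the function name TimedPhase is hypothetical):

#include "ceres/execution_summary.h"

namespace ceres::internal {

void TimedPhase(ExecutionSummary* summary) {
  ScopedExecutionTimer timer("TimedPhase", summary);
  // ... work to be timed goes here ...
}  // ~ScopedExecutionTimer adds the elapsed wall time under "TimedPhase".

}  // namespace ceres::internal
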
diff --git a/internal/ceres/fake_bundle_adjustment_jacobian.cc b/internal/ceres/fake_bundle_adjustment_jacobian.cc
new file mode 100644
index 0000000..22f3405
--- /dev/null
+++ b/internal/ceres/fake_bundle_adjustment_jacobian.cc
@@ -0,0 +1,99 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#include "ceres/fake_bundle_adjustment_jacobian.h"
+
+#include <memory>
+#include <random>
+#include <string>
+#include <utility>
+
+#include "Eigen/Dense"
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/internal/eigen.h"
+
+namespace ceres::internal {
+
+std::unique_ptr<BlockSparseMatrix> CreateFakeBundleAdjustmentJacobian(
+ int num_cameras,
+ int num_points,
+ int camera_size,
+ int point_size,
+ double visibility,
+ std::mt19937& prng) {
+ constexpr int kResidualSize = 2;
+
+ CompressedRowBlockStructure* bs = new CompressedRowBlockStructure;
+ int c = 0;
+ // Add column blocks for each point
+ for (int i = 0; i < num_points; ++i) {
+ bs->cols.push_back(Block(point_size, c));
+ c += point_size;
+ }
+
+ // Add column blocks for each camera.
+ for (int i = 0; i < num_cameras; ++i) {
+ bs->cols.push_back(Block(camera_size, c));
+ c += camera_size;
+ }
+
+ std::bernoulli_distribution visibility_distribution(visibility);
+ int row_pos = 0;
+ int cell_pos = 0;
+ for (int i = 0; i < num_points; ++i) {
+ for (int j = 0; j < num_cameras; ++j) {
+ if (!visibility_distribution(prng)) {
+ continue;
+ }
+ bs->rows.emplace_back();
+ auto& row = bs->rows.back();
+ row.block.position = row_pos;
+ row.block.size = kResidualSize;
+ auto& cells = row.cells;
+ cells.resize(2);
+
+ cells[0].block_id = i;
+ cells[0].position = cell_pos;
+ cell_pos += kResidualSize * point_size;
+
+ cells[1].block_id = num_points + j;
+ cells[1].position = cell_pos;
+ cell_pos += kResidualSize * camera_size;
+
+ row_pos += kResidualSize;
+ }
+ }
+
+ auto jacobian = std::make_unique<BlockSparseMatrix>(bs);
+ VectorRef(jacobian->mutable_values(), jacobian->num_nonzeros()).setRandom();
+ return jacobian;
+}
+
+} // namespace ceres::internal
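
Each visible point/camera pair contributes one two-row residual block with a point cell and a camera cell, so the returned matrix has 2 * (number of visible pairs) rows. A hedged usage sketch (the block sizes and visibility below are illustrative only):

#include <random>

#include "ceres/fake_bundle_adjustment_jacobian.h"

void ExampleFakeJacobian() {
  std::mt19937 prng(1234);
  // 10 cameras with 6 parameters each, 100 points with 3 parameters each,
  // roughly half of all point/camera pairs visible.
  auto jacobian = ceres::internal::CreateFakeBundleAdjustmentJacobian(
      /*num_cameras=*/10,
      /*num_points=*/100,
      /*camera_size=*/6,
      /*point_size=*/3,
      /*visibility=*/0.5,
      prng);
  // jacobian->num_rows() == 2 * (number of visible point/camera pairs).
}
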
diff --git a/internal/ceres/float_cxsparse.cc b/internal/ceres/fake_bundle_adjustment_jacobian.h
similarity index 73%
copy from internal/ceres/float_cxsparse.cc
copy to internal/ceres/fake_bundle_adjustment_jacobian.h
index 6c68830..0448dbf 100644
--- a/internal/ceres/float_cxsparse.cc
+++ b/internal/ceres/fake_bundle_adjustment_jacobian.h
@@ -1,5 +1,6 @@
+
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,20 +29,24 @@
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
-#include "ceres/float_cxsparse.h"
+#ifndef CERES_INTERNAL_FAKE_BUNDLE_ADJUSTMENT_JACOBIAN
+#define CERES_INTERNAL_FAKE_BUNDLE_ADJUSTMENT_JACOBIAN
-#if !defined(CERES_NO_CXSPARSE)
+#include <memory>
+#include <random>
-namespace ceres {
-namespace internal {
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/partitioned_matrix_view.h"
-std::unique_ptr<SparseCholesky> FloatCXSparseCholesky::Create(
- OrderingType ordering_type) {
- LOG(FATAL) << "FloatCXSparseCholesky is not available.";
- return std::unique_ptr<SparseCholesky>();
-}
+namespace ceres::internal {
+std::unique_ptr<BlockSparseMatrix> CreateFakeBundleAdjustmentJacobian(
+ int num_cameras,
+ int num_points,
+ int camera_size,
+ int point_size,
+ double visibility,
+ std::mt19937& prng);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // !defined(CERES_NO_CXSPARSE)
+#endif // CERES_INTERNAL_FAKE_BUNDLE_ADJUSTMENT_JACOBIAN
diff --git a/internal/ceres/file.cc b/internal/ceres/file.cc
index 94f2135..60d35fa 100644
--- a/internal/ceres/file.cc
+++ b/internal/ceres/file.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,15 +33,14 @@
#include "ceres/file.h"
#include <cstdio>
+#include <string>
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::string;
-
-void WriteStringToFileOrDie(const string& data, const string& filename) {
+void WriteStringToFileOrDie(const std::string& data,
+ const std::string& filename) {
FILE* file_descriptor = fopen(filename.c_str(), "wb");
if (!file_descriptor) {
LOG(FATAL) << "Couldn't write to file: " << filename;
@@ -50,7 +49,7 @@
fclose(file_descriptor);
}
-void ReadFileToStringOrDie(const string& filename, string* data) {
+void ReadFileToStringOrDie(const std::string& filename, std::string* data) {
FILE* file_descriptor = fopen(filename.c_str(), "r");
if (!file_descriptor) {
@@ -59,12 +58,12 @@
// Resize the input buffer appropriately.
fseek(file_descriptor, 0L, SEEK_END);
- int num_bytes = ftell(file_descriptor);
+ int64_t num_bytes = ftell(file_descriptor);
data->resize(num_bytes);
// Read the data.
fseek(file_descriptor, 0L, SEEK_SET);
- int num_read =
+ int64_t num_read =
fread(&((*data)[0]), sizeof((*data)[0]), num_bytes, file_descriptor);
if (num_read != num_bytes) {
LOG(FATAL) << "Couldn't read all of " << filename
@@ -74,7 +73,7 @@
fclose(file_descriptor);
}
-string JoinPath(const string& dirname, const string& basename) {
+std::string JoinPath(const std::string& dirname, const std::string& basename) {
#ifdef _WIN32
static const char separator = '\\';
#else
@@ -86,9 +85,8 @@
} else if (dirname[dirname.size() - 1] == separator) {
return dirname + basename;
} else {
- return dirname + string(&separator, 1) + basename;
+ return dirname + std::string(&separator, 1) + basename;
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
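
JoinPath only inserts a separator when dirname does not already end in one; per the comment in file.h, an absolute basename is returned unchanged. A small illustrative sketch (non-Windows separator '/', expected results shown in comments):

#include <string>

#include "ceres/file.h"

void JoinPathExamples() {
  using ceres::internal::JoinPath;
  const std::string a = JoinPath("/tmp", "data.txt");   // "/tmp/data.txt"
  const std::string b = JoinPath("/tmp/", "data.txt");  // "/tmp/data.txt"
  const std::string c = JoinPath("/tmp", "/abs/file");  // "/abs/file"
  (void)a; (void)b; (void)c;
}
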
diff --git a/internal/ceres/file.h b/internal/ceres/file.h
index c0015df..b21f1ca 100644
--- a/internal/ceres/file.h
+++ b/internal/ceres/file.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,21 +35,24 @@
#include <string>
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
+CERES_NO_EXPORT
void WriteStringToFileOrDie(const std::string& data,
const std::string& filename);
+CERES_NO_EXPORT
void ReadFileToStringOrDie(const std::string& filename, std::string* data);
// Join two path components, adding a slash if necessary. If basename is an
// absolute path then JoinPath ignores dirname and simply returns basename.
-CERES_EXPORT_INTERNAL std::string JoinPath(const std::string& dirname,
- const std::string& basename);
+CERES_NO_EXPORT
+std::string JoinPath(const std::string& dirname, const std::string& basename);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_FILE_H_
diff --git a/internal/ceres/float_cxsparse.cc b/internal/ceres/first_order_function.cc
similarity index 80%
rename from internal/ceres/float_cxsparse.cc
rename to internal/ceres/first_order_function.cc
index 6c68830..267b8ef 100644
--- a/internal/ceres/float_cxsparse.cc
+++ b/internal/ceres/first_order_function.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,20 +28,10 @@
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
-#include "ceres/float_cxsparse.h"
-
-#if !defined(CERES_NO_CXSPARSE)
+#include "ceres/first_order_function.h"
namespace ceres {
-namespace internal {
-std::unique_ptr<SparseCholesky> FloatCXSparseCholesky::Create(
- OrderingType ordering_type) {
- LOG(FATAL) << "FloatCXSparseCholesky is not available.";
- return std::unique_ptr<SparseCholesky>();
-}
+FirstOrderFunction::~FirstOrderFunction() = default;
-} // namespace internal
} // namespace ceres
-
-#endif // !defined(CERES_NO_CXSPARSE)
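
The new first_order_function.cc only anchors the virtual destructor. For context, a minimal FirstOrderFunction subclass might look like the following sketch; it assumes the usual Evaluate/NumParameters interface used with GradientProblem, and f(x) = x^2 is purely illustrative:

#include "ceres/first_order_function.h"

class QuadraticCost final : public ceres::FirstOrderFunction {
 public:
  bool Evaluate(const double* parameters,
                double* cost,
                double* gradient) const override {
    const double x = parameters[0];
    cost[0] = x * x;
    if (gradient != nullptr) {
      gradient[0] = 2.0 * x;
    }
    return true;
  }
  int NumParameters() const override { return 1; }
};
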
diff --git a/internal/ceres/fixed_array_test.cc b/internal/ceres/fixed_array_test.cc
index d418786..66b3fbf 100644
--- a/internal/ceres/fixed_array_test.cc
+++ b/internal/ceres/fixed_array_test.cc
@@ -14,8 +14,7 @@
#include "ceres/internal/fixed_array.h"
-#include <stdio.h>
-
+#include <cstdio>
#include <cstring>
#include <list>
#include <memory>
@@ -54,7 +53,7 @@
class ConstructionTester {
public:
- ConstructionTester() : self_ptr_(this), value_(0) { constructions++; }
+ ConstructionTester() : self_ptr_(this) { constructions++; }
~ConstructionTester() {
assert(self_ptr_ == this);
self_ptr_ = nullptr;
@@ -75,7 +74,7 @@
// self_ptr_ should always point to 'this' -- that's how we can be sure the
// constructor has been called.
ConstructionTester* self_ptr_;
- int value_;
+ int value_{0};
};
int ConstructionTester::constructions = 0;
@@ -117,7 +116,7 @@
TEST(FixedArrayTest, MoveCtor) {
ceres::internal::FixedArray<std::unique_ptr<int>, 10> on_stack(5);
for (int i = 0; i < 5; ++i) {
- on_stack[i] = std::unique_ptr<int>(new int(i));
+ on_stack[i] = std::make_unique<int>(i);
}
ceres::internal::FixedArray<std::unique_ptr<int>, 10> stack_copy =
@@ -127,7 +126,7 @@
ceres::internal::FixedArray<std::unique_ptr<int>, 10> allocated(15);
for (int i = 0; i < 15; ++i) {
- allocated[i] = std::unique_ptr<int>(new int(i));
+ allocated[i] = std::make_unique<int>(i);
}
ceres::internal::FixedArray<std::unique_ptr<int>, 10> alloced_copy =
@@ -467,7 +466,7 @@
// will always overflow destination buffer [-Werror]
TEST(FixedArrayTest, AvoidParanoidDiagnostics) {
ceres::internal::FixedArray<char, 32> buf(32);
- sprintf(buf.data(), "foo"); // NOLINT(runtime/printf)
+ snprintf(buf.data(), 32, "foo");
}
TEST(FixedArrayTest, TooBigInlinedSpace) {
@@ -500,8 +499,6 @@
// PickyDelete EXPECTs its class-scope deallocation funcs are unused.
struct PickyDelete {
- PickyDelete() {}
- ~PickyDelete() {}
void operator delete(void* p) {
EXPECT_TRUE(false) << __FUNCTION__;
::operator delete(p);
@@ -655,12 +652,10 @@
class CountingAllocator : public std::allocator<T> {
public:
using Alloc = std::allocator<T>;
- using pointer = typename Alloc::pointer;
using size_type = typename Alloc::size_type;
- CountingAllocator() : bytes_used_(nullptr), instance_count_(nullptr) {}
- explicit CountingAllocator(int64_t* b)
- : bytes_used_(b), instance_count_(nullptr) {}
+ CountingAllocator() = default;
+ explicit CountingAllocator(int64_t* b) : bytes_used_(b) {}
CountingAllocator(int64_t* b, int64_t* a)
: bytes_used_(b), instance_count_(a) {}
@@ -670,41 +665,20 @@
bytes_used_(x.bytes_used_),
instance_count_(x.instance_count_) {}
- pointer allocate(size_type n, const void* const hint = nullptr) {
+ T* allocate(size_type n) {
assert(bytes_used_ != nullptr);
*bytes_used_ += n * sizeof(T);
- return Alloc::allocate(n, hint);
+ return Alloc::allocate(n);
}
- void deallocate(pointer p, size_type n) {
+ void deallocate(T* p, size_type n) {
Alloc::deallocate(p, n);
assert(bytes_used_ != nullptr);
*bytes_used_ -= n * sizeof(T);
}
- template <typename... Args>
- void construct(pointer p, Args&&... args) {
- Alloc::construct(p, std::forward<Args>(args)...);
- if (instance_count_) {
- *instance_count_ += 1;
- }
- }
-
- void destroy(pointer p) {
- Alloc::destroy(p);
- if (instance_count_) {
- *instance_count_ -= 1;
- }
- }
-
- template <typename U>
- class rebind {
- public:
- using other = CountingAllocator<U>;
- };
-
- int64_t* bytes_used_;
- int64_t* instance_count_;
+ int64_t* bytes_used_{nullptr};
+ int64_t* instance_count_{nullptr};
};
TEST(AllocatorSupportTest, CountInlineAllocations) {
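
The CountingAllocator rewrite drops the pre-C++17 pointer/rebind/construct/destroy members and default-initializes the counters in-class, leaving std::allocator_traits to supply the removed pieces. A standalone sketch of the same pattern (MinimalCountingAllocator is hypothetical and not part of the test):

#include <cstddef>
#include <cstdint>
#include <memory>

// Only allocate/deallocate are user-provided; std::allocator_traits fills in
// construct, destroy and rebind for C++17-conforming containers.
template <typename T>
struct MinimalCountingAllocator {
  using value_type = T;

  MinimalCountingAllocator() = default;
  explicit MinimalCountingAllocator(int64_t* b) : bytes_used(b) {}
  template <typename U>
  MinimalCountingAllocator(const MinimalCountingAllocator<U>& other)
      : bytes_used(other.bytes_used) {}

  T* allocate(std::size_t n) {
    if (bytes_used != nullptr) *bytes_used += n * sizeof(T);
    return std::allocator<T>().allocate(n);
  }
  void deallocate(T* p, std::size_t n) {
    std::allocator<T>().deallocate(p, n);
    if (bytes_used != nullptr) *bytes_used -= n * sizeof(T);
  }

  int64_t* bytes_used{nullptr};
};

template <typename T, typename U>
bool operator==(const MinimalCountingAllocator<T>& a,
                const MinimalCountingAllocator<U>& b) {
  return a.bytes_used == b.bytes_used;
}
template <typename T, typename U>
bool operator!=(const MinimalCountingAllocator<T>& a,
                const MinimalCountingAllocator<U>& b) {
  return !(a == b);
}
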
diff --git a/internal/ceres/float_suitesparse.cc b/internal/ceres/float_suitesparse.cc
index 0360457..6016bad 100644
--- a/internal/ceres/float_suitesparse.cc
+++ b/internal/ceres/float_suitesparse.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,18 +30,18 @@
#include "ceres/float_suitesparse.h"
+#include <memory>
+
#if !defined(CERES_NO_SUITESPARSE)
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
std::unique_ptr<SparseCholesky> FloatSuiteSparseCholesky::Create(
OrderingType ordering_type) {
LOG(FATAL) << "FloatSuiteSparseCholesky is not available.";
- return std::unique_ptr<SparseCholesky>();
+ return {};
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // !defined(CERES_NO_SUITESPARSE)
diff --git a/internal/ceres/float_suitesparse.h b/internal/ceres/float_suitesparse.h
index c436da4..b9d298e 100644
--- a/internal/ceres/float_suitesparse.h
+++ b/internal/ceres/float_suitesparse.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,27 +33,26 @@
// This include must come before any #ifndef check on Ceres compile options.
// clang-format off
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
// clang-format on
#include <memory>
+#include "ceres/internal/export.h"
#include "ceres/sparse_cholesky.h"
#if !defined(CERES_NO_SUITESPARSE)
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Fake implementation of a single precision Sparse Cholesky using
// SuiteSparse.
-class FloatSuiteSparseCholesky : public SparseCholesky {
+class CERES_NO_EXPORT FloatSuiteSparseCholesky : public SparseCholesky {
public:
static std::unique_ptr<SparseCholesky> Create(OrderingType ordering_type);
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // !defined(CERES_NO_SUITESPARSE)
diff --git a/internal/ceres/function_sample.cc b/internal/ceres/function_sample.cc
index 3e0ae60..bb4bcff 100644
--- a/internal/ceres/function_sample.cc
+++ b/internal/ceres/function_sample.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,8 +32,7 @@
#include "ceres/stringprintf.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
FunctionSample::FunctionSample()
: x(0.0),
@@ -75,5 +74,4 @@
gradient_is_valid);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/function_sample.h b/internal/ceres/function_sample.h
index 3bcea1b..0582769 100644
--- a/internal/ceres/function_sample.h
+++ b/internal/ceres/function_sample.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,11 +33,11 @@
#include <string>
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// FunctionSample is used by the line search routines to store and
// communicate the value and (optionally) the gradient of the function
@@ -47,7 +47,7 @@
// line/direction. FunctionSample contains the information in two
// ways. Information in the ambient space and information along the
// direction of search.
-struct CERES_EXPORT_INTERNAL FunctionSample {
+struct CERES_NO_EXPORT FunctionSample {
FunctionSample();
FunctionSample(double x, double value);
FunctionSample(double x, double value, double gradient);
@@ -82,12 +82,13 @@
//
// where d is the search direction.
double gradient;
- // True if the evaluation of the gradient was sucessful and the
+ // True if the evaluation of the gradient was successful and the
// value is a finite number.
bool gradient_is_valid;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_FUNCTION_SAMPLE_H_
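
FunctionSample records the value (and optionally the directional derivative) of the one-dimensional function phi(t) = f(x + t * d) sampled by the line search. A small illustrative sketch using the three-argument constructor declared above (values made up):

#include "ceres/function_sample.h"

void RecordLineSearchSample() {
  // Sample at step size t = 0.5 with phi(0.5) = 1.25 and phi'(0.5) = -0.5.
  ceres::internal::FunctionSample sample(0.5, 1.25, -0.5);
  (void)sample;
}
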
diff --git a/internal/ceres/generate_bundle_adjustment_tests.py b/internal/ceres/generate_bundle_adjustment_tests.py
index 7b0caa3..ac83bc3 100644
--- a/internal/ceres/generate_bundle_adjustment_tests.py
+++ b/internal/ceres/generate_bundle_adjustment_tests.py
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2018 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -41,27 +41,34 @@
MULTI_THREADED = "4"
THREAD_CONFIGS = [SINGLE_THREADED, MULTI_THREADED]
-SOLVER_CONFIGS = [
- # Linear solver Sparse backend Preconditioner
- ('DENSE_SCHUR', 'NO_SPARSE', 'IDENTITY'),
- ('ITERATIVE_SCHUR', 'NO_SPARSE', 'JACOBI'),
- ('ITERATIVE_SCHUR', 'NO_SPARSE', 'SCHUR_JACOBI'),
- ('ITERATIVE_SCHUR', 'SUITE_SPARSE', 'CLUSTER_JACOBI'),
- ('ITERATIVE_SCHUR', 'EIGEN_SPARSE', 'CLUSTER_JACOBI'),
- ('ITERATIVE_SCHUR', 'CX_SPARSE', 'CLUSTER_JACOBI'),
- ('ITERATIVE_SCHUR', 'ACCELERATE_SPARSE','CLUSTER_JACOBI'),
- ('ITERATIVE_SCHUR', 'SUITE_SPARSE', 'CLUSTER_TRIDIAGONAL'),
- ('ITERATIVE_SCHUR', 'EIGEN_SPARSE', 'CLUSTER_TRIDIAGONAL'),
- ('ITERATIVE_SCHUR', 'CX_SPARSE', 'CLUSTER_TRIDIAGONAL'),
- ('ITERATIVE_SCHUR', 'ACCELERATE_SPARSE','CLUSTER_TRIDIAGONAL'),
- ('SPARSE_NORMAL_CHOLESKY', 'SUITE_SPARSE', 'IDENTITY'),
- ('SPARSE_NORMAL_CHOLESKY', 'EIGEN_SPARSE', 'IDENTITY'),
- ('SPARSE_NORMAL_CHOLESKY', 'CX_SPARSE', 'IDENTITY'),
- ('SPARSE_NORMAL_CHOLESKY', 'ACCELERATE_SPARSE','IDENTITY'),
- ('SPARSE_SCHUR', 'SUITE_SPARSE', 'IDENTITY'),
- ('SPARSE_SCHUR', 'EIGEN_SPARSE', 'IDENTITY'),
- ('SPARSE_SCHUR', 'CX_SPARSE', 'IDENTITY'),
- ('SPARSE_SCHUR', 'ACCELERATE_SPARSE','IDENTITY'),
+DENSE_SOLVER_CONFIGS = [
+ # Linear solver Dense backend
+ ('DENSE_SCHUR', 'EIGEN'),
+ ('DENSE_SCHUR', 'LAPACK'),
+ ('DENSE_SCHUR', 'CUDA'),
+]
+
+SPARSE_SOLVER_CONFIGS = [
+ # Linear solver Sparse backend
+ ('SPARSE_NORMAL_CHOLESKY', 'SUITE_SPARSE'),
+ ('SPARSE_NORMAL_CHOLESKY', 'EIGEN_SPARSE'),
+ ('SPARSE_NORMAL_CHOLESKY', 'ACCELERATE_SPARSE'),
+ ('SPARSE_SCHUR', 'SUITE_SPARSE'),
+ ('SPARSE_SCHUR', 'EIGEN_SPARSE'),
+ ('SPARSE_SCHUR', 'ACCELERATE_SPARSE'),
+]
+
+ITERATIVE_SOLVER_CONFIGS = [
+ # Linear solver Sparse backend Preconditioner
+ ('ITERATIVE_SCHUR', 'NO_SPARSE', 'JACOBI'),
+ ('ITERATIVE_SCHUR', 'NO_SPARSE', 'SCHUR_JACOBI'),
+ ('ITERATIVE_SCHUR', 'NO_SPARSE', 'SCHUR_POWER_SERIES_EXPANSION'),
+ ('ITERATIVE_SCHUR', 'SUITE_SPARSE', 'CLUSTER_JACOBI'),
+ ('ITERATIVE_SCHUR', 'EIGEN_SPARSE', 'CLUSTER_JACOBI'),
+ ('ITERATIVE_SCHUR', 'ACCELERATE_SPARSE','CLUSTER_JACOBI'),
+ ('ITERATIVE_SCHUR', 'SUITE_SPARSE', 'CLUSTER_TRIDIAGONAL'),
+ ('ITERATIVE_SCHUR', 'EIGEN_SPARSE', 'CLUSTER_TRIDIAGONAL'),
+ ('ITERATIVE_SCHUR', 'ACCELERATE_SPARSE','CLUSTER_TRIDIAGONAL'),
]
FILENAME_SHORTENING_MAP = dict(
@@ -69,23 +76,26 @@
ITERATIVE_SCHUR='iterschur',
SPARSE_NORMAL_CHOLESKY='sparsecholesky',
SPARSE_SCHUR='sparseschur',
+ EIGEN='eigen',
+ LAPACK='lapack',
+ CUDA='cuda',
NO_SPARSE='', # Omit sparse reference entirely for dense tests.
SUITE_SPARSE='suitesparse',
EIGEN_SPARSE='eigensparse',
- CX_SPARSE='cxsparse',
ACCELERATE_SPARSE='acceleratesparse',
IDENTITY='identity',
JACOBI='jacobi',
SCHUR_JACOBI='schurjacobi',
CLUSTER_JACOBI='clustjacobi',
CLUSTER_TRIDIAGONAL='clusttri',
+ SCHUR_POWER_SERIES_EXPANSION='spse',
kAutomaticOrdering='auto',
kUserOrdering='user',
)
COPYRIGHT_HEADER = (
"""// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -123,28 +133,30 @@
BUNDLE_ADJUSTMENT_TEST_TEMPLATE = (COPYRIGHT_HEADER + """
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
%(preprocessor_conditions_begin)s
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
%(test_class_name)s) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = %(num_threads)s;
options->linear_solver_type = %(linear_solver)s;
+ options->dense_linear_algebra_library_type = %(dense_backend)s;
options->sparse_linear_algebra_library_type = %(sparse_backend)s;
options->preconditioner_type = %(preconditioner)s;
if (%(ordering)s) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
%(preprocessor_conditions_end)s""")
def camelcasify(token):
@@ -153,6 +165,7 @@
def generate_bundle_test(linear_solver,
+ dense_backend,
sparse_backend,
preconditioner,
ordering,
@@ -164,6 +177,10 @@
if linear_solver != 'ITERATIVE_SCHUR':
preconditioner_tag = ''
+ dense_backend_tag = dense_backend
+ if linear_solver != 'DENSE_SCHUR':
+ dense_backend_tag = ''
+
# Omit references to the sparse backend when one is not in use.
sparse_backend_tag = sparse_backend
if sparse_backend == 'NO_SPARSE':
@@ -172,6 +189,7 @@
# Use a double underscore; otherwise the names are harder to understand.
test_class_name = '_'.join(filter(lambda x: x, [
camelcasify(linear_solver),
+ camelcasify(dense_backend_tag),
camelcasify(sparse_backend_tag),
camelcasify(preconditioner_tag),
ordering[1:], # Strip 'k'
@@ -180,6 +198,7 @@
# Initial template parameters (augmented more below).
template_parameters = dict(
linear_solver=linear_solver,
+ dense_backend=dense_backend,
sparse_backend=sparse_backend,
preconditioner=preconditioner,
ordering=ordering,
@@ -192,9 +211,6 @@
if sparse_backend == 'SUITE_SPARSE':
preprocessor_conditions_begin.append('#ifndef CERES_NO_SUITESPARSE')
preprocessor_conditions_end.insert(0, '#endif // CERES_NO_SUITESPARSE')
- elif sparse_backend == 'CX_SPARSE':
- preprocessor_conditions_begin.append('#ifndef CERES_NO_CXSPARSE')
- preprocessor_conditions_end.insert(0, '#endif // CERES_NO_CXSPARSE')
elif sparse_backend == 'ACCELERATE_SPARSE':
preprocessor_conditions_begin.append('#ifndef CERES_NO_ACCELERATE_SPARSE')
preprocessor_conditions_end.insert(0, '#endif // CERES_NO_ACCELERATE_SPARSE')
@@ -202,10 +218,12 @@
preprocessor_conditions_begin.append('#ifdef CERES_USE_EIGEN_SPARSE')
preprocessor_conditions_end.insert(0, '#endif // CERES_USE_EIGEN_SPARSE')
- # Accumulate appropriate #ifdef/#ifndefs for threading conditions.
- if thread_config == MULTI_THREADED:
- preprocessor_conditions_begin.append('#ifndef CERES_NO_THREADS')
- preprocessor_conditions_end.insert(0, '#endif // CERES_NO_THREADS')
+ if dense_backend == "LAPACK":
+ preprocessor_conditions_begin.append('#ifndef CERES_NO_LAPACK')
+ preprocessor_conditions_end.insert(0, '#endif // CERES_NO_LAPACK')
+ elif dense_backend == "CUDA":
+ preprocessor_conditions_begin.append('#ifndef CERES_NO_CUDA')
+ preprocessor_conditions_end.insert(0, '#endif // CERES_NO_CUDA')
# If there are #ifdefs, put newlines around them.
if preprocessor_conditions_begin:
@@ -223,10 +241,12 @@
# Substitute variables into the test template, and write the result to a file.
filename_tag = '_'.join(FILENAME_SHORTENING_MAP.get(x) for x in [
linear_solver,
+ dense_backend_tag,
sparse_backend_tag,
preconditioner_tag,
ordering]
if FILENAME_SHORTENING_MAP.get(x))
+
if (thread_config == MULTI_THREADED):
filename_tag += '_threads'
@@ -236,7 +256,7 @@
fd.write(BUNDLE_ADJUSTMENT_TEST_TEMPLATE % template_parameters)
# All done.
- print 'Generated', filename
+ print('Generated', filename)
return filename
@@ -244,16 +264,37 @@
if __name__ == '__main__':
# Iterate over all the possible configurations and generate the tests.
generated_files = []
- for linear_solver, sparse_backend, preconditioner in SOLVER_CONFIGS:
- for ordering in ORDERINGS:
- for thread_config in THREAD_CONFIGS:
+
+ for ordering in ORDERINGS:
+ for thread_config in THREAD_CONFIGS:
+ for linear_solver, dense_backend in DENSE_SOLVER_CONFIGS:
generated_files.append(
generate_bundle_test(linear_solver,
+ dense_backend,
+ 'NO_SPARSE',
+ 'IDENTITY',
+ ordering,
+ thread_config))
+
+ for linear_solver, sparse_backend, in SPARSE_SOLVER_CONFIGS:
+ generated_files.append(
+ generate_bundle_test(linear_solver,
+ 'EIGEN',
+ sparse_backend,
+ 'IDENTITY',
+ ordering,
+ thread_config))
+
+ for linear_solver, sparse_backend, preconditioner, in ITERATIVE_SOLVER_CONFIGS:
+ generated_files.append(
+ generate_bundle_test(linear_solver,
+ 'EIGEN',
sparse_backend,
preconditioner,
ordering,
thread_config))
+
# Generate the CMakeLists.txt as well.
with open('generated_bundle_adjustment_tests/CMakeLists.txt', 'w') as fd:
fd.write(COPYRIGHT_HEADER.replace('//', '#').replace('http:#', 'http://'))
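
For a concrete sense of the generator's output, the following is a sketch of roughly what one generated C++ test looks like for the DENSE_SCHUR/CUDA configuration under automatic ordering, filled in from the template above; the exact test name and the kAutomaticOrdering constant are produced by the generator and bundle_adjustment_test_util.h, so treat them as assumptions:

// Ceres Solver - A fast non-linear least squares minimizer
// (copyright header elided)

#include "ceres/bundle_adjustment_test_util.h"
#include "ceres/internal/config.h"
#include "gtest/gtest.h"

#ifndef CERES_NO_CUDA

namespace ceres::internal {

TEST_F(BundleAdjustmentTest,
       DenseSchur_Cuda_AutomaticOrdering) {  // NOLINT
  BundleAdjustmentProblem bundle_adjustment_problem;
  Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
  options->eta = 0.01;
  options->num_threads = 1;
  options->linear_solver_type = DENSE_SCHUR;
  options->dense_linear_algebra_library_type = CUDA;
  options->sparse_linear_algebra_library_type = NO_SPARSE;
  options->preconditioner_type = IDENTITY;
  if (kAutomaticOrdering) {
    options->linear_solver_ordering = nullptr;
  }
  Problem* problem = bundle_adjustment_problem.mutable_problem();
  RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}

}  // namespace ceres::internal

#endif  // CERES_NO_CUDA
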
diff --git a/internal/ceres/generate_template_specializations.py b/internal/ceres/generate_template_specializations.py
index 74e46c2..12cf0b0 100644
--- a/internal/ceres/generate_template_specializations.py
+++ b/internal/ceres/generate_template_specializations.py
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -85,9 +85,9 @@
return str(size)
def SpecializationFilename(prefix, row_block_size, e_block_size, f_block_size):
- return "_".join([prefix] + map(SuffixForSize, (row_block_size,
+ return "_".join([prefix] + list(map(SuffixForSize, (row_block_size,
e_block_size,
- f_block_size)))
+ f_block_size))))
def GenerateFactoryConditional(row_block_size, e_block_size, f_block_size):
conditionals = []
@@ -144,7 +144,7 @@
f.write(data["FACTORY_FOOTER"])
QUERY_HEADER = """// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_2_2.cc b/internal/ceres/generated/partitioned_matrix_view_2_2_2.cc
index f5753be..c37dbf0 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_2_2.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_2_2.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 2, 2>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_2_3.cc b/internal/ceres/generated/partitioned_matrix_view_2_2_3.cc
index a7a9b52..d856df6 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_2_3.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_2_3.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 2, 3>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_2_4.cc b/internal/ceres/generated/partitioned_matrix_view_2_2_4.cc
index faf6c4a..a62a436 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_2_4.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_2_4.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 2, 4>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_2_d.cc b/internal/ceres/generated/partitioned_matrix_view_2_2_d.cc
index 92fd4cd..f8b7089 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_2_d.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_2_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 2, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_3_3.cc b/internal/ceres/generated/partitioned_matrix_view_2_3_3.cc
index 2df314f..cd5bb91 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_3_3.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_3_3.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 3, 3>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_3_4.cc b/internal/ceres/generated/partitioned_matrix_view_2_3_4.cc
index ff1ca3e..51af0f7 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_3_4.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_3_4.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 3, 4>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_3_6.cc b/internal/ceres/generated/partitioned_matrix_view_2_3_6.cc
index 5041df9..39b920a 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_3_6.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_3_6.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 3, 6>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_3_9.cc b/internal/ceres/generated/partitioned_matrix_view_2_3_9.cc
index c0b72fe..3f211b9 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_3_9.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_3_9.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 3, 9>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_3_d.cc b/internal/ceres/generated/partitioned_matrix_view_2_3_d.cc
index 8a3c162..a33d2e3 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_3_d.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_3_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 3, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_4_3.cc b/internal/ceres/generated/partitioned_matrix_view_2_4_3.cc
index 0e69ca6..14b91b3 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_4_3.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_4_3.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 4, 3>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_4_4.cc b/internal/ceres/generated/partitioned_matrix_view_2_4_4.cc
index ba9bb61..be1c234 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_4_4.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_4_4.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 4, 4>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_4_6.cc b/internal/ceres/generated/partitioned_matrix_view_2_4_6.cc
index 1acdb9b..b4ad615 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_4_6.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_4_6.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 4, 6>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_4_8.cc b/internal/ceres/generated/partitioned_matrix_view_2_4_8.cc
index 888ff99..b505f56 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_4_8.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_4_8.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 4, 8>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_4_9.cc b/internal/ceres/generated/partitioned_matrix_view_2_4_9.cc
index bd4dde3..f2f1469 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_4_9.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_4_9.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 4, 9>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_4_d.cc b/internal/ceres/generated/partitioned_matrix_view_2_4_d.cc
index 6d3516f..a0e250c 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_4_d.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_4_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, 4, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_2_d_d.cc b/internal/ceres/generated/partitioned_matrix_view_2_d_d.cc
index 77d22ed..6878963 100644
--- a/internal/ceres/generated/partitioned_matrix_view_2_d_d.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_2_d_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<2, Eigen::Dynamic, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_3_3_3.cc b/internal/ceres/generated/partitioned_matrix_view_3_3_3.cc
index aeb456c..2e6b81a 100644
--- a/internal/ceres/generated/partitioned_matrix_view_3_3_3.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_3_3_3.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<3, 3, 3>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_4_4_2.cc b/internal/ceres/generated/partitioned_matrix_view_4_4_2.cc
index bb240b9..8b09f75 100644
--- a/internal/ceres/generated/partitioned_matrix_view_4_4_2.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_4_4_2.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<4, 4, 2>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_4_4_3.cc b/internal/ceres/generated/partitioned_matrix_view_4_4_3.cc
index 5d47543..e857daa 100644
--- a/internal/ceres/generated/partitioned_matrix_view_4_4_3.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_4_4_3.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<4, 4, 3>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_4_4_4.cc b/internal/ceres/generated/partitioned_matrix_view_4_4_4.cc
index e14f980..f51a642 100644
--- a/internal/ceres/generated/partitioned_matrix_view_4_4_4.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_4_4_4.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<4, 4, 4>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_4_4_d.cc b/internal/ceres/generated/partitioned_matrix_view_4_4_d.cc
index 9ec5056..5e27e2e 100644
--- a/internal/ceres/generated/partitioned_matrix_view_4_4_d.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_4_4_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<4, 4, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/partitioned_matrix_view_d_d_d.cc b/internal/ceres/generated/partitioned_matrix_view_d_d_d.cc
index 1e12479..6e788ad 100644
--- a/internal/ceres/generated/partitioned_matrix_view_d_d_d.cc
+++ b/internal/ceres/generated/partitioned_matrix_view_d_d_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -41,12 +41,10 @@
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<Eigen::Dynamic,
Eigen::Dynamic,
Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/generated/schur_eliminator_2_2_2.cc b/internal/ceres/generated/schur_eliminator_2_2_2.cc
index 289a809..de29abe 100644
--- a/internal/ceres/generated/schur_eliminator_2_2_2.cc
+++ b/internal/ceres/generated/schur_eliminator_2_2_2.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 2, 2>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_2_3.cc b/internal/ceres/generated/schur_eliminator_2_2_3.cc
index 20311ba..38e2402 100644
--- a/internal/ceres/generated/schur_eliminator_2_2_3.cc
+++ b/internal/ceres/generated/schur_eliminator_2_2_3.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 2, 3>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_2_4.cc b/internal/ceres/generated/schur_eliminator_2_2_4.cc
index 1f6a8ae..edf48ee 100644
--- a/internal/ceres/generated/schur_eliminator_2_2_4.cc
+++ b/internal/ceres/generated/schur_eliminator_2_2_4.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 2, 4>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_2_d.cc b/internal/ceres/generated/schur_eliminator_2_2_d.cc
index 08b18d3..48a8301 100644
--- a/internal/ceres/generated/schur_eliminator_2_2_d.cc
+++ b/internal/ceres/generated/schur_eliminator_2_2_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 2, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_3_3.cc b/internal/ceres/generated/schur_eliminator_2_3_3.cc
index 115b4c8..49a450d 100644
--- a/internal/ceres/generated/schur_eliminator_2_3_3.cc
+++ b/internal/ceres/generated/schur_eliminator_2_3_3.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 3, 3>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_3_4.cc b/internal/ceres/generated/schur_eliminator_2_3_4.cc
index c703537..730d2b1 100644
--- a/internal/ceres/generated/schur_eliminator_2_3_4.cc
+++ b/internal/ceres/generated/schur_eliminator_2_3_4.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 3, 4>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_3_6.cc b/internal/ceres/generated/schur_eliminator_2_3_6.cc
index edb9afe..84b83af 100644
--- a/internal/ceres/generated/schur_eliminator_2_3_6.cc
+++ b/internal/ceres/generated/schur_eliminator_2_3_6.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 3, 6>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_3_9.cc b/internal/ceres/generated/schur_eliminator_2_3_9.cc
index faa5c19..bfb903f 100644
--- a/internal/ceres/generated/schur_eliminator_2_3_9.cc
+++ b/internal/ceres/generated/schur_eliminator_2_3_9.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 3, 9>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_3_d.cc b/internal/ceres/generated/schur_eliminator_2_3_d.cc
index 81b6f97..041b7ac 100644
--- a/internal/ceres/generated/schur_eliminator_2_3_d.cc
+++ b/internal/ceres/generated/schur_eliminator_2_3_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 3, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_4_3.cc b/internal/ceres/generated/schur_eliminator_2_4_3.cc
index 2cb2d15..c7827d1 100644
--- a/internal/ceres/generated/schur_eliminator_2_4_3.cc
+++ b/internal/ceres/generated/schur_eliminator_2_4_3.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 4, 3>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_4_4.cc b/internal/ceres/generated/schur_eliminator_2_4_4.cc
index a78eff3..9429d4c 100644
--- a/internal/ceres/generated/schur_eliminator_2_4_4.cc
+++ b/internal/ceres/generated/schur_eliminator_2_4_4.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 4, 4>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_4_6.cc b/internal/ceres/generated/schur_eliminator_2_4_6.cc
index e2534f2..ba14b08 100644
--- a/internal/ceres/generated/schur_eliminator_2_4_6.cc
+++ b/internal/ceres/generated/schur_eliminator_2_4_6.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 4, 6>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_4_8.cc b/internal/ceres/generated/schur_eliminator_2_4_8.cc
index 296a462..9210d9d 100644
--- a/internal/ceres/generated/schur_eliminator_2_4_8.cc
+++ b/internal/ceres/generated/schur_eliminator_2_4_8.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 4, 8>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_4_9.cc b/internal/ceres/generated/schur_eliminator_2_4_9.cc
index 0d0b04e..ea45d0f 100644
--- a/internal/ceres/generated/schur_eliminator_2_4_9.cc
+++ b/internal/ceres/generated/schur_eliminator_2_4_9.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 4, 9>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_4_d.cc b/internal/ceres/generated/schur_eliminator_2_4_d.cc
index 7979926..8ba7c8c 100644
--- a/internal/ceres/generated/schur_eliminator_2_4_d.cc
+++ b/internal/ceres/generated/schur_eliminator_2_4_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, 4, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_2_d_d.cc b/internal/ceres/generated/schur_eliminator_2_d_d.cc
index 189be04..1f40787 100644
--- a/internal/ceres/generated/schur_eliminator_2_d_d.cc
+++ b/internal/ceres/generated/schur_eliminator_2_d_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<2, Eigen::Dynamic, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_3_3_3.cc b/internal/ceres/generated/schur_eliminator_3_3_3.cc
index 35c14a8..909fb79 100644
--- a/internal/ceres/generated/schur_eliminator_3_3_3.cc
+++ b/internal/ceres/generated/schur_eliminator_3_3_3.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<3, 3, 3>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_4_4_2.cc b/internal/ceres/generated/schur_eliminator_4_4_2.cc
index 878500a..5ca6fca 100644
--- a/internal/ceres/generated/schur_eliminator_4_4_2.cc
+++ b/internal/ceres/generated/schur_eliminator_4_4_2.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<4, 4, 2>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_4_4_3.cc b/internal/ceres/generated/schur_eliminator_4_4_3.cc
index c4b0959..9d0862a 100644
--- a/internal/ceres/generated/schur_eliminator_4_4_3.cc
+++ b/internal/ceres/generated/schur_eliminator_4_4_3.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<4, 4, 3>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_4_4_4.cc b/internal/ceres/generated/schur_eliminator_4_4_4.cc
index 20df534..b04ab66 100644
--- a/internal/ceres/generated/schur_eliminator_4_4_4.cc
+++ b/internal/ceres/generated/schur_eliminator_4_4_4.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<4, 4, 4>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_4_4_d.cc b/internal/ceres/generated/schur_eliminator_4_4_d.cc
index 17368dc..8e75543 100644
--- a/internal/ceres/generated/schur_eliminator_4_4_d.cc
+++ b/internal/ceres/generated/schur_eliminator_4_4_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,18 +40,16 @@
// This file is generated using generate_template_specializations.py.
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<4, 4, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
diff --git a/internal/ceres/generated/schur_eliminator_d_d_d.cc b/internal/ceres/generated/schur_eliminator_d_d_d.cc
index ca598fe..49c40e8 100644
--- a/internal/ceres/generated/schur_eliminator_d_d_d.cc
+++ b/internal/ceres/generated/schur_eliminator_d_d_d.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -41,10 +41,8 @@
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
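Note that the fixed-size eliminators above are wrapped in an #ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION guard so they can be compiled out, while this fully dynamic specialization has no guard and is always built, acting as the fallback. A hedged sketch of the two post-patch shapes, reconstructed from the hunks above (headers elided):

// Fixed-size specialization: only compiled when specializations are enabled.
#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
namespace ceres::internal {
template class SchurEliminator<2, 2, 2>;
}  // namespace ceres::internal
#endif  // CERES_RESTRICT_SCHUR_SPECIALIZATION

// Dynamic fallback: compiled unconditionally, no guard.
#include "ceres/schur_eliminator_impl.h"
namespace ceres::internal {
template class SchurEliminator<Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic>;
}  // namespace ceres::internal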
diff --git a/internal/ceres/generated_bundle_adjustment_tests/CMakeLists.txt b/internal/ceres/generated_bundle_adjustment_tests/CMakeLists.txt
index db2d233..5f4f65e 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/CMakeLists.txt
+++ b/internal/ceres/generated_bundle_adjustment_tests/CMakeLists.txt
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2018 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -35,79 +35,75 @@
#
# This file is generated using generate_bundle_adjustment_tests.py.
-ceres_test(ba_denseschur_auto)
-ceres_test(ba_denseschur_auto_threads)
-ceres_test(ba_denseschur_user)
-ceres_test(ba_denseschur_user_threads)
-ceres_test(ba_iterschur_jacobi_auto)
-ceres_test(ba_iterschur_jacobi_auto_threads)
-ceres_test(ba_iterschur_jacobi_user)
-ceres_test(ba_iterschur_jacobi_user_threads)
-ceres_test(ba_iterschur_schurjacobi_auto)
-ceres_test(ba_iterschur_schurjacobi_auto_threads)
-ceres_test(ba_iterschur_schurjacobi_user)
-ceres_test(ba_iterschur_schurjacobi_user_threads)
-ceres_test(ba_iterschur_suitesparse_clustjacobi_auto)
-ceres_test(ba_iterschur_suitesparse_clustjacobi_auto_threads)
-ceres_test(ba_iterschur_suitesparse_clustjacobi_user)
-ceres_test(ba_iterschur_suitesparse_clustjacobi_user_threads)
-ceres_test(ba_iterschur_eigensparse_clustjacobi_auto)
-ceres_test(ba_iterschur_eigensparse_clustjacobi_auto_threads)
-ceres_test(ba_iterschur_eigensparse_clustjacobi_user)
-ceres_test(ba_iterschur_eigensparse_clustjacobi_user_threads)
-ceres_test(ba_iterschur_cxsparse_clustjacobi_auto)
-ceres_test(ba_iterschur_cxsparse_clustjacobi_auto_threads)
-ceres_test(ba_iterschur_cxsparse_clustjacobi_user)
-ceres_test(ba_iterschur_cxsparse_clustjacobi_user_threads)
-ceres_test(ba_iterschur_acceleratesparse_clustjacobi_auto)
-ceres_test(ba_iterschur_acceleratesparse_clustjacobi_auto_threads)
-ceres_test(ba_iterschur_acceleratesparse_clustjacobi_user)
-ceres_test(ba_iterschur_acceleratesparse_clustjacobi_user_threads)
-ceres_test(ba_iterschur_suitesparse_clusttri_auto)
-ceres_test(ba_iterschur_suitesparse_clusttri_auto_threads)
-ceres_test(ba_iterschur_suitesparse_clusttri_user)
-ceres_test(ba_iterschur_suitesparse_clusttri_user_threads)
-ceres_test(ba_iterschur_eigensparse_clusttri_auto)
-ceres_test(ba_iterschur_eigensparse_clusttri_auto_threads)
-ceres_test(ba_iterschur_eigensparse_clusttri_user)
-ceres_test(ba_iterschur_eigensparse_clusttri_user_threads)
-ceres_test(ba_iterschur_cxsparse_clusttri_auto)
-ceres_test(ba_iterschur_cxsparse_clusttri_auto_threads)
-ceres_test(ba_iterschur_cxsparse_clusttri_user)
-ceres_test(ba_iterschur_cxsparse_clusttri_user_threads)
-ceres_test(ba_iterschur_acceleratesparse_clusttri_auto)
-ceres_test(ba_iterschur_acceleratesparse_clusttri_auto_threads)
-ceres_test(ba_iterschur_acceleratesparse_clusttri_user)
-ceres_test(ba_iterschur_acceleratesparse_clusttri_user_threads)
+ceres_test(ba_denseschur_eigen_auto)
+ceres_test(ba_denseschur_lapack_auto)
+ceres_test(ba_denseschur_cuda_auto)
ceres_test(ba_sparsecholesky_suitesparse_auto)
-ceres_test(ba_sparsecholesky_suitesparse_auto_threads)
-ceres_test(ba_sparsecholesky_suitesparse_user)
-ceres_test(ba_sparsecholesky_suitesparse_user_threads)
ceres_test(ba_sparsecholesky_eigensparse_auto)
-ceres_test(ba_sparsecholesky_eigensparse_auto_threads)
-ceres_test(ba_sparsecholesky_eigensparse_user)
-ceres_test(ba_sparsecholesky_eigensparse_user_threads)
-ceres_test(ba_sparsecholesky_cxsparse_auto)
-ceres_test(ba_sparsecholesky_cxsparse_auto_threads)
-ceres_test(ba_sparsecholesky_cxsparse_user)
-ceres_test(ba_sparsecholesky_cxsparse_user_threads)
ceres_test(ba_sparsecholesky_acceleratesparse_auto)
-ceres_test(ba_sparsecholesky_acceleratesparse_auto_threads)
-ceres_test(ba_sparsecholesky_acceleratesparse_user)
-ceres_test(ba_sparsecholesky_acceleratesparse_user_threads)
ceres_test(ba_sparseschur_suitesparse_auto)
-ceres_test(ba_sparseschur_suitesparse_auto_threads)
-ceres_test(ba_sparseschur_suitesparse_user)
-ceres_test(ba_sparseschur_suitesparse_user_threads)
ceres_test(ba_sparseschur_eigensparse_auto)
-ceres_test(ba_sparseschur_eigensparse_auto_threads)
-ceres_test(ba_sparseschur_eigensparse_user)
-ceres_test(ba_sparseschur_eigensparse_user_threads)
-ceres_test(ba_sparseschur_cxsparse_auto)
-ceres_test(ba_sparseschur_cxsparse_auto_threads)
-ceres_test(ba_sparseschur_cxsparse_user)
-ceres_test(ba_sparseschur_cxsparse_user_threads)
ceres_test(ba_sparseschur_acceleratesparse_auto)
+ceres_test(ba_iterschur_jacobi_auto)
+ceres_test(ba_iterschur_schurjacobi_auto)
+ceres_test(ba_iterschur_spse_auto)
+ceres_test(ba_iterschur_suitesparse_clustjacobi_auto)
+ceres_test(ba_iterschur_eigensparse_clustjacobi_auto)
+ceres_test(ba_iterschur_acceleratesparse_clustjacobi_auto)
+ceres_test(ba_iterschur_suitesparse_clusttri_auto)
+ceres_test(ba_iterschur_eigensparse_clusttri_auto)
+ceres_test(ba_iterschur_acceleratesparse_clusttri_auto)
+ceres_test(ba_denseschur_eigen_auto_threads)
+ceres_test(ba_denseschur_lapack_auto_threads)
+ceres_test(ba_denseschur_cuda_auto_threads)
+ceres_test(ba_sparsecholesky_suitesparse_auto_threads)
+ceres_test(ba_sparsecholesky_eigensparse_auto_threads)
+ceres_test(ba_sparsecholesky_acceleratesparse_auto_threads)
+ceres_test(ba_sparseschur_suitesparse_auto_threads)
+ceres_test(ba_sparseschur_eigensparse_auto_threads)
ceres_test(ba_sparseschur_acceleratesparse_auto_threads)
+ceres_test(ba_iterschur_jacobi_auto_threads)
+ceres_test(ba_iterschur_schurjacobi_auto_threads)
+ceres_test(ba_iterschur_spse_auto_threads)
+ceres_test(ba_iterschur_suitesparse_clustjacobi_auto_threads)
+ceres_test(ba_iterschur_eigensparse_clustjacobi_auto_threads)
+ceres_test(ba_iterschur_acceleratesparse_clustjacobi_auto_threads)
+ceres_test(ba_iterschur_suitesparse_clusttri_auto_threads)
+ceres_test(ba_iterschur_eigensparse_clusttri_auto_threads)
+ceres_test(ba_iterschur_acceleratesparse_clusttri_auto_threads)
+ceres_test(ba_denseschur_eigen_user)
+ceres_test(ba_denseschur_lapack_user)
+ceres_test(ba_denseschur_cuda_user)
+ceres_test(ba_sparsecholesky_suitesparse_user)
+ceres_test(ba_sparsecholesky_eigensparse_user)
+ceres_test(ba_sparsecholesky_acceleratesparse_user)
+ceres_test(ba_sparseschur_suitesparse_user)
+ceres_test(ba_sparseschur_eigensparse_user)
ceres_test(ba_sparseschur_acceleratesparse_user)
+ceres_test(ba_iterschur_jacobi_user)
+ceres_test(ba_iterschur_schurjacobi_user)
+ceres_test(ba_iterschur_spse_user)
+ceres_test(ba_iterschur_suitesparse_clustjacobi_user)
+ceres_test(ba_iterschur_eigensparse_clustjacobi_user)
+ceres_test(ba_iterschur_acceleratesparse_clustjacobi_user)
+ceres_test(ba_iterschur_suitesparse_clusttri_user)
+ceres_test(ba_iterschur_eigensparse_clusttri_user)
+ceres_test(ba_iterschur_acceleratesparse_clusttri_user)
+ceres_test(ba_denseschur_eigen_user_threads)
+ceres_test(ba_denseschur_lapack_user_threads)
+ceres_test(ba_denseschur_cuda_user_threads)
+ceres_test(ba_sparsecholesky_suitesparse_user_threads)
+ceres_test(ba_sparsecholesky_eigensparse_user_threads)
+ceres_test(ba_sparsecholesky_acceleratesparse_user_threads)
+ceres_test(ba_sparseschur_suitesparse_user_threads)
+ceres_test(ba_sparseschur_eigensparse_user_threads)
ceres_test(ba_sparseschur_acceleratesparse_user_threads)
+ceres_test(ba_iterschur_jacobi_user_threads)
+ceres_test(ba_iterschur_schurjacobi_user_threads)
+ceres_test(ba_iterschur_spse_user_threads)
+ceres_test(ba_iterschur_suitesparse_clustjacobi_user_threads)
+ceres_test(ba_iterschur_eigensparse_clustjacobi_user_threads)
+ceres_test(ba_iterschur_acceleratesparse_clustjacobi_user_threads)
+ceres_test(ba_iterschur_suitesparse_clusttri_user_threads)
+ceres_test(ba_iterschur_eigensparse_clusttri_user_threads)
+ceres_test(ba_iterschur_acceleratesparse_clusttri_user_threads)
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_auto_test.cc
similarity index 84%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_auto_test.cc
index c0585e8..e48f646 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+#ifndef CERES_NO_CUDA
+
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_AutomaticOrdering) { // NOLINT
+ DenseSchur_Cuda_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = CUDA;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
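The renamed CUDA variant above illustrates the structure shared by all of the regenerated bundle adjustment tests: explicit includes of the test utilities, config, and gtest headers; a backend guard (here CERES_NO_CUDA, CERES_NO_LAPACK for the LAPACK variants); an explicit dense_linear_algebra_library_type; a tightened eta; and assignment of nullptr to the shared_ptr-typed linear_solver_ordering in place of the old reset() call. A sketch of the regenerated test, reconstructed from the hunks above:

#include "ceres/bundle_adjustment_test_util.h"
#include "ceres/internal/config.h"
#include "gtest/gtest.h"

#ifndef CERES_NO_CUDA

namespace ceres::internal {

TEST_F(BundleAdjustmentTest, DenseSchur_Cuda_AutomaticOrdering) {  // NOLINT
  BundleAdjustmentProblem bundle_adjustment_problem;
  Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
  options->eta = 0.01;
  options->num_threads = 1;
  options->linear_solver_type = DENSE_SCHUR;
  options->dense_linear_algebra_library_type = CUDA;
  options->sparse_linear_algebra_library_type = NO_SPARSE;
  options->preconditioner_type = IDENTITY;
  if (kAutomaticOrdering) {
    // linear_solver_ordering is a std::shared_ptr, so assigning nullptr
    // replaces the previous reset() call.
    options->linear_solver_ordering = nullptr;
  }
  Problem* problem = bundle_adjustment_problem.mutable_problem();
  RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}

}  // namespace ceres::internal

#endif  // CERES_NO_CUDA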
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_auto_threads_test.cc
similarity index 84%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_auto_threads_test.cc
index 2ece1b4..336066e 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
+#ifndef CERES_NO_CUDA
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_AutomaticOrdering_Threads) { // NOLINT
+ DenseSchur_Cuda_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = CUDA;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_user_test.cc
similarity index 84%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_user_test.cc
index 983c09e..86df64d 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+#ifndef CERES_NO_CUDA
+
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_UserOrdering) { // NOLINT
+ DenseSchur_Cuda_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = CUDA;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_user_threads_test.cc
similarity index 84%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_user_threads_test.cc
index 5b739f9..5e752e4 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_cuda_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
+#ifndef CERES_NO_CUDA
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_UserOrdering_Threads) { // NOLINT
+ DenseSchur_Cuda_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = CUDA;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
+#endif // CERES_NO_CUDA
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_auto_test.cc
similarity index 85%
rename from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc
rename to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_auto_test.cc
index c0585e8..faca44e 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_AutomaticOrdering) { // NOLINT
+ DenseSchur_Eigen_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_auto_threads_test.cc
similarity index 85%
rename from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc
rename to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_auto_threads_test.cc
index 2ece1b4..c4db182 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_AutomaticOrdering_Threads) { // NOLINT
+ DenseSchur_Eigen_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_user_test.cc
similarity index 85%
rename from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc
rename to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_user_test.cc
index 983c09e..7fe05d1 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_UserOrdering) { // NOLINT
+ DenseSchur_Eigen_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_user_threads_test.cc
similarity index 85%
rename from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc
rename to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_user_threads_test.cc
index 5b739f9..7c34e20 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_eigen_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_UserOrdering_Threads) { // NOLINT
+ DenseSchur_Eigen_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_auto_test.cc
similarity index 84%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_auto_test.cc
index c0585e8..79ec5ce 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+#ifndef CERES_NO_LAPACK
+
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_AutomaticOrdering) { // NOLINT
+ DenseSchur_Lapack_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = LAPACK;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#endif // CERES_NO_LAPACK
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_auto_threads_test.cc
similarity index 84%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_auto_threads_test.cc
index 2ece1b4..ee74420 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
+#ifndef CERES_NO_LAPACK
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_AutomaticOrdering_Threads) { // NOLINT
+ DenseSchur_Lapack_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = LAPACK;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
+#endif // CERES_NO_LAPACK
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_user_test.cc
similarity index 84%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_user_test.cc
index 983c09e..205de87 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+#ifndef CERES_NO_LAPACK
+
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_UserOrdering) { // NOLINT
+ DenseSchur_Lapack_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = LAPACK;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#endif // CERES_NO_LAPACK
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_user_threads_test.cc
similarity index 84%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_user_threads_test.cc
index 5b739f9..03a0a73 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_lapack_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
+#ifndef CERES_NO_LAPACK
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_UserOrdering_Threads) { // NOLINT
+ DenseSchur_Lapack_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = DENSE_SCHUR;
+ options->dense_linear_algebra_library_type = LAPACK;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
+#endif // CERES_NO_LAPACK
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_auto_test.cc
index f2a6661..8ab6a9b 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_AccelerateSparse_ClusterJacobi_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_auto_threads_test.cc
index 0178c67..cf6be8d 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_AccelerateSparse_ClusterJacobi_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_user_test.cc
index 6f29df5..b6ca30f 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_AccelerateSparse_ClusterJacobi_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_user_threads_test.cc
index c92b364..2ef6aa1 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clustjacobi_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_AccelerateSparse_ClusterJacobi_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_auto_test.cc
index 576a251..ea24955 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_AccelerateSparse_ClusterTridiagonal_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_auto_threads_test.cc
index 363c92a..217edc3 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_AccelerateSparse_ClusterTridiagonal_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_user_test.cc
index 7444a77..4b66d17 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_AccelerateSparse_ClusterTridiagonal_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_user_threads_test.cc
index f258e6b..3e84734 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_acceleratesparse_clusttri_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_AccelerateSparse_ClusterTridiagonal_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_auto_test.cc
deleted file mode 100644
index 9f7032b..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_auto_test.cc
+++ /dev/null
@@ -1,63 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- IterativeSchur_CxSparse_ClusterJacobi_AutomaticOrdering) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 1;
- options->linear_solver_type = ITERATIVE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = CLUSTER_JACOBI;
- if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_auto_threads_test.cc
deleted file mode 100644
index 3d807cf..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_auto_threads_test.cc
+++ /dev/null
@@ -1,65 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- IterativeSchur_CxSparse_ClusterJacobi_AutomaticOrdering_Threads) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 4;
- options->linear_solver_type = ITERATIVE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = CLUSTER_JACOBI;
- if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_user_test.cc
deleted file mode 100644
index 5883d12..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_user_test.cc
+++ /dev/null
@@ -1,63 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- IterativeSchur_CxSparse_ClusterJacobi_UserOrdering) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 1;
- options->linear_solver_type = ITERATIVE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = CLUSTER_JACOBI;
- if (kUserOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_user_threads_test.cc
deleted file mode 100644
index b98933d..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clustjacobi_user_threads_test.cc
+++ /dev/null
@@ -1,65 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- IterativeSchur_CxSparse_ClusterJacobi_UserOrdering_Threads) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 4;
- options->linear_solver_type = ITERATIVE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = CLUSTER_JACOBI;
- if (kUserOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_auto_test.cc
deleted file mode 100644
index f29e939..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_auto_test.cc
+++ /dev/null
@@ -1,63 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- IterativeSchur_CxSparse_ClusterTridiagonal_AutomaticOrdering) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 1;
- options->linear_solver_type = ITERATIVE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = CLUSTER_TRIDIAGONAL;
- if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_auto_threads_test.cc
deleted file mode 100644
index b45d65c..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_auto_threads_test.cc
+++ /dev/null
@@ -1,65 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- IterativeSchur_CxSparse_ClusterTridiagonal_AutomaticOrdering_Threads) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 4;
- options->linear_solver_type = ITERATIVE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = CLUSTER_TRIDIAGONAL;
- if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_user_test.cc
deleted file mode 100644
index 35a68e8..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_user_test.cc
+++ /dev/null
@@ -1,63 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- IterativeSchur_CxSparse_ClusterTridiagonal_UserOrdering) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 1;
- options->linear_solver_type = ITERATIVE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = CLUSTER_TRIDIAGONAL;
- if (kUserOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_user_threads_test.cc
deleted file mode 100644
index ac3dc25..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_cxsparse_clusttri_user_threads_test.cc
+++ /dev/null
@@ -1,65 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- IterativeSchur_CxSparse_ClusterTridiagonal_UserOrdering_Threads) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 4;
- options->linear_solver_type = ITERATIVE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = CLUSTER_TRIDIAGONAL;
- if (kUserOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_auto_test.cc
index 92b3021..c1ceb9f 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_EigenSparse_ClusterJacobi_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_auto_threads_test.cc
index dc72edf..35760db 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_EigenSparse_ClusterJacobi_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_user_test.cc
index 576b8be..6b35e6b 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_EigenSparse_ClusterJacobi_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_user_threads_test.cc
index 786c19a..9b36562 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clustjacobi_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_EigenSparse_ClusterJacobi_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_auto_test.cc
index cb3c958..a62c8ba 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_EigenSparse_ClusterTridiagonal_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_auto_threads_test.cc
index 3851bfc..f306f81 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_EigenSparse_ClusterTridiagonal_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_user_test.cc
index 0df51c2..62d60b1 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_EigenSparse_ClusterTridiagonal_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_user_threads_test.cc
index 33c6bb8..223a76a 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_eigensparse_clusttri_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_EigenSparse_ClusterTridiagonal_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_auto_test.cc
index 78d8d44..8e9afa1 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_Jacobi_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = JACOBI;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_auto_threads_test.cc
index 98fa68b..433bf76 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_Jacobi_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = JACOBI;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_user_test.cc
index 07fa0a0..8e740da 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_Jacobi_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = JACOBI;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_user_threads_test.cc
index 3244173..705bfe2 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_jacobi_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_Jacobi_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = JACOBI;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_auto_test.cc
index 61f2c51..6b46f3d 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SchurJacobi_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = SCHUR_JACOBI;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_auto_threads_test.cc
index 3bfb35c..a74efa8 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SchurJacobi_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = SCHUR_JACOBI;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_user_test.cc
index f84b561..d0fa244 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SchurJacobi_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = SCHUR_JACOBI;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_user_threads_test.cc
index 9206290..e5418c9 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_schurjacobi_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SchurJacobi_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
options->preconditioner_type = SCHUR_JACOBI;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_auto_test.cc
similarity index 81%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_auto_test.cc
index c0585e8..1287a61 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_AutomaticOrdering) { // NOLINT
+ IterativeSchur_SchurPowerSeriesExpansion_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
- options->linear_solver_type = DENSE_SCHUR;
+ options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
- options->preconditioner_type = IDENTITY;
+ options->preconditioner_type = SCHUR_POWER_SERIES_EXPANSION;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_auto_threads_test.cc
similarity index 81%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_auto_threads_test.cc
index 2ece1b4..739d7bf 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_AutomaticOrdering_Threads) { // NOLINT
+ IterativeSchur_SchurPowerSeriesExpansion_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
- options->linear_solver_type = DENSE_SCHUR;
+ options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
- options->preconditioner_type = IDENTITY;
+ options->preconditioner_type = SCHUR_POWER_SERIES_EXPANSION;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_user_test.cc
similarity index 81%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_user_test.cc
index 983c09e..38b8d36 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,25 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_UserOrdering) { // NOLINT
+ IterativeSchur_SchurPowerSeriesExpansion_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
- options->linear_solver_type = DENSE_SCHUR;
+ options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
- options->preconditioner_type = IDENTITY;
+ options->preconditioner_type = SCHUR_POWER_SERIES_EXPANSION;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_user_threads_test.cc
similarity index 81%
copy from internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc
copy to internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_user_threads_test.cc
index 5b739f9..2a715cc 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_denseschur_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_spse_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,27 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
- DenseSchur_UserOrdering_Threads) { // NOLINT
+ IterativeSchur_SchurPowerSeriesExpansion_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
- options->linear_solver_type = DENSE_SCHUR;
+ options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = NO_SPARSE;
- options->preconditioner_type = IDENTITY;
+ options->preconditioner_type = SCHUR_POWER_SERIES_EXPANSION;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
+} // namespace ceres::internal
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_auto_test.cc
index ec63ae1..d25a7e7 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SuiteSparse_ClusterJacobi_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_auto_threads_test.cc
index de40b81..38d65b1 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SuiteSparse_ClusterJacobi_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_user_test.cc
index 5406840..40b7451 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SuiteSparse_ClusterJacobi_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_user_threads_test.cc
index 9e8aeec..d12b524 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clustjacobi_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SuiteSparse_ClusterJacobi_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = CLUSTER_JACOBI;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_auto_test.cc
index fc80339..7206132 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SuiteSparse_ClusterTridiagonal_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_auto_threads_test.cc
index f4962ab..61efa20 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SuiteSparse_ClusterTridiagonal_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_user_test.cc
index 7f99834..b750cbb 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SuiteSparse_ClusterTridiagonal_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_user_threads_test.cc
index 041b77a..f704e32 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_iterschur_suitesparse_clusttri_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
IterativeSchur_SuiteSparse_ClusterTridiagonal_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = ITERATIVE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = CLUSTER_TRIDIAGONAL;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_auto_test.cc
index 95d6259..05caf3c 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_AccelerateSparse_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_auto_threads_test.cc
index cf525bc..bfd1d4b 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_AccelerateSparse_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_user_test.cc
index 01c86f8..0017874 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_AccelerateSparse_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_user_threads_test.cc
index b562b99..7bb7f0b 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_acceleratesparse_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_AccelerateSparse_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_auto_test.cc
deleted file mode 100644
index aa0dc2c..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_auto_test.cc
+++ /dev/null
@@ -1,63 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- SparseNormalCholesky_CxSparse_AutomaticOrdering) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 1;
- options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = IDENTITY;
- if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_auto_threads_test.cc
deleted file mode 100644
index 367c4fb..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_auto_threads_test.cc
+++ /dev/null
@@ -1,65 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- SparseNormalCholesky_CxSparse_AutomaticOrdering_Threads) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 4;
- options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = IDENTITY;
- if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_user_test.cc
deleted file mode 100644
index 523e031..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_user_test.cc
+++ /dev/null
@@ -1,63 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- SparseNormalCholesky_CxSparse_UserOrdering) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 1;
- options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = IDENTITY;
- if (kUserOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_user_threads_test.cc
deleted file mode 100644
index e1923ee..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_cxsparse_user_threads_test.cc
+++ /dev/null
@@ -1,65 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- SparseNormalCholesky_CxSparse_UserOrdering_Threads) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 4;
- options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = IDENTITY;
- if (kUserOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_auto_test.cc
index e9202e9..a15553c 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_EigenSparse_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_USE_EIGEN_SPARSE
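The same options changes recur in every regenerated test that follows: an explicit eta of 0.01, an explicit dense_linear_algebra_library_type of EIGEN, and clearing the ordering by assigning nullptr instead of calling reset() on the shared_ptr. As a standalone sketch of that configuration (illustrative only, not part of the patch; the ConfigureSolver helper below is a hypothetical name and does not exist in the repository):

    #include "ceres/solver.h"

    // Mirrors the per-test setup in the regenerated files: a sparse Cholesky or
    // Schur solver backed by Eigen's sparse library with an IDENTITY preconditioner.
    void ConfigureSolver(ceres::Solver::Options* options, bool automatic_ordering) {
      options->eta = 0.01;
      options->num_threads = 1;  // the *_threads_test variants use 4
      options->linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
      options->dense_linear_algebra_library_type = ceres::EIGEN;
      options->sparse_linear_algebra_library_type = ceres::EIGEN_SPARSE;
      options->preconditioner_type = ceres::IDENTITY;
      if (automatic_ordering) {
        // linear_solver_ordering is a shared_ptr; assigning nullptr replaces
        // the former reset() call and lets Ceres choose the ordering itself.
        options->linear_solver_ordering = nullptr;
      }
    }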
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_auto_threads_test.cc
index 769e3c8..2134295 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_EigenSparse_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_user_test.cc
index 87763c5..de00ccb 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_EigenSparse_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_user_threads_test.cc
index 38e10d9..f4ad5e0 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_eigensparse_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_EigenSparse_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_auto_test.cc
index fd9b6e7..f72ca55 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_SuiteSparse_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_auto_threads_test.cc
index 476087b..d301e5e 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_SuiteSparse_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_user_test.cc
index be64ae8..db22d6c 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_SuiteSparse_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_user_threads_test.cc
index d6a2653..8b97820 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparsecholesky_suitesparse_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseNormalCholesky_SuiteSparse_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_auto_test.cc
index 923eca4..d4b8dd2 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_AccelerateSparse_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_auto_threads_test.cc
index 8b1a613..8f7aacf 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_AccelerateSparse_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_user_test.cc
index b107c68..257cf0d 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_AccelerateSparse_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_user_threads_test.cc
index a765e8a..dea1a28 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_acceleratesparse_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_ACCELERATE_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_AccelerateSparse_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_ACCELERATE_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_auto_test.cc
deleted file mode 100644
index 5f2d3d9..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_auto_test.cc
+++ /dev/null
@@ -1,63 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- SparseSchur_CxSparse_AutomaticOrdering) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 1;
- options->linear_solver_type = SPARSE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = IDENTITY;
- if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_auto_threads_test.cc
deleted file mode 100644
index 791e8af..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_auto_threads_test.cc
+++ /dev/null
@@ -1,65 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- SparseSchur_CxSparse_AutomaticOrdering_Threads) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 4;
- options->linear_solver_type = SPARSE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = IDENTITY;
- if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_user_test.cc
deleted file mode 100644
index 260d2d7..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_user_test.cc
+++ /dev/null
@@ -1,63 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- SparseSchur_CxSparse_UserOrdering) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 1;
- options->linear_solver_type = SPARSE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = IDENTITY;
- if (kUserOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_user_threads_test.cc
deleted file mode 100644
index bf01577..0000000
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_cxsparse_user_threads_test.cc
+++ /dev/null
@@ -1,65 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// ========================================
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// THIS FILE IS AUTOGENERATED. DO NOT EDIT.
-// ========================================
-//
-// This file is generated using generate_bundle_adjustment_tests.py.
-
-#include "bundle_adjustment_test_util.h"
-
-#ifndef CERES_NO_CXSPARSE
-#ifndef CERES_NO_THREADS
-
-namespace ceres {
-namespace internal {
-
-TEST_F(BundleAdjustmentTest,
- SparseSchur_CxSparse_UserOrdering_Threads) { // NOLINT
- BundleAdjustmentProblem bundle_adjustment_problem;
- Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
- options->num_threads = 4;
- options->linear_solver_type = SPARSE_SCHUR;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- options->preconditioner_type = IDENTITY;
- if (kUserOrdering) {
- options->linear_solver_ordering.reset();
- }
- Problem* problem = bundle_adjustment_problem.mutable_problem();
- RunSolverForConfigAndExpectResidualsMatch(*options, problem);
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
-#endif // CERES_NO_CXSPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_auto_test.cc
index eeac03c..bcc2cc4 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_EigenSparse_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_auto_threads_test.cc
index 86d4ce4..e70f9ee 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_EigenSparse_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_user_test.cc
index 434b466..cb408af 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_EigenSparse_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_user_threads_test.cc
index db6e0cf..8902146 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_eigensparse_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifdef CERES_USE_EIGEN_SPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_EigenSparse_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = EIGEN_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_USE_EIGEN_SPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_auto_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_auto_test.cc
index 8dd0117..aaa38c0 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_auto_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_auto_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_SuiteSparse_AutomaticOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_auto_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_auto_threads_test.cc
index b497938..a3f4ad0 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_auto_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_auto_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_SuiteSparse_AutomaticOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kAutomaticOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_user_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_user_test.cc
index 1a38e9e..53cad2d 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_user_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_user_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,29 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_SuiteSparse_UserOrdering) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 1;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_user_threads_test.cc b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_user_threads_test.cc
index 05f28af..d23a277 100644
--- a/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_user_threads_test.cc
+++ b/internal/ceres/generated_bundle_adjustment_tests/ba_sparseschur_suitesparse_user_threads_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,31 +35,31 @@
//
// This file is generated using generate_bundle_adjustment_tests.py.
-#include "bundle_adjustment_test_util.h"
+#include "ceres/bundle_adjustment_test_util.h"
+#include "ceres/internal/config.h"
+#include "gtest/gtest.h"
#ifndef CERES_NO_SUITESPARSE
-#ifndef CERES_NO_THREADS
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST_F(BundleAdjustmentTest,
SparseSchur_SuiteSparse_UserOrdering_Threads) { // NOLINT
BundleAdjustmentProblem bundle_adjustment_problem;
Solver::Options* options = bundle_adjustment_problem.mutable_solver_options();
+ options->eta = 0.01;
options->num_threads = 4;
options->linear_solver_type = SPARSE_SCHUR;
+ options->dense_linear_algebra_library_type = EIGEN;
options->sparse_linear_algebra_library_type = SUITE_SPARSE;
options->preconditioner_type = IDENTITY;
if (kUserOrdering) {
- options->linear_solver_ordering.reset();
+ options->linear_solver_ordering = nullptr;
}
Problem* problem = bundle_adjustment_problem.mutable_problem();
RunSolverForConfigAndExpectResidualsMatch(*options, problem);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
-#endif // CERES_NO_THREADS
#endif // CERES_NO_SUITESPARSE
diff --git a/internal/ceres/gmock/gmock.h b/internal/ceres/gmock/gmock.h
index 2180319..9bb49d0 100644
--- a/internal/ceres/gmock/gmock.h
+++ b/internal/ceres/gmock/gmock.h
@@ -34,19 +34,19 @@
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_H_
-#define GMOCK_INCLUDE_GMOCK_GMOCK_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_H_
// This file implements the following syntax:
//
-// ON_CALL(mock_object.Method(...))
+// ON_CALL(mock_object, Method(...))
// .With(...) ?
// .WillByDefault(...);
//
// where With() is optional and WillByDefault() must appear exactly
// once.
//
-// EXPECT_CALL(mock_object.Method(...))
+// EXPECT_CALL(mock_object, Method(...))
// .With(...) ?
// .Times(...) ?
// .InSequence(...) *
@@ -88,12 +88,105 @@
// Google Mock - a framework for writing C++ mock classes.
//
-// This file implements some commonly used actions.
+// The ACTION* family of macros can be used in a namespace scope to
+// define custom actions easily. The syntax:
+//
+// ACTION(name) { statements; }
+//
+// will define an action with the given name that executes the
+// statements. The value returned by the statements will be used as
+// the return value of the action. Inside the statements, you can
+// refer to the K-th (0-based) argument of the mock function by
+// 'argK', and refer to its type by 'argK_type'. For example:
+//
+// ACTION(IncrementArg1) {
+// arg1_type temp = arg1;
+// return ++(*temp);
+// }
+//
+// allows you to write
+//
+// ...WillOnce(IncrementArg1());
+//
+// You can also refer to the entire argument tuple and its type by
+// 'args' and 'args_type', and refer to the mock function type and its
+// return type by 'function_type' and 'return_type'.
+//
+// Note that you don't need to specify the types of the mock function
+// arguments. However rest assured that your code is still type-safe:
+// you'll get a compiler error if *arg1 doesn't support the ++
+// operator, or if the type of ++(*arg1) isn't compatible with the
+// mock function's return type, for example.
+//
+// Sometimes you'll want to parameterize the action. For that you can use
+// another macro:
+//
+// ACTION_P(name, param_name) { statements; }
+//
+// For example:
+//
+// ACTION_P(Add, n) { return arg0 + n; }
+//
+// will allow you to write:
+//
+// ...WillOnce(Add(5));
+//
+// Note that you don't need to provide the type of the parameter
+// either. If you need to reference the type of a parameter named
+// 'foo', you can write 'foo_type'. For example, in the body of
+// ACTION_P(Add, n) above, you can write 'n_type' to refer to the type
+// of 'n'.
+//
+// We also provide ACTION_P2, ACTION_P3, ..., up to ACTION_P10 to support
+// multi-parameter actions.
+//
+// For the purpose of typing, you can view
+//
+// ACTION_Pk(Foo, p1, ..., pk) { ... }
+//
+// as shorthand for
+//
+// template <typename p1_type, ..., typename pk_type>
+// FooActionPk<p1_type, ..., pk_type> Foo(p1_type p1, ..., pk_type pk) { ... }
+//
+// In particular, you can provide the template type arguments
+// explicitly when invoking Foo(), as in Foo<long, bool>(5, false);
+// although usually you can rely on the compiler to infer the types
+// for you automatically. You can assign the result of expression
+// Foo(p1, ..., pk) to a variable of type FooActionPk<p1_type, ...,
+// pk_type>. This can be useful when composing actions.
+//
+// You can also overload actions with different numbers of parameters:
+//
+// ACTION_P(Plus, a) { ... }
+// ACTION_P2(Plus, a, b) { ... }
+//
+// While it's tempting to always use the ACTION* macros when defining
+// a new action, you should also consider implementing ActionInterface
+// or using MakePolymorphicAction() instead, especially if you need to
+// use the action a lot. While these approaches require more work,
+// they give you more control on the types of the mock function
+// arguments and the action parameters, which in general leads to
+// better compiler error messages that pay off in the long run. They
+// also allow overloading actions based on parameter types (as opposed
+// to just based on the number of parameters).
+//
+// CAVEAT:
+//
+// ACTION*() can only be used in a namespace scope as templates cannot be
+// declared inside of a local class.
+// Users can, however, define any local functors (e.g. a lambda) that
+// can be used as actions.
+//
+// MORE INFORMATION:
+//
+// To learn more about using these macros, please search for 'ACTION' on
+// https://github.com/google/googletest/blob/master/docs/gmock_cook_book.md
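+//
+// For reference, a minimal usage sketch of ACTION_P as described above, reusing
+// the Add(n) example from these comments (illustrative only; MockCalc and Bump
+// are hypothetical names, not part of this header or the patch):
+//
+//   #include "gmock/gmock.h"
+//   #include "gtest/gtest.h"
+//
+//   ACTION_P(Add, n) { return arg0 + n; }  // returns the first argument plus n
+//
+//   class MockCalc {
+//    public:
+//     MOCK_METHOD(int, Bump, (int x));
+//   };
+//
+//   TEST(ActionPExample, AddsParameterToArgument) {
+//     MockCalc calc;
+//     // The single expected call answers with the parameterized action.
+//     EXPECT_CALL(calc, Bump(testing::_)).WillOnce(Add(5));
+//     EXPECT_EQ(calc.Bump(2), 7);
+//   }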
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_ACTIONS_H_
-#define GMOCK_INCLUDE_GMOCK_GMOCK_ACTIONS_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_ACTIONS_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_ACTIONS_H_
#ifndef _WIN32_WCE
# include <errno.h>
@@ -103,6 +196,7 @@
#include <functional>
#include <memory>
#include <string>
+#include <tuple>
#include <type_traits>
#include <utility>
@@ -144,8 +238,8 @@
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_INTERNAL_UTILS_H_
-#define GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_INTERNAL_UTILS_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_INTERNAL_UTILS_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_INTERNAL_UTILS_H_
#include <stdio.h>
#include <ostream> // NOLINT
@@ -190,11 +284,12 @@
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PORT_H_
-#define GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PORT_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PORT_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PORT_H_
#include <assert.h>
#include <stdlib.h>
+#include <cstdint>
#include <iostream>
// Most of the utilities needed for porting Google Mock are also
@@ -241,10 +336,10 @@
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
-#define GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
-#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_PORT_H_
// For MS Visual C++, check the compiler version. At least VS 2015 is
// required to compile Google Mock.
@@ -260,8 +355,7 @@
// Macros for declaring flags.
# define GMOCK_DECLARE_bool_(name) extern GTEST_API_ bool GMOCK_FLAG(name)
-# define GMOCK_DECLARE_int32_(name) \
- extern GTEST_API_ ::testing::internal::Int32 GMOCK_FLAG(name)
+# define GMOCK_DECLARE_int32_(name) extern GTEST_API_ int32_t GMOCK_FLAG(name)
# define GMOCK_DECLARE_string_(name) \
extern GTEST_API_ ::std::string GMOCK_FLAG(name)
@@ -269,13 +363,13 @@
# define GMOCK_DEFINE_bool_(name, default_val, doc) \
GTEST_API_ bool GMOCK_FLAG(name) = (default_val)
# define GMOCK_DEFINE_int32_(name, default_val, doc) \
- GTEST_API_ ::testing::internal::Int32 GMOCK_FLAG(name) = (default_val)
+ GTEST_API_ int32_t GMOCK_FLAG(name) = (default_val)
# define GMOCK_DEFINE_string_(name, default_val, doc) \
GTEST_API_ ::std::string GMOCK_FLAG(name) = (default_val)
#endif // !defined(GMOCK_DECLARE_bool_)
-#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PORT_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PORT_H_
namespace testing {
@@ -302,20 +396,6 @@
// "foo_bar_123" are converted to "foo bar 123".
GTEST_API_ std::string ConvertIdentifierNameToWords(const char* id_name);
-// PointeeOf<Pointer>::type is the type of a value pointed to by a
-// Pointer, which can be either a smart pointer or a raw pointer. The
-// following default implementation is for the case where Pointer is a
-// smart pointer.
-template <typename Pointer>
-struct PointeeOf {
- // Smart pointer classes define type element_type as the type of
- // their pointees.
- typedef typename Pointer::element_type type;
-};
-// This specialization is for the raw pointer case.
-template <typename T>
-struct PointeeOf<T*> { typedef T type; }; // NOLINT
-
// GetRawPointer(p) returns the raw pointer underlying p when p is a
// smart pointer, or returns p itself when p is already a raw pointer.
// The following default implementation is for the smart pointer case.
@@ -337,22 +417,6 @@
# define GMOCK_WCHAR_T_IS_NATIVE_ 1
#endif
-// signed wchar_t and unsigned wchar_t are NOT in the C++ standard.
-// Using them is a bad practice and not portable. So DON'T use them.
-//
-// Still, Google Mock is designed to work even if the user uses signed
-// wchar_t or unsigned wchar_t (obviously, assuming the compiler
-// supports them).
-//
-// To gcc,
-// wchar_t == signed wchar_t != unsigned wchar_t == unsigned int
-#ifdef __GNUC__
-#if !defined(__WCHAR_UNSIGNED__)
-// signed/unsigned wchar_t are valid types.
-# define GMOCK_HAS_SIGNED_WCHAR_T_ 1
-#endif
-#endif
-
// In what follows, we use the term "kind" to indicate whether a type
// is bool, an integer type (excluding bool), a floating-point type,
// or none of them. This categorization is useful for determining
@@ -383,15 +447,13 @@
GMOCK_DECLARE_KIND_(unsigned int, kInteger);
GMOCK_DECLARE_KIND_(long, kInteger); // NOLINT
GMOCK_DECLARE_KIND_(unsigned long, kInteger); // NOLINT
+GMOCK_DECLARE_KIND_(long long, kInteger); // NOLINT
+GMOCK_DECLARE_KIND_(unsigned long long, kInteger); // NOLINT
#if GMOCK_WCHAR_T_IS_NATIVE_
GMOCK_DECLARE_KIND_(wchar_t, kInteger);
#endif
-// Non-standard integer types.
-GMOCK_DECLARE_KIND_(Int64, kInteger);
-GMOCK_DECLARE_KIND_(UInt64, kInteger);
-
// All standard floating-point types.
GMOCK_DECLARE_KIND_(float, kFloatingPoint);
GMOCK_DECLARE_KIND_(double, kFloatingPoint);
@@ -404,11 +466,8 @@
static_cast< ::testing::internal::TypeKind>( \
::testing::internal::KindOf<type>::value)
-// Evaluates to true iff integer type T is signed.
-#define GMOCK_IS_SIGNED_(T) (static_cast<T>(-1) < 0)
-
// LosslessArithmeticConvertibleImpl<kFromKind, From, kToKind, To>::value
-// is true iff arithmetic type From can be losslessly converted to
+// is true if and only if arithmetic type From can be losslessly converted to
// arithmetic type To.
//
// It's the user's responsibility to ensure that both From and To are
@@ -417,77 +476,42 @@
// From, and kToKind is the kind of To; the value is
// implementation-defined when the above pre-condition is violated.
template <TypeKind kFromKind, typename From, TypeKind kToKind, typename To>
-struct LosslessArithmeticConvertibleImpl : public false_type {};
+using LosslessArithmeticConvertibleImpl = std::integral_constant<
+ bool,
+ // clang-format off
+ // Converting from bool is always lossless
+ (kFromKind == kBool) ? true
+ // Converting between any other type kinds will be lossy if the type
+ // kinds are not the same.
+ : (kFromKind != kToKind) ? false
+ : (kFromKind == kInteger &&
+ // Converting between integers of different widths is allowed so long
+ // as the conversion does not go from signed to unsigned.
+ (((sizeof(From) < sizeof(To)) &&
+ !(std::is_signed<From>::value && !std::is_signed<To>::value)) ||
+ // Converting between integers of the same width only requires the
+ // two types to have the same signedness.
+ ((sizeof(From) == sizeof(To)) &&
+ (std::is_signed<From>::value == std::is_signed<To>::value)))
+ ) ? true
+ // Floating point conversions are lossless if and only if `To` is at least
+ // as wide as `From`.
+ : (kFromKind == kFloatingPoint && (sizeof(From) <= sizeof(To))) ? true
+ : false
+ // clang-format on
+ >;
-// Converting bool to bool is lossless.
-template <>
-struct LosslessArithmeticConvertibleImpl<kBool, bool, kBool, bool>
- : public true_type {}; // NOLINT
-
-// Converting bool to any integer type is lossless.
-template <typename To>
-struct LosslessArithmeticConvertibleImpl<kBool, bool, kInteger, To>
- : public true_type {}; // NOLINT
-
-// Converting bool to any floating-point type is lossless.
-template <typename To>
-struct LosslessArithmeticConvertibleImpl<kBool, bool, kFloatingPoint, To>
- : public true_type {}; // NOLINT
-
-// Converting an integer to bool is lossy.
-template <typename From>
-struct LosslessArithmeticConvertibleImpl<kInteger, From, kBool, bool>
- : public false_type {}; // NOLINT
-
-// Converting an integer to another non-bool integer is lossless iff
-// the target type's range encloses the source type's range.
-template <typename From, typename To>
-struct LosslessArithmeticConvertibleImpl<kInteger, From, kInteger, To>
- : public bool_constant<
- // When converting from a smaller size to a larger size, we are
- // fine as long as we are not converting from signed to unsigned.
- ((sizeof(From) < sizeof(To)) &&
- (!GMOCK_IS_SIGNED_(From) || GMOCK_IS_SIGNED_(To))) ||
- // When converting between the same size, the signedness must match.
- ((sizeof(From) == sizeof(To)) &&
- (GMOCK_IS_SIGNED_(From) == GMOCK_IS_SIGNED_(To)))> {}; // NOLINT
-
-#undef GMOCK_IS_SIGNED_
-
-// Converting an integer to a floating-point type may be lossy, since
-// the format of a floating-point number is implementation-defined.
-template <typename From, typename To>
-struct LosslessArithmeticConvertibleImpl<kInteger, From, kFloatingPoint, To>
- : public false_type {}; // NOLINT
-
-// Converting a floating-point to bool is lossy.
-template <typename From>
-struct LosslessArithmeticConvertibleImpl<kFloatingPoint, From, kBool, bool>
- : public false_type {}; // NOLINT
-
-// Converting a floating-point to an integer is lossy.
-template <typename From, typename To>
-struct LosslessArithmeticConvertibleImpl<kFloatingPoint, From, kInteger, To>
- : public false_type {}; // NOLINT
-
-// Converting a floating-point to another floating-point is lossless
-// iff the target type is at least as big as the source type.
-template <typename From, typename To>
-struct LosslessArithmeticConvertibleImpl<
- kFloatingPoint, From, kFloatingPoint, To>
- : public bool_constant<sizeof(From) <= sizeof(To)> {}; // NOLINT
-
-// LosslessArithmeticConvertible<From, To>::value is true iff arithmetic
-// type From can be losslessly converted to arithmetic type To.
+// LosslessArithmeticConvertible<From, To>::value is true if and only if
+// arithmetic type From can be losslessly converted to arithmetic type To.
//
// It's the user's responsibility to ensure that both From and To are
// raw (i.e. have no CV modifiers, are not pointers, and are not
// references) built-in arithmetic types; the value is
// implementation-defined when the above pre-condition is violated.
template <typename From, typename To>
-struct LosslessArithmeticConvertible
- : public LosslessArithmeticConvertibleImpl<
- GMOCK_KIND_OF_(From), From, GMOCK_KIND_OF_(To), To> {}; // NOLINT
+using LosslessArithmeticConvertible =
+ LosslessArithmeticConvertibleImpl<GMOCK_KIND_OF_(From), From,
+ GMOCK_KIND_OF_(To), To>;
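+
+// As an illustration (not part of this header), on a platform where int is
+// 32 bits and long long is 64 bits one would expect, e.g.:
+//
+//   static_assert(LosslessArithmeticConvertible<int, long long>::value, "");
+//   static_assert(!LosslessArithmeticConvertible<double, float>::value, "");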
// This interface knows how to report a Google Mock failure (either
// non-fatal or fatal).
@@ -552,11 +576,11 @@
// No logs are printed.
const char kErrorVerbosity[] = "error";
-// Returns true iff a log with the given severity is visible according
-// to the --gmock_verbose flag.
+// Returns true if and only if a log with the given severity is visible
+// according to the --gmock_verbose flag.
GTEST_API_ bool LogIsVisible(LogSeverity severity);
-// Prints the given message to stdout iff 'severity' >= the level
+// Prints the given message to stdout if and only if 'severity' >= the level
// specified by the --gmock_verbose flag. If stack_frames_to_skip >=
// 0, also prints the stack trace excluding the top
// stack_frames_to_skip frames. In opt mode, any positive
@@ -581,33 +605,6 @@
// Internal use only: access the singleton instance of WithoutMatchers.
GTEST_API_ WithoutMatchers GetWithoutMatchers();
-// Type traits.
-
-// is_reference<T>::value is non-zero iff T is a reference type.
-template <typename T> struct is_reference : public false_type {};
-template <typename T> struct is_reference<T&> : public true_type {};
-
-// type_equals<T1, T2>::value is non-zero iff T1 and T2 are the same type.
-template <typename T1, typename T2> struct type_equals : public false_type {};
-template <typename T> struct type_equals<T, T> : public true_type {};
-
-// remove_reference<T>::type removes the reference from type T, if any.
-template <typename T> struct remove_reference { typedef T type; }; // NOLINT
-template <typename T> struct remove_reference<T&> { typedef T type; }; // NOLINT
-
-// DecayArray<T>::type turns an array type U[N] to const U* and preserves
-// other types. Useful for saving a copy of a function argument.
-template <typename T> struct DecayArray { typedef T type; }; // NOLINT
-template <typename T, size_t N> struct DecayArray<T[N]> {
- typedef const T* type;
-};
-// Sometimes people use arrays whose size is not available at the use site
-// (e.g. extern const char kNamePrefix[]). This specialization covers that
-// case.
-template <typename T> struct DecayArray<T[]> {
- typedef const T* type;
-};
-
// Disable MSVC warnings for infinite recursion, since in this case the
// recursion is unreachable.
#ifdef _MSC_VER
@@ -656,9 +653,8 @@
typedef const type& const_reference;
static const_reference ConstReference(const RawContainer& container) {
- // Ensures that RawContainer is not a const type.
- testing::StaticAssertTypeEq<RawContainer,
- GTEST_REMOVE_CONST_(RawContainer)>();
+ static_assert(!std::is_const<RawContainer>::value,
+ "RawContainer type must not be const");
return container;
}
static type Copy(const RawContainer& container) { return container; }
@@ -668,7 +664,7 @@
template <typename Element, size_t N>
class StlContainerView<Element[N]> {
public:
- typedef GTEST_REMOVE_CONST_(Element) RawElement;
+ typedef typename std::remove_const<Element>::type RawElement;
typedef internal::NativeArray<RawElement> type;
// NativeArray<T> can represent a native array either by value or by
// reference (selected by a constructor argument), so 'const type'
@@ -678,8 +674,8 @@
typedef const type const_reference;
static const_reference ConstReference(const Element (&array)[N]) {
- // Ensures that Element is not a const type.
- testing::StaticAssertTypeEq<Element, RawElement>();
+ static_assert(std::is_same<Element, RawElement>::value,
+ "Element type must not be const");
return type(array, N, RelationToSourceReference());
}
static type Copy(const Element (&array)[N]) {
@@ -692,8 +688,9 @@
template <typename ElementPointer, typename Size>
class StlContainerView< ::std::tuple<ElementPointer, Size> > {
public:
- typedef GTEST_REMOVE_CONST_(
- typename internal::PointeeOf<ElementPointer>::type) RawElement;
+ typedef typename std::remove_const<
+ typename std::pointer_traits<ElementPointer>::element_type>::type
+ RawElement;
typedef internal::NativeArray<RawElement> type;
typedef const type const_reference;
@@ -725,39 +722,25 @@
typedef std::pair<K, V> type;
};
-// Mapping from booleans to types. Similar to boost::bool_<kValue> and
-// std::integral_constant<bool, kValue>.
-template <bool kValue>
-struct BooleanConstant {};
-
// Emit an assertion failure due to incorrect DoDefault() usage. Out-of-lined to
// reduce code size.
GTEST_API_ void IllegalDoDefault(const char* file, int line);
-// Helper types for Apply() below.
-template <size_t... Is> struct int_pack { typedef int_pack type; };
-
-template <class Pack, size_t I> struct append;
-template <size_t... Is, size_t I>
-struct append<int_pack<Is...>, I> : int_pack<Is..., I> {};
-
-template <size_t C>
-struct make_int_pack : append<typename make_int_pack<C - 1>::type, C - 1> {};
-template <> struct make_int_pack<0> : int_pack<> {};
-
template <typename F, typename Tuple, size_t... Idx>
-auto ApplyImpl(F&& f, Tuple&& args, int_pack<Idx...>) -> decltype(
+auto ApplyImpl(F&& f, Tuple&& args, IndexSequence<Idx...>) -> decltype(
std::forward<F>(f)(std::get<Idx>(std::forward<Tuple>(args))...)) {
return std::forward<F>(f)(std::get<Idx>(std::forward<Tuple>(args))...);
}
// Apply the function to a tuple of arguments.
template <typename F, typename Tuple>
-auto Apply(F&& f, Tuple&& args)
- -> decltype(ApplyImpl(std::forward<F>(f), std::forward<Tuple>(args),
- make_int_pack<std::tuple_size<Tuple>::value>())) {
+auto Apply(F&& f, Tuple&& args) -> decltype(
+ ApplyImpl(std::forward<F>(f), std::forward<Tuple>(args),
+ MakeIndexSequence<std::tuple_size<
+ typename std::remove_reference<Tuple>::type>::value>())) {
return ApplyImpl(std::forward<F>(f), std::forward<Tuple>(args),
- make_int_pack<std::tuple_size<Tuple>::value>());
+ MakeIndexSequence<std::tuple_size<
+ typename std::remove_reference<Tuple>::type>::value>());
}
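
// For instance (illustrative only):
//
//   auto sum = [](int a, int b) { return a + b; };
//   int six = Apply(sum, std::make_tuple(2, 4));  // six == 6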
// Template struct Function<F>, where F must be a function type, contains
@@ -781,8 +764,7 @@
using Result = R;
static constexpr size_t ArgumentCount = sizeof...(Args);
template <size_t I>
- using Arg = ElemFromList<I, typename MakeIndexSequence<sizeof...(Args)>::type,
- Args...>;
+ using Arg = ElemFromList<I, Args...>;
using ArgumentTuple = std::tuple<Args...>;
using ArgumentMatcherTuple = std::tuple<Matcher<Args>...>;
using MakeResultVoid = void(Args...);
@@ -799,7 +781,286 @@
} // namespace internal
} // namespace testing
-#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_INTERNAL_UTILS_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_INTERNAL_UTILS_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PP_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PP_H_
+
+// Expands and concatenates the arguments. Constructed macros reevaluate.
+#define GMOCK_PP_CAT(_1, _2) GMOCK_PP_INTERNAL_CAT(_1, _2)
+
+// Expands and stringifies the only argument.
+#define GMOCK_PP_STRINGIZE(...) GMOCK_PP_INTERNAL_STRINGIZE(__VA_ARGS__)
+
+// Returns empty. Given a variadic number of arguments.
+#define GMOCK_PP_EMPTY(...)
+
+// Returns a comma. Given a variadic number of arguments.
+#define GMOCK_PP_COMMA(...) ,
+
+// Returns the only argument.
+#define GMOCK_PP_IDENTITY(_1) _1
+
+// Evaluates to the number of arguments after expansion.
+//
+// #define PAIR x, y
+//
+// GMOCK_PP_NARG() => 1
+// GMOCK_PP_NARG(x) => 1
+// GMOCK_PP_NARG(x, y) => 2
+// GMOCK_PP_NARG(PAIR) => 2
+//
+// Requires: the number of arguments after expansion is at most 15.
+#define GMOCK_PP_NARG(...) \
+ GMOCK_PP_INTERNAL_16TH( \
+ (__VA_ARGS__, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0))
+
+// Returns 1 if the expansion of arguments has an unprotected comma. Otherwise
+// returns 0. Requires no more than 15 unprotected commas.
+#define GMOCK_PP_HAS_COMMA(...) \
+ GMOCK_PP_INTERNAL_16TH( \
+ (__VA_ARGS__, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0))
+
+// Returns the first argument.
+#define GMOCK_PP_HEAD(...) GMOCK_PP_INTERNAL_HEAD((__VA_ARGS__, unusedArg))
+
+// Returns the tail. A variadic list of all arguments minus the first. Requires
+// at least one argument.
+#define GMOCK_PP_TAIL(...) GMOCK_PP_INTERNAL_TAIL((__VA_ARGS__))
+
+// Calls CAT(_Macro, NARG(__VA_ARGS__))(__VA_ARGS__)
+#define GMOCK_PP_VARIADIC_CALL(_Macro, ...) \
+ GMOCK_PP_IDENTITY( \
+ GMOCK_PP_CAT(_Macro, GMOCK_PP_NARG(__VA_ARGS__))(__VA_ARGS__))
+
+// If the arguments after expansion have no tokens, evaluates to `1`. Otherwise
+// evaluates to `0`.
+//
+// Requires: * the number of arguments after expansion is at most 15.
+// * If the argument is a macro, it must be able to be called with one
+// argument.
+//
+// Implementation details:
+//
+// There is one case when it generates a compile error: if the argument is macro
+// that cannot be called with one argument.
+//
+// #define M(a, b) // it doesn't matter what it expands to
+//
+// // Expected: expands to `0`.
+// // Actual: compile error.
+// GMOCK_PP_IS_EMPTY(M)
+//
+// There are 4 cases tested:
+//
+// * __VA_ARGS__ possible expansion has no unparen'd commas. Expected 0.
+// * __VA_ARGS__ possible expansion is not enclosed in parenthesis. Expected 0.
+// * __VA_ARGS__ possible expansion is not a macro that ()-evaluates to a comma.
+// Expected 0
+// * __VA_ARGS__ is empty, or has unparen'd commas, or is enclosed in
+// parenthesis, or is a macro that ()-evaluates to comma. Expected 1.
+//
+// We trigger detection on '0001', i.e. on empty.
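+//
+// For example (illustrative):
+//
+//   GMOCK_PP_IS_EMPTY()     => 1
+//   GMOCK_PP_IS_EMPTY(x)    => 0
+//   GMOCK_PP_IS_EMPTY(x, y) => 0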
+#define GMOCK_PP_IS_EMPTY(...) \
+ GMOCK_PP_INTERNAL_IS_EMPTY(GMOCK_PP_HAS_COMMA(__VA_ARGS__), \
+ GMOCK_PP_HAS_COMMA(GMOCK_PP_COMMA __VA_ARGS__), \
+ GMOCK_PP_HAS_COMMA(__VA_ARGS__()), \
+ GMOCK_PP_HAS_COMMA(GMOCK_PP_COMMA __VA_ARGS__()))
+
+// Evaluates to _Then if _Cond is 1 and _Else if _Cond is 0.
+#define GMOCK_PP_IF(_Cond, _Then, _Else) \
+ GMOCK_PP_CAT(GMOCK_PP_INTERNAL_IF_, _Cond)(_Then, _Else)
+
+// Similar to GMOCK_PP_IF but takes _Then and _Else in parentheses.
+//
+// GMOCK_PP_GENERIC_IF(1, (a, b, c), (d, e, f)) => a, b, c
+// GMOCK_PP_GENERIC_IF(0, (a, b, c), (d, e, f)) => d, e, f
+//
+#define GMOCK_PP_GENERIC_IF(_Cond, _Then, _Else) \
+ GMOCK_PP_REMOVE_PARENS(GMOCK_PP_IF(_Cond, _Then, _Else))
+
+// Evaluates to the number of arguments after expansion. Identifies 'empty' as
+// 0.
+//
+// #define PAIR x, y
+//
+// GMOCK_PP_NARG0() => 0
+// GMOCK_PP_NARG0(x) => 1
+// GMOCK_PP_NARG0(x, y) => 2
+// GMOCK_PP_NARG0(PAIR) => 2
+//
+// Requires: * the number of arguments after expansion is at most 15.
+// * If the argument is a macro, it must be able to be called with one
+// argument.
+#define GMOCK_PP_NARG0(...) \
+ GMOCK_PP_IF(GMOCK_PP_IS_EMPTY(__VA_ARGS__), 0, GMOCK_PP_NARG(__VA_ARGS__))
+
+// Expands to 1 if the first argument starts with something in parentheses,
+// otherwise to 0.
+#define GMOCK_PP_IS_BEGIN_PARENS(...) \
+ GMOCK_PP_HEAD(GMOCK_PP_CAT(GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_R_, \
+ GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_C __VA_ARGS__))
+
+// Expands to 1 if there is only one argument and it is enclosed in parentheses.
+#define GMOCK_PP_IS_ENCLOSED_PARENS(...) \
+ GMOCK_PP_IF(GMOCK_PP_IS_BEGIN_PARENS(__VA_ARGS__), \
+ GMOCK_PP_IS_EMPTY(GMOCK_PP_EMPTY __VA_ARGS__), 0)
+
+// Remove the parens, requires GMOCK_PP_IS_ENCLOSED_PARENS(args) => 1.
+#define GMOCK_PP_REMOVE_PARENS(...) GMOCK_PP_INTERNAL_REMOVE_PARENS __VA_ARGS__
+
+// Expands to _Macro(0, _Data, e1) _Macro(1, _Data, e2) ... _Macro(K - 1,
+// _Data, eK), where K is the number of elements in the expansion of _Tuple
+// (i.e. GMOCK_PP_NARG0 _Tuple).
+// Requires: * |_Macro| can be called with 3 arguments.
+// * |_Tuple| expansion has no more than 15 elements.
+#define GMOCK_PP_FOR_EACH(_Macro, _Data, _Tuple) \
+ GMOCK_PP_CAT(GMOCK_PP_INTERNAL_FOR_EACH_IMPL_, GMOCK_PP_NARG0 _Tuple) \
+ (0, _Macro, _Data, _Tuple)
+
+// Expands to _Macro(0, _Data, ) _Macro(1, _Data, ) ... _Macro(_N - 1, _Data, ).
+// Empty if _N is 0.
+// Requires: * |_Macro| can be called with 3 arguments.
+//           * |_N| is a literal between 0 and 15.
+#define GMOCK_PP_REPEAT(_Macro, _Data, _N) \
+ GMOCK_PP_CAT(GMOCK_PP_INTERNAL_FOR_EACH_IMPL_, _N) \
+ (0, _Macro, _Data, GMOCK_PP_INTENRAL_EMPTY_TUPLE)
+
+// Increments the argument, requires the argument to be between 0 and 15.
+#define GMOCK_PP_INC(_i) GMOCK_PP_CAT(GMOCK_PP_INTERNAL_INC_, _i)
+
+// Returns comma if _i != 0. Requires _i to be between 0 and 15.
+#define GMOCK_PP_COMMA_IF(_i) GMOCK_PP_CAT(GMOCK_PP_INTERNAL_COMMA_IF_, _i)
+
+// Internal details follow. Do not use any of these symbols outside of this
+// file or we will break your code.
+#define GMOCK_PP_INTENRAL_EMPTY_TUPLE (, , , , , , , , , , , , , , , )
+#define GMOCK_PP_INTERNAL_CAT(_1, _2) _1##_2
+#define GMOCK_PP_INTERNAL_STRINGIZE(...) #__VA_ARGS__
+#define GMOCK_PP_INTERNAL_CAT_5(_1, _2, _3, _4, _5) _1##_2##_3##_4##_5
+#define GMOCK_PP_INTERNAL_IS_EMPTY(_1, _2, _3, _4) \
+ GMOCK_PP_HAS_COMMA(GMOCK_PP_INTERNAL_CAT_5(GMOCK_PP_INTERNAL_IS_EMPTY_CASE_, \
+ _1, _2, _3, _4))
+#define GMOCK_PP_INTERNAL_IS_EMPTY_CASE_0001 ,
+#define GMOCK_PP_INTERNAL_IF_1(_Then, _Else) _Then
+#define GMOCK_PP_INTERNAL_IF_0(_Then, _Else) _Else
+
+// Because MSVC treats a token containing a comma as a single token when it is
+// passed to another macro, we need to force it to be evaluated as multiple
+// tokens. We do that by using an "IDENTITY(MACRO PARENTHESIZED_ARGS)" macro.
+// We define one per possible macro that relies on this behavior. Note "_Args"
+// must be parenthesized.
+#define GMOCK_PP_INTERNAL_INTERNAL_16TH(_1, _2, _3, _4, _5, _6, _7, _8, _9, \
+ _10, _11, _12, _13, _14, _15, _16, \
+ ...) \
+ _16
+#define GMOCK_PP_INTERNAL_16TH(_Args) \
+ GMOCK_PP_IDENTITY(GMOCK_PP_INTERNAL_INTERNAL_16TH _Args)
+#define GMOCK_PP_INTERNAL_INTERNAL_HEAD(_1, ...) _1
+#define GMOCK_PP_INTERNAL_HEAD(_Args) \
+ GMOCK_PP_IDENTITY(GMOCK_PP_INTERNAL_INTERNAL_HEAD _Args)
+#define GMOCK_PP_INTERNAL_INTERNAL_TAIL(_1, ...) __VA_ARGS__
+#define GMOCK_PP_INTERNAL_TAIL(_Args) \
+ GMOCK_PP_IDENTITY(GMOCK_PP_INTERNAL_INTERNAL_TAIL _Args)
+
+#define GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_C(...) 1 _
+#define GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_R_1 1,
+#define GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_R_GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_C \
+ 0,
+#define GMOCK_PP_INTERNAL_REMOVE_PARENS(...) __VA_ARGS__
+#define GMOCK_PP_INTERNAL_INC_0 1
+#define GMOCK_PP_INTERNAL_INC_1 2
+#define GMOCK_PP_INTERNAL_INC_2 3
+#define GMOCK_PP_INTERNAL_INC_3 4
+#define GMOCK_PP_INTERNAL_INC_4 5
+#define GMOCK_PP_INTERNAL_INC_5 6
+#define GMOCK_PP_INTERNAL_INC_6 7
+#define GMOCK_PP_INTERNAL_INC_7 8
+#define GMOCK_PP_INTERNAL_INC_8 9
+#define GMOCK_PP_INTERNAL_INC_9 10
+#define GMOCK_PP_INTERNAL_INC_10 11
+#define GMOCK_PP_INTERNAL_INC_11 12
+#define GMOCK_PP_INTERNAL_INC_12 13
+#define GMOCK_PP_INTERNAL_INC_13 14
+#define GMOCK_PP_INTERNAL_INC_14 15
+#define GMOCK_PP_INTERNAL_INC_15 16
+#define GMOCK_PP_INTERNAL_COMMA_IF_0
+#define GMOCK_PP_INTERNAL_COMMA_IF_1 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_2 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_3 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_4 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_5 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_6 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_7 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_8 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_9 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_10 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_11 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_12 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_13 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_14 ,
+#define GMOCK_PP_INTERNAL_COMMA_IF_15 ,
+#define GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, _element) \
+ _Macro(_i, _Data, _element)
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_0(_i, _Macro, _Data, _Tuple)
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_1(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple)
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_2(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_1(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_3(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_2(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_4(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_3(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_5(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_4(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_6(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_5(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_7(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_6(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_8(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_7(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_9(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_8(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_10(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_9(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_11(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_10(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_12(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_11(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_13(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_12(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_14(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_13(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_15(_i, _Macro, _Data, _Tuple) \
+ GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
+ GMOCK_PP_INTERNAL_FOR_EACH_IMPL_14(GMOCK_PP_INC(_i), _Macro, _Data, \
+ (GMOCK_PP_TAIL _Tuple))
+
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PP_H_
#ifdef _MSC_VER
# pragma warning(push)
@@ -849,7 +1110,8 @@
template <typename T>
class BuiltInDefaultValue {
public:
- // This function returns true iff type T has a built-in default value.
+ // This function returns true if and only if type T has a built-in default
+ // value.
static bool Exists() {
return ::std::is_default_constructible<T>::value;
}
@@ -889,9 +1151,6 @@
}
GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(void, ); // NOLINT
-#if GTEST_HAS_GLOBAL_STRING
-GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(::string, "");
-#endif // GTEST_HAS_GLOBAL_STRING
GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(::std::string, "");
GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(bool, false);
GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(unsigned char, '\0');
@@ -914,13 +1173,17 @@
GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(signed int, 0);
GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(unsigned long, 0UL); // NOLINT
GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(signed long, 0L); // NOLINT
-GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(UInt64, 0);
-GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(Int64, 0);
+GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(unsigned long long, 0); // NOLINT
+GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(signed long long, 0); // NOLINT
GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(float, 0);
GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_(double, 0);
#undef GMOCK_DEFINE_DEFAULT_ACTION_FOR_RETURN_TYPE_
+// Simple two-arg form of std::disjunction.
+template <typename P, typename Q>
+using disjunction = typename ::std::conditional<P::value, P, Q>::type;
+
} // namespace internal
// When an unexpected function call is encountered, Google Mock will
@@ -961,7 +1224,7 @@
producer_ = nullptr;
}
- // Returns true iff the user has set the default value for type T.
+ // Returns true if and only if the user has set the default value for type T.
static bool IsSet() { return producer_ != nullptr; }
// Returns true if T has a default return value set by the user or there
@@ -1022,7 +1285,7 @@
// Unsets the default value for type T&.
static void Clear() { address_ = nullptr; }
- // Returns true iff the user has set the default value for type T&.
+ // Returns true if and only if the user has set the default value for type T&.
static bool IsSet() { return address_ != nullptr; }
// Returns true if T has a default return value set by the user or there
@@ -1102,6 +1365,9 @@
}
};
+ template <typename G>
+ using IsCompatibleFunctor = std::is_constructible<std::function<F>, G>;
+
public:
typedef typename internal::Function<F>::Result Result;
typedef typename internal::Function<F>::ArgumentTuple ArgumentTuple;
@@ -1113,10 +1379,14 @@
// Construct an Action from a specified callable.
// This cannot take std::function directly, because then Action would not be
// directly constructible from lambda (it would require two conversions).
- template <typename G,
- typename = typename ::std::enable_if<
- ::std::is_constructible<::std::function<F>, G>::value>::type>
- Action(G&& fun) : fun_(::std::forward<G>(fun)) {} // NOLINT
+ template <
+ typename G,
+ typename = typename std::enable_if<internal::disjunction<
+ IsCompatibleFunctor<G>, std::is_constructible<std::function<Result()>,
+ G>>::value>::type>
+ Action(G&& fun) { // NOLINT
+ Init(::std::forward<G>(fun), IsCompatibleFunctor<G>());
+ }
// Constructs an Action from its implementation.
explicit Action(ActionInterface<F>* impl)
@@ -1128,7 +1398,7 @@
template <typename Func>
explicit Action(const Action<Func>& action) : fun_(action.fun_) {}
- // Returns true iff this is the DoDefault() action.
+ // Returns true if and only if this is the DoDefault() action.
bool IsDoDefault() const { return fun_ == nullptr; }
// Performs the action. Note that this method is const even though
@@ -1148,7 +1418,27 @@
template <typename G>
friend class Action;
- // fun_ is an empty function iff this is the DoDefault() action.
+ template <typename G>
+ void Init(G&& g, ::std::true_type) {
+ fun_ = ::std::forward<G>(g);
+ }
+
+ template <typename G>
+ void Init(G&& g, ::std::false_type) {
+ fun_ = IgnoreArgs<typename ::std::decay<G>::type>{::std::forward<G>(g)};
+ }
+
+ template <typename FunctionImpl>
+ struct IgnoreArgs {
+ template <typename... Args>
+ Result operator()(const Args&...) const {
+ return function_impl();
+ }
+
+ FunctionImpl function_impl;
+ };
+
+ // fun_ is an empty function if and only if this is the DoDefault() action.
::std::function<F> fun_;
};
@@ -1198,13 +1488,9 @@
private:
Impl impl_;
-
- GTEST_DISALLOW_ASSIGN_(MonomorphicImpl);
};
Impl impl_;
-
- GTEST_DISALLOW_ASSIGN_(PolymorphicAction);
};
// Creates an Action from its implementation and returns it. The
@@ -1285,7 +1571,7 @@
// in the Impl class. But both definitions must be the same.
typedef typename Function<F>::Result Result;
GTEST_COMPILE_ASSERT_(
- !is_reference<Result>::value,
+ !std::is_reference<Result>::value,
use_ReturnRef_instead_of_Return_to_return_a_reference);
static_assert(!std::is_void<Result>::value,
"Can't use Return() on an action expected to return `void`.");
@@ -1314,7 +1600,7 @@
Result Perform(const ArgumentTuple&) override { return value_; }
private:
- GTEST_COMPILE_ASSERT_(!is_reference<Result>::value,
+ GTEST_COMPILE_ASSERT_(!std::is_reference<Result>::value,
Result_cannot_be_a_reference_type);
// We save the value before casting just in case it is being cast to a
// wrapper type.
@@ -1345,13 +1631,9 @@
private:
bool performed_;
const std::shared_ptr<R> wrapper_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
};
const std::shared_ptr<R> value_;
-
- GTEST_DISALLOW_ASSIGN_(ReturnAction);
};
// Implements the ReturnNull() action.
@@ -1372,7 +1654,7 @@
// Allows Return() to be used in any void-returning function.
template <typename Result, typename ArgumentTuple>
static void Perform(const ArgumentTuple&) {
- CompileAssertTypesEqual<void, Result>();
+ static_assert(std::is_void<Result>::value, "Result should be void.");
}
};
@@ -1393,7 +1675,7 @@
// Asserts that the function return type is a reference. This
// catches the user error of using ReturnRef(x) when Return(x)
// should be used, and generates some helpful error message.
- GTEST_COMPILE_ASSERT_(internal::is_reference<Result>::value,
+ GTEST_COMPILE_ASSERT_(std::is_reference<Result>::value,
use_Return_instead_of_ReturnRef_to_return_a_value);
return Action<F>(new Impl<F>(ref_));
}
@@ -1412,13 +1694,9 @@
private:
T& ref_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
};
T& ref_;
-
- GTEST_DISALLOW_ASSIGN_(ReturnRefAction);
};
// Implements the polymorphic ReturnRefOfCopy(x) action, which can be
@@ -1440,7 +1718,7 @@
// catches the user error of using ReturnRefOfCopy(x) when Return(x)
// should be used, and generates some helpful error message.
GTEST_COMPILE_ASSERT_(
- internal::is_reference<Result>::value,
+ std::is_reference<Result>::value,
use_Return_instead_of_ReturnRefOfCopy_to_return_a_value);
return Action<F>(new Impl<F>(value_));
}
@@ -1459,13 +1737,39 @@
private:
T value_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
};
const T value_;
+};
- GTEST_DISALLOW_ASSIGN_(ReturnRefOfCopyAction);
+// Implements the polymorphic ReturnRoundRobin(v) action, which can be
+// used in any function that returns the element_type of v.
+template <typename T>
+class ReturnRoundRobinAction {
+ public:
+ explicit ReturnRoundRobinAction(std::vector<T> values) {
+ GTEST_CHECK_(!values.empty())
+ << "ReturnRoundRobin requires at least one element.";
+ state_->values = std::move(values);
+ }
+
+ template <typename... Args>
+ T operator()(Args&&...) const {
+ return state_->Next();
+ }
+
+ private:
+ struct State {
+ T Next() {
+ T ret_val = values[i++];
+ if (i == values.size()) i = 0;
+ return ret_val;
+ }
+
+ std::vector<T> values;
+ size_t i = 0;
+ };
+ std::shared_ptr<State> state_ = std::make_shared<State>();
};
// Implements the polymorphic DoDefault() action.
@@ -1492,8 +1796,6 @@
private:
T1* const ptr_;
const T2 value_;
-
- GTEST_DISALLOW_ASSIGN_(AssignAction);
};
#if !GTEST_OS_WINDOWS_MOBILE
@@ -1515,56 +1817,20 @@
private:
const int errno_;
const T result_;
-
- GTEST_DISALLOW_ASSIGN_(SetErrnoAndReturnAction);
};
#endif // !GTEST_OS_WINDOWS_MOBILE
// Implements the SetArgumentPointee<N>(x) action for any function
-// whose N-th argument (0-based) is a pointer to x's type. The
-// template parameter kIsProto is true iff type A is ProtocolMessage,
-// proto2::Message, or a sub-class of those.
-template <size_t N, typename A, bool kIsProto>
-class SetArgumentPointeeAction {
- public:
- // Constructs an action that sets the variable pointed to by the
- // N-th function argument to 'value'.
- explicit SetArgumentPointeeAction(const A& value) : value_(value) {}
+// whose N-th argument (0-based) is a pointer to x's type.
+template <size_t N, typename A, typename = void>
+struct SetArgumentPointeeAction {
+ A value;
- template <typename Result, typename ArgumentTuple>
- void Perform(const ArgumentTuple& args) const {
- CompileAssertTypesEqual<void, Result>();
- *::std::get<N>(args) = value_;
+ template <typename... Args>
+ void operator()(const Args&... args) const {
+ *::std::get<N>(std::tie(args...)) = value;
}
-
- private:
- const A value_;
-
- GTEST_DISALLOW_ASSIGN_(SetArgumentPointeeAction);
-};
-
-template <size_t N, typename Proto>
-class SetArgumentPointeeAction<N, Proto, true> {
- public:
- // Constructs an action that sets the variable pointed to by the
- // N-th function argument to 'proto'. Both ProtocolMessage and
- // proto2::Message have the CopyFrom() method, so the same
- // implementation works for both.
- explicit SetArgumentPointeeAction(const Proto& proto) : proto_(new Proto) {
- proto_->CopyFrom(proto);
- }
-
- template <typename Result, typename ArgumentTuple>
- void Perform(const ArgumentTuple& args) const {
- CompileAssertTypesEqual<void, Result>();
- ::std::get<N>(args)->CopyFrom(*proto_);
- }
-
- private:
- const std::shared_ptr<Proto> proto_;
-
- GTEST_DISALLOW_ASSIGN_(SetArgumentPointeeAction);
};
// Implements the Invoke(object_ptr, &Class::Method) action.
@@ -1602,7 +1868,8 @@
Class* const obj_ptr;
const MethodPtr method_ptr;
- using ReturnType = typename std::result_of<MethodPtr(Class*)>::type;
+ using ReturnType =
+ decltype((std::declval<Class*>()->*std::declval<MethodPtr>())());
template <typename... Args>
ReturnType operator()(const Args&...) const {
@@ -1629,7 +1896,7 @@
typedef typename internal::Function<F>::Result Result;
// Asserts at compile time that F returns void.
- CompileAssertTypesEqual<void, Result>();
+ static_assert(std::is_void<Result>::value, "Result type should be void.");
return Action<F>(new Impl<F>(action_));
}
@@ -1655,13 +1922,9 @@
OriginalFunction;
const Action<OriginalFunction> action_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
};
const A action_;
-
- GTEST_DISALLOW_ASSIGN_(IgnoreResultAction);
};
template <typename InnerAction, size_t... I>
@@ -1672,7 +1935,8 @@
// We use the conversion operator to detect the signature of the inner Action.
template <typename R, typename... Args>
operator Action<R(Args...)>() const { // NOLINT
- Action<R(typename std::tuple_element<I, std::tuple<Args...>>::type...)>
+ using TupleType = std::tuple<Args...>;
+ Action<R(typename std::tuple_element<I, TupleType>::type...)>
converted(action);
return [converted](Args... args) -> R {
@@ -1685,9 +1949,13 @@
template <typename... Actions>
struct DoAllAction {
private:
- template <typename... Args, size_t... I>
- std::vector<Action<void(Args...)>> Convert(IndexSequence<I...>) const {
- return {std::get<I>(actions)...};
+ template <typename T>
+ using NonFinalType =
+ typename std::conditional<std::is_scalar<T>::value, T, const T&>::type;
+
+ template <typename ActionT, size_t... I>
+ std::vector<ActionT> Convert(IndexSequence<I...>) const {
+ return {ActionT(std::get<I>(actions))...};
}
public:
@@ -1696,21 +1964,121 @@
template <typename R, typename... Args>
operator Action<R(Args...)>() const { // NOLINT
struct Op {
- std::vector<Action<void(Args...)>> converted;
+ std::vector<Action<void(NonFinalType<Args>...)>> converted;
Action<R(Args...)> last;
R operator()(Args... args) const {
auto tuple_args = std::forward_as_tuple(std::forward<Args>(args)...);
for (auto& a : converted) {
a.Perform(tuple_args);
}
- return last.Perform(tuple_args);
+ return last.Perform(std::move(tuple_args));
}
};
- return Op{Convert<Args...>(MakeIndexSequence<sizeof...(Actions) - 1>()),
+ return Op{Convert<Action<void(NonFinalType<Args>...)>>(
+ MakeIndexSequence<sizeof...(Actions) - 1>()),
std::get<sizeof...(Actions) - 1>(actions)};
}
};
+template <typename T, typename... Params>
+struct ReturnNewAction {
+ T* operator()() const {
+ return internal::Apply(
+ [](const Params&... unpacked_params) {
+ return new T(unpacked_params...);
+ },
+ params);
+ }
+ std::tuple<Params...> params;
+};
+
+template <size_t k>
+struct ReturnArgAction {
+ template <typename... Args>
+ auto operator()(const Args&... args) const ->
+ typename std::tuple_element<k, std::tuple<Args...>>::type {
+ return std::get<k>(std::tie(args...));
+ }
+};
+
+template <size_t k, typename Ptr>
+struct SaveArgAction {
+ Ptr pointer;
+
+ template <typename... Args>
+ void operator()(const Args&... args) const {
+ *pointer = std::get<k>(std::tie(args...));
+ }
+};
+
+template <size_t k, typename Ptr>
+struct SaveArgPointeeAction {
+ Ptr pointer;
+
+ template <typename... Args>
+ void operator()(const Args&... args) const {
+ *pointer = *std::get<k>(std::tie(args...));
+ }
+};
+
+template <size_t k, typename T>
+struct SetArgRefereeAction {
+ T value;
+
+ template <typename... Args>
+ void operator()(Args&&... args) const {
+ using argk_type =
+ typename ::std::tuple_element<k, std::tuple<Args...>>::type;
+ static_assert(std::is_lvalue_reference<argk_type>::value,
+ "Argument must be a reference type.");
+ std::get<k>(std::tie(args...)) = value;
+ }
+};
+
+template <size_t k, typename I1, typename I2>
+struct SetArrayArgumentAction {
+ I1 first;
+ I2 last;
+
+ template <typename... Args>
+ void operator()(const Args&... args) const {
+ auto value = std::get<k>(std::tie(args...));
+ for (auto it = first; it != last; ++it, (void)++value) {
+ *value = *it;
+ }
+ }
+};
+
+template <size_t k>
+struct DeleteArgAction {
+ template <typename... Args>
+ void operator()(const Args&... args) const {
+ delete std::get<k>(std::tie(args...));
+ }
+};
+
+template <typename Ptr>
+struct ReturnPointeeAction {
+ Ptr pointer;
+ template <typename... Args>
+ auto operator()(const Args&...) const -> decltype(*pointer) {
+ return *pointer;
+ }
+};
+
+#if GTEST_HAS_EXCEPTIONS
+template <typename T>
+struct ThrowAction {
+ T exception;
+ // We use a conversion operator to adapt to any return type.
+ template <typename R, typename... Args>
+ operator Action<R(Args...)>() const { // NOLINT
+ T copy = exception;
+ return [copy](Args...) -> R { throw copy; };
+ }
+};
+#endif // GTEST_HAS_EXCEPTIONS
+
} // namespace internal
// An Unused object can be implicitly constructed from ANY value.
@@ -1746,7 +2114,8 @@
typedef internal::IgnoredValue Unused;
// Creates an action that does actions a1, a2, ..., sequentially in
-// each invocation.
+// each invocation. All but the last action will have a readonly view of the
+// arguments.
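+//
+// For example (an illustrative sketch; `mock` and `Process` are hypothetical):
+//
+//   int seen = 0;
+//   EXPECT_CALL(mock, Process(_))
+//       .WillOnce(DoAll(SaveArg<0>(&seen), Return(true)));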
template <typename... Action>
internal::DoAllAction<typename std::decay<Action>::type...> DoAll(
Action&&... action) {
@@ -1808,6 +2177,10 @@
return internal::ReturnRefAction<R>(x);
}
+// Prevent using ReturnRef on reference to temporary.
+template <typename R, R* = nullptr>
+internal::ReturnRefAction<R> ReturnRef(R&&) = delete;
+
// Creates an action that returns the reference to a copy of the
// argument. The copy is created when the action is constructed and
// lives as long as the action.
@@ -1825,6 +2198,23 @@
return internal::ByMoveWrapper<R>(std::move(x));
}
+// Creates an action that returns an element of `vals`. Calling this action will
+// repeatedly return the next value from `vals` until it reaches the end and
+// will restart from the beginning.
+template <typename T>
+internal::ReturnRoundRobinAction<T> ReturnRoundRobin(std::vector<T> vals) {
+ return internal::ReturnRoundRobinAction<T>(std::move(vals));
+}
+
+// Creates an action that returns an element of `vals`. Calling this action will
+// repeatedly return the next value from `vals` until it reaches the end and
+// will restart from the beginning.
+template <typename T>
+internal::ReturnRoundRobinAction<T> ReturnRoundRobin(
+ std::initializer_list<T> vals) {
+ return internal::ReturnRoundRobinAction<T>(std::vector<T>(vals));
+}
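+//
+// For example (illustrative usage; `mock` and `GetNumber` are hypothetical):
+//
+//   EXPECT_CALL(mock, GetNumber())
+//       .WillRepeatedly(ReturnRoundRobin({1, 2, 3}));  // 1, 2, 3, 1, 2, ...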
+
+// Creates an action that does the default action for the given mock function.
inline internal::DoDefaultAction DoDefault() {
return internal::DoDefaultAction();
@@ -1833,38 +2223,14 @@
// Creates an action that sets the variable pointed by the N-th
// (0-based) function argument to 'value'.
template <size_t N, typename T>
-PolymorphicAction<
- internal::SetArgumentPointeeAction<
- N, T, internal::IsAProtocolMessage<T>::value> >
-SetArgPointee(const T& x) {
- return MakePolymorphicAction(internal::SetArgumentPointeeAction<
- N, T, internal::IsAProtocolMessage<T>::value>(x));
-}
-
-template <size_t N>
-PolymorphicAction<
- internal::SetArgumentPointeeAction<N, const char*, false> >
-SetArgPointee(const char* p) {
- return MakePolymorphicAction(internal::SetArgumentPointeeAction<
- N, const char*, false>(p));
-}
-
-template <size_t N>
-PolymorphicAction<
- internal::SetArgumentPointeeAction<N, const wchar_t*, false> >
-SetArgPointee(const wchar_t* p) {
- return MakePolymorphicAction(internal::SetArgumentPointeeAction<
- N, const wchar_t*, false>(p));
+internal::SetArgumentPointeeAction<N, T> SetArgPointee(T value) {
+ return {std::move(value)};
}
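
// For example (illustrative; `mock` and `GetSize` are hypothetical):
//
//   EXPECT_CALL(mock, GetSize(_)).WillOnce(SetArgPointee<0>(42));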
// The following version is DEPRECATED.
template <size_t N, typename T>
-PolymorphicAction<
- internal::SetArgumentPointeeAction<
- N, T, internal::IsAProtocolMessage<T>::value> >
-SetArgumentPointee(const T& x) {
- return MakePolymorphicAction(internal::SetArgumentPointeeAction<
- N, T, internal::IsAProtocolMessage<T>::value>(x));
+internal::SetArgumentPointeeAction<N, T> SetArgumentPointee(T value) {
+ return {std::move(value)};
}
// Creates an action that sets a pointer referent to a given value.
@@ -1942,14 +2308,299 @@
return ::std::reference_wrapper<T>(l_value);
}
+// The ReturnNew<T>(a1, a2, ..., a_k) action returns a pointer to a new
+// instance of type T, constructed on the heap with constructor arguments
+// a1, a2, ..., and a_k. The caller assumes ownership of the returned value.
+template <typename T, typename... Params>
+internal::ReturnNewAction<T, typename std::decay<Params>::type...> ReturnNew(
+ Params&&... params) {
+ return {std::forward_as_tuple(std::forward<Params>(params)...)};
+}
+
+// Action ReturnArg<k>() returns the k-th argument of the mock function.
+template <size_t k>
+internal::ReturnArgAction<k> ReturnArg() {
+ return {};
+}
+
+// Action SaveArg<k>(pointer) saves the k-th (0-based) argument of the
+// mock function to *pointer.
+template <size_t k, typename Ptr>
+internal::SaveArgAction<k, Ptr> SaveArg(Ptr pointer) {
+ return {pointer};
+}
+
+// Action SaveArgPointee<k>(pointer) saves the value pointed to
+// by the k-th (0-based) argument of the mock function to *pointer.
+template <size_t k, typename Ptr>
+internal::SaveArgPointeeAction<k, Ptr> SaveArgPointee(Ptr pointer) {
+ return {pointer};
+}
+
+// Action SetArgReferee<k>(value) assigns 'value' to the variable
+// referenced by the k-th (0-based) argument of the mock function.
+template <size_t k, typename T>
+internal::SetArgRefereeAction<k, typename std::decay<T>::type> SetArgReferee(
+ T&& value) {
+ return {std::forward<T>(value)};
+}
+
+// Action SetArrayArgument<k>(first, last) copies the elements in
+// source range [first, last) to the array pointed to by the k-th
+// (0-based) argument, which can be either a pointer or an
+// iterator. The action does not take ownership of the elements in the
+// source range.
+template <size_t k, typename I1, typename I2>
+internal::SetArrayArgumentAction<k, I1, I2> SetArrayArgument(I1 first,
+ I2 last) {
+ return {first, last};
+}
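+//
+// For example (illustrative; `mock` and `Read` are hypothetical):
+//
+//   const char data[] = {'a', 'b', 'c'};
+//   EXPECT_CALL(mock, Read(_))
+//       .WillOnce(SetArrayArgument<0>(data, data + 3));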
+
+// Action DeleteArg<k>() deletes the k-th (0-based) argument of the mock
+// function.
+template <size_t k>
+internal::DeleteArgAction<k> DeleteArg() {
+ return {};
+}
+
+// This action returns the value pointed to by 'pointer'.
+template <typename Ptr>
+internal::ReturnPointeeAction<Ptr> ReturnPointee(Ptr pointer) {
+ return {pointer};
+}
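+//
+// For example (illustrative; `mock` and `GetCount` are hypothetical):
+//
+//   int counter = 0;
+//   EXPECT_CALL(mock, GetCount()).WillRepeatedly(ReturnPointee(&counter));
+//
+// Each call returns the value of `counter` at the time of the call.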
+
+// Action Throw(exception) can be used in a mock function of any type
+// to throw the given exception. Any copyable value can be thrown.
+#if GTEST_HAS_EXCEPTIONS
+template <typename T>
+internal::ThrowAction<typename std::decay<T>::type> Throw(T&& exception) {
+ return {std::forward<T>(exception)};
+}
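+//
+// For example (illustrative; `mock` and `Connect` are hypothetical):
+//
+//   EXPECT_CALL(mock, Connect())
+//       .WillOnce(Throw(std::runtime_error("connection failed")));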
+#endif // GTEST_HAS_EXCEPTIONS
+
+namespace internal {
+
+// A macro from the ACTION* family (defined later in gmock-generated-actions.h)
+// defines an action that can be used in a mock function. Typically,
+// these actions only care about a subset of the arguments of the mock
+// function. For example, if such an action only uses the second
+// argument, it can be used in any mock function that takes >= 2
+// arguments where the type of the second argument is compatible.
+//
+// Therefore, the action implementation must be prepared to take more
+// arguments than it needs. The ExcessiveArg type is used to
+// represent those excessive arguments. In order to keep the compiler
+// error messages tractable, we define it in the testing namespace
+// instead of testing::internal. However, this is an INTERNAL TYPE
+// and subject to change without notice, so a user MUST NOT USE THIS
+// TYPE DIRECTLY.
+struct ExcessiveArg {};
+
+// Builds an implementation of an Action<> for some particular signature, using
+// a class defined by an ACTION* macro.
+template <typename F, typename Impl> struct ActionImpl;
+
+template <typename Impl>
+struct ImplBase {
+ struct Holder {
+ // Allows each copy of the Action<> to get to the Impl.
+ explicit operator const Impl&() const { return *ptr; }
+ std::shared_ptr<Impl> ptr;
+ };
+ using type = typename std::conditional<std::is_constructible<Impl>::value,
+ Impl, Holder>::type;
+};
+
+template <typename R, typename... Args, typename Impl>
+struct ActionImpl<R(Args...), Impl> : ImplBase<Impl>::type {
+ using Base = typename ImplBase<Impl>::type;
+ using function_type = R(Args...);
+ using args_type = std::tuple<Args...>;
+
+ ActionImpl() = default; // Only defined if appropriate for Base.
+ explicit ActionImpl(std::shared_ptr<Impl> impl) : Base{std::move(impl)} { }
+
+ R operator()(Args&&... arg) const {
+ static constexpr size_t kMaxArgs =
+ sizeof...(Args) <= 10 ? sizeof...(Args) : 10;
+ return Apply(MakeIndexSequence<kMaxArgs>{},
+ MakeIndexSequence<10 - kMaxArgs>{},
+ args_type{std::forward<Args>(arg)...});
+ }
+
+ template <std::size_t... arg_id, std::size_t... excess_id>
+ R Apply(IndexSequence<arg_id...>, IndexSequence<excess_id...>,
+ const args_type& args) const {
+ // Impl need not be specific to the signature of action being implemented;
+ // only the implementing function body needs to have all of the specific
+ // types instantiated. Up to 10 of the args that are provided by the
+ // args_type get passed, followed by a dummy of unspecified type for the
+ // remainder up to 10 explicit args.
+ static constexpr ExcessiveArg kExcessArg{};
+ return static_cast<const Impl&>(*this).template gmock_PerformImpl<
+ /*function_type=*/function_type, /*return_type=*/R,
+ /*args_type=*/args_type,
+ /*argN_type=*/typename std::tuple_element<arg_id, args_type>::type...>(
+ /*args=*/args, std::get<arg_id>(args)...,
+ ((void)excess_id, kExcessArg)...);
+ }
+};
+
+// Stores a default-constructed Impl as part of the Action<>'s
+// std::function<>. The Impl should be trivial to copy.
+template <typename F, typename Impl>
+::testing::Action<F> MakeAction() {
+ return ::testing::Action<F>(ActionImpl<F, Impl>());
+}
+
+// Stores just the one given instance of Impl.
+template <typename F, typename Impl>
+::testing::Action<F> MakeAction(std::shared_ptr<Impl> impl) {
+ return ::testing::Action<F>(ActionImpl<F, Impl>(std::move(impl)));
+}
+
+#define GMOCK_INTERNAL_ARG_UNUSED(i, data, el) \
+ , const arg##i##_type& arg##i GTEST_ATTRIBUTE_UNUSED_
+#define GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_ \
+ const args_type& args GTEST_ATTRIBUTE_UNUSED_ GMOCK_PP_REPEAT( \
+ GMOCK_INTERNAL_ARG_UNUSED, , 10)
+
+#define GMOCK_INTERNAL_ARG(i, data, el) , const arg##i##_type& arg##i
+#define GMOCK_ACTION_ARG_TYPES_AND_NAMES_ \
+ const args_type& args GMOCK_PP_REPEAT(GMOCK_INTERNAL_ARG, , 10)
+
+#define GMOCK_INTERNAL_TEMPLATE_ARG(i, data, el) , typename arg##i##_type
+#define GMOCK_ACTION_TEMPLATE_ARGS_NAMES_ \
+ GMOCK_PP_TAIL(GMOCK_PP_REPEAT(GMOCK_INTERNAL_TEMPLATE_ARG, , 10))
+
+#define GMOCK_INTERNAL_TYPENAME_PARAM(i, data, param) , typename param##_type
+#define GMOCK_ACTION_TYPENAME_PARAMS_(params) \
+ GMOCK_PP_TAIL(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_TYPENAME_PARAM, , params))
+
+#define GMOCK_INTERNAL_TYPE_PARAM(i, data, param) , param##_type
+#define GMOCK_ACTION_TYPE_PARAMS_(params) \
+ GMOCK_PP_TAIL(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_TYPE_PARAM, , params))
+
+#define GMOCK_INTERNAL_TYPE_GVALUE_PARAM(i, data, param) \
+ , param##_type gmock_p##i
+#define GMOCK_ACTION_TYPE_GVALUE_PARAMS_(params) \
+ GMOCK_PP_TAIL(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_TYPE_GVALUE_PARAM, , params))
+
+#define GMOCK_INTERNAL_GVALUE_PARAM(i, data, param) \
+ , std::forward<param##_type>(gmock_p##i)
+#define GMOCK_ACTION_GVALUE_PARAMS_(params) \
+ GMOCK_PP_TAIL(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_GVALUE_PARAM, , params))
+
+#define GMOCK_INTERNAL_INIT_PARAM(i, data, param) \
+ , param(::std::forward<param##_type>(gmock_p##i))
+#define GMOCK_ACTION_INIT_PARAMS_(params) \
+ GMOCK_PP_TAIL(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_INIT_PARAM, , params))
+
+#define GMOCK_INTERNAL_FIELD_PARAM(i, data, param) param##_type param;
+#define GMOCK_ACTION_FIELD_PARAMS_(params) \
+ GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_FIELD_PARAM, , params)
+
+#define GMOCK_INTERNAL_ACTION(name, full_name, params) \
+ template <GMOCK_ACTION_TYPENAME_PARAMS_(params)> \
+ class full_name { \
+ public: \
+ explicit full_name(GMOCK_ACTION_TYPE_GVALUE_PARAMS_(params)) \
+ : impl_(std::make_shared<gmock_Impl>( \
+ GMOCK_ACTION_GVALUE_PARAMS_(params))) { } \
+ full_name(const full_name&) = default; \
+ full_name(full_name&&) noexcept = default; \
+ template <typename F> \
+ operator ::testing::Action<F>() const { \
+ return ::testing::internal::MakeAction<F>(impl_); \
+ } \
+ private: \
+ class gmock_Impl { \
+ public: \
+ explicit gmock_Impl(GMOCK_ACTION_TYPE_GVALUE_PARAMS_(params)) \
+ : GMOCK_ACTION_INIT_PARAMS_(params) {} \
+ template <typename function_type, typename return_type, \
+ typename args_type, GMOCK_ACTION_TEMPLATE_ARGS_NAMES_> \
+ return_type gmock_PerformImpl(GMOCK_ACTION_ARG_TYPES_AND_NAMES_) const; \
+ GMOCK_ACTION_FIELD_PARAMS_(params) \
+ }; \
+ std::shared_ptr<const gmock_Impl> impl_; \
+ }; \
+ template <GMOCK_ACTION_TYPENAME_PARAMS_(params)> \
+ inline full_name<GMOCK_ACTION_TYPE_PARAMS_(params)> name( \
+ GMOCK_ACTION_TYPE_GVALUE_PARAMS_(params)) { \
+ return full_name<GMOCK_ACTION_TYPE_PARAMS_(params)>( \
+ GMOCK_ACTION_GVALUE_PARAMS_(params)); \
+ } \
+ template <GMOCK_ACTION_TYPENAME_PARAMS_(params)> \
+ template <typename function_type, typename return_type, typename args_type, \
+ GMOCK_ACTION_TEMPLATE_ARGS_NAMES_> \
+ return_type full_name<GMOCK_ACTION_TYPE_PARAMS_(params)>::gmock_Impl:: \
+ gmock_PerformImpl(GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
+
+} // namespace internal
+
+// Similar to GMOCK_INTERNAL_ACTION, but no bound parameters are stored.
+#define ACTION(name) \
+ class name##Action { \
+ public: \
+ explicit name##Action() noexcept {} \
+ name##Action(const name##Action&) noexcept {} \
+ template <typename F> \
+ operator ::testing::Action<F>() const { \
+ return ::testing::internal::MakeAction<F, gmock_Impl>(); \
+ } \
+ private: \
+ class gmock_Impl { \
+ public: \
+ template <typename function_type, typename return_type, \
+ typename args_type, GMOCK_ACTION_TEMPLATE_ARGS_NAMES_> \
+ return_type gmock_PerformImpl(GMOCK_ACTION_ARG_TYPES_AND_NAMES_) const; \
+ }; \
+ }; \
+ inline name##Action name() GTEST_MUST_USE_RESULT_; \
+ inline name##Action name() { return name##Action(); } \
+ template <typename function_type, typename return_type, typename args_type, \
+ GMOCK_ACTION_TEMPLATE_ARGS_NAMES_> \
+ return_type name##Action::gmock_Impl::gmock_PerformImpl( \
+ GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
+
+#define ACTION_P(name, ...) \
+ GMOCK_INTERNAL_ACTION(name, name##ActionP, (__VA_ARGS__))
+
+#define ACTION_P2(name, ...) \
+ GMOCK_INTERNAL_ACTION(name, name##ActionP2, (__VA_ARGS__))
+
+#define ACTION_P3(name, ...) \
+ GMOCK_INTERNAL_ACTION(name, name##ActionP3, (__VA_ARGS__))
+
+#define ACTION_P4(name, ...) \
+ GMOCK_INTERNAL_ACTION(name, name##ActionP4, (__VA_ARGS__))
+
+#define ACTION_P5(name, ...) \
+ GMOCK_INTERNAL_ACTION(name, name##ActionP5, (__VA_ARGS__))
+
+#define ACTION_P6(name, ...) \
+ GMOCK_INTERNAL_ACTION(name, name##ActionP6, (__VA_ARGS__))
+
+#define ACTION_P7(name, ...) \
+ GMOCK_INTERNAL_ACTION(name, name##ActionP7, (__VA_ARGS__))
+
+#define ACTION_P8(name, ...) \
+ GMOCK_INTERNAL_ACTION(name, name##ActionP8, (__VA_ARGS__))
+
+#define ACTION_P9(name, ...) \
+ GMOCK_INTERNAL_ACTION(name, name##ActionP9, (__VA_ARGS__))
+
+#define ACTION_P10(name, ...) \
+ GMOCK_INTERNAL_ACTION(name, name##ActionP10, (__VA_ARGS__))
+
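+// A minimal usage sketch for the ACTION* macros above (illustrative only, not
+// part of this header; it assumes a mock class with a method `int Compute(int)`):
+//
+//   ACTION(ReturnZero) { return 0; }
+//   ACTION_P(AddTo, base) { return base + arg0; }
+//
+//   EXPECT_CALL(mock, Compute(_)).WillOnce(AddTo(10));
+//
+// Inside an ACTION body, arg0, arg1, ... name the mock function's arguments,
+// and each bound parameter (here `base`) is stored in the generated action.
+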
} // namespace testing
#ifdef _MSC_VER
# pragma warning(pop)
#endif
-
-#endif // GMOCK_INCLUDE_GMOCK_GMOCK_ACTIONS_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_ACTIONS_H_
// Copyright 2007, Google Inc.
// All rights reserved.
//
@@ -1988,8 +2639,8 @@
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_CARDINALITIES_H_
-#define GMOCK_INCLUDE_GMOCK_GMOCK_CARDINALITIES_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_CARDINALITIES_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_CARDINALITIES_H_
#include <limits.h>
#include <memory>
@@ -2020,10 +2671,12 @@
virtual int ConservativeLowerBound() const { return 0; }
virtual int ConservativeUpperBound() const { return INT_MAX; }
- // Returns true iff call_count calls will satisfy this cardinality.
+ // Returns true if and only if call_count calls will satisfy this
+ // cardinality.
virtual bool IsSatisfiedByCallCount(int call_count) const = 0;
- // Returns true iff call_count calls will saturate this cardinality.
+ // Returns true if and only if call_count calls will saturate this
+ // cardinality.
virtual bool IsSaturatedByCallCount(int call_count) const = 0;
// Describes self to an ostream.
@@ -2048,17 +2701,19 @@
int ConservativeLowerBound() const { return impl_->ConservativeLowerBound(); }
int ConservativeUpperBound() const { return impl_->ConservativeUpperBound(); }
- // Returns true iff call_count calls will satisfy this cardinality.
+ // Returns true if and only if call_count calls will satisfy this
+ // cardinality.
bool IsSatisfiedByCallCount(int call_count) const {
return impl_->IsSatisfiedByCallCount(call_count);
}
- // Returns true iff call_count calls will saturate this cardinality.
+ // Returns true if and only if call_count calls will saturate this
+ // cardinality.
bool IsSaturatedByCallCount(int call_count) const {
return impl_->IsSaturatedByCallCount(call_count);
}
- // Returns true iff call_count calls will over-saturate this
+ // Returns true if and only if call_count calls will over-saturate this
// cardinality, i.e. exceed the maximum number of allowed calls.
bool IsOverSaturatedByCallCount(int call_count) const {
return impl_->IsSaturatedByCallCount(call_count) &&
@@ -2100,14 +2755,7 @@
GTEST_DISABLE_MSC_WARNINGS_POP_() // 4251
-#endif // GMOCK_INCLUDE_GMOCK_GMOCK_CARDINALITIES_H_
-#ifndef THIRD_PARTY_GOOGLETEST_GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_FUNCTION_MOCKER_H_ // NOLINT
-#define THIRD_PARTY_GOOGLETEST_GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_FUNCTION_MOCKER_H_ // NOLINT
-
-// This file was GENERATED by command:
-// pump.py gmock-generated-function-mockers.h.pump
-// DO NOT EDIT BY HAND!!!
-
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_CARDINALITIES_H_
// Copyright 2007, Google Inc.
// All rights reserved.
//
@@ -2137,18 +2785,17 @@
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
// Google Mock - a framework for writing C++ mock classes.
//
-// This file implements function mockers of various arities.
+// This file implements MOCK_METHOD.
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_FUNCTION_MOCKERS_H_
-#define GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_FUNCTION_MOCKERS_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_FUNCTION_MOCKER_H_ // NOLINT
+#define GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_FUNCTION_MOCKER_H_ // NOLINT
-#include <functional>
-#include <utility>
+#include <type_traits> // IWYU pragma: keep
+#include <utility> // IWYU pragma: keep
// Copyright 2007, Google Inc.
// All rights reserved.
@@ -2210,14 +2857,16 @@
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_SPEC_BUILDERS_H_
-#define GMOCK_INCLUDE_GMOCK_GMOCK_SPEC_BUILDERS_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_SPEC_BUILDERS_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_SPEC_BUILDERS_H_
+#include <functional>
#include <map>
#include <memory>
#include <set>
#include <sstream>
#include <string>
+#include <type_traits>
#include <utility>
#include <vector>
// Copyright 2007, Google Inc.
@@ -2252,7 +2901,220 @@
// Google Mock - a framework for writing C++ mock classes.
//
-// This file implements some commonly used argument matchers. More
+// The MATCHER* family of macros can be used in a namespace scope to
+// define custom matchers easily.
+//
+// Basic Usage
+// ===========
+//
+// The syntax
+//
+// MATCHER(name, description_string) { statements; }
+//
+// defines a matcher with the given name that executes the statements,
+// which must return a bool to indicate if the match succeeds. Inside
+// the statements, you can refer to the value being matched by 'arg',
+// and refer to its type by 'arg_type'.
+//
+// The description string documents what the matcher does, and is used
+// to generate the failure message when the match fails. Since a
+// MATCHER() is usually defined in a header file shared by multiple
+// C++ source files, we require the description to be a C-string
+// literal to avoid possible side effects. It can be empty, in which
+// case we'll use the sequence of words in the matcher name as the
+// description.
+//
+// For example:
+//
+// MATCHER(IsEven, "") { return (arg % 2) == 0; }
+//
+// allows you to write
+//
+// // Expects mock_foo.Bar(n) to be called where n is even.
+// EXPECT_CALL(mock_foo, Bar(IsEven()));
+//
+// or,
+//
+// // Verifies that the value of some_expression is even.
+// EXPECT_THAT(some_expression, IsEven());
+//
+// If the above assertion fails, it will print something like:
+//
+// Value of: some_expression
+// Expected: is even
+// Actual: 7
+//
+// where the description "is even" is automatically calculated from the
+// matcher name IsEven.
+//
+// Argument Type
+// =============
+//
+// Note that the type of the value being matched (arg_type) is
+// determined by the context in which you use the matcher and is
+// supplied to you by the compiler, so you don't need to worry about
+// declaring it (nor can you). This allows the matcher to be
+// polymorphic. For example, IsEven() can be used to match any type
+// where the value of "(arg % 2) == 0" can be implicitly converted to
+// a bool. In the "Bar(IsEven())" example above, if method Bar()
+// takes an int, 'arg_type' will be int; if it takes an unsigned long,
+// 'arg_type' will be unsigned long; and so on.
+//
+// Parameterizing Matchers
+// =======================
+//
+// Sometimes you'll want to parameterize the matcher. For that you
+// can use another macro:
+//
+// MATCHER_P(name, param_name, description_string) { statements; }
+//
+// For example:
+//
+// MATCHER_P(HasAbsoluteValue, value, "") { return abs(arg) == value; }
+//
+// will allow you to write:
+//
+// EXPECT_THAT(Blah("a"), HasAbsoluteValue(n));
+//
+// which may lead to this message (assuming n is 10):
+//
+// Value of: Blah("a")
+// Expected: has absolute value 10
+// Actual: -9
+//
+// Note that both the matcher description and its parameter are
+// printed, making the message human-friendly.
+//
+// In the matcher definition body, you can write 'foo_type' to
+// reference the type of a parameter named 'foo'. For example, in the
+// body of MATCHER_P(HasAbsoluteValue, value) above, you can write
+// 'value_type' to refer to the type of 'value'.
+//
+// We also provide MATCHER_P2, MATCHER_P3, ..., up to MATCHER_P10 to
+// support multi-parameter matchers.
+//
+// Describing Parameterized Matchers
+// =================================
+//
+// The last argument to MATCHER*() is a string-typed expression. The
+// expression can reference all of the matcher's parameters and a
+// special bool-typed variable named 'negation'. When 'negation' is
+// false, the expression should evaluate to the matcher's description;
+// otherwise it should evaluate to the description of the negation of
+// the matcher. For example,
+//
+// using testing::PrintToString;
+//
+// MATCHER_P2(InClosedRange, low, hi,
+// std::string(negation ? "is not" : "is") + " in range [" +
+// PrintToString(low) + ", " + PrintToString(hi) + "]") {
+// return low <= arg && arg <= hi;
+// }
+// ...
+// EXPECT_THAT(3, InClosedRange(4, 6));
+// EXPECT_THAT(3, Not(InClosedRange(2, 4)));
+//
+// would generate two failures that contain the text:
+//
+// Expected: is in range [4, 6]
+// ...
+// Expected: is not in range [2, 4]
+//
+// If you specify "" as the description, the failure message will
+// contain the sequence of words in the matcher name followed by the
+// parameter values printed as a tuple. For example,
+//
+// MATCHER_P2(InClosedRange, low, hi, "") { ... }
+// ...
+// EXPECT_THAT(3, InClosedRange(4, 6));
+// EXPECT_THAT(3, Not(InClosedRange(2, 4)));
+//
+// would generate two failures that contain the text:
+//
+// Expected: in closed range (4, 6)
+// ...
+// Expected: not (in closed range (2, 4))
+//
+// Types of Matcher Parameters
+// ===========================
+//
+// For the purpose of typing, you can view
+//
+// MATCHER_Pk(Foo, p1, ..., pk, description_string) { ... }
+//
+// as shorthand for
+//
+// template <typename p1_type, ..., typename pk_type>
+// FooMatcherPk<p1_type, ..., pk_type>
+// Foo(p1_type p1, ..., pk_type pk) { ... }
+//
+// When you write Foo(v1, ..., vk), the compiler infers the types of
+// the parameters v1, ..., and vk for you. If you are not happy with
+// the result of the type inference, you can specify the types by
+// explicitly instantiating the template, as in Foo<long, bool>(5,
+// false). As said earlier, you don't get to (or need to) specify
+// 'arg_type' as that's determined by the context in which the matcher
+// is used. You can assign the result of expression Foo(p1, ..., pk)
+// to a variable of type FooMatcherPk<p1_type, ..., pk_type>. This
+// can be useful when composing matchers.
+//
+// While you can instantiate a matcher template with reference types,
+// passing the parameters by pointer usually makes your code more
+// readable. If, however, you still want to pass a parameter by
+// reference, be aware that in the failure message generated by the
+// matcher you will see the value of the referenced object but not its
+// address.
+//
+// Explaining Match Results
+// ========================
+//
+// Sometimes the matcher description alone isn't enough to explain why
+// the match has failed or succeeded. For example, when expecting a
+// long string, it can be very helpful to also print the diff between
+// the expected string and the actual one. To achieve that, you can
+// optionally stream additional information to a special variable
+// named result_listener, whose type is a pointer to class
+// MatchResultListener:
+//
+// MATCHER_P(EqualsLongString, str, "") {
+// if (arg == str) return true;
+//
+// *result_listener << "the difference: "
+/// << DiffStrings(str, arg);
+// return false;
+// }
+//
+// Overloading Matchers
+// ====================
+//
+// You can overload matchers with different numbers of parameters:
+//
+// MATCHER_P(Blah, a, description_string1) { ... }
+// MATCHER_P2(Blah, a, b, description_string2) { ... }
+//
+// Caveats
+// =======
+//
+// When defining a new matcher, you should also consider implementing
+// MatcherInterface or using MakePolymorphicMatcher(). These
+// approaches require more work than the MATCHER* macros, but also
+// give you more control over the types of the value being matched and
+// the matcher parameters, which may lead to better compiler error
+// messages when the matcher is used incorrectly. They also allow
+// overloading matchers based on parameter types (as opposed to just
+// based on the number of parameters).
+//
+// MATCHER*() can only be used in a namespace scope as templates cannot be
+// declared inside of a local class.
+//
+// More Information
+// ================
+//
+// To learn more about using these macros, please search for 'MATCHER'
+// on
+// https://github.com/google/googletest/blob/master/docs/gmock_cook_book.md
+//
+// This file also implements some commonly used argument matchers. More
// matchers can be defined by the user implementing the
// MatcherInterface<T> interface if necessary.
//
@@ -2261,11 +3123,11 @@
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_MATCHERS_H_
-#define GMOCK_INCLUDE_GMOCK_GMOCK_MATCHERS_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_MATCHERS_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_MATCHERS_H_
-#include <math.h>
#include <algorithm>
+#include <cmath>
#include <initializer_list>
#include <iterator>
#include <limits>
@@ -2277,9 +3139,17 @@
#include <utility>
#include <vector>
+
+// MSVC warning C5046 is new as of VS2017 version 15.8.
+#if defined(_MSC_VER) && _MSC_VER >= 1915
+#define GMOCK_MAYBE_5046_ 5046
+#else
+#define GMOCK_MAYBE_5046_
+#endif
+
GTEST_DISABLE_MSC_WARNINGS_PUSH_(
- 4251 5046 /* class A needs to have dll-interface to be used by clients of
- class B */
+ 4251 GMOCK_MAYBE_5046_ /* class A needs to have dll-interface to be used by
+ clients of class B */
/* Symbol involving type with internal linkage not defined */)
namespace testing {
@@ -2340,23 +3210,20 @@
// constructor from M (this usually happens when T has an implicit
// constructor from any type).
//
- // It won't work to unconditionally implict_cast
+ // It won't work to unconditionally implicit_cast
// polymorphic_matcher_or_value to Matcher<T> because it won't trigger
// a user-defined conversion from M to T if one exists (assuming M is
// a value).
- return CastImpl(
- polymorphic_matcher_or_value,
- BooleanConstant<
- std::is_convertible<M, Matcher<T> >::value>(),
- BooleanConstant<
- std::is_convertible<M, T>::value>());
+ return CastImpl(polymorphic_matcher_or_value,
+ std::is_convertible<M, Matcher<T>>{},
+ std::is_convertible<M, T>{});
}
private:
template <bool Ignore>
static Matcher<T> CastImpl(const M& polymorphic_matcher_or_value,
- BooleanConstant<true> /* convertible_to_matcher */,
- BooleanConstant<Ignore>) {
+ std::true_type /* convertible_to_matcher */,
+ std::integral_constant<bool, Ignore>) {
// M is implicitly convertible to Matcher<T>, which means that either
// M is a polymorphic matcher or Matcher<T> has an implicit constructor
// from M. In both cases using the implicit conversion will produce a
@@ -2371,9 +3238,9 @@
// M can't be implicitly converted to Matcher<T>, so M isn't a polymorphic
// matcher. It's a value of a type implicitly convertible to T. Use direct
// initialization to create a matcher.
- static Matcher<T> CastImpl(
- const M& value, BooleanConstant<false> /* convertible_to_matcher */,
- BooleanConstant<true> /* convertible_to_T */) {
+ static Matcher<T> CastImpl(const M& value,
+ std::false_type /* convertible_to_matcher */,
+ std::true_type /* convertible_to_T */) {
return Matcher<T>(ImplicitCast_<T>(value));
}
@@ -2387,9 +3254,9 @@
// (e.g. std::pair<const int, int> vs. std::pair<int, int>).
//
// We don't define this method inline as we need the declaration of Eq().
- static Matcher<T> CastImpl(
- const M& value, BooleanConstant<false> /* convertible_to_matcher */,
- BooleanConstant<false> /* convertible_to_T */);
+ static Matcher<T> CastImpl(const M& value,
+ std::false_type /* convertible_to_matcher */,
+ std::false_type /* convertible_to_T */);
};
// This more specialized version is used when MatcherCast()'s argument
@@ -2424,7 +3291,14 @@
!std::is_base_of<FromType, ToType>::value,
"Can't implicitly convert from <base> to <derived>");
- return source_matcher_.MatchAndExplain(static_cast<U>(x), listener);
+ // Do the cast to `U` explicitly if necessary.
+ // Otherwise, let implicit conversions do the trick.
+ using CastType =
+ typename std::conditional<std::is_convertible<T&, const U&>::value,
+ T&, U>::type;
+
+ return source_matcher_.MatchAndExplain(static_cast<CastType>(x),
+ listener);
}
void DescribeTo(::std::ostream* os) const override {
@@ -2437,8 +3311,6 @@
private:
const Matcher<U> source_matcher_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
};
};
@@ -2450,6 +3322,50 @@
static Matcher<T> Cast(const Matcher<T>& matcher) { return matcher; }
};
+// Template specialization for parameterless Matcher.
+template <typename Derived>
+class MatcherBaseImpl {
+ public:
+ MatcherBaseImpl() = default;
+
+ template <typename T>
+ operator ::testing::Matcher<T>() const { // NOLINT(runtime/explicit)
+ return ::testing::Matcher<T>(new
+ typename Derived::template gmock_Impl<T>());
+ }
+};
+
+// Template specialization for Matcher with parameters.
+template <template <typename...> class Derived, typename... Ts>
+class MatcherBaseImpl<Derived<Ts...>> {
+ public:
+ // Mark the constructor explicit for single argument T to avoid implicit
+ // conversions.
+ template <typename E = std::enable_if<sizeof...(Ts) == 1>,
+ typename E::type* = nullptr>
+ explicit MatcherBaseImpl(Ts... params)
+ : params_(std::forward<Ts>(params)...) {}
+ template <typename E = std::enable_if<sizeof...(Ts) != 1>,
+ typename = typename E::type>
+ MatcherBaseImpl(Ts... params) // NOLINT
+ : params_(std::forward<Ts>(params)...) {}
+
+ template <typename F>
+ operator ::testing::Matcher<F>() const { // NOLINT(runtime/explicit)
+ return Apply<F>(MakeIndexSequence<sizeof...(Ts)>{});
+ }
+
+ private:
+ template <typename F, std::size_t... tuple_ids>
+ ::testing::Matcher<F> Apply(IndexSequence<tuple_ids...>) const {
+ return ::testing::Matcher<F>(
+ new typename Derived<Ts...>::template gmock_Impl<F>(
+ std::get<tuple_ids>(params_)...));
+ }
+
+ const std::tuple<Ts...> params_;
+};
+
} // namespace internal
// In order to be safe and clear, casting between different matcher
@@ -2461,56 +3377,43 @@
return internal::MatcherCastImpl<T, M>::Cast(matcher);
}
-// Implements SafeMatcherCast().
-//
-// FIXME: The intermediate SafeMatcherCastImpl class was introduced as a
-// workaround for a compiler bug, and can now be removed.
-template <typename T>
-class SafeMatcherCastImpl {
- public:
- // This overload handles polymorphic matchers and values only since
- // monomorphic matchers are handled by the next one.
- template <typename M>
- static inline Matcher<T> Cast(const M& polymorphic_matcher_or_value) {
- return internal::MatcherCastImpl<T, M>::Cast(polymorphic_matcher_or_value);
- }
-
- // This overload handles monomorphic matchers.
- //
- // In general, if type T can be implicitly converted to type U, we can
- // safely convert a Matcher<U> to a Matcher<T> (i.e. Matcher is
- // contravariant): just keep a copy of the original Matcher<U>, convert the
- // argument from type T to U, and then pass it to the underlying Matcher<U>.
- // The only exception is when U is a reference and T is not, as the
- // underlying Matcher<U> may be interested in the argument's address, which
- // is not preserved in the conversion from T to U.
- template <typename U>
- static inline Matcher<T> Cast(const Matcher<U>& matcher) {
- // Enforce that T can be implicitly converted to U.
- GTEST_COMPILE_ASSERT_((std::is_convertible<T, U>::value),
- "T must be implicitly convertible to U");
- // Enforce that we are not converting a non-reference type T to a reference
- // type U.
- GTEST_COMPILE_ASSERT_(
- internal::is_reference<T>::value || !internal::is_reference<U>::value,
- cannot_convert_non_reference_arg_to_reference);
- // In case both T and U are arithmetic types, enforce that the
- // conversion is not lossy.
- typedef GTEST_REMOVE_REFERENCE_AND_CONST_(T) RawT;
- typedef GTEST_REMOVE_REFERENCE_AND_CONST_(U) RawU;
- const bool kTIsOther = GMOCK_KIND_OF_(RawT) == internal::kOther;
- const bool kUIsOther = GMOCK_KIND_OF_(RawU) == internal::kOther;
- GTEST_COMPILE_ASSERT_(
- kTIsOther || kUIsOther ||
- (internal::LosslessArithmeticConvertible<RawT, RawU>::value),
- conversion_of_arithmetic_types_must_be_lossless);
- return MatcherCast<T>(matcher);
- }
-};
-
+// This overload handles polymorphic matchers and values only since
+// monomorphic matchers are handled by the next one.
template <typename T, typename M>
-inline Matcher<T> SafeMatcherCast(const M& polymorphic_matcher) {
- return SafeMatcherCastImpl<T>::Cast(polymorphic_matcher);
+inline Matcher<T> SafeMatcherCast(const M& polymorphic_matcher_or_value) {
+ return MatcherCast<T>(polymorphic_matcher_or_value);
+}
+
+// This overload handles monomorphic matchers.
+//
+// In general, if type T can be implicitly converted to type U, we can
+// safely convert a Matcher<U> to a Matcher<T> (i.e. Matcher is
+// contravariant): just keep a copy of the original Matcher<U>, convert the
+// argument from type T to U, and then pass it to the underlying Matcher<U>.
+// The only exception is when U is a reference and T is not, as the
+// underlying Matcher<U> may be interested in the argument's address, which
+// is not preserved in the conversion from T to U.
+template <typename T, typename U>
+inline Matcher<T> SafeMatcherCast(const Matcher<U>& matcher) {
+ // Enforce that T can be implicitly converted to U.
+ static_assert(std::is_convertible<const T&, const U&>::value,
+ "T must be implicitly convertible to U");
+ // Enforce that we are not converting a non-reference type T to a reference
+ // type U.
+ GTEST_COMPILE_ASSERT_(
+ std::is_reference<T>::value || !std::is_reference<U>::value,
+ cannot_convert_non_reference_arg_to_reference);
+ // In case both T and U are arithmetic types, enforce that the
+ // conversion is not lossy.
+ typedef GTEST_REMOVE_REFERENCE_AND_CONST_(T) RawT;
+ typedef GTEST_REMOVE_REFERENCE_AND_CONST_(U) RawU;
+ constexpr bool kTIsOther = GMOCK_KIND_OF_(RawT) == internal::kOther;
+ constexpr bool kUIsOther = GMOCK_KIND_OF_(RawU) == internal::kOther;
+ GTEST_COMPILE_ASSERT_(
+ kTIsOther || kUIsOther ||
+ (internal::LosslessArithmeticConvertible<RawT, RawU>::value),
+ conversion_of_arithmetic_types_must_be_lossless);
+ return MatcherCast<T>(matcher);
}
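+
+// A brief sketch of the contravariant conversion described above (illustrative
+// only, not part of this header): a Matcher<long> can be safely reused to
+// match an int, since int converts losslessly to long.
+//
+//   Matcher<long> is_positive = Gt(0L);
+//   Matcher<int> also_positive = SafeMatcherCast<int>(is_positive);
+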
// A<T>() returns a matcher that matches any value of type T.
@@ -2573,8 +3476,8 @@
class TuplePrefix {
public:
// TuplePrefix<N>::Matches(matcher_tuple, value_tuple) returns true
- // iff the first N fields of matcher_tuple matches the first N
- // fields of value_tuple, respectively.
+  // if and only if the first N fields of matcher_tuple match
+  // the first N fields of value_tuple, respectively.
template <typename MatcherTuple, typename ValueTuple>
static bool Matches(const MatcherTuple& matcher_tuple,
const ValueTuple& value_tuple) {
@@ -2632,8 +3535,8 @@
::std::ostream* /* os */) {}
};
-// TupleMatches(matcher_tuple, value_tuple) returns true iff all
-// matchers in matcher_tuple match the corresponding fields in
+// TupleMatches(matcher_tuple, value_tuple) returns true if and only if
+// all matchers in matcher_tuple match the corresponding fields in
// value_tuple. It is a compiler error if matcher_tuple and
// value_tuple have different number of fields or incompatible field
// types.
@@ -2699,31 +3602,25 @@
return TransformTupleValuesHelper<Tuple, Func, OutIter>::Run(f, t, out);
}
-// Implements A<T>().
-template <typename T>
-class AnyMatcherImpl : public MatcherInterface<const T&> {
- public:
- bool MatchAndExplain(const T& /* x */,
- MatchResultListener* /* listener */) const override {
- return true;
- }
- void DescribeTo(::std::ostream* os) const override { *os << "is anything"; }
- void DescribeNegationTo(::std::ostream* os) const override {
- // This is mostly for completeness' safe, as it's not very useful
- // to write Not(A<bool>()). However we cannot completely rule out
- // such a possibility, and it doesn't hurt to be prepared.
- *os << "never matches";
- }
-};
-
// Implements _, a matcher that matches any value of any
// type. This is a polymorphic matcher, so we need a template type
// conversion operator to make it appear as a Matcher<T> for any
// type T.
class AnythingMatcher {
public:
+ using is_gtest_matcher = void;
+
template <typename T>
- operator Matcher<T>() const { return A<T>(); }
+ bool MatchAndExplain(const T& /* x */, std::ostream* /* listener */) const {
+ return true;
+ }
+ void DescribeTo(std::ostream* os) const { *os << "is anything"; }
+ void DescribeNegationTo(::std::ostream* os) const {
+ // This is mostly for completeness' sake, as it's not very useful
+ // to write Not(A<bool>()). However we cannot completely rule out
+ // such a possibility, and it doesn't hurt to be prepared.
+ *os << "never matches";
+ }
};
// Implements the polymorphic IsNull() matcher, which matches any raw or smart
@@ -2823,13 +3720,9 @@
private:
const Super& object_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
};
T& object_;
-
- GTEST_DISALLOW_ASSIGN_(RefMatcher);
};
// Polymorphic helper functions for narrow and wide string matchers.
@@ -2871,19 +3764,20 @@
template <typename StringType>
class StrEqualityMatcher {
public:
- StrEqualityMatcher(const StringType& str, bool expect_eq,
- bool case_sensitive)
- : string_(str), expect_eq_(expect_eq), case_sensitive_(case_sensitive) {}
+ StrEqualityMatcher(StringType str, bool expect_eq, bool case_sensitive)
+ : string_(std::move(str)),
+ expect_eq_(expect_eq),
+ case_sensitive_(case_sensitive) {}
-#if GTEST_HAS_ABSL
- bool MatchAndExplain(const absl::string_view& s,
+#if GTEST_INTERNAL_HAS_STRING_VIEW
+ bool MatchAndExplain(const internal::StringView& s,
MatchResultListener* listener) const {
- // This should fail to compile if absl::string_view is used with wide
+ // This should fail to compile if StringView is used with wide
// strings.
- const StringType& str = string(s);
+ const StringType& str = std::string(s);
return MatchAndExplain(str, listener);
}
-#endif // GTEST_HAS_ABSL
+#endif // GTEST_INTERNAL_HAS_STRING_VIEW
// Accepts pointer types, particularly:
// const char*
@@ -2901,11 +3795,11 @@
// Matches anything that can convert to StringType.
//
// This is a template, not just a plain function with const StringType&,
- // because absl::string_view has some interfering non-explicit constructors.
+ // because StringView has some interfering non-explicit constructors.
template <typename MatcheeStringType>
bool MatchAndExplain(const MatcheeStringType& s,
MatchResultListener* /* listener */) const {
- const StringType& s2(s);
+ const StringType s2(s);
const bool eq = case_sensitive_ ? s2 == string_ :
CaseInsensitiveStringEquals(s2, string_);
return expect_eq_ == eq;
@@ -2932,8 +3826,6 @@
const StringType string_;
const bool expect_eq_;
const bool case_sensitive_;
-
- GTEST_DISALLOW_ASSIGN_(StrEqualityMatcher);
};
// Implements the polymorphic HasSubstr(substring) matcher, which
@@ -2945,15 +3837,15 @@
explicit HasSubstrMatcher(const StringType& substring)
: substring_(substring) {}
-#if GTEST_HAS_ABSL
- bool MatchAndExplain(const absl::string_view& s,
+#if GTEST_INTERNAL_HAS_STRING_VIEW
+ bool MatchAndExplain(const internal::StringView& s,
MatchResultListener* listener) const {
- // This should fail to compile if absl::string_view is used with wide
+ // This should fail to compile if StringView is used with wide
// strings.
- const StringType& str = string(s);
+ const StringType& str = std::string(s);
return MatchAndExplain(str, listener);
}
-#endif // GTEST_HAS_ABSL
+#endif // GTEST_INTERNAL_HAS_STRING_VIEW
// Accepts pointer types, particularly:
// const char*
@@ -2968,12 +3860,11 @@
// Matches anything that can convert to StringType.
//
// This is a template, not just a plain function with const StringType&,
- // because absl::string_view has some interfering non-explicit constructors.
+ // because StringView has some interfering non-explicit constructors.
template <typename MatcheeStringType>
bool MatchAndExplain(const MatcheeStringType& s,
MatchResultListener* /* listener */) const {
- const StringType& s2(s);
- return s2.find(substring_) != StringType::npos;
+ return StringType(s).find(substring_) != StringType::npos;
}
// Describes what this matcher matches.
@@ -2989,8 +3880,6 @@
private:
const StringType substring_;
-
- GTEST_DISALLOW_ASSIGN_(HasSubstrMatcher);
};
// Implements the polymorphic StartsWith(substring) matcher, which
@@ -3002,15 +3891,15 @@
explicit StartsWithMatcher(const StringType& prefix) : prefix_(prefix) {
}
-#if GTEST_HAS_ABSL
- bool MatchAndExplain(const absl::string_view& s,
+#if GTEST_INTERNAL_HAS_STRING_VIEW
+ bool MatchAndExplain(const internal::StringView& s,
MatchResultListener* listener) const {
- // This should fail to compile if absl::string_view is used with wide
+ // This should fail to compile if StringView is used with wide
// strings.
- const StringType& str = string(s);
+ const StringType& str = std::string(s);
return MatchAndExplain(str, listener);
}
-#endif // GTEST_HAS_ABSL
+#endif // GTEST_INTERNAL_HAS_STRING_VIEW
// Accepts pointer types, particularly:
// const char*
@@ -3025,7 +3914,7 @@
// Matches anything that can convert to StringType.
//
// This is a template, not just a plain function with const StringType&,
- // because absl::string_view has some interfering non-explicit constructors.
+ // because StringView has some interfering non-explicit constructors.
template <typename MatcheeStringType>
bool MatchAndExplain(const MatcheeStringType& s,
MatchResultListener* /* listener */) const {
@@ -3046,8 +3935,6 @@
private:
const StringType prefix_;
-
- GTEST_DISALLOW_ASSIGN_(StartsWithMatcher);
};
// Implements the polymorphic EndsWith(substring) matcher, which
@@ -3058,15 +3945,15 @@
public:
explicit EndsWithMatcher(const StringType& suffix) : suffix_(suffix) {}
-#if GTEST_HAS_ABSL
- bool MatchAndExplain(const absl::string_view& s,
+#if GTEST_INTERNAL_HAS_STRING_VIEW
+ bool MatchAndExplain(const internal::StringView& s,
MatchResultListener* listener) const {
- // This should fail to compile if absl::string_view is used with wide
+ // This should fail to compile if StringView is used with wide
// strings.
- const StringType& str = string(s);
+ const StringType& str = std::string(s);
return MatchAndExplain(str, listener);
}
-#endif // GTEST_HAS_ABSL
+#endif // GTEST_INTERNAL_HAS_STRING_VIEW
// Accepts pointer types, particularly:
// const char*
@@ -3081,7 +3968,7 @@
// Matches anything that can convert to StringType.
//
// This is a template, not just a plain function with const StringType&,
- // because absl::string_view has some interfering non-explicit constructors.
+ // because StringView has some interfering non-explicit constructors.
template <typename MatcheeStringType>
bool MatchAndExplain(const MatcheeStringType& s,
MatchResultListener* /* listener */) const {
@@ -3102,8 +3989,6 @@
private:
const StringType suffix_;
-
- GTEST_DISALLOW_ASSIGN_(EndsWithMatcher);
};
// Implements a matcher that compares the two fields of a 2-tuple
@@ -3197,8 +4082,6 @@
private:
const Matcher<T> matcher_;
-
- GTEST_DISALLOW_ASSIGN_(NotMatcherImpl);
};
// Implements the Not(m) matcher, which matches a value that doesn't
@@ -3217,8 +4100,6 @@
private:
InnerMatcher matcher_;
-
- GTEST_DISALLOW_ASSIGN_(NotMatcher);
};
// Implements the AllOf(m1, m2) matcher for a particular argument type
@@ -3280,8 +4161,6 @@
private:
const std::vector<Matcher<T> > matchers_;
-
- GTEST_DISALLOW_ASSIGN_(AllOfMatcherImpl);
};
// VariadicMatcher is used for the variadic implementation of
@@ -3296,6 +4175,9 @@
static_assert(sizeof...(Args) > 0, "Must have at least one matcher.");
}
+ VariadicMatcher(const VariadicMatcher&) = default;
+ VariadicMatcher& operator=(const VariadicMatcher&) = delete;
+
// This template type conversion operator allows an
// VariadicMatcher<Matcher1, Matcher2...> object to match any type that
// all of the provided matchers (Matcher1, Matcher2, ...) can match.
@@ -3320,8 +4202,6 @@
std::integral_constant<size_t, sizeof...(Args)>) const {}
std::tuple<Args...> matchers_;
-
- GTEST_DISALLOW_ASSIGN_(VariadicMatcher);
};
template <typename... Args>
@@ -3386,8 +4266,6 @@
private:
const std::vector<Matcher<T> > matchers_;
-
- GTEST_DISALLOW_ASSIGN_(AnyOfMatcherImpl);
};
// AnyOfMatcher is used for the variadic implementation of AnyOf(m_1, m_2, ...).
@@ -3415,8 +4293,6 @@
private:
const ::std::vector<T> matchers_;
-
- GTEST_DISALLOW_ASSIGN_(SomeOfArrayMatcher);
};
template <typename T>
@@ -3438,7 +4314,7 @@
// interested in the address of the argument.
template <typename T>
bool MatchAndExplain(T& x, // NOLINT
- MatchResultListener* /* listener */) const {
+ MatchResultListener* listener) const {
// Without the if-statement, MSVC sometimes warns about converting
// a value to bool (warning 4800).
//
@@ -3447,6 +4323,7 @@
// having no operator!().
if (predicate_(x))
return true;
+ *listener << "didn't satisfy the given predicate";
return false;
}
@@ -3460,8 +4337,6 @@
private:
Predicate predicate_;
-
- GTEST_DISALLOW_ASSIGN_(TrulyMatcher);
};
// Used for implementing Matches(matcher), which turns a matcher into
@@ -3498,8 +4373,6 @@
private:
M matcher_;
-
- GTEST_DISALLOW_ASSIGN_(MatcherAsPredicate);
};
// For implementing ASSERT_THAT() and EXPECT_THAT(). The template
@@ -3538,7 +4411,7 @@
<< "Expected: ";
matcher.DescribeTo(&ss);
- // Rerun the matcher to "PrintAndExain" the failure.
+ // Rerun the matcher to "PrintAndExplain" the failure.
StringMatchResultListener listener;
if (MatchPrintAndExplain(x, matcher, &listener)) {
ss << "\n The matcher failed on the initial attempt; but passed when "
@@ -3550,8 +4423,6 @@
private:
const M matcher_;
-
- GTEST_DISALLOW_ASSIGN_(PredicateFormatterFromMatcher);
};
// A helper function for converting a matcher to a predicate-formatter
@@ -3564,6 +4435,22 @@
return PredicateFormatterFromMatcher<M>(std::move(matcher));
}
+// Implements the polymorphic IsNan() matcher, which matches any floating type
+// value that is Nan.
+class IsNanMatcher {
+ public:
+ template <typename FloatType>
+ bool MatchAndExplain(const FloatType& f,
+ MatchResultListener* /* listener */) const {
+ return (::std::isnan)(f);
+ }
+
+ void DescribeTo(::std::ostream* os) const { *os << "is NaN"; }
+ void DescribeNegationTo(::std::ostream* os) const {
+ *os << "isn't NaN";
+ }
+};
+
// Implements the polymorphic floating point equality matcher, which matches
// two float values using ULP-based approximation or, optionally, a
// user-specified epsilon. The template is meant to be instantiated with
@@ -3624,7 +4511,7 @@
}
const FloatType diff = value - expected_;
- if (fabs(diff) <= max_abs_error_) {
+ if (::std::fabs(diff) <= max_abs_error_) {
return true;
}
@@ -3687,16 +4574,11 @@
const bool nan_eq_nan_;
// max_abs_error will be used for value comparison when >= 0.
const FloatType max_abs_error_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
};
// The following 3 type conversion operators allow FloatEq(expected) and
// NanSensitiveFloatEq(expected) to be used as a Matcher<float>, a
// Matcher<const float&>, or a Matcher<float&>, but nothing else.
- // (While Google's C++ coding style doesn't allow arguments passed
- // by non-const reference, we may see them in code not conforming to
- // the style. Therefore Google Mock needs to support them.)
operator Matcher<FloatType>() const {
return MakeMatcher(
new Impl<FloatType>(expected_, nan_eq_nan_, max_abs_error_));
@@ -3717,8 +4599,6 @@
const bool nan_eq_nan_;
// max_abs_error will be used for value comparison when >= 0.
const FloatType max_abs_error_;
-
- GTEST_DISALLOW_ASSIGN_(FloatingEqMatcher);
};
// A 2-tuple ("binary") wrapper around FloatingEqMatcher:
@@ -3822,8 +4702,9 @@
template <typename Pointer>
class Impl : public MatcherInterface<Pointer> {
public:
- typedef typename PointeeOf<GTEST_REMOVE_CONST_( // NOLINT
- GTEST_REMOVE_REFERENCE_(Pointer))>::type Pointee;
+ using Pointee =
+ typename std::pointer_traits<GTEST_REMOVE_REFERENCE_AND_CONST_(
+ Pointer)>::element_type;
explicit Impl(const InnerMatcher& matcher)
: matcher_(MatcherCast<const Pointee&>(matcher)) {}
@@ -3848,13 +4729,67 @@
private:
const Matcher<const Pointee&> matcher_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
};
const InnerMatcher matcher_;
+};
- GTEST_DISALLOW_ASSIGN_(PointeeMatcher);
+// Implements the Pointer(m) matcher for matching a pointer that matches matcher
+// m. The pointer can be either raw or smart, and will match `m` against the
+// raw pointer.
+template <typename InnerMatcher>
+class PointerMatcher {
+ public:
+ explicit PointerMatcher(const InnerMatcher& matcher) : matcher_(matcher) {}
+
+ // This type conversion operator template allows Pointer(m) to be
+ // used as a matcher for any pointer type whose pointer type is
+ // compatible with the inner matcher, where type PointerType can be
+ // either a raw pointer or a smart pointer.
+ //
+ // The reason we do this instead of relying on
+ // MakePolymorphicMatcher() is that the latter is not flexible
+ // enough for implementing the DescribeTo() method of Pointer().
+ template <typename PointerType>
+ operator Matcher<PointerType>() const { // NOLINT
+ return Matcher<PointerType>(new Impl<const PointerType&>(matcher_));
+ }
+
+ private:
+ // The monomorphic implementation that works for a particular pointer type.
+ template <typename PointerType>
+ class Impl : public MatcherInterface<PointerType> {
+ public:
+ using Pointer =
+ const typename std::pointer_traits<GTEST_REMOVE_REFERENCE_AND_CONST_(
+ PointerType)>::element_type*;
+
+ explicit Impl(const InnerMatcher& matcher)
+ : matcher_(MatcherCast<Pointer>(matcher)) {}
+
+ void DescribeTo(::std::ostream* os) const override {
+ *os << "is a pointer that ";
+ matcher_.DescribeTo(os);
+ }
+
+ void DescribeNegationTo(::std::ostream* os) const override {
+ *os << "is not a pointer that ";
+ matcher_.DescribeTo(os);
+ }
+
+ bool MatchAndExplain(PointerType pointer,
+ MatchResultListener* listener) const override {
+ *listener << "which is a pointer that ";
+ Pointer p = GetRawPointer(pointer);
+ return MatchPrintAndExplain(p, matcher_, listener);
+ }
+
+ private:
+ Matcher<Pointer> matcher_;
+ };
+
+ const InnerMatcher matcher_;
};
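+
+// A short usage sketch for Pointer(m) (illustrative only, not part of this
+// header): the inner matcher is applied to the underlying raw pointer, so the
+// matcher works for both raw and smart pointers.
+//
+//   std::unique_ptr<int> p(new int(3));
+//   EXPECT_THAT(p, Pointer(Eq(p.get())));
+//   EXPECT_THAT(p.get(), Pointer(Eq(p.get())));
+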
#if GTEST_HAS_RTTI
@@ -3891,8 +4826,6 @@
static void GetCastTypeDescription(::std::ostream* os) {
*os << "when dynamic_cast to " << GetToName() << ", ";
}
-
- GTEST_DISALLOW_ASSIGN_(WhenDynamicCastToMatcherBase);
};
// Primary template.
@@ -3961,8 +4894,8 @@
// FIXME: The dispatch on std::is_pointer was introduced as a workaround for
// a compiler bug, and can now be removed.
return MatchAndExplainImpl(
- typename std::is_pointer<GTEST_REMOVE_CONST_(T)>::type(), value,
- listener);
+ typename std::is_pointer<typename std::remove_const<T>::type>::type(),
+ value, listener);
}
private:
@@ -3990,8 +4923,6 @@
// Contains either "whose given field " if the name of the field is unknown
// or "whose field `name_of_field` " if the name is known.
const std::string whose_field_;
-
- GTEST_DISALLOW_ASSIGN_(FieldMatcher);
};
// Implements the Property() matcher for matching a property
@@ -4028,8 +4959,8 @@
template <typename T>
bool MatchAndExplain(const T&value, MatchResultListener* listener) const {
return MatchAndExplainImpl(
- typename std::is_pointer<GTEST_REMOVE_CONST_(T)>::type(), value,
- listener);
+ typename std::is_pointer<typename std::remove_const<T>::type>::type(),
+ value, listener);
}
private:
@@ -4060,8 +4991,6 @@
// Contains either "whose given property " if the name of the property is
// unknown or "whose property `name_of_property` " if the name is known.
const std::string whose_property_;
-
- GTEST_DISALLOW_ASSIGN_(PropertyMatcher);
};
// Type traits specifying various features of different functors for ResultOf.
@@ -4073,7 +5002,9 @@
static void CheckIsValid(Functor /* functor */) {}
template <typename T>
- static auto Invoke(Functor f, T arg) -> decltype(f(arg)) { return f(arg); }
+ static auto Invoke(Functor f, const T& arg) -> decltype(f(arg)) {
+ return f(arg);
+ }
};
// Specialization for function pointers.
@@ -4104,7 +5035,7 @@
template <typename T>
operator Matcher<T>() const {
- return Matcher<T>(new Impl<T>(callable_, matcher_));
+ return Matcher<T>(new Impl<const T&>(callable_, matcher_));
}
private:
@@ -4149,14 +5080,10 @@
// how many times the callable will be invoked.
mutable CallableStorageType callable_;
const Matcher<ResultType> matcher_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
}; // class Impl
const CallableStorageType callable_;
const InnerMatcher matcher_;
-
- GTEST_DISALLOW_ASSIGN_(ResultOfMatcher);
};
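+
+// A short usage sketch for ResultOf(f, m) (illustrative only, not part of this
+// header): the callable is applied to the argument and its result is matched.
+//
+//   EXPECT_THAT(5, ResultOf([](int n) { return n * 2; }, Eq(10)));
+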
// Implements a matcher that checks the size of an STL-style container.
@@ -4201,12 +5128,10 @@
private:
const Matcher<SizeType> size_matcher_;
- GTEST_DISALLOW_ASSIGN_(Impl);
};
private:
const SizeMatcher size_matcher_;
- GTEST_DISALLOW_ASSIGN_(SizeIsMatcher);
};
// Implements a matcher that checks the begin()..end() distance of an STL-style
@@ -4258,12 +5183,10 @@
private:
const Matcher<DistanceType> distance_matcher_;
- GTEST_DISALLOW_ASSIGN_(Impl);
};
private:
const DistanceMatcher distance_matcher_;
- GTEST_DISALLOW_ASSIGN_(BeginEndDistanceIsMatcher);
};
// Implements an equality matcher for any STL-style container whose elements
@@ -4283,15 +5206,15 @@
typedef typename View::type StlContainer;
typedef typename View::const_reference StlContainerReference;
+ static_assert(!std::is_const<Container>::value,
+ "Container type must not be const");
+ static_assert(!std::is_reference<Container>::value,
+ "Container type must not be a reference");
+
// We make a copy of expected in case the elements in it are modified
// after this matcher is created.
explicit ContainerEqMatcher(const Container& expected)
- : expected_(View::Copy(expected)) {
- // Makes sure the user doesn't instantiate this class template
- // with a const or reference type.
- (void)testing::StaticAssertTypeEq<Container,
- GTEST_REMOVE_REFERENCE_AND_CONST_(Container)>();
- }
+ : expected_(View::Copy(expected)) {}
void DescribeTo(::std::ostream* os) const {
*os << "equals ";
@@ -4305,9 +5228,8 @@
template <typename LhsContainer>
bool MatchAndExplain(const LhsContainer& lhs,
MatchResultListener* listener) const {
- // GTEST_REMOVE_CONST_() is needed to work around an MSVC 8.0 bug
- // that causes LhsContainer to be a const type sometimes.
- typedef internal::StlContainerView<GTEST_REMOVE_CONST_(LhsContainer)>
+ typedef internal::StlContainerView<
+ typename std::remove_const<LhsContainer>::type>
LhsView;
typedef typename LhsView::type LhsStlContainer;
StlContainerReference lhs_stl_container = LhsView::ConstReference(lhs);
@@ -4357,8 +5279,6 @@
private:
const StlContainer expected_;
-
- GTEST_DISALLOW_ASSIGN_(ContainerEqMatcher);
};
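+
+// A short usage sketch for ContainerEq(expected) (illustrative only, not part
+// of this header): on mismatch it reports which elements are missing or extra.
+//
+//   std::vector<int> expected = {1, 2, 3};
+//   EXPECT_THAT(GetValues(), ContainerEq(expected));  // GetValues() is a hypothetical helper
+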
// A comparator functor that uses the < operator to compare two values.
@@ -4440,8 +5360,6 @@
private:
const Comparator comparator_;
const ContainerMatcher matcher_;
-
- GTEST_DISALLOW_ASSIGN_(WhenSortedByMatcher);
};
// Implements Pointwise(tuple_matcher, rhs_container). tuple_matcher
@@ -4459,15 +5377,15 @@
typedef typename RhsView::type RhsStlContainer;
typedef typename RhsStlContainer::value_type RhsValue;
+ static_assert(!std::is_const<RhsContainer>::value,
+ "RhsContainer type must not be const");
+ static_assert(!std::is_reference<RhsContainer>::value,
+ "RhsContainer type must not be a reference");
+
// Like ContainerEq, we make a copy of rhs in case the elements in
// it are modified after this matcher is created.
PointwiseMatcher(const TupleMatcher& tuple_matcher, const RhsContainer& rhs)
- : tuple_matcher_(tuple_matcher), rhs_(RhsView::Copy(rhs)) {
- // Makes sure the user doesn't instantiate this class template
- // with a const or reference type.
- (void)testing::StaticAssertTypeEq<RhsContainer,
- GTEST_REMOVE_REFERENCE_AND_CONST_(RhsContainer)>();
- }
+ : tuple_matcher_(tuple_matcher), rhs_(RhsView::Copy(rhs)) {}
template <typename LhsContainer>
operator Matcher<LhsContainer>() const {
@@ -4557,15 +5475,11 @@
private:
const Matcher<InnerMatcherArg> mono_tuple_matcher_;
const RhsStlContainer rhs_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
};
private:
const TupleMatcher tuple_matcher_;
const RhsStlContainer rhs_;
-
- GTEST_DISALLOW_ASSIGN_(PointwiseMatcher);
};
// Holds the logic common to ContainsMatcherImpl and EachMatcherImpl.
@@ -4608,8 +5522,6 @@
protected:
const Matcher<const Element&> inner_matcher_;
-
- GTEST_DISALLOW_ASSIGN_(QuantifierMatcherImpl);
};
// Implements Contains(element_matcher) for the given argument type Container.
@@ -4636,9 +5548,6 @@
MatchResultListener* listener) const override {
return this->MatchAndExplainImpl(false, container, listener);
}
-
- private:
- GTEST_DISALLOW_ASSIGN_(ContainsMatcherImpl);
};
// Implements Each(element_matcher) for the given argument type Container.
@@ -4665,9 +5574,6 @@
MatchResultListener* listener) const override {
return this->MatchAndExplainImpl(true, container, listener);
}
-
- private:
- GTEST_DISALLOW_ASSIGN_(EachMatcherImpl);
};
// Implements polymorphic Contains(element_matcher).
@@ -4684,8 +5590,6 @@
private:
const M inner_matcher_;
-
- GTEST_DISALLOW_ASSIGN_(ContainsMatcher);
};
// Implements polymorphic Each(element_matcher).
@@ -4702,8 +5606,6 @@
private:
const M inner_matcher_;
-
- GTEST_DISALLOW_ASSIGN_(EachMatcher);
};
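+
+// A short usage sketch for the polymorphic Contains()/Each() matchers above
+// (illustrative only, not part of this header):
+//
+//   std::vector<int> v = {1, 2, 3};
+//   EXPECT_THAT(v, Contains(Ge(3)));  // at least one element is >= 3
+//   EXPECT_THAT(v, Each(Gt(0)));      // every element is > 0
+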
struct Rank1 {};
@@ -4746,7 +5648,8 @@
testing::SafeMatcherCast<const KeyType&>(inner_matcher)) {
}
- // Returns true iff 'key_value.first' (the key) matches the inner matcher.
+ // Returns true if and only if 'key_value.first' (the key) matches the inner
+ // matcher.
bool MatchAndExplain(PairType key_value,
MatchResultListener* listener) const override {
StringMatchResultListener inner_listener;
@@ -4773,8 +5676,6 @@
private:
const Matcher<const KeyType&> inner_matcher_;
-
- GTEST_DISALLOW_ASSIGN_(KeyMatcherImpl);
};
// Implements polymorphic Key(matcher_for_key).
@@ -4791,8 +5692,49 @@
private:
const M matcher_for_key_;
+};
- GTEST_DISALLOW_ASSIGN_(KeyMatcher);
+// Implements polymorphic Address(matcher_for_address).
+template <typename InnerMatcher>
+class AddressMatcher {
+ public:
+ explicit AddressMatcher(InnerMatcher m) : matcher_(m) {}
+
+ template <typename Type>
+ operator Matcher<Type>() const { // NOLINT
+ return Matcher<Type>(new Impl<const Type&>(matcher_));
+ }
+
+ private:
+ // The monomorphic implementation that works for a particular object type.
+ template <typename Type>
+ class Impl : public MatcherInterface<Type> {
+ public:
+ using Address = const GTEST_REMOVE_REFERENCE_AND_CONST_(Type) *;
+ explicit Impl(const InnerMatcher& matcher)
+ : matcher_(MatcherCast<Address>(matcher)) {}
+
+ void DescribeTo(::std::ostream* os) const override {
+ *os << "has address that ";
+ matcher_.DescribeTo(os);
+ }
+
+ void DescribeNegationTo(::std::ostream* os) const override {
+ *os << "does not have address that ";
+ matcher_.DescribeTo(os);
+ }
+
+ bool MatchAndExplain(Type object,
+ MatchResultListener* listener) const override {
+ *listener << "which has address ";
+ Address address = std::addressof(object);
+ return MatchPrintAndExplain(address, matcher_, listener);
+ }
+
+ private:
+ const Matcher<Address> matcher_;
+ };
+ const InnerMatcher matcher_;
};
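+
+// A short usage sketch for Address(m) (illustrative only, not part of this
+// header): the inner matcher is applied to the address of the matched object.
+//
+//   int n = 0;
+//   EXPECT_THAT(n, Address(Eq(&n)));
+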
// Implements Pair(first_matcher, second_matcher) for the given argument pair
@@ -4828,8 +5770,8 @@
second_matcher_.DescribeNegationTo(os);
}
- // Returns true iff 'a_pair.first' matches first_matcher and 'a_pair.second'
- // matches second_matcher.
+ // Returns true if and only if 'a_pair.first' matches first_matcher and
+ // 'a_pair.second' matches second_matcher.
bool MatchAndExplain(PairType a_pair,
MatchResultListener* listener) const override {
if (!listener->IsInterested()) {
@@ -4878,8 +5820,6 @@
const Matcher<const FirstType&> first_matcher_;
const Matcher<const SecondType&> second_matcher_;
-
- GTEST_DISALLOW_ASSIGN_(PairMatcherImpl);
};
// Implements polymorphic Pair(first_matcher, second_matcher).
@@ -4898,8 +5838,203 @@
private:
const FirstMatcher first_matcher_;
const SecondMatcher second_matcher_;
+};
- GTEST_DISALLOW_ASSIGN_(PairMatcher);
+template <typename T, size_t... I>
+auto UnpackStructImpl(const T& t, IndexSequence<I...>, int)
+ -> decltype(std::tie(get<I>(t)...)) {
+ static_assert(std::tuple_size<T>::value == sizeof...(I),
+ "Number of arguments doesn't match the number of fields.");
+ return std::tie(get<I>(t)...);
+}
+
+#if defined(__cpp_structured_bindings) && __cpp_structured_bindings >= 201606
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<1>, char) {
+ const auto& [a] = t;
+ return std::tie(a);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<2>, char) {
+ const auto& [a, b] = t;
+ return std::tie(a, b);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<3>, char) {
+ const auto& [a, b, c] = t;
+ return std::tie(a, b, c);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<4>, char) {
+ const auto& [a, b, c, d] = t;
+ return std::tie(a, b, c, d);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<5>, char) {
+ const auto& [a, b, c, d, e] = t;
+ return std::tie(a, b, c, d, e);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<6>, char) {
+ const auto& [a, b, c, d, e, f] = t;
+ return std::tie(a, b, c, d, e, f);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<7>, char) {
+ const auto& [a, b, c, d, e, f, g] = t;
+ return std::tie(a, b, c, d, e, f, g);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<8>, char) {
+ const auto& [a, b, c, d, e, f, g, h] = t;
+ return std::tie(a, b, c, d, e, f, g, h);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<9>, char) {
+ const auto& [a, b, c, d, e, f, g, h, i] = t;
+ return std::tie(a, b, c, d, e, f, g, h, i);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<10>, char) {
+ const auto& [a, b, c, d, e, f, g, h, i, j] = t;
+ return std::tie(a, b, c, d, e, f, g, h, i, j);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<11>, char) {
+ const auto& [a, b, c, d, e, f, g, h, i, j, k] = t;
+ return std::tie(a, b, c, d, e, f, g, h, i, j, k);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<12>, char) {
+ const auto& [a, b, c, d, e, f, g, h, i, j, k, l] = t;
+ return std::tie(a, b, c, d, e, f, g, h, i, j, k, l);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<13>, char) {
+ const auto& [a, b, c, d, e, f, g, h, i, j, k, l, m] = t;
+ return std::tie(a, b, c, d, e, f, g, h, i, j, k, l, m);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<14>, char) {
+ const auto& [a, b, c, d, e, f, g, h, i, j, k, l, m, n] = t;
+ return std::tie(a, b, c, d, e, f, g, h, i, j, k, l, m, n);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<15>, char) {
+ const auto& [a, b, c, d, e, f, g, h, i, j, k, l, m, n, o] = t;
+ return std::tie(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o);
+}
+template <typename T>
+auto UnpackStructImpl(const T& t, MakeIndexSequence<16>, char) {
+ const auto& [a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p] = t;
+ return std::tie(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p);
+}
+#endif // defined(__cpp_structured_bindings)
+
+template <size_t I, typename T>
+auto UnpackStruct(const T& t)
+ -> decltype((UnpackStructImpl)(t, MakeIndexSequence<I>{}, 0)) {
+ return (UnpackStructImpl)(t, MakeIndexSequence<I>{}, 0);
+}
+
+// Helper function to do comma folding in C++11.
+// The array ensures left-to-right order of evaluation.
+// Usage: VariadicExpand({expr...});
+template <typename T, size_t N>
+void VariadicExpand(const T (&)[N]) {}
+
+template <typename Struct, typename StructSize>
+class FieldsAreMatcherImpl;
+
+template <typename Struct, size_t... I>
+class FieldsAreMatcherImpl<Struct, IndexSequence<I...>>
+ : public MatcherInterface<Struct> {
+ using UnpackedType =
+ decltype(UnpackStruct<sizeof...(I)>(std::declval<const Struct&>()));
+ using MatchersType = std::tuple<
+ Matcher<const typename std::tuple_element<I, UnpackedType>::type&>...>;
+
+ public:
+ template <typename Inner>
+ explicit FieldsAreMatcherImpl(const Inner& matchers)
+ : matchers_(testing::SafeMatcherCast<
+ const typename std::tuple_element<I, UnpackedType>::type&>(
+ std::get<I>(matchers))...) {}
+
+ void DescribeTo(::std::ostream* os) const override {
+ const char* separator = "";
+ VariadicExpand(
+ {(*os << separator << "has field #" << I << " that ",
+ std::get<I>(matchers_).DescribeTo(os), separator = ", and ")...});
+ }
+
+ void DescribeNegationTo(::std::ostream* os) const override {
+ const char* separator = "";
+ VariadicExpand({(*os << separator << "has field #" << I << " that ",
+ std::get<I>(matchers_).DescribeNegationTo(os),
+ separator = ", or ")...});
+ }
+
+ bool MatchAndExplain(Struct t, MatchResultListener* listener) const override {
+ return MatchInternal((UnpackStruct<sizeof...(I)>)(t), listener);
+ }
+
+ private:
+ bool MatchInternal(UnpackedType tuple, MatchResultListener* listener) const {
+ if (!listener->IsInterested()) {
+ // If the listener is not interested, we don't need to construct the
+ // explanation.
+ bool good = true;
+ VariadicExpand({good = good && std::get<I>(matchers_).Matches(
+ std::get<I>(tuple))...});
+ return good;
+ }
+
+ size_t failed_pos = ~size_t{};
+
+ std::vector<StringMatchResultListener> inner_listener(sizeof...(I));
+
+ VariadicExpand(
+ {failed_pos == ~size_t{} && !std::get<I>(matchers_).MatchAndExplain(
+ std::get<I>(tuple), &inner_listener[I])
+ ? failed_pos = I
+ : 0 ...});
+ if (failed_pos != ~size_t{}) {
+ *listener << "whose field #" << failed_pos << " does not match";
+ PrintIfNotEmpty(inner_listener[failed_pos].str(), listener->stream());
+ return false;
+ }
+
+ *listener << "whose all elements match";
+ const char* separator = ", where";
+ for (size_t index = 0; index < sizeof...(I); ++index) {
+ const std::string str = inner_listener[index].str();
+ if (!str.empty()) {
+ *listener << separator << " field #" << index << " is a value " << str;
+ separator = ", and";
+ }
+ }
+
+ return true;
+ }
+
+ MatchersType matchers_;
+};
+
+template <typename... Inner>
+class FieldsAreMatcher {
+ public:
+ explicit FieldsAreMatcher(Inner... inner) : matchers_(std::move(inner)...) {}
+
+ template <typename Struct>
+ operator Matcher<Struct>() const { // NOLINT
+ return Matcher<Struct>(
+ new FieldsAreMatcherImpl<const Struct&, IndexSequenceFor<Inner...>>(
+ matchers_));
+ }
+
+ private:
+ std::tuple<Inner...> matchers_;
};
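+
+// A short usage sketch for the FieldsAre() matcher backed by the classes above
+// (illustrative only, not part of this header); it matches the fields of a
+// tuple-like value, or of a plain aggregate when structured bindings are
+// available, in declaration order:
+//
+//   struct Point { int x; int y; };
+//   EXPECT_THAT(Point{1, 2}, FieldsAre(1, Gt(0)));
+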
// Implements ElementsAre() and ElementsAreArray().
@@ -5045,8 +6180,6 @@
size_t count() const { return matchers_.size(); }
::std::vector<Matcher<const Element&> > matchers_;
-
- GTEST_DISALLOW_ASSIGN_(ElementsAreMatcherImpl);
};
// Connectivity matrix of (elements X matchers), in element-major order.
@@ -5149,8 +6282,6 @@
private:
UnorderedMatcherRequire::Flags match_flags_;
MatcherDescriberVec matcher_describers_;
-
- GTEST_DISALLOW_ASSIGN_(UnorderedElementsAreMatcherImplBase);
};
// Implements UnorderedElementsAre, UnorderedElementsAreArray, IsSubsetOf, and
@@ -5173,7 +6304,9 @@
: UnorderedElementsAreMatcherImplBase(matcher_flags) {
for (; first != last; ++first) {
matchers_.push_back(MatcherCast<const Element&>(*first));
- matcher_describers().push_back(matchers_.back().GetDescriber());
+ }
+ for (const auto& m : matchers_) {
+ matcher_describers().push_back(m.GetDescriber());
}
}
@@ -5224,12 +6357,14 @@
element_printouts->clear();
::std::vector<char> did_match;
size_t num_elements = 0;
+ DummyMatchResultListener dummy;
for (; elem_first != elem_last; ++num_elements, ++elem_first) {
if (listener->IsInterested()) {
element_printouts->push_back(PrintToString(*elem_first));
}
for (size_t irhs = 0; irhs != matchers_.size(); ++irhs) {
- did_match.push_back(Matches(matchers_[irhs])(*elem_first));
+ did_match.push_back(
+ matchers_[irhs].MatchAndExplain(*elem_first, &dummy));
}
}
@@ -5244,8 +6379,6 @@
}
::std::vector<Matcher<const Element&> > matchers_;
-
- GTEST_DISALLOW_ASSIGN_(UnorderedElementsAreMatcherImpl);
};
// Functor for use in TransformTuple.
@@ -5283,7 +6416,6 @@
private:
const MatcherTuple matchers_;
- GTEST_DISALLOW_ASSIGN_(UnorderedElementsAreMatcher);
};
// Implements ElementsAre.
@@ -5313,7 +6445,6 @@
private:
const MatcherTuple matchers_;
- GTEST_DISALLOW_ASSIGN_(ElementsAreMatcher);
};
// Implements UnorderedElementsAreArray(), IsSubsetOf(), and IsSupersetOf().
@@ -5335,8 +6466,6 @@
private:
UnorderedMatcherRequire::Flags match_flags_;
::std::vector<T> matchers_;
-
- GTEST_DISALLOW_ASSIGN_(UnorderedElementsAreArrayMatcher);
};
// Implements ElementsAreArray().
@@ -5358,14 +6487,12 @@
private:
const ::std::vector<T> matchers_;
-
- GTEST_DISALLOW_ASSIGN_(ElementsAreArrayMatcher);
};
// Given a 2-tuple matcher tm of type Tuple2Matcher and a value second
// of type Second, BoundSecondMatcher<Tuple2Matcher, Second>(tm,
-// second) is a polymorphic matcher that matches a value x iff tm
-// matches tuple (x, second). Useful for implementing
+// second) is a polymorphic matcher that matches a value x if and only if
+// tm matches tuple (x, second). Useful for implementing
// UnorderedPointwise() in terms of UnorderedElementsAreArray().
//
// BoundSecondMatcher is copyable and assignable, as we need to put
@@ -5377,6 +6504,8 @@
BoundSecondMatcher(const Tuple2Matcher& tm, const Second& second)
: tuple2_matcher_(tm), second_value_(second) {}
+ BoundSecondMatcher(const BoundSecondMatcher& other) = default;
+
template <typename T>
operator Matcher<T>() const {
return MakeMatcher(new Impl<T>(tuple2_matcher_, second_value_));
@@ -5419,8 +6548,6 @@
private:
const Matcher<const ArgTuple&> mono_tuple2_matcher_;
const Second second_value_;
-
- GTEST_DISALLOW_ASSIGN_(Impl);
};
const Tuple2Matcher tuple2_matcher_;
@@ -5429,8 +6556,8 @@
// Given a 2-tuple matcher tm and a value second,
// MatcherBindSecond(tm, second) returns a matcher that matches a
-// value x iff tm matches tuple (x, second). Useful for implementing
-// UnorderedPointwise() in terms of UnorderedElementsAreArray().
+// value x if and only if tm matches tuple (x, second). Useful for
+// implementing UnorderedPointwise() in terms of UnorderedElementsAreArray().
template <typename Tuple2Matcher, typename Second>
BoundSecondMatcher<Tuple2Matcher, Second> MatcherBindSecond(
const Tuple2Matcher& tm, const Second& second) {
@@ -5493,12 +6620,10 @@
private:
const Matcher<ValueType> value_matcher_;
- GTEST_DISALLOW_ASSIGN_(Impl);
};
private:
const ValueMatcher value_matcher_;
- GTEST_DISALLOW_ASSIGN_(OptionalMatcher);
};
namespace variant_matcher {
@@ -5806,18 +6931,19 @@
// Creates a matcher that matches any value of the given type T.
template <typename T>
inline Matcher<T> A() {
- return Matcher<T>(new internal::AnyMatcherImpl<T>());
+ return _;
}
// Creates a matcher that matches any value of the given type T.
template <typename T>
-inline Matcher<T> An() { return A<T>(); }
+inline Matcher<T> An() {
+ return _;
+}
template <typename T, typename M>
Matcher<T> internal::MatcherCastImpl<T, M>::CastImpl(
- const M& value,
- internal::BooleanConstant<false> /* convertible_to_matcher */,
- internal::BooleanConstant<false> /* convertible_to_T */) {
+ const M& value, std::false_type /* convertible_to_matcher */,
+ std::false_type /* convertible_to_T */) {
return Eq(value);
}
@@ -5840,6 +6966,11 @@
return internal::RefMatcher<T&>(x);
}
+// Creates a polymorphic matcher that matches any NaN floating point.
+inline PolymorphicMatcher<internal::IsNanMatcher> IsNan() {
+ return MakePolymorphicMatcher(internal::IsNanMatcher());
+}
+
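+// Example usage (an illustrative sketch, assuming <cmath> is included for
+// std::nan):
+//
+//   EXPECT_THAT(std::nan(""), IsNan());
+//   EXPECT_THAT(1.0, Not(IsNan()));
+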
// Creates a matcher that matches any double argument approximately
// equal to rhs, where two NANs are considered unequal.
inline internal::FloatingEqMatcher<double> DoubleEq(double rhs) {
@@ -5922,7 +7053,7 @@
// Creates a matcher that matches an object whose given field matches
// 'matcher'. For example,
// Field(&Foo::number, Ge(5))
-// matches a Foo object x iff x.number >= 5.
+// matches a Foo object x if and only if x.number >= 5.
template <typename Class, typename FieldType, typename FieldMatcher>
inline PolymorphicMatcher<
internal::FieldMatcher<Class, FieldType> > Field(
@@ -5949,7 +7080,7 @@
// Creates a matcher that matches an object whose given property
// matches 'matcher'. For example,
// Property(&Foo::str, StartsWith("hi"))
-// matches a Foo object x iff x.str() starts with "hi".
+// matches a Foo object x if and only if x.str() starts with "hi".
template <typename Class, typename PropertyType, typename PropertyMatcher>
inline PolymorphicMatcher<internal::PropertyMatcher<
Class, PropertyType, PropertyType (Class::*)() const> >
@@ -6004,11 +7135,10 @@
property_name, property, MatcherCast<const PropertyType&>(matcher)));
}
-// Creates a matcher that matches an object iff the result of applying
-// a callable to x matches 'matcher'.
-// For example,
+// Creates a matcher that matches an object if and only if the result of
+// applying a callable to x matches 'matcher'. For example,
// ResultOf(f, StartsWith("hi"))
-// matches a Foo object x iff f(x) starts with "hi".
+// matches a Foo object x if and only if f(x) starts with "hi".
// The `callable` parameter can be a function, function pointer, or a functor.
// It is required to keep no state affecting the results of calls on it and make
// no assumptions about how many calls will be made. Any state it keeps must be
@@ -6023,55 +7153,63 @@
// String matchers.
// Matches a string equal to str.
-inline PolymorphicMatcher<internal::StrEqualityMatcher<std::string> > StrEq(
- const std::string& str) {
+template <typename T = std::string>
+PolymorphicMatcher<internal::StrEqualityMatcher<std::string> > StrEq(
+ const internal::StringLike<T>& str) {
return MakePolymorphicMatcher(
- internal::StrEqualityMatcher<std::string>(str, true, true));
+ internal::StrEqualityMatcher<std::string>(std::string(str), true, true));
}
// Matches a string not equal to str.
-inline PolymorphicMatcher<internal::StrEqualityMatcher<std::string> > StrNe(
- const std::string& str) {
+template <typename T = std::string>
+PolymorphicMatcher<internal::StrEqualityMatcher<std::string> > StrNe(
+ const internal::StringLike<T>& str) {
return MakePolymorphicMatcher(
- internal::StrEqualityMatcher<std::string>(str, false, true));
+ internal::StrEqualityMatcher<std::string>(std::string(str), false, true));
}
// Matches a string equal to str, ignoring case.
-inline PolymorphicMatcher<internal::StrEqualityMatcher<std::string> > StrCaseEq(
- const std::string& str) {
+template <typename T = std::string>
+PolymorphicMatcher<internal::StrEqualityMatcher<std::string> > StrCaseEq(
+ const internal::StringLike<T>& str) {
return MakePolymorphicMatcher(
- internal::StrEqualityMatcher<std::string>(str, true, false));
+ internal::StrEqualityMatcher<std::string>(std::string(str), true, false));
}
// Matches a string not equal to str, ignoring case.
-inline PolymorphicMatcher<internal::StrEqualityMatcher<std::string> > StrCaseNe(
- const std::string& str) {
- return MakePolymorphicMatcher(
- internal::StrEqualityMatcher<std::string>(str, false, false));
+template <typename T = std::string>
+PolymorphicMatcher<internal::StrEqualityMatcher<std::string> > StrCaseNe(
+ const internal::StringLike<T>& str) {
+ return MakePolymorphicMatcher(internal::StrEqualityMatcher<std::string>(
+ std::string(str), false, false));
}
// Creates a matcher that matches any string, std::string, or C string
// that contains the given substring.
-inline PolymorphicMatcher<internal::HasSubstrMatcher<std::string> > HasSubstr(
- const std::string& substring) {
+template <typename T = std::string>
+PolymorphicMatcher<internal::HasSubstrMatcher<std::string> > HasSubstr(
+ const internal::StringLike<T>& substring) {
return MakePolymorphicMatcher(
- internal::HasSubstrMatcher<std::string>(substring));
+ internal::HasSubstrMatcher<std::string>(std::string(substring)));
}
// Matches a string that starts with 'prefix' (case-sensitive).
-inline PolymorphicMatcher<internal::StartsWithMatcher<std::string> > StartsWith(
- const std::string& prefix) {
+template <typename T = std::string>
+PolymorphicMatcher<internal::StartsWithMatcher<std::string> > StartsWith(
+ const internal::StringLike<T>& prefix) {
return MakePolymorphicMatcher(
- internal::StartsWithMatcher<std::string>(prefix));
+ internal::StartsWithMatcher<std::string>(std::string(prefix)));
}
// Matches a string that ends with 'suffix' (case-sensitive).
-inline PolymorphicMatcher<internal::EndsWithMatcher<std::string> > EndsWith(
- const std::string& suffix) {
- return MakePolymorphicMatcher(internal::EndsWithMatcher<std::string>(suffix));
+template <typename T = std::string>
+PolymorphicMatcher<internal::EndsWithMatcher<std::string> > EndsWith(
+ const internal::StringLike<T>& suffix) {
+ return MakePolymorphicMatcher(
+ internal::EndsWithMatcher<std::string>(std::string(suffix)));
}
-#if GTEST_HAS_GLOBAL_WSTRING || GTEST_HAS_STD_WSTRING
+#if GTEST_HAS_STD_WSTRING
// Wide string matchers.
// Matches a string equal to str.
@@ -6124,7 +7262,7 @@
internal::EndsWithMatcher<std::wstring>(suffix));
}
-#endif // GTEST_HAS_GLOBAL_WSTRING || GTEST_HAS_STD_WSTRING
+#endif // GTEST_HAS_STD_WSTRING
// Creates a polymorphic matcher that matches a 2-tuple where the
// first field == the second field.
@@ -6246,14 +7384,10 @@
// values that are included in one container but not the other. (Duplicate
// values and order differences are not explained.)
template <typename Container>
-inline PolymorphicMatcher<internal::ContainerEqMatcher< // NOLINT
- GTEST_REMOVE_CONST_(Container)> >
- ContainerEq(const Container& rhs) {
- // This following line is for working around a bug in MSVC 8.0,
- // which causes Container to be a const type sometimes.
- typedef GTEST_REMOVE_CONST_(Container) RawContainer;
- return MakePolymorphicMatcher(
- internal::ContainerEqMatcher<RawContainer>(rhs));
+inline PolymorphicMatcher<internal::ContainerEqMatcher<
+ typename std::remove_const<Container>::type>>
+ContainerEq(const Container& rhs) {
+ return MakePolymorphicMatcher(internal::ContainerEqMatcher<Container>(rhs));
}
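+
+// Example usage of ContainerEq (an illustrative sketch):
+//
+//   std::vector<int> actual = {1, 2, 3};
+//   std::vector<int> expected = {1, 2, 3};
+//   EXPECT_THAT(actual, ContainerEq(expected));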
// Returns a matcher that matches a container that, when sorted using
@@ -6284,14 +7418,10 @@
// LHS container and the RHS container respectively.
template <typename TupleMatcher, typename Container>
inline internal::PointwiseMatcher<TupleMatcher,
- GTEST_REMOVE_CONST_(Container)>
+ typename std::remove_const<Container>::type>
Pointwise(const TupleMatcher& tuple_matcher, const Container& rhs) {
- // This following line is for working around a bug in MSVC 8.0,
- // which causes Container to be a const type sometimes (e.g. when
- // rhs is a const int[])..
- typedef GTEST_REMOVE_CONST_(Container) RawContainer;
- return internal::PointwiseMatcher<TupleMatcher, RawContainer>(
- tuple_matcher, rhs);
+ return internal::PointwiseMatcher<TupleMatcher, Container>(tuple_matcher,
+ rhs);
}
@@ -6317,18 +7447,14 @@
template <typename Tuple2Matcher, typename RhsContainer>
inline internal::UnorderedElementsAreArrayMatcher<
typename internal::BoundSecondMatcher<
- Tuple2Matcher, typename internal::StlContainerView<GTEST_REMOVE_CONST_(
- RhsContainer)>::type::value_type> >
+ Tuple2Matcher,
+ typename internal::StlContainerView<
+ typename std::remove_const<RhsContainer>::type>::type::value_type>>
UnorderedPointwise(const Tuple2Matcher& tuple2_matcher,
const RhsContainer& rhs_container) {
- // This following line is for working around a bug in MSVC 8.0,
- // which causes RhsContainer to be a const type sometimes (e.g. when
- // rhs_container is a const int[]).
- typedef GTEST_REMOVE_CONST_(RhsContainer) RawRhsContainer;
-
  // RhsView allows the same code to handle RhsContainer being an
  // STL-style container or a native C-style array.
- typedef typename internal::StlContainerView<RawRhsContainer> RhsView;
+ typedef typename internal::StlContainerView<RhsContainer> RhsView;
typedef typename RhsView::type RhsStlContainer;
typedef typename RhsStlContainer::value_type Second;
const RhsStlContainer& rhs_stl_container =
@@ -6550,6 +7676,35 @@
first_matcher, second_matcher);
}
+namespace no_adl {
+// FieldsAre(matchers...) matches piecewise the fields of compatible structs.
+// These include those that support `get<I>(obj)`, and, when structured
+// bindings are enabled, any class that supports them.
+// In particular, `std::tuple`, `std::pair`, `std::array` and aggregate types.
+template <typename... M>
+internal::FieldsAreMatcher<typename std::decay<M>::type...> FieldsAre(
+ M&&... matchers) {
+ return internal::FieldsAreMatcher<typename std::decay<M>::type...>(
+ std::forward<M>(matchers)...);
+}
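+
+// Example usage of FieldsAre (an illustrative sketch):
+//
+//   std::pair<int, std::string> p(5, "foo");
+//   EXPECT_THAT(p, FieldsAre(Ge(1), "foo"));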
+
+// Creates a matcher that matches a pointer (raw or smart) that matches
+// inner_matcher.
+template <typename InnerMatcher>
+inline internal::PointerMatcher<InnerMatcher> Pointer(
+ const InnerMatcher& inner_matcher) {
+ return internal::PointerMatcher<InnerMatcher>(inner_matcher);
+}
+
+// Creates a matcher that matches an object that has an address that matches
+// inner_matcher.
+template <typename InnerMatcher>
+inline internal::AddressMatcher<InnerMatcher> Address(
+ const InnerMatcher& inner_matcher) {
+ return internal::AddressMatcher<InnerMatcher>(inner_matcher);
+}
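+
+// Example usage of Pointer and Address (an illustrative sketch):
+//
+//   int n = 5;
+//   EXPECT_THAT(&n, Pointer(Eq(&n)));  // matches against the raw pointer
+//   EXPECT_THAT(n, Address(Eq(&n)));   // matches against the object's address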
+} // namespace no_adl
+
// Returns a predicate that is satisfied by anything that matches the
// given matcher.
template <typename M>
@@ -6557,7 +7712,7 @@
return internal::MatcherAsPredicate<M>(matcher);
}
-// Returns true iff the value matches the matcher.
+// Returns true if and only if the value matches the matcher.
template <typename T, typename M>
inline bool Value(const T& value, M matcher) {
return testing::Matches(matcher)(value);
@@ -6734,7 +7889,7 @@
// and is printable using 'PrintToString'. It is compatible with
// std::optional/std::experimental::optional.
// Note that to compare an optional type variable against nullopt you should
-// use Eq(nullopt) and not Optional(Eq(nullopt)). The latter implies that the
+// use Eq(nullopt) and not Eq(Optional(nullopt)). The latter implies that the
// optional value contains an optional itself.
template <typename ValueMatcher>
inline internal::OptionalMatcher<ValueMatcher> Optional(
@@ -6761,15 +7916,337 @@
internal::variant_matcher::VariantMatcher<T>(matcher));
}
+#if GTEST_HAS_EXCEPTIONS
+
+// Anything inside the `internal` namespace is internal to the implementation
+// and must not be used in user code!
+namespace internal {
+
+class WithWhatMatcherImpl {
+ public:
+ WithWhatMatcherImpl(Matcher<std::string> matcher)
+ : matcher_(std::move(matcher)) {}
+
+ void DescribeTo(std::ostream* os) const {
+ *os << "contains .what() that ";
+ matcher_.DescribeTo(os);
+ }
+
+ void DescribeNegationTo(std::ostream* os) const {
+ *os << "contains .what() that does not ";
+ matcher_.DescribeTo(os);
+ }
+
+ template <typename Err>
+ bool MatchAndExplain(const Err& err, MatchResultListener* listener) const {
+ *listener << "which contains .what() that ";
+ return matcher_.MatchAndExplain(err.what(), listener);
+ }
+
+ private:
+ const Matcher<std::string> matcher_;
+};
+
+inline PolymorphicMatcher<WithWhatMatcherImpl> WithWhat(
+ Matcher<std::string> m) {
+ return MakePolymorphicMatcher(WithWhatMatcherImpl(std::move(m)));
+}
+
+template <typename Err>
+class ExceptionMatcherImpl {
+ class NeverThrown {
+ public:
+ const char* what() const noexcept {
+ return "this exception should never be thrown";
+ }
+ };
+
+ // If the matchee raises an exception of a wrong type, we'd like to
+ // catch it and print its message and type. To do that, we add an additional
+ // catch clause:
+ //
+ // try { ... }
+ // catch (const Err&) { /* an expected exception */ }
+ // catch (const std::exception&) { /* exception of a wrong type */ }
+ //
+ // However, if the `Err` itself is `std::exception`, we'd end up with two
+ // identical `catch` clauses:
+ //
+ // try { ... }
+ // catch (const std::exception&) { /* an expected exception */ }
+ // catch (const std::exception&) { /* exception of a wrong type */ }
+ //
+ // This can cause a warning or an error in some compilers. To resolve
+ // the issue, we use a fake error type whenever `Err` is `std::exception`:
+ //
+ // try { ... }
+ // catch (const std::exception&) { /* an expected exception */ }
+ // catch (const NeverThrown&) { /* exception of a wrong type */ }
+ using DefaultExceptionType = typename std::conditional<
+ std::is_same<typename std::remove_cv<
+ typename std::remove_reference<Err>::type>::type,
+ std::exception>::value,
+ const NeverThrown&, const std::exception&>::type;
+
+ public:
+ ExceptionMatcherImpl(Matcher<const Err&> matcher)
+ : matcher_(std::move(matcher)) {}
+
+ void DescribeTo(std::ostream* os) const {
+ *os << "throws an exception which is a " << GetTypeName<Err>();
+ *os << " which ";
+ matcher_.DescribeTo(os);
+ }
+
+ void DescribeNegationTo(std::ostream* os) const {
+ *os << "throws an exception which is not a " << GetTypeName<Err>();
+ *os << " which ";
+ matcher_.DescribeNegationTo(os);
+ }
+
+ template <typename T>
+ bool MatchAndExplain(T&& x, MatchResultListener* listener) const {
+ try {
+ (void)(std::forward<T>(x)());
+ } catch (const Err& err) {
+ *listener << "throws an exception which is a " << GetTypeName<Err>();
+ *listener << " ";
+ return matcher_.MatchAndExplain(err, listener);
+ } catch (DefaultExceptionType err) {
+#if GTEST_HAS_RTTI
+ *listener << "throws an exception of type " << GetTypeName(typeid(err));
+ *listener << " ";
+#else
+ *listener << "throws an std::exception-derived type ";
+#endif
+ *listener << "with description \"" << err.what() << "\"";
+ return false;
+ } catch (...) {
+ *listener << "throws an exception of an unknown type";
+ return false;
+ }
+
+ *listener << "does not throw any exception";
+ return false;
+ }
+
+ private:
+ const Matcher<const Err&> matcher_;
+};
+
+} // namespace internal
+
+// Throws()
+// Throws(exceptionMatcher)
+// ThrowsMessage(messageMatcher)
+//
+// This matcher accepts a callable and verifies that when invoked, it throws
+// an exception with the given type and properties.
+//
+// Examples:
+//
+// EXPECT_THAT(
+// []() { throw std::runtime_error("message"); },
+// Throws<std::runtime_error>());
+//
+// EXPECT_THAT(
+// []() { throw std::runtime_error("message"); },
+// ThrowsMessage<std::runtime_error>(HasSubstr("message")));
+//
+// EXPECT_THAT(
+// []() { throw std::runtime_error("message"); },
+// Throws<std::runtime_error>(
+// Property(&std::runtime_error::what, HasSubstr("message"))));
+
+template <typename Err>
+PolymorphicMatcher<internal::ExceptionMatcherImpl<Err>> Throws() {
+ return MakePolymorphicMatcher(
+ internal::ExceptionMatcherImpl<Err>(A<const Err&>()));
+}
+
+template <typename Err, typename ExceptionMatcher>
+PolymorphicMatcher<internal::ExceptionMatcherImpl<Err>> Throws(
+ const ExceptionMatcher& exception_matcher) {
+  // Using matcher cast allows users to pass a matcher of a broader type.
+  // For example, a user may want to pass Matcher<std::exception>
+  // to Throws<std::runtime_error>, or Matcher<int64> to Throws<int32>.
+ return MakePolymorphicMatcher(internal::ExceptionMatcherImpl<Err>(
+ SafeMatcherCast<const Err&>(exception_matcher)));
+}
+
+template <typename Err, typename MessageMatcher>
+PolymorphicMatcher<internal::ExceptionMatcherImpl<Err>> ThrowsMessage(
+ MessageMatcher&& message_matcher) {
+ static_assert(std::is_base_of<std::exception, Err>::value,
+ "expected an std::exception-derived type");
+ return Throws<Err>(internal::WithWhat(
+ MatcherCast<std::string>(std::forward<MessageMatcher>(message_matcher))));
+}
+
+#endif // GTEST_HAS_EXCEPTIONS
+
// These macros allow using matchers to check values in Google Test
// tests. ASSERT_THAT(value, matcher) and EXPECT_THAT(value, matcher)
-// succeed iff the value matches the matcher. If the assertion fails,
-// the value and the description of the matcher will be printed.
+// succeed if and only if the value matches the matcher. If the assertion
+// fails, the value and the description of the matcher will be printed.
#define ASSERT_THAT(value, matcher) ASSERT_PRED_FORMAT1(\
::testing::internal::MakePredicateFormatterFromMatcher(matcher), value)
#define EXPECT_THAT(value, matcher) EXPECT_PRED_FORMAT1(\
::testing::internal::MakePredicateFormatterFromMatcher(matcher), value)
+// The MATCHER* macros themselves are listed below.
+#define MATCHER(name, description) \
+ class name##Matcher \
+ : public ::testing::internal::MatcherBaseImpl<name##Matcher> { \
+ public: \
+ template <typename arg_type> \
+ class gmock_Impl : public ::testing::MatcherInterface<const arg_type&> { \
+ public: \
+ gmock_Impl() {} \
+ bool MatchAndExplain( \
+ const arg_type& arg, \
+ ::testing::MatchResultListener* result_listener) const override; \
+ void DescribeTo(::std::ostream* gmock_os) const override { \
+ *gmock_os << FormatDescription(false); \
+ } \
+ void DescribeNegationTo(::std::ostream* gmock_os) const override { \
+ *gmock_os << FormatDescription(true); \
+ } \
+ \
+ private: \
+ ::std::string FormatDescription(bool negation) const { \
+ ::std::string gmock_description = (description); \
+ if (!gmock_description.empty()) { \
+ return gmock_description; \
+ } \
+ return ::testing::internal::FormatMatcherDescription(negation, #name, \
+ {}); \
+ } \
+ }; \
+ }; \
+ GTEST_ATTRIBUTE_UNUSED_ inline name##Matcher name() { return {}; } \
+ template <typename arg_type> \
+ bool name##Matcher::gmock_Impl<arg_type>::MatchAndExplain( \
+ const arg_type& arg, \
+ ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_) \
+ const
+
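+// Example of a matcher defined with MATCHER (an illustrative sketch; IsEven
+// is a hypothetical matcher name):
+//
+//   MATCHER(IsEven, "") { return (arg % 2) == 0; }
+//
+//   EXPECT_THAT(4, IsEven());
+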
+#define MATCHER_P(name, p0, description) \
+ GMOCK_INTERNAL_MATCHER(name, name##MatcherP, description, (p0))
+#define MATCHER_P2(name, p0, p1, description) \
+ GMOCK_INTERNAL_MATCHER(name, name##MatcherP2, description, (p0, p1))
+#define MATCHER_P3(name, p0, p1, p2, description) \
+ GMOCK_INTERNAL_MATCHER(name, name##MatcherP3, description, (p0, p1, p2))
+#define MATCHER_P4(name, p0, p1, p2, p3, description) \
+ GMOCK_INTERNAL_MATCHER(name, name##MatcherP4, description, (p0, p1, p2, p3))
+#define MATCHER_P5(name, p0, p1, p2, p3, p4, description) \
+ GMOCK_INTERNAL_MATCHER(name, name##MatcherP5, description, \
+ (p0, p1, p2, p3, p4))
+#define MATCHER_P6(name, p0, p1, p2, p3, p4, p5, description) \
+ GMOCK_INTERNAL_MATCHER(name, name##MatcherP6, description, \
+ (p0, p1, p2, p3, p4, p5))
+#define MATCHER_P7(name, p0, p1, p2, p3, p4, p5, p6, description) \
+ GMOCK_INTERNAL_MATCHER(name, name##MatcherP7, description, \
+ (p0, p1, p2, p3, p4, p5, p6))
+#define MATCHER_P8(name, p0, p1, p2, p3, p4, p5, p6, p7, description) \
+ GMOCK_INTERNAL_MATCHER(name, name##MatcherP8, description, \
+ (p0, p1, p2, p3, p4, p5, p6, p7))
+#define MATCHER_P9(name, p0, p1, p2, p3, p4, p5, p6, p7, p8, description) \
+ GMOCK_INTERNAL_MATCHER(name, name##MatcherP9, description, \
+ (p0, p1, p2, p3, p4, p5, p6, p7, p8))
+#define MATCHER_P10(name, p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, description) \
+ GMOCK_INTERNAL_MATCHER(name, name##MatcherP10, description, \
+ (p0, p1, p2, p3, p4, p5, p6, p7, p8, p9))
+
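+// Example of a parameterized matcher defined with MATCHER_P (an illustrative
+// sketch; IsDivisibleBy is a hypothetical matcher name):
+//
+//   MATCHER_P(IsDivisibleBy, n, "") { return (arg % n) == 0; }
+//
+//   EXPECT_THAT(9, IsDivisibleBy(3));
+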
+#define GMOCK_INTERNAL_MATCHER(name, full_name, description, args) \
+ template <GMOCK_INTERNAL_MATCHER_TEMPLATE_PARAMS(args)> \
+ class full_name : public ::testing::internal::MatcherBaseImpl< \
+ full_name<GMOCK_INTERNAL_MATCHER_TYPE_PARAMS(args)>> { \
+ public: \
+ using full_name::MatcherBaseImpl::MatcherBaseImpl; \
+ template <typename arg_type> \
+ class gmock_Impl : public ::testing::MatcherInterface<const arg_type&> { \
+ public: \
+ explicit gmock_Impl(GMOCK_INTERNAL_MATCHER_FUNCTION_ARGS(args)) \
+ : GMOCK_INTERNAL_MATCHER_FORWARD_ARGS(args) {} \
+ bool MatchAndExplain( \
+ const arg_type& arg, \
+ ::testing::MatchResultListener* result_listener) const override; \
+ void DescribeTo(::std::ostream* gmock_os) const override { \
+ *gmock_os << FormatDescription(false); \
+ } \
+ void DescribeNegationTo(::std::ostream* gmock_os) const override { \
+ *gmock_os << FormatDescription(true); \
+ } \
+ GMOCK_INTERNAL_MATCHER_MEMBERS(args) \
+ \
+ private: \
+ ::std::string FormatDescription(bool negation) const { \
+ ::std::string gmock_description = (description); \
+ if (!gmock_description.empty()) { \
+ return gmock_description; \
+ } \
+ return ::testing::internal::FormatMatcherDescription( \
+ negation, #name, \
+ ::testing::internal::UniversalTersePrintTupleFieldsToStrings( \
+ ::std::tuple<GMOCK_INTERNAL_MATCHER_TYPE_PARAMS(args)>( \
+ GMOCK_INTERNAL_MATCHER_MEMBERS_USAGE(args)))); \
+ } \
+ }; \
+ }; \
+ template <GMOCK_INTERNAL_MATCHER_TEMPLATE_PARAMS(args)> \
+ inline full_name<GMOCK_INTERNAL_MATCHER_TYPE_PARAMS(args)> name( \
+ GMOCK_INTERNAL_MATCHER_FUNCTION_ARGS(args)) { \
+ return full_name<GMOCK_INTERNAL_MATCHER_TYPE_PARAMS(args)>( \
+ GMOCK_INTERNAL_MATCHER_ARGS_USAGE(args)); \
+ } \
+ template <GMOCK_INTERNAL_MATCHER_TEMPLATE_PARAMS(args)> \
+ template <typename arg_type> \
+ bool full_name<GMOCK_INTERNAL_MATCHER_TYPE_PARAMS(args)>::gmock_Impl< \
+ arg_type>::MatchAndExplain(const arg_type& arg, \
+ ::testing::MatchResultListener* \
+ result_listener GTEST_ATTRIBUTE_UNUSED_) \
+ const
+
+#define GMOCK_INTERNAL_MATCHER_TEMPLATE_PARAMS(args) \
+ GMOCK_PP_TAIL( \
+ GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_MATCHER_TEMPLATE_PARAM, , args))
+#define GMOCK_INTERNAL_MATCHER_TEMPLATE_PARAM(i_unused, data_unused, arg) \
+ , typename arg##_type
+
+#define GMOCK_INTERNAL_MATCHER_TYPE_PARAMS(args) \
+ GMOCK_PP_TAIL(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_MATCHER_TYPE_PARAM, , args))
+#define GMOCK_INTERNAL_MATCHER_TYPE_PARAM(i_unused, data_unused, arg) \
+ , arg##_type
+
+#define GMOCK_INTERNAL_MATCHER_FUNCTION_ARGS(args) \
+ GMOCK_PP_TAIL(dummy_first GMOCK_PP_FOR_EACH( \
+ GMOCK_INTERNAL_MATCHER_FUNCTION_ARG, , args))
+#define GMOCK_INTERNAL_MATCHER_FUNCTION_ARG(i, data_unused, arg) \
+ , arg##_type gmock_p##i
+
+#define GMOCK_INTERNAL_MATCHER_FORWARD_ARGS(args) \
+ GMOCK_PP_TAIL(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_MATCHER_FORWARD_ARG, , args))
+#define GMOCK_INTERNAL_MATCHER_FORWARD_ARG(i, data_unused, arg) \
+ , arg(::std::forward<arg##_type>(gmock_p##i))
+
+#define GMOCK_INTERNAL_MATCHER_MEMBERS(args) \
+ GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_MATCHER_MEMBER, , args)
+#define GMOCK_INTERNAL_MATCHER_MEMBER(i_unused, data_unused, arg) \
+ const arg##_type arg;
+
+#define GMOCK_INTERNAL_MATCHER_MEMBERS_USAGE(args) \
+ GMOCK_PP_TAIL(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_MATCHER_MEMBER_USAGE, , args))
+#define GMOCK_INTERNAL_MATCHER_MEMBER_USAGE(i_unused, data_unused, arg) , arg
+
+#define GMOCK_INTERNAL_MATCHER_ARGS_USAGE(args) \
+ GMOCK_PP_TAIL(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_MATCHER_ARG_USAGE, , args))
+#define GMOCK_INTERNAL_MATCHER_ARG_USAGE(i, data_unused, arg_unused) \
+ , gmock_p##i
+
+// To prevent ADL on certain functions, we put them in a separate namespace.
+using namespace no_adl; // NOLINT
+
} // namespace testing
GTEST_DISABLE_MSC_WARNINGS_POP_() // 4251 5046
@@ -6810,11 +8287,11 @@
//
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_MATCHERS_H_
-#define GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_MATCHERS_H_
-#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_MATCHERS_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_MATCHERS_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_MATCHERS_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_MATCHERS_H_
-#endif // GMOCK_INCLUDE_GMOCK_GMOCK_MATCHERS_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_MATCHERS_H_
#if GTEST_HAS_EXCEPTIONS
# include <stdexcept> // NOLINT
@@ -6847,6 +8324,14 @@
// Helper class for testing the Expectation class template.
class ExpectationTester;
+// Helper classes for implementing NiceMock, StrictMock, and NaggyMock.
+template <typename MockClass>
+class NiceMockImpl;
+template <typename MockClass>
+class StrictMockImpl;
+template <typename MockClass>
+class NaggyMockImpl;
+
// Protects the mock object registry (in class Mock), all function
// mockers, and all expectations.
//
@@ -7071,7 +8556,7 @@
return *this;
}
- // Returns true iff the given arguments match the matchers.
+ // Returns true if and only if the given arguments match the matchers.
bool Matches(const ArgumentTuple& args) const {
return TupleMatches(matchers_, args) && extra_matcher_.Matches(args);
}
@@ -7129,7 +8614,7 @@
GTEST_LOCK_EXCLUDED_(internal::g_gmock_mutex);
// Verifies all expectations on the given mock object and clears its
- // default actions and expectations. Returns true iff the
+ // default actions and expectations. Returns true if and only if the
// verification was successful.
static bool VerifyAndClear(void* mock_obj)
GTEST_LOCK_EXCLUDED_(internal::g_gmock_mutex);
@@ -7152,14 +8637,12 @@
template <typename F>
friend class internal::FunctionMocker;
- template <typename M>
- friend class NiceMock;
-
- template <typename M>
- friend class NaggyMock;
-
- template <typename M>
- friend class StrictMock;
+ template <typename MockClass>
+ friend class internal::NiceMockImpl;
+ template <typename MockClass>
+ friend class internal::NaggyMockImpl;
+ template <typename MockClass>
+ friend class internal::StrictMockImpl;
// Tells Google Mock to allow uninteresting calls on the given mock
// object.
@@ -7238,7 +8721,10 @@
public:
// Constructs a null object that doesn't reference any expectation.
Expectation();
-
+ Expectation(Expectation&&) = default;
+ Expectation(const Expectation&) = default;
+ Expectation& operator=(Expectation&&) = default;
+ Expectation& operator=(const Expectation&) = default;
~Expectation();
// This single-argument ctor must not be explicit, in order to support the
@@ -7255,7 +8741,8 @@
// The compiler-generated copy ctor and operator= work exactly as
// intended, so we don't need to define our own.
- // Returns true iff rhs references the same expectation as this object does.
+ // Returns true if and only if rhs references the same expectation as this
+ // object does.
bool operator==(const Expectation& rhs) const {
return expectation_base_ == rhs.expectation_base_;
}
@@ -7337,8 +8824,8 @@
  // The compiler-generated ctor and operator= work exactly as
// intended, so we don't need to define our own.
- // Returns true iff rhs contains the same set of Expectation objects
- // as this does.
+ // Returns true if and only if rhs contains the same set of Expectation
+ // objects as this does.
bool operator==(const ExpectationSet& rhs) const {
return expectations_ == rhs.expectations_;
}
@@ -7499,8 +8986,8 @@
// by the subclasses to implement the .Times() clause.
void SpecifyCardinality(const Cardinality& cardinality);
- // Returns true iff the user specified the cardinality explicitly
- // using a .Times().
+ // Returns true if and only if the user specified the cardinality
+ // explicitly using a .Times().
bool cardinality_specified() const { return cardinality_specified_; }
// Sets the cardinality of this expectation spec.
@@ -7516,7 +9003,7 @@
void RetireAllPreRequisites()
GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex);
- // Returns true iff this expectation is retired.
+ // Returns true if and only if this expectation is retired.
bool is_retired() const
GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex) {
g_gmock_mutex.AssertHeld();
@@ -7530,28 +9017,29 @@
retired_ = true;
}
- // Returns true iff this expectation is satisfied.
+ // Returns true if and only if this expectation is satisfied.
bool IsSatisfied() const
GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex) {
g_gmock_mutex.AssertHeld();
return cardinality().IsSatisfiedByCallCount(call_count_);
}
- // Returns true iff this expectation is saturated.
+ // Returns true if and only if this expectation is saturated.
bool IsSaturated() const
GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex) {
g_gmock_mutex.AssertHeld();
return cardinality().IsSaturatedByCallCount(call_count_);
}
- // Returns true iff this expectation is over-saturated.
+ // Returns true if and only if this expectation is over-saturated.
bool IsOverSaturated() const
GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex) {
g_gmock_mutex.AssertHeld();
return cardinality().IsOverSaturatedByCallCount(call_count_);
}
- // Returns true iff all pre-requisites of this expectation are satisfied.
+ // Returns true if and only if all pre-requisites of this expectation are
+ // satisfied.
bool AllPrerequisitesAreSatisfied() const
GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex);
@@ -7594,7 +9082,7 @@
const char* file_; // The file that contains the expectation.
int line_; // The line number of the expectation.
const std::string source_text_; // The EXPECT_CALL(...) source text.
- // True iff the cardinality is specified explicitly.
+ // True if and only if the cardinality is specified explicitly.
bool cardinality_specified_;
Cardinality cardinality_; // The cardinality of the expectation.
// The immediate pre-requisites (i.e. expectations that must be
@@ -7608,7 +9096,7 @@
// This group of fields are the current state of the expectation,
// and can change as the mock function is called.
int call_count_; // How many times this expectation has been invoked.
- bool retired_; // True iff this expectation has retired.
+ bool retired_; // True if and only if this expectation has retired.
UntypedActions untyped_actions_;
bool extra_matcher_specified_;
bool repeated_action_specified_; // True if a WillRepeatedly() was specified.
@@ -7616,8 +9104,6 @@
Clause last_clause_;
mutable bool action_count_checked_; // Under mutex_.
mutable Mutex mutex_; // Protects action_count_checked_.
-
- GTEST_DISALLOW_ASSIGN_(ExpectationBase);
}; // class ExpectationBase
// Implements an expectation for the given function type.
@@ -7826,14 +9312,15 @@
// statement finishes and when the current thread holds
// g_gmock_mutex.
- // Returns true iff this expectation matches the given arguments.
+ // Returns true if and only if this expectation matches the given arguments.
bool Matches(const ArgumentTuple& args) const
GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex) {
g_gmock_mutex.AssertHeld();
return TupleMatches(matchers_, args) && extra_matcher_.Matches(args);
}
- // Returns true iff this expectation should handle the given arguments.
+ // Returns true if and only if this expectation should handle the given
+ // arguments.
bool ShouldHandleArguments(const ArgumentTuple& args) const
GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex) {
g_gmock_mutex.AssertHeld();
@@ -8031,8 +9518,6 @@
internal::FunctionMocker<F>* const function_mocker_;
// The argument matchers specified in the spec.
ArgumentMatcherTuple matchers_;
-
- GTEST_DISALLOW_ASSIGN_(MockSpec);
}; // class MockSpec
// Wrapper type for generically holding an ordinary value or lvalue reference.
@@ -8059,8 +9544,8 @@
// Provides nondestructive access to the underlying value/reference.
// Always returns a const reference (more precisely,
- // const RemoveReference<T>&). The behavior of calling this after
- // calling Unwrap on the same object is unspecified.
+ // const std::add_lvalue_reference<T>::type). The behavior of calling this
+ // after calling Unwrap on the same object is unspecified.
const T& Peek() const {
return value_;
}
@@ -8086,12 +9571,6 @@
T* value_ptr_;
};
-// MSVC warns about using 'this' in base member initializer list, so
-// we need to temporarily disable the warning. We have to do it for
-// the entire class to suppress the warning, even though it's about
-// the constructor only.
-GTEST_DISABLE_MSC_WARNINGS_PUSH_(4355)
-
// C++ treats the void type specially. For example, you cannot define
// a void-typed variable or pass a void value to a function.
// ActionResultHolder<T> holds a value of type T, where T must be a
@@ -8393,9 +9872,8 @@
const OnCallSpec<F>* const spec = FindOnCallSpec(args);
if (spec == nullptr) {
- *os << (internal::type_equals<Result, void>::value ?
- "returning directly.\n" :
- "returning default value.\n");
+ *os << (std::is_void<Result>::value ? "returning directly.\n"
+ : "returning default value.\n");
} else {
*os << "taking default action specified at:\n"
<< FormatFileLocation(spec->file(), spec->line()) << "\n";
@@ -8523,18 +10001,87 @@
}
}; // class FunctionMocker
-GTEST_DISABLE_MSC_WARNINGS_POP_() // 4355
-
// Reports an uninteresting call (whose description is in msg) in the
// manner specified by 'reaction'.
void ReportUninterestingCall(CallReaction reaction, const std::string& msg);
} // namespace internal
-// A MockFunction<F> class has one mock method whose type is F. It is
-// useful when you just want your test code to emit some messages and
-// have Google Mock verify the right messages are sent (and perhaps at
-// the right times). For example, if you are exercising code:
+namespace internal {
+
+template <typename F>
+class MockFunction;
+
+template <typename R, typename... Args>
+class MockFunction<R(Args...)> {
+ public:
+ MockFunction(const MockFunction&) = delete;
+ MockFunction& operator=(const MockFunction&) = delete;
+
+ std::function<R(Args...)> AsStdFunction() {
+ return [this](Args... args) -> R {
+ return this->Call(std::forward<Args>(args)...);
+ };
+ }
+
+ // Implementation detail: the expansion of the MOCK_METHOD macro.
+ R Call(Args... args) {
+ mock_.SetOwnerAndName(this, "Call");
+ return mock_.Invoke(std::forward<Args>(args)...);
+ }
+
+ MockSpec<R(Args...)> gmock_Call(Matcher<Args>... m) {
+ mock_.RegisterOwner(this);
+ return mock_.With(std::move(m)...);
+ }
+
+ MockSpec<R(Args...)> gmock_Call(const WithoutMatchers&, R (*)(Args...)) {
+ return this->gmock_Call(::testing::A<Args>()...);
+ }
+
+ protected:
+ MockFunction() = default;
+ ~MockFunction() = default;
+
+ private:
+ FunctionMocker<R(Args...)> mock_;
+};
+
+/*
+The SignatureOf<F> struct is a meta-function returning the function signature
+corresponding to the provided F argument.
+
+It makes MockFunction easier to use by allowing it to accept more F arguments
+than just function signatures.
+
+Specializations provided here cover a signature type itself and any template
+that can be parameterized with a signature, including std::function and
+boost::function.
+*/
+
+template <typename F, typename = void>
+struct SignatureOf;
+
+template <typename R, typename... Args>
+struct SignatureOf<R(Args...)> {
+ using type = R(Args...);
+};
+
+template <template <typename> class C, typename F>
+struct SignatureOf<C<F>,
+ typename std::enable_if<std::is_function<F>::value>::type>
+ : SignatureOf<F> {};
+
+template <typename F>
+using SignatureOfT = typename SignatureOf<F>::type;
+
+} // namespace internal
+
+// A MockFunction<F> type has one mock method whose type is
+// internal::SignatureOfT<F>. It is useful when you just want your
+// test code to emit some messages and have Google Mock verify the
+// right messages are sent (and perhaps at the right times). For
+// example, if you are exercising code:
//
// Foo(1);
// Foo(2);
@@ -8568,49 +10115,34 @@
// Bar("a") is called by which call to Foo().
//
// MockFunction<F> can also be used to exercise code that accepts
-// std::function<F> callbacks. To do so, use AsStdFunction() method
-// to create std::function proxy forwarding to original object's Call.
-// Example:
+// std::function<internal::SignatureOfT<F>> callbacks. To do so, use
+// AsStdFunction() method to create std::function proxy forwarding to
+// original object's Call. Example:
//
// TEST(FooTest, RunsCallbackWithBarArgument) {
// MockFunction<int(string)> callback;
// EXPECT_CALL(callback, Call("bar")).WillOnce(Return(1));
// Foo(callback.AsStdFunction());
// }
+//
+// The internal::SignatureOfT<F> indirection allows the use of types other
+// than just function signature types. This is typically useful when
+// providing a mock for a predefined std::function type. Example:
+//
+// using FilterPredicate = std::function<bool(string)>;
+// void MyFilterAlgorithm(FilterPredicate predicate);
+//
+// TEST(FooTest, FilterPredicateAlwaysAccepts) {
+// MockFunction<FilterPredicate> predicateMock;
+// EXPECT_CALL(predicateMock, Call(_)).WillRepeatedly(Return(true));
+// MyFilterAlgorithm(predicateMock.AsStdFunction());
+// }
template <typename F>
-class MockFunction;
+class MockFunction : public internal::MockFunction<internal::SignatureOfT<F>> {
+ using Base = internal::MockFunction<internal::SignatureOfT<F>>;
-template <typename R, typename... Args>
-class MockFunction<R(Args...)> {
public:
- MockFunction() {}
- MockFunction(const MockFunction&) = delete;
- MockFunction& operator=(const MockFunction&) = delete;
-
- std::function<R(Args...)> AsStdFunction() {
- return [this](Args... args) -> R {
- return this->Call(std::forward<Args>(args)...);
- };
- }
-
- // Implementation detail: the expansion of the MOCK_METHOD macro.
- R Call(Args... args) {
- mock_.SetOwnerAndName(this, "Call");
- return mock_.Invoke(std::forward<Args>(args)...);
- }
-
- internal::MockSpec<R(Args...)> gmock_Call(Matcher<Args>... m) {
- mock_.RegisterOwner(this);
- return mock_.With(std::move(m)...);
- }
-
- internal::MockSpec<R(Args...)> gmock_Call(const internal::WithoutMatchers&,
- R (*)(Args...)) {
- return this->gmock_Call(::testing::A<Args>()...);
- }
-
- private:
- mutable internal::FunctionMocker<R(Args...)> mock_;
+ using Base::Base;
};
// The style guide prohibits "using" statements in a namespace scope
@@ -8719,61 +10251,28 @@
#define EXPECT_CALL(obj, call) \
GMOCK_ON_CALL_IMPL_(obj, InternalExpectedAt, call)
-#endif // GMOCK_INCLUDE_GMOCK_GMOCK_SPEC_BUILDERS_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_SPEC_BUILDERS_H_
namespace testing {
namespace internal {
-// Removes the given pointer; this is a helper for the expectation setter method
-// for parameterless matchers.
-//
-// We want to make sure that the user cannot set a parameterless expectation on
-// overloaded methods, including methods which are overloaded on const. Example:
-//
-// class MockClass {
-// MOCK_METHOD0(GetName, string&());
-// MOCK_CONST_METHOD0(GetName, const string&());
-// };
-//
-// TEST() {
-// // This should be an error, as it's not clear which overload is expected.
-// EXPECT_CALL(mock, GetName).WillOnce(ReturnRef(value));
-// }
-//
-// Here are the generated expectation-setter methods:
-//
-// class MockClass {
-// // Overload 1
-// MockSpec<string&()> gmock_GetName() { ... }
-// // Overload 2. Declared const so that the compiler will generate an
-// // error when trying to resolve between this and overload 4 in
-// // 'gmock_GetName(WithoutMatchers(), nullptr)'.
-// MockSpec<string&()> gmock_GetName(
-// const WithoutMatchers&, const Function<string&()>*) const {
-// // Removes const from this, calls overload 1
-// return AdjustConstness_(this)->gmock_GetName();
-// }
-//
-// // Overload 3
-// const string& gmock_GetName() const { ... }
-// // Overload 4
-// MockSpec<const string&()> gmock_GetName(
-// const WithoutMatchers&, const Function<const string&()>*) const {
-// // Does not remove const, calls overload 3
-// return AdjustConstness_const(this)->gmock_GetName();
-// }
-// }
-//
-template <typename MockType>
-const MockType* AdjustConstness_const(const MockType* mock) {
- return mock;
-}
+template <typename T>
+using identity_t = T;
-// Removes const from and returns the given pointer; this is a helper for the
-// expectation setter method for parameterless matchers.
-template <typename MockType>
-MockType* AdjustConstness_(const MockType* mock) {
- return const_cast<MockType*>(mock);
-}
+template <typename Pattern>
+struct ThisRefAdjuster {
+ template <typename T>
+ using AdjustT = typename std::conditional<
+ std::is_const<typename std::remove_reference<Pattern>::type>::value,
+ typename std::conditional<std::is_lvalue_reference<Pattern>::value,
+ const T&, const T&&>::type,
+ typename std::conditional<std::is_lvalue_reference<Pattern>::value, T&,
+ T&&>::type>::type;
+
+ template <typename MockType>
+ static AdjustT<MockType> Adjust(const MockType& mock) {
+ return static_cast<AdjustT<MockType>>(const_cast<MockType&>(mock));
+ }
+};
} // namespace internal
@@ -8783,965 +10282,8 @@
// line is just a trick for working around a bug in MSVC 8.0, which
// cannot handle it if we define FunctionMocker in ::testing.
using internal::FunctionMocker;
-
-// GMOCK_RESULT_(tn, F) expands to the result type of function type F.
-// We define this as a variadic macro in case F contains unprotected
-// commas (the same reason that we use variadic macros in other places
-// in this file).
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_RESULT_(tn, ...) \
- tn ::testing::internal::Function<__VA_ARGS__>::Result
-
-// The type of argument N of the given function type.
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_ARG_(tn, N, ...) \
- tn ::testing::internal::Function<__VA_ARGS__>::template Arg<N-1>::type
-
-// The matcher type for argument N of the given function type.
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_MATCHER_(tn, N, ...) \
- const ::testing::Matcher<GMOCK_ARG_(tn, N, __VA_ARGS__)>&
-
-// The variable for mocking the given method.
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_MOCKER_(arity, constness, Method) \
- GTEST_CONCAT_TOKEN_(gmock##constness##arity##_##Method##_, __LINE__)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD0_(tn, constness, ct, Method, ...) \
- static_assert(0 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- ) constness { \
- GMOCK_MOCKER_(0, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(0, constness, Method).Invoke(); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method() constness { \
- GMOCK_MOCKER_(0, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(0, constness, Method).With(); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(0, constness, \
- Method)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD1_(tn, constness, ct, Method, ...) \
- static_assert(1 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- GMOCK_ARG_(tn, 1, __VA_ARGS__) gmock_a1) constness { \
- GMOCK_MOCKER_(1, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(1, constness, \
- Method).Invoke(::std::forward<GMOCK_ARG_(tn, 1, \
- __VA_ARGS__)>(gmock_a1)); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method(GMOCK_MATCHER_(tn, 1, __VA_ARGS__) gmock_a1) constness { \
- GMOCK_MOCKER_(1, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(1, constness, Method).With(gmock_a1); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(::testing::A<GMOCK_ARG_(tn, 1, __VA_ARGS__)>()); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(1, constness, \
- Method)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD2_(tn, constness, ct, Method, ...) \
- static_assert(2 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- GMOCK_ARG_(tn, 1, __VA_ARGS__) gmock_a1, GMOCK_ARG_(tn, 2, \
- __VA_ARGS__) gmock_a2) constness { \
- GMOCK_MOCKER_(2, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(2, constness, \
- Method).Invoke(::std::forward<GMOCK_ARG_(tn, 1, \
- __VA_ARGS__)>(gmock_a1), \
- ::std::forward<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(gmock_a2)); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method(GMOCK_MATCHER_(tn, 1, __VA_ARGS__) gmock_a1, \
- GMOCK_MATCHER_(tn, 2, __VA_ARGS__) gmock_a2) constness { \
- GMOCK_MOCKER_(2, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(2, constness, Method).With(gmock_a1, gmock_a2); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(::testing::A<GMOCK_ARG_(tn, 1, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 2, __VA_ARGS__)>()); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(2, constness, \
- Method)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD3_(tn, constness, ct, Method, ...) \
- static_assert(3 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- GMOCK_ARG_(tn, 1, __VA_ARGS__) gmock_a1, GMOCK_ARG_(tn, 2, \
- __VA_ARGS__) gmock_a2, GMOCK_ARG_(tn, 3, \
- __VA_ARGS__) gmock_a3) constness { \
- GMOCK_MOCKER_(3, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(3, constness, \
- Method).Invoke(::std::forward<GMOCK_ARG_(tn, 1, \
- __VA_ARGS__)>(gmock_a1), \
- ::std::forward<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(gmock_a2), \
- ::std::forward<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(gmock_a3)); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method(GMOCK_MATCHER_(tn, 1, __VA_ARGS__) gmock_a1, \
- GMOCK_MATCHER_(tn, 2, __VA_ARGS__) gmock_a2, \
- GMOCK_MATCHER_(tn, 3, __VA_ARGS__) gmock_a3) constness { \
- GMOCK_MOCKER_(3, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(3, constness, Method).With(gmock_a1, gmock_a2, \
- gmock_a3); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(::testing::A<GMOCK_ARG_(tn, 1, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 3, __VA_ARGS__)>()); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(3, constness, \
- Method)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD4_(tn, constness, ct, Method, ...) \
- static_assert(4 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- GMOCK_ARG_(tn, 1, __VA_ARGS__) gmock_a1, GMOCK_ARG_(tn, 2, \
- __VA_ARGS__) gmock_a2, GMOCK_ARG_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_ARG_(tn, 4, __VA_ARGS__) gmock_a4) constness { \
- GMOCK_MOCKER_(4, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(4, constness, \
- Method).Invoke(::std::forward<GMOCK_ARG_(tn, 1, \
- __VA_ARGS__)>(gmock_a1), \
- ::std::forward<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(gmock_a2), \
- ::std::forward<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(gmock_a3), \
- ::std::forward<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(gmock_a4)); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method(GMOCK_MATCHER_(tn, 1, __VA_ARGS__) gmock_a1, \
- GMOCK_MATCHER_(tn, 2, __VA_ARGS__) gmock_a2, \
- GMOCK_MATCHER_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_MATCHER_(tn, 4, __VA_ARGS__) gmock_a4) constness { \
- GMOCK_MOCKER_(4, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(4, constness, Method).With(gmock_a1, gmock_a2, \
- gmock_a3, gmock_a4); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(::testing::A<GMOCK_ARG_(tn, 1, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 4, __VA_ARGS__)>()); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(4, constness, \
- Method)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD5_(tn, constness, ct, Method, ...) \
- static_assert(5 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- GMOCK_ARG_(tn, 1, __VA_ARGS__) gmock_a1, GMOCK_ARG_(tn, 2, \
- __VA_ARGS__) gmock_a2, GMOCK_ARG_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_ARG_(tn, 4, __VA_ARGS__) gmock_a4, GMOCK_ARG_(tn, 5, \
- __VA_ARGS__) gmock_a5) constness { \
- GMOCK_MOCKER_(5, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(5, constness, \
- Method).Invoke(::std::forward<GMOCK_ARG_(tn, 1, \
- __VA_ARGS__)>(gmock_a1), \
- ::std::forward<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(gmock_a2), \
- ::std::forward<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(gmock_a3), \
- ::std::forward<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(gmock_a4), \
- ::std::forward<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(gmock_a5)); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method(GMOCK_MATCHER_(tn, 1, __VA_ARGS__) gmock_a1, \
- GMOCK_MATCHER_(tn, 2, __VA_ARGS__) gmock_a2, \
- GMOCK_MATCHER_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_MATCHER_(tn, 4, __VA_ARGS__) gmock_a4, \
- GMOCK_MATCHER_(tn, 5, __VA_ARGS__) gmock_a5) constness { \
- GMOCK_MOCKER_(5, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(5, constness, Method).With(gmock_a1, gmock_a2, \
- gmock_a3, gmock_a4, gmock_a5); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(::testing::A<GMOCK_ARG_(tn, 1, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 5, __VA_ARGS__)>()); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(5, constness, \
- Method)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD6_(tn, constness, ct, Method, ...) \
- static_assert(6 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- GMOCK_ARG_(tn, 1, __VA_ARGS__) gmock_a1, GMOCK_ARG_(tn, 2, \
- __VA_ARGS__) gmock_a2, GMOCK_ARG_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_ARG_(tn, 4, __VA_ARGS__) gmock_a4, GMOCK_ARG_(tn, 5, \
- __VA_ARGS__) gmock_a5, GMOCK_ARG_(tn, 6, \
- __VA_ARGS__) gmock_a6) constness { \
- GMOCK_MOCKER_(6, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(6, constness, \
- Method).Invoke(::std::forward<GMOCK_ARG_(tn, 1, \
- __VA_ARGS__)>(gmock_a1), \
- ::std::forward<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(gmock_a2), \
- ::std::forward<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(gmock_a3), \
- ::std::forward<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(gmock_a4), \
- ::std::forward<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(gmock_a5), \
- ::std::forward<GMOCK_ARG_(tn, 6, __VA_ARGS__)>(gmock_a6)); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method(GMOCK_MATCHER_(tn, 1, __VA_ARGS__) gmock_a1, \
- GMOCK_MATCHER_(tn, 2, __VA_ARGS__) gmock_a2, \
- GMOCK_MATCHER_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_MATCHER_(tn, 4, __VA_ARGS__) gmock_a4, \
- GMOCK_MATCHER_(tn, 5, __VA_ARGS__) gmock_a5, \
- GMOCK_MATCHER_(tn, 6, __VA_ARGS__) gmock_a6) constness { \
- GMOCK_MOCKER_(6, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(6, constness, Method).With(gmock_a1, gmock_a2, \
- gmock_a3, gmock_a4, gmock_a5, gmock_a6); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(::testing::A<GMOCK_ARG_(tn, 1, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 6, __VA_ARGS__)>()); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(6, constness, \
- Method)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD7_(tn, constness, ct, Method, ...) \
- static_assert(7 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- GMOCK_ARG_(tn, 1, __VA_ARGS__) gmock_a1, GMOCK_ARG_(tn, 2, \
- __VA_ARGS__) gmock_a2, GMOCK_ARG_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_ARG_(tn, 4, __VA_ARGS__) gmock_a4, GMOCK_ARG_(tn, 5, \
- __VA_ARGS__) gmock_a5, GMOCK_ARG_(tn, 6, __VA_ARGS__) gmock_a6, \
- GMOCK_ARG_(tn, 7, __VA_ARGS__) gmock_a7) constness { \
- GMOCK_MOCKER_(7, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(7, constness, \
- Method).Invoke(::std::forward<GMOCK_ARG_(tn, 1, \
- __VA_ARGS__)>(gmock_a1), \
- ::std::forward<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(gmock_a2), \
- ::std::forward<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(gmock_a3), \
- ::std::forward<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(gmock_a4), \
- ::std::forward<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(gmock_a5), \
- ::std::forward<GMOCK_ARG_(tn, 6, __VA_ARGS__)>(gmock_a6), \
- ::std::forward<GMOCK_ARG_(tn, 7, __VA_ARGS__)>(gmock_a7)); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method(GMOCK_MATCHER_(tn, 1, __VA_ARGS__) gmock_a1, \
- GMOCK_MATCHER_(tn, 2, __VA_ARGS__) gmock_a2, \
- GMOCK_MATCHER_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_MATCHER_(tn, 4, __VA_ARGS__) gmock_a4, \
- GMOCK_MATCHER_(tn, 5, __VA_ARGS__) gmock_a5, \
- GMOCK_MATCHER_(tn, 6, __VA_ARGS__) gmock_a6, \
- GMOCK_MATCHER_(tn, 7, __VA_ARGS__) gmock_a7) constness { \
- GMOCK_MOCKER_(7, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(7, constness, Method).With(gmock_a1, gmock_a2, \
- gmock_a3, gmock_a4, gmock_a5, gmock_a6, gmock_a7); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(::testing::A<GMOCK_ARG_(tn, 1, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 6, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 7, __VA_ARGS__)>()); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(7, constness, \
- Method)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD8_(tn, constness, ct, Method, ...) \
- static_assert(8 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- GMOCK_ARG_(tn, 1, __VA_ARGS__) gmock_a1, GMOCK_ARG_(tn, 2, \
- __VA_ARGS__) gmock_a2, GMOCK_ARG_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_ARG_(tn, 4, __VA_ARGS__) gmock_a4, GMOCK_ARG_(tn, 5, \
- __VA_ARGS__) gmock_a5, GMOCK_ARG_(tn, 6, __VA_ARGS__) gmock_a6, \
- GMOCK_ARG_(tn, 7, __VA_ARGS__) gmock_a7, GMOCK_ARG_(tn, 8, \
- __VA_ARGS__) gmock_a8) constness { \
- GMOCK_MOCKER_(8, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(8, constness, \
- Method).Invoke(::std::forward<GMOCK_ARG_(tn, 1, \
- __VA_ARGS__)>(gmock_a1), \
- ::std::forward<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(gmock_a2), \
- ::std::forward<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(gmock_a3), \
- ::std::forward<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(gmock_a4), \
- ::std::forward<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(gmock_a5), \
- ::std::forward<GMOCK_ARG_(tn, 6, __VA_ARGS__)>(gmock_a6), \
- ::std::forward<GMOCK_ARG_(tn, 7, __VA_ARGS__)>(gmock_a7), \
- ::std::forward<GMOCK_ARG_(tn, 8, __VA_ARGS__)>(gmock_a8)); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method(GMOCK_MATCHER_(tn, 1, __VA_ARGS__) gmock_a1, \
- GMOCK_MATCHER_(tn, 2, __VA_ARGS__) gmock_a2, \
- GMOCK_MATCHER_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_MATCHER_(tn, 4, __VA_ARGS__) gmock_a4, \
- GMOCK_MATCHER_(tn, 5, __VA_ARGS__) gmock_a5, \
- GMOCK_MATCHER_(tn, 6, __VA_ARGS__) gmock_a6, \
- GMOCK_MATCHER_(tn, 7, __VA_ARGS__) gmock_a7, \
- GMOCK_MATCHER_(tn, 8, __VA_ARGS__) gmock_a8) constness { \
- GMOCK_MOCKER_(8, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(8, constness, Method).With(gmock_a1, gmock_a2, \
- gmock_a3, gmock_a4, gmock_a5, gmock_a6, gmock_a7, gmock_a8); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(::testing::A<GMOCK_ARG_(tn, 1, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 6, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 7, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 8, __VA_ARGS__)>()); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(8, constness, \
- Method)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD9_(tn, constness, ct, Method, ...) \
- static_assert(9 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- GMOCK_ARG_(tn, 1, __VA_ARGS__) gmock_a1, GMOCK_ARG_(tn, 2, \
- __VA_ARGS__) gmock_a2, GMOCK_ARG_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_ARG_(tn, 4, __VA_ARGS__) gmock_a4, GMOCK_ARG_(tn, 5, \
- __VA_ARGS__) gmock_a5, GMOCK_ARG_(tn, 6, __VA_ARGS__) gmock_a6, \
- GMOCK_ARG_(tn, 7, __VA_ARGS__) gmock_a7, GMOCK_ARG_(tn, 8, \
- __VA_ARGS__) gmock_a8, GMOCK_ARG_(tn, 9, \
- __VA_ARGS__) gmock_a9) constness { \
- GMOCK_MOCKER_(9, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(9, constness, \
- Method).Invoke(::std::forward<GMOCK_ARG_(tn, 1, \
- __VA_ARGS__)>(gmock_a1), \
- ::std::forward<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(gmock_a2), \
- ::std::forward<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(gmock_a3), \
- ::std::forward<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(gmock_a4), \
- ::std::forward<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(gmock_a5), \
- ::std::forward<GMOCK_ARG_(tn, 6, __VA_ARGS__)>(gmock_a6), \
- ::std::forward<GMOCK_ARG_(tn, 7, __VA_ARGS__)>(gmock_a7), \
- ::std::forward<GMOCK_ARG_(tn, 8, __VA_ARGS__)>(gmock_a8), \
- ::std::forward<GMOCK_ARG_(tn, 9, __VA_ARGS__)>(gmock_a9)); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method(GMOCK_MATCHER_(tn, 1, __VA_ARGS__) gmock_a1, \
- GMOCK_MATCHER_(tn, 2, __VA_ARGS__) gmock_a2, \
- GMOCK_MATCHER_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_MATCHER_(tn, 4, __VA_ARGS__) gmock_a4, \
- GMOCK_MATCHER_(tn, 5, __VA_ARGS__) gmock_a5, \
- GMOCK_MATCHER_(tn, 6, __VA_ARGS__) gmock_a6, \
- GMOCK_MATCHER_(tn, 7, __VA_ARGS__) gmock_a7, \
- GMOCK_MATCHER_(tn, 8, __VA_ARGS__) gmock_a8, \
- GMOCK_MATCHER_(tn, 9, __VA_ARGS__) gmock_a9) constness { \
- GMOCK_MOCKER_(9, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(9, constness, Method).With(gmock_a1, gmock_a2, \
- gmock_a3, gmock_a4, gmock_a5, gmock_a6, gmock_a7, gmock_a8, \
- gmock_a9); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(::testing::A<GMOCK_ARG_(tn, 1, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 6, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 7, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 8, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 9, __VA_ARGS__)>()); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(9, constness, \
- Method)
-
-// INTERNAL IMPLEMENTATION - DON'T USE IN USER CODE!!!
-#define GMOCK_METHOD10_(tn, constness, ct, Method, ...) \
- static_assert(10 == \
- ::testing::internal::Function<__VA_ARGS__>::ArgumentCount, \
- "MOCK_METHOD<N> must match argument count.");\
- GMOCK_RESULT_(tn, __VA_ARGS__) ct Method( \
- GMOCK_ARG_(tn, 1, __VA_ARGS__) gmock_a1, GMOCK_ARG_(tn, 2, \
- __VA_ARGS__) gmock_a2, GMOCK_ARG_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_ARG_(tn, 4, __VA_ARGS__) gmock_a4, GMOCK_ARG_(tn, 5, \
- __VA_ARGS__) gmock_a5, GMOCK_ARG_(tn, 6, __VA_ARGS__) gmock_a6, \
- GMOCK_ARG_(tn, 7, __VA_ARGS__) gmock_a7, GMOCK_ARG_(tn, 8, \
- __VA_ARGS__) gmock_a8, GMOCK_ARG_(tn, 9, __VA_ARGS__) gmock_a9, \
- GMOCK_ARG_(tn, 10, __VA_ARGS__) gmock_a10) constness { \
- GMOCK_MOCKER_(10, constness, Method).SetOwnerAndName(this, #Method); \
- return GMOCK_MOCKER_(10, constness, \
- Method).Invoke(::std::forward<GMOCK_ARG_(tn, 1, \
- __VA_ARGS__)>(gmock_a1), \
- ::std::forward<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(gmock_a2), \
- ::std::forward<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(gmock_a3), \
- ::std::forward<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(gmock_a4), \
- ::std::forward<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(gmock_a5), \
- ::std::forward<GMOCK_ARG_(tn, 6, __VA_ARGS__)>(gmock_a6), \
- ::std::forward<GMOCK_ARG_(tn, 7, __VA_ARGS__)>(gmock_a7), \
- ::std::forward<GMOCK_ARG_(tn, 8, __VA_ARGS__)>(gmock_a8), \
- ::std::forward<GMOCK_ARG_(tn, 9, __VA_ARGS__)>(gmock_a9), \
- ::std::forward<GMOCK_ARG_(tn, 10, __VA_ARGS__)>(gmock_a10)); \
- } \
- ::testing::MockSpec<__VA_ARGS__> \
- gmock_##Method(GMOCK_MATCHER_(tn, 1, __VA_ARGS__) gmock_a1, \
- GMOCK_MATCHER_(tn, 2, __VA_ARGS__) gmock_a2, \
- GMOCK_MATCHER_(tn, 3, __VA_ARGS__) gmock_a3, \
- GMOCK_MATCHER_(tn, 4, __VA_ARGS__) gmock_a4, \
- GMOCK_MATCHER_(tn, 5, __VA_ARGS__) gmock_a5, \
- GMOCK_MATCHER_(tn, 6, __VA_ARGS__) gmock_a6, \
- GMOCK_MATCHER_(tn, 7, __VA_ARGS__) gmock_a7, \
- GMOCK_MATCHER_(tn, 8, __VA_ARGS__) gmock_a8, \
- GMOCK_MATCHER_(tn, 9, __VA_ARGS__) gmock_a9, \
- GMOCK_MATCHER_(tn, 10, \
- __VA_ARGS__) gmock_a10) constness { \
- GMOCK_MOCKER_(10, constness, Method).RegisterOwner(this); \
- return GMOCK_MOCKER_(10, constness, Method).With(gmock_a1, gmock_a2, \
- gmock_a3, gmock_a4, gmock_a5, gmock_a6, gmock_a7, gmock_a8, gmock_a9, \
- gmock_a10); \
- } \
- ::testing::MockSpec<__VA_ARGS__> gmock_##Method( \
- const ::testing::internal::WithoutMatchers&, \
- constness ::testing::internal::Function<__VA_ARGS__>* ) const { \
- return ::testing::internal::AdjustConstness_##constness(this)-> \
- gmock_##Method(::testing::A<GMOCK_ARG_(tn, 1, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 2, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 3, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 4, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 5, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 6, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 7, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 8, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 9, __VA_ARGS__)>(), \
- ::testing::A<GMOCK_ARG_(tn, 10, __VA_ARGS__)>()); \
- } \
- mutable ::testing::FunctionMocker<__VA_ARGS__> GMOCK_MOCKER_(10, constness, \
- Method)
-
-#define MOCK_METHOD0(m, ...) GMOCK_METHOD0_(, , , m, __VA_ARGS__)
-#define MOCK_METHOD1(m, ...) GMOCK_METHOD1_(, , , m, __VA_ARGS__)
-#define MOCK_METHOD2(m, ...) GMOCK_METHOD2_(, , , m, __VA_ARGS__)
-#define MOCK_METHOD3(m, ...) GMOCK_METHOD3_(, , , m, __VA_ARGS__)
-#define MOCK_METHOD4(m, ...) GMOCK_METHOD4_(, , , m, __VA_ARGS__)
-#define MOCK_METHOD5(m, ...) GMOCK_METHOD5_(, , , m, __VA_ARGS__)
-#define MOCK_METHOD6(m, ...) GMOCK_METHOD6_(, , , m, __VA_ARGS__)
-#define MOCK_METHOD7(m, ...) GMOCK_METHOD7_(, , , m, __VA_ARGS__)
-#define MOCK_METHOD8(m, ...) GMOCK_METHOD8_(, , , m, __VA_ARGS__)
-#define MOCK_METHOD9(m, ...) GMOCK_METHOD9_(, , , m, __VA_ARGS__)
-#define MOCK_METHOD10(m, ...) GMOCK_METHOD10_(, , , m, __VA_ARGS__)
-
-#define MOCK_CONST_METHOD0(m, ...) GMOCK_METHOD0_(, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD1(m, ...) GMOCK_METHOD1_(, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD2(m, ...) GMOCK_METHOD2_(, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD3(m, ...) GMOCK_METHOD3_(, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD4(m, ...) GMOCK_METHOD4_(, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD5(m, ...) GMOCK_METHOD5_(, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD6(m, ...) GMOCK_METHOD6_(, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD7(m, ...) GMOCK_METHOD7_(, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD8(m, ...) GMOCK_METHOD8_(, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD9(m, ...) GMOCK_METHOD9_(, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD10(m, ...) GMOCK_METHOD10_(, const, , m, __VA_ARGS__)
-
-#define MOCK_METHOD0_T(m, ...) GMOCK_METHOD0_(typename, , , m, __VA_ARGS__)
-#define MOCK_METHOD1_T(m, ...) GMOCK_METHOD1_(typename, , , m, __VA_ARGS__)
-#define MOCK_METHOD2_T(m, ...) GMOCK_METHOD2_(typename, , , m, __VA_ARGS__)
-#define MOCK_METHOD3_T(m, ...) GMOCK_METHOD3_(typename, , , m, __VA_ARGS__)
-#define MOCK_METHOD4_T(m, ...) GMOCK_METHOD4_(typename, , , m, __VA_ARGS__)
-#define MOCK_METHOD5_T(m, ...) GMOCK_METHOD5_(typename, , , m, __VA_ARGS__)
-#define MOCK_METHOD6_T(m, ...) GMOCK_METHOD6_(typename, , , m, __VA_ARGS__)
-#define MOCK_METHOD7_T(m, ...) GMOCK_METHOD7_(typename, , , m, __VA_ARGS__)
-#define MOCK_METHOD8_T(m, ...) GMOCK_METHOD8_(typename, , , m, __VA_ARGS__)
-#define MOCK_METHOD9_T(m, ...) GMOCK_METHOD9_(typename, , , m, __VA_ARGS__)
-#define MOCK_METHOD10_T(m, ...) GMOCK_METHOD10_(typename, , , m, __VA_ARGS__)
-
-#define MOCK_CONST_METHOD0_T(m, ...) \
- GMOCK_METHOD0_(typename, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD1_T(m, ...) \
- GMOCK_METHOD1_(typename, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD2_T(m, ...) \
- GMOCK_METHOD2_(typename, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD3_T(m, ...) \
- GMOCK_METHOD3_(typename, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD4_T(m, ...) \
- GMOCK_METHOD4_(typename, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD5_T(m, ...) \
- GMOCK_METHOD5_(typename, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD6_T(m, ...) \
- GMOCK_METHOD6_(typename, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD7_T(m, ...) \
- GMOCK_METHOD7_(typename, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD8_T(m, ...) \
- GMOCK_METHOD8_(typename, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD9_T(m, ...) \
- GMOCK_METHOD9_(typename, const, , m, __VA_ARGS__)
-#define MOCK_CONST_METHOD10_T(m, ...) \
- GMOCK_METHOD10_(typename, const, , m, __VA_ARGS__)
-
-#define MOCK_METHOD0_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD0_(, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD1_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD1_(, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD2_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD2_(, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD3_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD3_(, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD4_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD4_(, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD5_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD5_(, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD6_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD6_(, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD7_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD7_(, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD8_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD8_(, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD9_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD9_(, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD10_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD10_(, , ct, m, __VA_ARGS__)
-
-#define MOCK_CONST_METHOD0_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD0_(, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD1_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD1_(, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD2_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD2_(, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD3_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD3_(, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD4_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD4_(, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD5_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD5_(, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD6_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD6_(, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD7_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD7_(, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD8_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD8_(, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD9_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD9_(, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD10_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD10_(, const, ct, m, __VA_ARGS__)
-
-#define MOCK_METHOD0_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD0_(typename, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD1_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD1_(typename, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD2_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD2_(typename, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD3_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD3_(typename, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD4_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD4_(typename, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD5_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD5_(typename, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD6_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD6_(typename, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD7_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD7_(typename, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD8_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD8_(typename, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD9_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD9_(typename, , ct, m, __VA_ARGS__)
-#define MOCK_METHOD10_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD10_(typename, , ct, m, __VA_ARGS__)
-
-#define MOCK_CONST_METHOD0_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD0_(typename, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD1_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD1_(typename, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD2_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD2_(typename, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD3_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD3_(typename, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD4_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD4_(typename, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD5_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD5_(typename, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD6_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD6_(typename, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD7_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD7_(typename, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD8_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD8_(typename, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD9_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD9_(typename, const, ct, m, __VA_ARGS__)
-#define MOCK_CONST_METHOD10_T_WITH_CALLTYPE(ct, m, ...) \
- GMOCK_METHOD10_(typename, const, ct, m, __VA_ARGS__)
-
} // namespace testing
-#endif // GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_FUNCTION_MOCKERS_H_
-#ifndef THIRD_PARTY_GOOGLETEST_GOOGLEMOCK_INCLUDE_GMOCK_PP_H_
-#define THIRD_PARTY_GOOGLETEST_GOOGLEMOCK_INCLUDE_GMOCK_PP_H_
-
-#undef GMOCK_PP_INTERNAL_USE_MSVC
-#if defined(__clang__)
-#define GMOCK_PP_INTERNAL_USE_MSVC 0
-#elif defined(_MSC_VER)
-// TODO(iserna): Also verify traditional versus conformant preprocessor.
-static_assert(
- _MSC_VER >= 1900,
- "MSVC version not supported. There is support for MSVC 14.0 and above.");
-#define GMOCK_PP_INTERNAL_USE_MSVC 1
-#else
-#define GMOCK_PP_INTERNAL_USE_MSVC 0
-#endif
-
-// Expands and concatenates the arguments. Constructed macros reevaluate.
-#define GMOCK_PP_CAT(_1, _2) GMOCK_PP_INTERNAL_CAT(_1, _2)
-
-// Expands and stringifies the only argument.
-#define GMOCK_PP_STRINGIZE(...) GMOCK_PP_INTERNAL_STRINGIZE(__VA_ARGS__)
-
-// Returns empty. Given a variadic number of arguments.
-#define GMOCK_PP_EMPTY(...)
-
-// Returns a comma. Given a variadic number of arguments.
-#define GMOCK_PP_COMMA(...) ,
-
-// Returns the only argument.
-#define GMOCK_PP_IDENTITY(_1) _1
-
-// The MSVC preprocessor collapses __VA_ARGS__ into a single argument, so we
-// use a CAT-like directive to force correct evaluation. Each macro has its own.
-#if GMOCK_PP_INTERNAL_USE_MSVC
-
-// Evaluates to the number of arguments after expansion.
-//
-// #define PAIR x, y
-//
-// GMOCK_PP_NARG() => 1
-// GMOCK_PP_NARG(x) => 1
-// GMOCK_PP_NARG(x, y) => 2
-// GMOCK_PP_NARG(PAIR) => 2
-//
-// Requires: the number of arguments after expansion is at most 15.
-#define GMOCK_PP_NARG(...) \
- GMOCK_PP_INTERNAL_NARG_CAT( \
- GMOCK_PP_INTERNAL_INTERNAL_16TH(__VA_ARGS__, 15, 14, 13, 12, 11, 10, 9, \
- 8, 7, 6, 5, 4, 3, 2, 1), )
-
-// Returns 1 if the expansion of arguments has an unprotected comma. Otherwise
-// returns 0. Requires no more than 15 unprotected commas.
-#define GMOCK_PP_HAS_COMMA(...) \
- GMOCK_PP_INTERNAL_HAS_COMMA_CAT( \
- GMOCK_PP_INTERNAL_INTERNAL_16TH(__VA_ARGS__, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
- 1, 1, 1, 1, 1, 0), )
-// Returns the first argument.
-#define GMOCK_PP_HEAD(...) \
- GMOCK_PP_INTERNAL_HEAD_CAT(GMOCK_PP_INTERNAL_HEAD(__VA_ARGS__), )
-
-// Returns the tail. A variadic list of all arguments minus the first. Requires
-// at least one argument.
-#define GMOCK_PP_TAIL(...) \
- GMOCK_PP_INTERNAL_TAIL_CAT(GMOCK_PP_INTERNAL_TAIL(__VA_ARGS__), )
-
-// Calls CAT(_Macro, NARG(__VA_ARGS__))(__VA_ARGS__)
-#define GMOCK_PP_VARIADIC_CALL(_Macro, ...) \
- GMOCK_PP_INTERNAL_VARIADIC_CALL_CAT( \
- GMOCK_PP_CAT(_Macro, GMOCK_PP_NARG(__VA_ARGS__))(__VA_ARGS__), )
-
-#else // GMOCK_PP_INTERNAL_USE_MSVC
-
-#define GMOCK_PP_NARG(...) \
- GMOCK_PP_INTERNAL_INTERNAL_16TH(__VA_ARGS__, 15, 14, 13, 12, 11, 10, 9, 8, \
- 7, 6, 5, 4, 3, 2, 1)
-#define GMOCK_PP_HAS_COMMA(...) \
- GMOCK_PP_INTERNAL_INTERNAL_16TH(__VA_ARGS__, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, \
- 1, 1, 1, 1, 0)
-#define GMOCK_PP_HEAD(...) GMOCK_PP_INTERNAL_HEAD(__VA_ARGS__)
-#define GMOCK_PP_TAIL(...) GMOCK_PP_INTERNAL_TAIL(__VA_ARGS__)
-#define GMOCK_PP_VARIADIC_CALL(_Macro, ...) \
- GMOCK_PP_CAT(_Macro, GMOCK_PP_NARG(__VA_ARGS__))(__VA_ARGS__)
-
-#endif // GMOCK_PP_INTERNAL_USE_MSVC
-
-// If the arguments after expansion have no tokens, evaluates to `1`. Otherwise
-// evaluates to `0`.
-//
-// Requires: * the number of arguments after expansion is at most 15.
-// * If the argument is a macro, it must be able to be called with one
-// argument.
-//
-// Implementation details:
-//
-// There is one case when it generates a compile error: if the argument is a
-// macro that cannot be called with one argument.
-//
-// #define M(a, b) // it doesn't matter what it expands to
-//
-// // Expected: expands to `0`.
-// // Actual: compile error.
-// GMOCK_PP_IS_EMPTY(M)
-//
-// There are 4 cases tested:
-//
-// * __VA_ARGS__ possible expansion has no unparen'd commas. Expected 0.
-// * __VA_ARGS__ possible expansion is not enclosed in parenthesis. Expected 0.
-// * __VA_ARGS__ possible expansion is not a macro that ()-evaluates to a comma.
-// Expected 0
-// * __VA_ARGS__ is empty, or has unparen'd commas, or is enclosed in
-// parenthesis, or is a macro that ()-evaluates to comma. Expected 1.
-//
-// We trigger detection on '0001', i.e. on empty.
-#define GMOCK_PP_IS_EMPTY(...) \
- GMOCK_PP_INTERNAL_IS_EMPTY(GMOCK_PP_HAS_COMMA(__VA_ARGS__), \
- GMOCK_PP_HAS_COMMA(GMOCK_PP_COMMA __VA_ARGS__), \
- GMOCK_PP_HAS_COMMA(__VA_ARGS__()), \
- GMOCK_PP_HAS_COMMA(GMOCK_PP_COMMA __VA_ARGS__()))
-
-// Evaluates to _Then if _Cond is 1 and _Else if _Cond is 0.
-#define GMOCK_PP_IF(_Cond, _Then, _Else) \
- GMOCK_PP_CAT(GMOCK_PP_INTERNAL_IF_, _Cond)(_Then, _Else)
-
-// Evaluates to the number of arguments after expansion. Identifies 'empty' as
-// 0.
-//
-// #define PAIR x, y
-//
-// GMOCK_PP_NARG0() => 0
-// GMOCK_PP_NARG0(x) => 1
-// GMOCK_PP_NARG0(x, y) => 2
-// GMOCK_PP_NARG0(PAIR) => 2
-//
-// Requires: * the number of arguments after expansion is at most 15.
-// * If the argument is a macro, it must be able to be called with one
-// argument.
-#define GMOCK_PP_NARG0(...) \
- GMOCK_PP_IF(GMOCK_PP_IS_EMPTY(__VA_ARGS__), 0, GMOCK_PP_NARG(__VA_ARGS__))
-
-// Expands to 1 if the first argument starts with something in parentheses,
-// otherwise to 0.
-#define GMOCK_PP_IS_BEGIN_PARENS(...) \
- GMOCK_PP_INTERNAL_ALTERNATE_HEAD( \
- GMOCK_PP_CAT(GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_R_, \
- GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_C __VA_ARGS__))
-
-// Expands to 1 if there is only one argument and it is enclosed in parentheses.
-#define GMOCK_PP_IS_ENCLOSED_PARENS(...) \
- GMOCK_PP_IF(GMOCK_PP_IS_BEGIN_PARENS(__VA_ARGS__), \
- GMOCK_PP_IS_EMPTY(GMOCK_PP_EMPTY __VA_ARGS__), 0)
-
-// Remove the parens, requires GMOCK_PP_IS_ENCLOSED_PARENS(args) => 1.
-#define GMOCK_PP_REMOVE_PARENS(...) GMOCK_PP_INTERNAL_REMOVE_PARENS __VA_ARGS__
-
-// Expands to _Macro(0, _Data, e1) _Macro(1, _Data, e2) ... _Macro(K - 1, _Data,
-// eK), where K is GMOCK_PP_NARG0 _Tuple.
-// Requires: * |_Macro| can be called with 3 arguments.
-// * |_Tuple| expansion has no more than 15 elements.
-#define GMOCK_PP_FOR_EACH(_Macro, _Data, _Tuple) \
- GMOCK_PP_CAT(GMOCK_PP_INTERNAL_FOR_EACH_IMPL_, GMOCK_PP_NARG0 _Tuple) \
- (0, _Macro, _Data, _Tuple)
-
-// Expands to _Macro(0, _Data, ) _Macro(1, _Data, ) ... _Macro(K - 1, _Data, )
-// Empty if _K = 0.
-// Requires: * |_Macro| can be called with 3 arguments.
-// * |_K| literal between 0 and 15
-#define GMOCK_PP_REPEAT(_Macro, _Data, _N) \
- GMOCK_PP_CAT(GMOCK_PP_INTERNAL_FOR_EACH_IMPL_, _N) \
- (0, _Macro, _Data, GMOCK_PP_INTENRAL_EMPTY_TUPLE)
-
-// Increments the argument, requires the argument to be between 0 and 15.
-#define GMOCK_PP_INC(_i) GMOCK_PP_CAT(GMOCK_PP_INTERNAL_INC_, _i)
-
-// Returns comma if _i != 0. Requires _i to be between 0 and 15.
-#define GMOCK_PP_COMMA_IF(_i) GMOCK_PP_CAT(GMOCK_PP_INTERNAL_COMMA_IF_, _i)
-
-// Internal details follow. Do not use any of these symbols outside of this
-// file or we will break your code.
-#define GMOCK_PP_INTENRAL_EMPTY_TUPLE (, , , , , , , , , , , , , , , )
-#define GMOCK_PP_INTERNAL_CAT(_1, _2) _1##_2
-#define GMOCK_PP_INTERNAL_STRINGIZE(...) #__VA_ARGS__
-#define GMOCK_PP_INTERNAL_INTERNAL_16TH(_1, _2, _3, _4, _5, _6, _7, _8, _9, \
- _10, _11, _12, _13, _14, _15, _16, \
- ...) \
- _16
-#define GMOCK_PP_INTERNAL_CAT_5(_1, _2, _3, _4, _5) _1##_2##_3##_4##_5
-#define GMOCK_PP_INTERNAL_IS_EMPTY(_1, _2, _3, _4) \
- GMOCK_PP_HAS_COMMA(GMOCK_PP_INTERNAL_CAT_5(GMOCK_PP_INTERNAL_IS_EMPTY_CASE_, \
- _1, _2, _3, _4))
-#define GMOCK_PP_INTERNAL_IS_EMPTY_CASE_0001 ,
-#define GMOCK_PP_INTERNAL_IF_1(_Then, _Else) _Then
-#define GMOCK_PP_INTERNAL_IF_0(_Then, _Else) _Else
-#define GMOCK_PP_INTERNAL_HEAD(_1, ...) _1
-#define GMOCK_PP_INTERNAL_TAIL(_1, ...) __VA_ARGS__
-
-#if GMOCK_PP_INTERNAL_USE_MSVC
-#define GMOCK_PP_INTERNAL_NARG_CAT(_1, _2) GMOCK_PP_INTERNAL_NARG_CAT_I(_1, _2)
-#define GMOCK_PP_INTERNAL_HEAD_CAT(_1, _2) GMOCK_PP_INTERNAL_HEAD_CAT_I(_1, _2)
-#define GMOCK_PP_INTERNAL_HAS_COMMA_CAT(_1, _2) \
- GMOCK_PP_INTERNAL_HAS_COMMA_CAT_I(_1, _2)
-#define GMOCK_PP_INTERNAL_TAIL_CAT(_1, _2) GMOCK_PP_INTERNAL_TAIL_CAT_I(_1, _2)
-#define GMOCK_PP_INTERNAL_VARIADIC_CALL_CAT(_1, _2) \
- GMOCK_PP_INTERNAL_VARIADIC_CALL_CAT_I(_1, _2)
-#define GMOCK_PP_INTERNAL_NARG_CAT_I(_1, _2) _1##_2
-#define GMOCK_PP_INTERNAL_HEAD_CAT_I(_1, _2) _1##_2
-#define GMOCK_PP_INTERNAL_HAS_COMMA_CAT_I(_1, _2) _1##_2
-#define GMOCK_PP_INTERNAL_TAIL_CAT_I(_1, _2) _1##_2
-#define GMOCK_PP_INTERNAL_VARIADIC_CALL_CAT_I(_1, _2) _1##_2
-#define GMOCK_PP_INTERNAL_ALTERNATE_HEAD(...) \
- GMOCK_PP_INTERNAL_ALTERNATE_HEAD_CAT(GMOCK_PP_HEAD(__VA_ARGS__), )
-#define GMOCK_PP_INTERNAL_ALTERNATE_HEAD_CAT(_1, _2) \
- GMOCK_PP_INTERNAL_ALTERNATE_HEAD_CAT_I(_1, _2)
-#define GMOCK_PP_INTERNAL_ALTERNATE_HEAD_CAT_I(_1, _2) _1##_2
-#else // GMOCK_PP_INTERNAL_USE_MSVC
-#define GMOCK_PP_INTERNAL_ALTERNATE_HEAD(...) GMOCK_PP_HEAD(__VA_ARGS__)
-#endif // GMOCK_PP_INTERNAL_USE_MSVC
-
-#define GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_C(...) 1 _
-#define GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_R_1 1,
-#define GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_R_GMOCK_PP_INTERNAL_IBP_IS_VARIADIC_C \
- 0,
-#define GMOCK_PP_INTERNAL_REMOVE_PARENS(...) __VA_ARGS__
-#define GMOCK_PP_INTERNAL_INC_0 1
-#define GMOCK_PP_INTERNAL_INC_1 2
-#define GMOCK_PP_INTERNAL_INC_2 3
-#define GMOCK_PP_INTERNAL_INC_3 4
-#define GMOCK_PP_INTERNAL_INC_4 5
-#define GMOCK_PP_INTERNAL_INC_5 6
-#define GMOCK_PP_INTERNAL_INC_6 7
-#define GMOCK_PP_INTERNAL_INC_7 8
-#define GMOCK_PP_INTERNAL_INC_8 9
-#define GMOCK_PP_INTERNAL_INC_9 10
-#define GMOCK_PP_INTERNAL_INC_10 11
-#define GMOCK_PP_INTERNAL_INC_11 12
-#define GMOCK_PP_INTERNAL_INC_12 13
-#define GMOCK_PP_INTERNAL_INC_13 14
-#define GMOCK_PP_INTERNAL_INC_14 15
-#define GMOCK_PP_INTERNAL_INC_15 16
-#define GMOCK_PP_INTERNAL_COMMA_IF_0
-#define GMOCK_PP_INTERNAL_COMMA_IF_1 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_2 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_3 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_4 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_5 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_6 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_7 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_8 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_9 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_10 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_11 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_12 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_13 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_14 ,
-#define GMOCK_PP_INTERNAL_COMMA_IF_15 ,
-#define GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, _element) \
- _Macro(_i, _Data, _element)
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_0(_i, _Macro, _Data, _Tuple)
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_1(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple)
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_2(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_1(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_3(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_2(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_4(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_3(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_5(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_4(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_6(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_5(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_7(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_6(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_8(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_7(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_9(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_8(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_10(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_9(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_11(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_10(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_12(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_11(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_13(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_12(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_14(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_13(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-#define GMOCK_PP_INTERNAL_FOR_EACH_IMPL_15(_i, _Macro, _Data, _Tuple) \
- GMOCK_PP_INTERNAL_CALL_MACRO(_Macro, _i, _Data, GMOCK_PP_HEAD _Tuple) \
- GMOCK_PP_INTERNAL_FOR_EACH_IMPL_14(GMOCK_PP_INC(_i), _Macro, _Data, \
- (GMOCK_PP_TAIL _Tuple))
-
-#endif // THIRD_PARTY_GOOGLETEST_GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_PP_H_
-
#define MOCK_METHOD(...) \
GMOCK_PP_VARIADIC_CALL(GMOCK_INTERNAL_MOCK_METHOD_ARG_, __VA_ARGS__)
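MOCK_METHOD dispatches on its argument count: GMOCK_PP_VARIADIC_CALL concatenates GMOCK_INTERNAL_MOCK_METHOD_ARG_ with GMOCK_PP_NARG(__VA_ARGS__) and invokes the result. The counting itself is the usual 16th-argument trick. A minimal, self-contained sketch of that trick, illustrative only and using hypothetical MY_PP_* names rather than anything from this patch:

    #include <iostream>

    // Select the 16th argument; the trailing descending list shifts so that the
    // count of the real arguments lands in that position.
    #define MY_PP_16TH(_1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, \
                       _14, _15, _16, ...)                                     \
      _16
    #define MY_PP_NARG(...) \
      MY_PP_16TH(__VA_ARGS__, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1)

    int main() {
      // (like GMOCK_PP_NARG, this reports 1 for an empty argument list)
      std::cout << MY_PP_NARG(a) << "\n";              // 1
      std::cout << MY_PP_NARG(a, b, c) << "\n";        // 3
      std::cout << MY_PP_NARG(a, b, c, d, e) << "\n";  // 5
    }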
@@ -9763,7 +10305,8 @@
GMOCK_INTERNAL_MOCK_METHOD_IMPL( \
GMOCK_PP_NARG0 _Args, _MethodName, GMOCK_INTERNAL_HAS_CONST(_Spec), \
GMOCK_INTERNAL_HAS_OVERRIDE(_Spec), GMOCK_INTERNAL_HAS_FINAL(_Spec), \
- GMOCK_INTERNAL_HAS_NOEXCEPT(_Spec), GMOCK_INTERNAL_GET_CALLTYPE(_Spec), \
+ GMOCK_INTERNAL_GET_NOEXCEPT_SPEC(_Spec), \
+ GMOCK_INTERNAL_GET_CALLTYPE(_Spec), GMOCK_INTERNAL_GET_REF_SPEC(_Spec), \
(GMOCK_INTERNAL_SIGNATURE(_Ret, _Args)))
#define GMOCK_INTERNAL_MOCK_METHOD_ARG_5(...) \
@@ -9797,21 +10340,20 @@
::testing::tuple_size<typename ::testing::internal::Function< \
__VA_ARGS__>::ArgumentTuple>::value == _N, \
"This method does not take " GMOCK_PP_STRINGIZE( \
- _N) " arguments. Parenthesize all types with unproctected commas.")
+ _N) " arguments. Parenthesize all types with unprotected commas.")
#define GMOCK_INTERNAL_ASSERT_VALID_SPEC(_Spec) \
GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_ASSERT_VALID_SPEC_ELEMENT, ~, _Spec)
#define GMOCK_INTERNAL_MOCK_METHOD_IMPL(_N, _MethodName, _Constness, \
- _Override, _Final, _Noexcept, \
- _CallType, _Signature) \
+ _Override, _Final, _NoexceptSpec, \
+ _CallType, _RefSpec, _Signature) \
typename ::testing::internal::Function<GMOCK_PP_REMOVE_PARENS( \
_Signature)>::Result \
GMOCK_INTERNAL_EXPAND(_CallType) \
_MethodName(GMOCK_PP_REPEAT(GMOCK_INTERNAL_PARAMETER, _Signature, _N)) \
- GMOCK_PP_IF(_Constness, const, ) GMOCK_PP_IF(_Noexcept, noexcept, ) \
- GMOCK_PP_IF(_Override, override, ) \
- GMOCK_PP_IF(_Final, final, ) { \
+ GMOCK_PP_IF(_Constness, const, ) _RefSpec _NoexceptSpec \
+ GMOCK_PP_IF(_Override, override, ) GMOCK_PP_IF(_Final, final, ) { \
GMOCK_MOCKER_(_N, _Constness, _MethodName) \
.SetOwnerAndName(this, #_MethodName); \
return GMOCK_MOCKER_(_N, _Constness, _MethodName) \
@@ -9819,7 +10361,7 @@
} \
::testing::MockSpec<GMOCK_PP_REMOVE_PARENS(_Signature)> gmock_##_MethodName( \
GMOCK_PP_REPEAT(GMOCK_INTERNAL_MATCHER_PARAMETER, _Signature, _N)) \
- GMOCK_PP_IF(_Constness, const, ) { \
+ GMOCK_PP_IF(_Constness, const, ) _RefSpec { \
GMOCK_MOCKER_(_N, _Constness, _MethodName).RegisterOwner(this); \
return GMOCK_MOCKER_(_N, _Constness, _MethodName) \
.With(GMOCK_PP_REPEAT(GMOCK_INTERNAL_MATCHER_ARGUMENT, , _N)); \
@@ -9827,11 +10369,10 @@
::testing::MockSpec<GMOCK_PP_REMOVE_PARENS(_Signature)> gmock_##_MethodName( \
const ::testing::internal::WithoutMatchers&, \
GMOCK_PP_IF(_Constness, const, )::testing::internal::Function< \
- GMOCK_PP_REMOVE_PARENS(_Signature)>*) \
- const GMOCK_PP_IF(_Noexcept, noexcept, ) { \
- return GMOCK_PP_CAT(::testing::internal::AdjustConstness_, \
- GMOCK_PP_IF(_Constness, const, ))(this) \
- ->gmock_##_MethodName(GMOCK_PP_REPEAT( \
+ GMOCK_PP_REMOVE_PARENS(_Signature)>*) const _RefSpec _NoexceptSpec { \
+ return ::testing::internal::ThisRefAdjuster<GMOCK_PP_IF( \
+ _Constness, const, ) int _RefSpec>::Adjust(*this) \
+ .gmock_##_MethodName(GMOCK_PP_REPEAT( \
GMOCK_INTERNAL_A_MATCHER_ARGUMENT, _Signature, _N)); \
} \
mutable ::testing::FunctionMocker<GMOCK_PP_REMOVE_PARENS(_Signature)> \
@@ -9850,9 +10391,20 @@
#define GMOCK_INTERNAL_HAS_FINAL(_Tuple) \
GMOCK_PP_HAS_COMMA(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_DETECT_FINAL, ~, _Tuple))
-#define GMOCK_INTERNAL_HAS_NOEXCEPT(_Tuple) \
- GMOCK_PP_HAS_COMMA( \
- GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_DETECT_NOEXCEPT, ~, _Tuple))
+#define GMOCK_INTERNAL_GET_NOEXCEPT_SPEC(_Tuple) \
+ GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_NOEXCEPT_SPEC_IF_NOEXCEPT, ~, _Tuple)
+
+#define GMOCK_INTERNAL_NOEXCEPT_SPEC_IF_NOEXCEPT(_i, _, _elem) \
+ GMOCK_PP_IF( \
+ GMOCK_PP_HAS_COMMA(GMOCK_INTERNAL_DETECT_NOEXCEPT(_i, _, _elem)), \
+ _elem, )
+
+#define GMOCK_INTERNAL_GET_REF_SPEC(_Tuple) \
+ GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_REF_SPEC_IF_REF, ~, _Tuple)
+
+#define GMOCK_INTERNAL_REF_SPEC_IF_REF(_i, _, _elem) \
+ GMOCK_PP_IF(GMOCK_PP_HAS_COMMA(GMOCK_INTERNAL_DETECT_REF(_i, _, _elem)), \
+ GMOCK_PP_CAT(GMOCK_INTERNAL_UNPACK_, _elem), )
#define GMOCK_INTERNAL_GET_CALLTYPE(_Tuple) \
GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_GET_CALLTYPE_IMPL, ~, _Tuple)
@@ -9863,6 +10415,7 @@
GMOCK_PP_HAS_COMMA(GMOCK_INTERNAL_DETECT_OVERRIDE(_i, _, _elem)) + \
GMOCK_PP_HAS_COMMA(GMOCK_INTERNAL_DETECT_FINAL(_i, _, _elem)) + \
GMOCK_PP_HAS_COMMA(GMOCK_INTERNAL_DETECT_NOEXCEPT(_i, _, _elem)) + \
+ GMOCK_PP_HAS_COMMA(GMOCK_INTERNAL_DETECT_REF(_i, _, _elem)) + \
GMOCK_INTERNAL_IS_CALLTYPE(_elem)) == 1, \
GMOCK_PP_STRINGIZE( \
_elem) " cannot be recognized as a valid specification modifier.");
@@ -9883,12 +10436,18 @@
#define GMOCK_INTERNAL_DETECT_FINAL_I_final ,
-// TODO(iserna): Maybe noexcept should accept an argument here as well.
#define GMOCK_INTERNAL_DETECT_NOEXCEPT(_i, _, _elem) \
GMOCK_PP_CAT(GMOCK_INTERNAL_DETECT_NOEXCEPT_I_, _elem)
#define GMOCK_INTERNAL_DETECT_NOEXCEPT_I_noexcept ,
+#define GMOCK_INTERNAL_DETECT_REF(_i, _, _elem) \
+ GMOCK_PP_CAT(GMOCK_INTERNAL_DETECT_REF_I_, _elem)
+
+#define GMOCK_INTERNAL_DETECT_REF_I_ref ,
+
+#define GMOCK_INTERNAL_UNPACK_ref(x) x
+
#define GMOCK_INTERNAL_GET_CALLTYPE_IMPL(_i, _, _elem) \
GMOCK_PP_IF(GMOCK_INTERNAL_IS_CALLTYPE(_elem), \
GMOCK_INTERNAL_GET_VALUE_CALLTYPE, GMOCK_PP_EMPTY) \
@@ -9906,14 +10465,28 @@
GMOCK_INTERNAL_GET_VALUE_CALLTYPE_I( \
GMOCK_PP_CAT(GMOCK_INTERNAL_IS_CALLTYPE_HELPER_, _arg))
#define GMOCK_INTERNAL_GET_VALUE_CALLTYPE_I(_arg) \
- GMOCK_PP_CAT(GMOCK_PP_IDENTITY, _arg)
+ GMOCK_PP_IDENTITY _arg
#define GMOCK_INTERNAL_IS_CALLTYPE_HELPER_Calltype
-#define GMOCK_INTERNAL_SIGNATURE(_Ret, _Args) \
- GMOCK_PP_IF(GMOCK_PP_IS_BEGIN_PARENS(_Ret), GMOCK_PP_REMOVE_PARENS, \
- GMOCK_PP_IDENTITY) \
- (_Ret)(GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_GET_TYPE, _, _Args))
+// Note: The use of `identity_t` here allows _Ret to represent return types that
+// would normally need to be specified in a different way. For example, a method
+// returning a function pointer must be written as
+//
+// fn_ptr_return_t (*method(method_args_t...))(fn_ptr_args_t...)
+//
+// But we only support placing the return type at the beginning. To handle this,
+// we wrap all calls in identity_t, so that a declaration will be expanded to
+//
+// identity_t<fn_ptr_return_t (*)(fn_ptr_args_t...)> method(method_args_t...)
+//
+// This allows us to work around the syntactic oddities of function/method
+// types.
+#define GMOCK_INTERNAL_SIGNATURE(_Ret, _Args) \
+ ::testing::internal::identity_t<GMOCK_PP_IF(GMOCK_PP_IS_BEGIN_PARENS(_Ret), \
+ GMOCK_PP_REMOVE_PARENS, \
+ GMOCK_PP_IDENTITY)(_Ret)>( \
+ GMOCK_PP_FOR_EACH(GMOCK_INTERNAL_GET_TYPE, _, _Args))
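In user code this is why MOCK_METHOD accepts a parenthesized return type whenever the type's spelling would otherwise confuse the preprocessor, for example a template containing a comma. A hedged usage sketch, assuming ordinary gMock headers and a hypothetical interface:

    #include <gmock/gmock.h>

    #include <string>
    #include <utility>

    class Store {
     public:
      virtual ~Store() = default;
      virtual std::pair<bool, int> Lookup(const std::string& key) const = 0;
    };

    class MockStore : public Store {
     public:
      // The return type contains a comma, so it is parenthesized; the
      // GMOCK_INTERNAL_SIGNATURE/identity_t machinery above strips the
      // parentheses and rebuilds the full function type.
      MOCK_METHOD((std::pair<bool, int>), Lookup, (const std::string& key),
                  (const, override));
    };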
#define GMOCK_INTERNAL_GET_TYPE(_i, _, _elem) \
GMOCK_PP_COMMA_IF(_i) \
@@ -9921,43 +10494,199 @@
GMOCK_PP_IDENTITY) \
(_elem)
-#define GMOCK_INTERNAL_PARAMETER(_i, _Signature, _) \
- GMOCK_PP_COMMA_IF(_i) \
- GMOCK_INTERNAL_ARG_O(typename, GMOCK_PP_INC(_i), \
- GMOCK_PP_REMOVE_PARENS(_Signature)) \
+#define GMOCK_INTERNAL_PARAMETER(_i, _Signature, _) \
+ GMOCK_PP_COMMA_IF(_i) \
+ GMOCK_INTERNAL_ARG_O(_i, GMOCK_PP_REMOVE_PARENS(_Signature)) \
gmock_a##_i
-#define GMOCK_INTERNAL_FORWARD_ARG(_i, _Signature, _) \
- GMOCK_PP_COMMA_IF(_i) \
- ::std::forward<GMOCK_INTERNAL_ARG_O(typename, GMOCK_PP_INC(_i), \
- GMOCK_PP_REMOVE_PARENS(_Signature))>( \
- gmock_a##_i)
+#define GMOCK_INTERNAL_FORWARD_ARG(_i, _Signature, _) \
+ GMOCK_PP_COMMA_IF(_i) \
+ ::std::forward<GMOCK_INTERNAL_ARG_O( \
+ _i, GMOCK_PP_REMOVE_PARENS(_Signature))>(gmock_a##_i)
-#define GMOCK_INTERNAL_MATCHER_PARAMETER(_i, _Signature, _) \
- GMOCK_PP_COMMA_IF(_i) \
- GMOCK_INTERNAL_MATCHER_O(typename, GMOCK_PP_INC(_i), \
- GMOCK_PP_REMOVE_PARENS(_Signature)) \
+#define GMOCK_INTERNAL_MATCHER_PARAMETER(_i, _Signature, _) \
+ GMOCK_PP_COMMA_IF(_i) \
+ GMOCK_INTERNAL_MATCHER_O(_i, GMOCK_PP_REMOVE_PARENS(_Signature)) \
gmock_a##_i
#define GMOCK_INTERNAL_MATCHER_ARGUMENT(_i, _1, _2) \
GMOCK_PP_COMMA_IF(_i) \
gmock_a##_i
-#define GMOCK_INTERNAL_A_MATCHER_ARGUMENT(_i, _Signature, _) \
- GMOCK_PP_COMMA_IF(_i) \
- ::testing::A<GMOCK_INTERNAL_ARG_O(typename, GMOCK_PP_INC(_i), \
- GMOCK_PP_REMOVE_PARENS(_Signature))>()
+#define GMOCK_INTERNAL_A_MATCHER_ARGUMENT(_i, _Signature, _) \
+ GMOCK_PP_COMMA_IF(_i) \
+ ::testing::A<GMOCK_INTERNAL_ARG_O(_i, GMOCK_PP_REMOVE_PARENS(_Signature))>()
-#define GMOCK_INTERNAL_ARG_O(_tn, _i, ...) GMOCK_ARG_(_tn, _i, __VA_ARGS__)
+#define GMOCK_INTERNAL_ARG_O(_i, ...) \
+ typename ::testing::internal::Function<__VA_ARGS__>::template Arg<_i>::type
-#define GMOCK_INTERNAL_MATCHER_O(_tn, _i, ...) \
- GMOCK_MATCHER_(_tn, _i, __VA_ARGS__)
+#define GMOCK_INTERNAL_MATCHER_O(_i, ...) \
+ const ::testing::Matcher<typename ::testing::internal::Function< \
+ __VA_ARGS__>::template Arg<_i>::type>&
-#endif // THIRD_PARTY_GOOGLETEST_GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_FUNCTION_MOCKER_H_
-// This file was GENERATED by command:
-// pump.py gmock-generated-actions.h.pump
-// DO NOT EDIT BY HAND!!!
+#define MOCK_METHOD0(m, ...) GMOCK_INTERNAL_MOCK_METHODN(, , m, 0, __VA_ARGS__)
+#define MOCK_METHOD1(m, ...) GMOCK_INTERNAL_MOCK_METHODN(, , m, 1, __VA_ARGS__)
+#define MOCK_METHOD2(m, ...) GMOCK_INTERNAL_MOCK_METHODN(, , m, 2, __VA_ARGS__)
+#define MOCK_METHOD3(m, ...) GMOCK_INTERNAL_MOCK_METHODN(, , m, 3, __VA_ARGS__)
+#define MOCK_METHOD4(m, ...) GMOCK_INTERNAL_MOCK_METHODN(, , m, 4, __VA_ARGS__)
+#define MOCK_METHOD5(m, ...) GMOCK_INTERNAL_MOCK_METHODN(, , m, 5, __VA_ARGS__)
+#define MOCK_METHOD6(m, ...) GMOCK_INTERNAL_MOCK_METHODN(, , m, 6, __VA_ARGS__)
+#define MOCK_METHOD7(m, ...) GMOCK_INTERNAL_MOCK_METHODN(, , m, 7, __VA_ARGS__)
+#define MOCK_METHOD8(m, ...) GMOCK_INTERNAL_MOCK_METHODN(, , m, 8, __VA_ARGS__)
+#define MOCK_METHOD9(m, ...) GMOCK_INTERNAL_MOCK_METHODN(, , m, 9, __VA_ARGS__)
+#define MOCK_METHOD10(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, , m, 10, __VA_ARGS__)
+#define MOCK_CONST_METHOD0(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 0, __VA_ARGS__)
+#define MOCK_CONST_METHOD1(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 1, __VA_ARGS__)
+#define MOCK_CONST_METHOD2(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 2, __VA_ARGS__)
+#define MOCK_CONST_METHOD3(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 3, __VA_ARGS__)
+#define MOCK_CONST_METHOD4(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 4, __VA_ARGS__)
+#define MOCK_CONST_METHOD5(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 5, __VA_ARGS__)
+#define MOCK_CONST_METHOD6(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 6, __VA_ARGS__)
+#define MOCK_CONST_METHOD7(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 7, __VA_ARGS__)
+#define MOCK_CONST_METHOD8(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 8, __VA_ARGS__)
+#define MOCK_CONST_METHOD9(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 9, __VA_ARGS__)
+#define MOCK_CONST_METHOD10(m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, , m, 10, __VA_ARGS__)
+
+#define MOCK_METHOD0_T(m, ...) MOCK_METHOD0(m, __VA_ARGS__)
+#define MOCK_METHOD1_T(m, ...) MOCK_METHOD1(m, __VA_ARGS__)
+#define MOCK_METHOD2_T(m, ...) MOCK_METHOD2(m, __VA_ARGS__)
+#define MOCK_METHOD3_T(m, ...) MOCK_METHOD3(m, __VA_ARGS__)
+#define MOCK_METHOD4_T(m, ...) MOCK_METHOD4(m, __VA_ARGS__)
+#define MOCK_METHOD5_T(m, ...) MOCK_METHOD5(m, __VA_ARGS__)
+#define MOCK_METHOD6_T(m, ...) MOCK_METHOD6(m, __VA_ARGS__)
+#define MOCK_METHOD7_T(m, ...) MOCK_METHOD7(m, __VA_ARGS__)
+#define MOCK_METHOD8_T(m, ...) MOCK_METHOD8(m, __VA_ARGS__)
+#define MOCK_METHOD9_T(m, ...) MOCK_METHOD9(m, __VA_ARGS__)
+#define MOCK_METHOD10_T(m, ...) MOCK_METHOD10(m, __VA_ARGS__)
+
+#define MOCK_CONST_METHOD0_T(m, ...) MOCK_CONST_METHOD0(m, __VA_ARGS__)
+#define MOCK_CONST_METHOD1_T(m, ...) MOCK_CONST_METHOD1(m, __VA_ARGS__)
+#define MOCK_CONST_METHOD2_T(m, ...) MOCK_CONST_METHOD2(m, __VA_ARGS__)
+#define MOCK_CONST_METHOD3_T(m, ...) MOCK_CONST_METHOD3(m, __VA_ARGS__)
+#define MOCK_CONST_METHOD4_T(m, ...) MOCK_CONST_METHOD4(m, __VA_ARGS__)
+#define MOCK_CONST_METHOD5_T(m, ...) MOCK_CONST_METHOD5(m, __VA_ARGS__)
+#define MOCK_CONST_METHOD6_T(m, ...) MOCK_CONST_METHOD6(m, __VA_ARGS__)
+#define MOCK_CONST_METHOD7_T(m, ...) MOCK_CONST_METHOD7(m, __VA_ARGS__)
+#define MOCK_CONST_METHOD8_T(m, ...) MOCK_CONST_METHOD8(m, __VA_ARGS__)
+#define MOCK_CONST_METHOD9_T(m, ...) MOCK_CONST_METHOD9(m, __VA_ARGS__)
+#define MOCK_CONST_METHOD10_T(m, ...) MOCK_CONST_METHOD10(m, __VA_ARGS__)
+
+#define MOCK_METHOD0_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 0, __VA_ARGS__)
+#define MOCK_METHOD1_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 1, __VA_ARGS__)
+#define MOCK_METHOD2_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 2, __VA_ARGS__)
+#define MOCK_METHOD3_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 3, __VA_ARGS__)
+#define MOCK_METHOD4_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 4, __VA_ARGS__)
+#define MOCK_METHOD5_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 5, __VA_ARGS__)
+#define MOCK_METHOD6_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 6, __VA_ARGS__)
+#define MOCK_METHOD7_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 7, __VA_ARGS__)
+#define MOCK_METHOD8_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 8, __VA_ARGS__)
+#define MOCK_METHOD9_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 9, __VA_ARGS__)
+#define MOCK_METHOD10_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(, ct, m, 10, __VA_ARGS__)
+
+#define MOCK_CONST_METHOD0_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 0, __VA_ARGS__)
+#define MOCK_CONST_METHOD1_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 1, __VA_ARGS__)
+#define MOCK_CONST_METHOD2_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 2, __VA_ARGS__)
+#define MOCK_CONST_METHOD3_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 3, __VA_ARGS__)
+#define MOCK_CONST_METHOD4_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 4, __VA_ARGS__)
+#define MOCK_CONST_METHOD5_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 5, __VA_ARGS__)
+#define MOCK_CONST_METHOD6_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 6, __VA_ARGS__)
+#define MOCK_CONST_METHOD7_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 7, __VA_ARGS__)
+#define MOCK_CONST_METHOD8_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 8, __VA_ARGS__)
+#define MOCK_CONST_METHOD9_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 9, __VA_ARGS__)
+#define MOCK_CONST_METHOD10_WITH_CALLTYPE(ct, m, ...) \
+ GMOCK_INTERNAL_MOCK_METHODN(const, ct, m, 10, __VA_ARGS__)
+
+#define MOCK_METHOD0_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD0_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_METHOD1_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD1_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_METHOD2_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD2_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_METHOD3_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD3_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_METHOD4_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD4_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_METHOD5_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD5_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_METHOD6_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD6_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_METHOD7_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD7_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_METHOD8_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD8_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_METHOD9_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD9_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_METHOD10_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_METHOD10_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+
+#define MOCK_CONST_METHOD0_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD0_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_CONST_METHOD1_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD1_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_CONST_METHOD2_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD2_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_CONST_METHOD3_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD3_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_CONST_METHOD4_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD4_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_CONST_METHOD5_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD5_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_CONST_METHOD6_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD6_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_CONST_METHOD7_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD7_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_CONST_METHOD8_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD8_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_CONST_METHOD9_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD9_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+#define MOCK_CONST_METHOD10_T_WITH_CALLTYPE(ct, m, ...) \
+ MOCK_CONST_METHOD10_WITH_CALLTYPE(ct, m, __VA_ARGS__)
+
+#define GMOCK_INTERNAL_MOCK_METHODN(constness, ct, Method, args_num, ...) \
+ GMOCK_INTERNAL_ASSERT_VALID_SIGNATURE( \
+ args_num, ::testing::internal::identity_t<__VA_ARGS__>); \
+ GMOCK_INTERNAL_MOCK_METHOD_IMPL( \
+ args_num, Method, GMOCK_PP_NARG0(constness), 0, 0, , ct, , \
+ (::testing::internal::identity_t<__VA_ARGS__>))
+
+#define GMOCK_MOCKER_(arity, constness, Method) \
+ GTEST_CONCAT_TOKEN_(gmock##constness##arity##_##Method##_, __LINE__)
+
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_FUNCTION_MOCKER_H_
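The ref-qualifier and noexcept handling added above surfaces in user code as extra entries in MOCK_METHOD's specification list. A brief, hedged illustration of that usage (standard gMock syntax, not taken from the patch, with a hypothetical Buffer interface):

    #include <gmock/gmock.h>

    class Buffer {
     public:
      virtual ~Buffer() = default;
      virtual int Size() const noexcept = 0;
      virtual void Consume() && = 0;  // rvalue-ref-qualified
    };

    class MockBuffer : public Buffer {
     public:
      MOCK_METHOD(int, Size, (), (const, noexcept, override));
      MOCK_METHOD(void, Consume, (), (ref(&&), override));
    };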
// Copyright 2007, Google Inc.
// All rights reserved.
//
@@ -9994,249 +10723,20 @@
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_ACTIONS_H_
-#define GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_ACTIONS_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
#include <memory>
#include <utility>
-namespace testing {
-namespace internal {
+// Include any custom callback actions added by the local installation.
+// GOOGLETEST_CM0002 DO NOT DELETE
-// A macro from the ACTION* family (defined later in this file)
-// defines an action that can be used in a mock function. Typically,
-// these actions only care about a subset of the arguments of the mock
-// function. For example, if such an action only uses the second
-// argument, it can be used in any mock function that takes >= 2
-// arguments where the type of the second argument is compatible.
-//
-// Therefore, the action implementation must be prepared to take more
-// arguments than it needs. The ExcessiveArg type is used to
-// represent those excessive arguments. In order to keep the compiler
-// error messages tractable, we define it in the testing namespace
-// instead of testing::internal. However, this is an INTERNAL TYPE
-// and subject to change without notice, so a user MUST NOT USE THIS
-// TYPE DIRECTLY.
-struct ExcessiveArg {};
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_GENERATED_ACTIONS_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_GENERATED_ACTIONS_H_
-// A helper class needed for implementing the ACTION* macros.
-template <typename Result, class Impl>
-class ActionHelper {
- public:
- static Result Perform(Impl* impl, const ::std::tuple<>& args) {
- return impl->template gmock_PerformImpl<>(args, ExcessiveArg(),
- ExcessiveArg(), ExcessiveArg(), ExcessiveArg(), ExcessiveArg(),
- ExcessiveArg(), ExcessiveArg(), ExcessiveArg(), ExcessiveArg(),
- ExcessiveArg());
- }
-
- template <typename A0>
- static Result Perform(Impl* impl, const ::std::tuple<A0>& args) {
- return impl->template gmock_PerformImpl<A0>(args, std::get<0>(args),
- ExcessiveArg(), ExcessiveArg(), ExcessiveArg(), ExcessiveArg(),
- ExcessiveArg(), ExcessiveArg(), ExcessiveArg(), ExcessiveArg(),
- ExcessiveArg());
- }
-
- template <typename A0, typename A1>
- static Result Perform(Impl* impl, const ::std::tuple<A0, A1>& args) {
- return impl->template gmock_PerformImpl<A0, A1>(args, std::get<0>(args),
- std::get<1>(args), ExcessiveArg(), ExcessiveArg(), ExcessiveArg(),
- ExcessiveArg(), ExcessiveArg(), ExcessiveArg(), ExcessiveArg(),
- ExcessiveArg());
- }
-
- template <typename A0, typename A1, typename A2>
- static Result Perform(Impl* impl, const ::std::tuple<A0, A1, A2>& args) {
- return impl->template gmock_PerformImpl<A0, A1, A2>(args,
- std::get<0>(args), std::get<1>(args), std::get<2>(args),
- ExcessiveArg(), ExcessiveArg(), ExcessiveArg(), ExcessiveArg(),
- ExcessiveArg(), ExcessiveArg(), ExcessiveArg());
- }
-
- template <typename A0, typename A1, typename A2, typename A3>
- static Result Perform(Impl* impl, const ::std::tuple<A0, A1, A2, A3>& args) {
- return impl->template gmock_PerformImpl<A0, A1, A2, A3>(args,
- std::get<0>(args), std::get<1>(args), std::get<2>(args),
- std::get<3>(args), ExcessiveArg(), ExcessiveArg(), ExcessiveArg(),
- ExcessiveArg(), ExcessiveArg(), ExcessiveArg());
- }
-
- template <typename A0, typename A1, typename A2, typename A3, typename A4>
- static Result Perform(Impl* impl, const ::std::tuple<A0, A1, A2, A3,
- A4>& args) {
- return impl->template gmock_PerformImpl<A0, A1, A2, A3, A4>(args,
- std::get<0>(args), std::get<1>(args), std::get<2>(args),
- std::get<3>(args), std::get<4>(args), ExcessiveArg(), ExcessiveArg(),
- ExcessiveArg(), ExcessiveArg(), ExcessiveArg());
- }
-
- template <typename A0, typename A1, typename A2, typename A3, typename A4,
- typename A5>
- static Result Perform(Impl* impl, const ::std::tuple<A0, A1, A2, A3, A4,
- A5>& args) {
- return impl->template gmock_PerformImpl<A0, A1, A2, A3, A4, A5>(args,
- std::get<0>(args), std::get<1>(args), std::get<2>(args),
- std::get<3>(args), std::get<4>(args), std::get<5>(args),
- ExcessiveArg(), ExcessiveArg(), ExcessiveArg(), ExcessiveArg());
- }
-
- template <typename A0, typename A1, typename A2, typename A3, typename A4,
- typename A5, typename A6>
- static Result Perform(Impl* impl, const ::std::tuple<A0, A1, A2, A3, A4, A5,
- A6>& args) {
- return impl->template gmock_PerformImpl<A0, A1, A2, A3, A4, A5, A6>(args,
- std::get<0>(args), std::get<1>(args), std::get<2>(args),
- std::get<3>(args), std::get<4>(args), std::get<5>(args),
- std::get<6>(args), ExcessiveArg(), ExcessiveArg(), ExcessiveArg());
- }
-
- template <typename A0, typename A1, typename A2, typename A3, typename A4,
- typename A5, typename A6, typename A7>
- static Result Perform(Impl* impl, const ::std::tuple<A0, A1, A2, A3, A4, A5,
- A6, A7>& args) {
- return impl->template gmock_PerformImpl<A0, A1, A2, A3, A4, A5, A6,
- A7>(args, std::get<0>(args), std::get<1>(args), std::get<2>(args),
- std::get<3>(args), std::get<4>(args), std::get<5>(args),
- std::get<6>(args), std::get<7>(args), ExcessiveArg(), ExcessiveArg());
- }
-
- template <typename A0, typename A1, typename A2, typename A3, typename A4,
- typename A5, typename A6, typename A7, typename A8>
- static Result Perform(Impl* impl, const ::std::tuple<A0, A1, A2, A3, A4, A5,
- A6, A7, A8>& args) {
- return impl->template gmock_PerformImpl<A0, A1, A2, A3, A4, A5, A6, A7,
- A8>(args, std::get<0>(args), std::get<1>(args), std::get<2>(args),
- std::get<3>(args), std::get<4>(args), std::get<5>(args),
- std::get<6>(args), std::get<7>(args), std::get<8>(args),
- ExcessiveArg());
- }
-
- template <typename A0, typename A1, typename A2, typename A3, typename A4,
- typename A5, typename A6, typename A7, typename A8, typename A9>
- static Result Perform(Impl* impl, const ::std::tuple<A0, A1, A2, A3, A4, A5,
- A6, A7, A8, A9>& args) {
- return impl->template gmock_PerformImpl<A0, A1, A2, A3, A4, A5, A6, A7, A8,
- A9>(args, std::get<0>(args), std::get<1>(args), std::get<2>(args),
- std::get<3>(args), std::get<4>(args), std::get<5>(args),
- std::get<6>(args), std::get<7>(args), std::get<8>(args),
- std::get<9>(args));
- }
-};
-
-} // namespace internal
-} // namespace testing
-
-// The ACTION* family of macros can be used in a namespace scope to
-// define custom actions easily. The syntax:
-//
-// ACTION(name) { statements; }
-//
-// will define an action with the given name that executes the
-// statements. The value returned by the statements will be used as
-// the return value of the action. Inside the statements, you can
-// refer to the K-th (0-based) argument of the mock function by
-// 'argK', and refer to its type by 'argK_type'. For example:
-//
-// ACTION(IncrementArg1) {
-// arg1_type temp = arg1;
-// return ++(*temp);
-// }
-//
-// allows you to write
-//
-// ...WillOnce(IncrementArg1());
-//
-// You can also refer to the entire argument tuple and its type by
-// 'args' and 'args_type', and refer to the mock function type and its
-// return type by 'function_type' and 'return_type'.
-//
-// Note that you don't need to specify the types of the mock function
-// arguments. However rest assured that your code is still type-safe:
-// you'll get a compiler error if *arg1 doesn't support the ++
-// operator, or if the type of ++(*arg1) isn't compatible with the
-// mock function's return type, for example.
-//
-// Sometimes you'll want to parameterize the action. For that you can use
-// another macro:
-//
-// ACTION_P(name, param_name) { statements; }
-//
-// For example:
-//
-// ACTION_P(Add, n) { return arg0 + n; }
-//
-// will allow you to write:
-//
-// ...WillOnce(Add(5));
-//
-// Note that you don't need to provide the type of the parameter
-// either. If you need to reference the type of a parameter named
-// 'foo', you can write 'foo_type'. For example, in the body of
-// ACTION_P(Add, n) above, you can write 'n_type' to refer to the type
-// of 'n'.
-//
-// We also provide ACTION_P2, ACTION_P3, ..., up to ACTION_P10 to support
-// multi-parameter actions.
-//
-// For the purpose of typing, you can view
-//
-// ACTION_Pk(Foo, p1, ..., pk) { ... }
-//
-// as shorthand for
-//
-// template <typename p1_type, ..., typename pk_type>
-// FooActionPk<p1_type, ..., pk_type> Foo(p1_type p1, ..., pk_type pk) { ... }
-//
-// In particular, you can provide the template type arguments
-// explicitly when invoking Foo(), as in Foo<long, bool>(5, false);
-// although usually you can rely on the compiler to infer the types
-// for you automatically. You can assign the result of expression
-// Foo(p1, ..., pk) to a variable of type FooActionPk<p1_type, ...,
-// pk_type>. This can be useful when composing actions.
-//
-// You can also overload actions with different numbers of parameters:
-//
-// ACTION_P(Plus, a) { ... }
-// ACTION_P2(Plus, a, b) { ... }
-//
-// While it's tempting to always use the ACTION* macros when defining
-// a new action, you should also consider implementing ActionInterface
-// or using MakePolymorphicAction() instead, especially if you need to
-// use the action a lot. While these approaches require more work,
-// they give you more control on the types of the mock function
-// arguments and the action parameters, which in general leads to
-// better compiler error messages that pay off in the long run. They
-// also allow overloading actions based on parameter types (as opposed
-// to just based on the number of parameters).
-//
-// CAVEAT:
-//
-// ACTION*() can only be used in a namespace scope. The reason is
-// that C++ doesn't yet allow function-local types to be used to
-// instantiate templates. The up-coming C++0x standard will fix this.
-// Once that's done, we'll consider supporting using ACTION*() inside
-// a function.
-//
-// MORE INFORMATION:
-//
-// To learn more about using these macros, please search for 'ACTION' on
-// https://github.com/google/googletest/blob/master/googlemock/docs/CookBook.md
-
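// A minimal usage sketch (editor illustration, not part of the header and
// hedged accordingly): ACTION_P defines a one-parameter action at namespace
// scope; the Sink/MockSink interface below is hypothetical.
#include <gmock/gmock.h>

struct Sink {
  virtual ~Sink() = default;
  virtual int Consume(int value) = 0;
};

struct MockSink : Sink {
  MOCK_METHOD(int, Consume, (int value), (override));
};

// arg0 refers to the first argument of the mocked call; n is the action
// parameter supplied at the call site.
ACTION_P(AddTo, n) { return arg0 + n; }

void UseAddTo() {
  MockSink sink;
  EXPECT_CALL(sink, Consume(::testing::_)).WillOnce(AddTo(10));
  int result = sink.Consume(5);  // result == 15
  (void)result;
}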
-// An internal macro needed for implementing ACTION*().
-#define GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_\
- const args_type& args GTEST_ATTRIBUTE_UNUSED_, \
- arg0_type arg0 GTEST_ATTRIBUTE_UNUSED_, \
- arg1_type arg1 GTEST_ATTRIBUTE_UNUSED_, \
- arg2_type arg2 GTEST_ATTRIBUTE_UNUSED_, \
- arg3_type arg3 GTEST_ATTRIBUTE_UNUSED_, \
- arg4_type arg4 GTEST_ATTRIBUTE_UNUSED_, \
- arg5_type arg5 GTEST_ATTRIBUTE_UNUSED_, \
- arg6_type arg6 GTEST_ATTRIBUTE_UNUSED_, \
- arg7_type arg7 GTEST_ATTRIBUTE_UNUSED_, \
- arg8_type arg8 GTEST_ATTRIBUTE_UNUSED_, \
- arg9_type arg9 GTEST_ATTRIBUTE_UNUSED_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_GENERATED_ACTIONS_H_
// Sometimes you want to give an action explicit template parameters
// that cannot be inferred from its value parameters. ACTION() and
@@ -10482,6 +10982,20 @@
p7(::std::move(gmock_p7)), p8(::std::move(gmock_p8)), \
p9(::std::move(gmock_p9))
+// Defines the copy constructor
+#define GMOCK_INTERNAL_DEFN_COPY_AND_0_VALUE_PARAMS() \
+ {} // Avoid https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82134
+#define GMOCK_INTERNAL_DEFN_COPY_AND_1_VALUE_PARAMS(...) = default;
+#define GMOCK_INTERNAL_DEFN_COPY_AND_2_VALUE_PARAMS(...) = default;
+#define GMOCK_INTERNAL_DEFN_COPY_AND_3_VALUE_PARAMS(...) = default;
+#define GMOCK_INTERNAL_DEFN_COPY_AND_4_VALUE_PARAMS(...) = default;
+#define GMOCK_INTERNAL_DEFN_COPY_AND_5_VALUE_PARAMS(...) = default;
+#define GMOCK_INTERNAL_DEFN_COPY_AND_6_VALUE_PARAMS(...) = default;
+#define GMOCK_INTERNAL_DEFN_COPY_AND_7_VALUE_PARAMS(...) = default;
+#define GMOCK_INTERNAL_DEFN_COPY_AND_8_VALUE_PARAMS(...) = default;
+#define GMOCK_INTERNAL_DEFN_COPY_AND_9_VALUE_PARAMS(...) = default;
+#define GMOCK_INTERNAL_DEFN_COPY_AND_10_VALUE_PARAMS(...) = default;
+
// Declares the fields for storing the value parameters.
#define GMOCK_INTERNAL_DEFN_AND_0_VALUE_PARAMS()
#define GMOCK_INTERNAL_DEFN_AND_1_VALUE_PARAMS(p0) p0##_type p0;
@@ -10603,922 +11117,70 @@
#define GMOCK_ACTION_CLASS_(name, value_params)\
GTEST_CONCAT_TOKEN_(name##Action, GMOCK_INTERNAL_COUNT_##value_params)
-#define ACTION_TEMPLATE(name, template_params, value_params)\
- template <GMOCK_INTERNAL_DECL_##template_params\
- GMOCK_INTERNAL_DECL_TYPE_##value_params>\
- class GMOCK_ACTION_CLASS_(name, value_params) {\
- public:\
- explicit GMOCK_ACTION_CLASS_(name, value_params)\
- GMOCK_INTERNAL_INIT_##value_params {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- explicit gmock_Impl GMOCK_INTERNAL_INIT_##value_params {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- GMOCK_INTERNAL_DEFN_##value_params\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(\
- new gmock_Impl<F>(GMOCK_INTERNAL_LIST_##value_params));\
- }\
- GMOCK_INTERNAL_DEFN_##value_params\
- private:\
- GTEST_DISALLOW_ASSIGN_(GMOCK_ACTION_CLASS_(name, value_params));\
- };\
- template <GMOCK_INTERNAL_DECL_##template_params\
- GMOCK_INTERNAL_DECL_TYPE_##value_params>\
- inline GMOCK_ACTION_CLASS_(name, value_params)<\
- GMOCK_INTERNAL_LIST_##template_params\
- GMOCK_INTERNAL_LIST_TYPE_##value_params> name(\
- GMOCK_INTERNAL_DECL_##value_params) {\
- return GMOCK_ACTION_CLASS_(name, value_params)<\
- GMOCK_INTERNAL_LIST_##template_params\
- GMOCK_INTERNAL_LIST_TYPE_##value_params>(\
- GMOCK_INTERNAL_LIST_##value_params);\
- }\
- template <GMOCK_INTERNAL_DECL_##template_params\
- GMOCK_INTERNAL_DECL_TYPE_##value_params>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- GMOCK_ACTION_CLASS_(name, value_params)<\
- GMOCK_INTERNAL_LIST_##template_params\
- GMOCK_INTERNAL_LIST_TYPE_##value_params>::gmock_Impl<F>::\
- gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION(name)\
- class name##Action {\
- public:\
- name##Action() {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- gmock_Impl() {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>());\
- }\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##Action);\
- };\
- inline name##Action name() {\
- return name##Action();\
- }\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##Action::gmock_Impl<F>::gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION_P(name, p0)\
- template <typename p0##_type>\
- class name##ActionP {\
- public:\
- explicit name##ActionP(p0##_type gmock_p0) : \
- p0(::std::forward<p0##_type>(gmock_p0)) {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- explicit gmock_Impl(p0##_type gmock_p0) : \
- p0(::std::forward<p0##_type>(gmock_p0)) {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- p0##_type p0;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>(p0));\
- }\
- p0##_type p0;\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##ActionP);\
- };\
- template <typename p0##_type>\
- inline name##ActionP<p0##_type> name(p0##_type p0) {\
- return name##ActionP<p0##_type>(p0);\
- }\
- template <typename p0##_type>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##ActionP<p0##_type>::gmock_Impl<F>::gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION_P2(name, p0, p1)\
- template <typename p0##_type, typename p1##_type>\
- class name##ActionP2 {\
- public:\
- name##ActionP2(p0##_type gmock_p0, \
- p1##_type gmock_p1) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)) {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- gmock_Impl(p0##_type gmock_p0, \
- p1##_type gmock_p1) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)) {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- p0##_type p0;\
- p1##_type p1;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>(p0, p1));\
- }\
- p0##_type p0;\
- p1##_type p1;\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##ActionP2);\
- };\
- template <typename p0##_type, typename p1##_type>\
- inline name##ActionP2<p0##_type, p1##_type> name(p0##_type p0, \
- p1##_type p1) {\
- return name##ActionP2<p0##_type, p1##_type>(p0, p1);\
- }\
- template <typename p0##_type, typename p1##_type>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##ActionP2<p0##_type, p1##_type>::gmock_Impl<F>::gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION_P3(name, p0, p1, p2)\
- template <typename p0##_type, typename p1##_type, typename p2##_type>\
- class name##ActionP3 {\
- public:\
- name##ActionP3(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)) {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)) {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>(p0, p1, p2));\
- }\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##ActionP3);\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type>\
- inline name##ActionP3<p0##_type, p1##_type, p2##_type> name(p0##_type p0, \
- p1##_type p1, p2##_type p2) {\
- return name##ActionP3<p0##_type, p1##_type, p2##_type>(p0, p1, p2);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##ActionP3<p0##_type, p1##_type, \
- p2##_type>::gmock_Impl<F>::gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION_P4(name, p0, p1, p2, p3)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type>\
- class name##ActionP4 {\
- public:\
- name##ActionP4(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, \
- p3##_type gmock_p3) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)) {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)) {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>(p0, p1, p2, p3));\
- }\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##ActionP4);\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type>\
- inline name##ActionP4<p0##_type, p1##_type, p2##_type, \
- p3##_type> name(p0##_type p0, p1##_type p1, p2##_type p2, \
- p3##_type p3) {\
- return name##ActionP4<p0##_type, p1##_type, p2##_type, p3##_type>(p0, p1, \
- p2, p3);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##ActionP4<p0##_type, p1##_type, p2##_type, \
- p3##_type>::gmock_Impl<F>::gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION_P5(name, p0, p1, p2, p3, p4)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type>\
- class name##ActionP5 {\
- public:\
- name##ActionP5(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, \
- p4##_type gmock_p4) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)) {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, \
- p4##_type gmock_p4) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)) {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>(p0, p1, p2, p3, p4));\
- }\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##ActionP5);\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type>\
- inline name##ActionP5<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type> name(p0##_type p0, p1##_type p1, p2##_type p2, p3##_type p3, \
- p4##_type p4) {\
- return name##ActionP5<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type>(p0, p1, p2, p3, p4);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##ActionP5<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type>::gmock_Impl<F>::gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION_P6(name, p0, p1, p2, p3, p4, p5)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type>\
- class name##ActionP6 {\
- public:\
- name##ActionP6(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)), \
- p5(::std::forward<p5##_type>(gmock_p5)) {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)), \
- p5(::std::forward<p5##_type>(gmock_p5)) {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- p5##_type p5;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>(p0, p1, p2, p3, p4, p5));\
- }\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- p5##_type p5;\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##ActionP6);\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type>\
- inline name##ActionP6<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type> name(p0##_type p0, p1##_type p1, p2##_type p2, \
- p3##_type p3, p4##_type p4, p5##_type p5) {\
- return name##ActionP6<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type>(p0, p1, p2, p3, p4, p5);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##ActionP6<p0##_type, p1##_type, p2##_type, p3##_type, p4##_type, \
- p5##_type>::gmock_Impl<F>::gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION_P7(name, p0, p1, p2, p3, p4, p5, p6)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type>\
- class name##ActionP7 {\
- public:\
- name##ActionP7(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5, \
- p6##_type gmock_p6) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)), \
- p5(::std::forward<p5##_type>(gmock_p5)), \
- p6(::std::forward<p6##_type>(gmock_p6)) {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4, p5##_type gmock_p5, \
- p6##_type gmock_p6) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)), \
- p5(::std::forward<p5##_type>(gmock_p5)), \
- p6(::std::forward<p6##_type>(gmock_p6)) {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- p5##_type p5;\
- p6##_type p6;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>(p0, p1, p2, p3, p4, p5, \
- p6));\
- }\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- p5##_type p5;\
- p6##_type p6;\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##ActionP7);\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type>\
- inline name##ActionP7<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type> name(p0##_type p0, p1##_type p1, \
- p2##_type p2, p3##_type p3, p4##_type p4, p5##_type p5, \
- p6##_type p6) {\
- return name##ActionP7<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type>(p0, p1, p2, p3, p4, p5, p6);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##ActionP7<p0##_type, p1##_type, p2##_type, p3##_type, p4##_type, \
- p5##_type, p6##_type>::gmock_Impl<F>::gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION_P8(name, p0, p1, p2, p3, p4, p5, p6, p7)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type>\
- class name##ActionP8 {\
- public:\
- name##ActionP8(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5, p6##_type gmock_p6, \
- p7##_type gmock_p7) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)), \
- p5(::std::forward<p5##_type>(gmock_p5)), \
- p6(::std::forward<p6##_type>(gmock_p6)), \
- p7(::std::forward<p7##_type>(gmock_p7)) {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4, p5##_type gmock_p5, \
- p6##_type gmock_p6, \
- p7##_type gmock_p7) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)), \
- p5(::std::forward<p5##_type>(gmock_p5)), \
- p6(::std::forward<p6##_type>(gmock_p6)), \
- p7(::std::forward<p7##_type>(gmock_p7)) {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- p5##_type p5;\
- p6##_type p6;\
- p7##_type p7;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>(p0, p1, p2, p3, p4, p5, \
- p6, p7));\
- }\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- p5##_type p5;\
- p6##_type p6;\
- p7##_type p7;\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##ActionP8);\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type>\
- inline name##ActionP8<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type> name(p0##_type p0, \
- p1##_type p1, p2##_type p2, p3##_type p3, p4##_type p4, p5##_type p5, \
- p6##_type p6, p7##_type p7) {\
- return name##ActionP8<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type>(p0, p1, p2, p3, p4, p5, \
- p6, p7);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##ActionP8<p0##_type, p1##_type, p2##_type, p3##_type, p4##_type, \
- p5##_type, p6##_type, \
- p7##_type>::gmock_Impl<F>::gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION_P9(name, p0, p1, p2, p3, p4, p5, p6, p7, p8)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type>\
- class name##ActionP9 {\
- public:\
- name##ActionP9(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5, p6##_type gmock_p6, p7##_type gmock_p7, \
- p8##_type gmock_p8) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)), \
- p5(::std::forward<p5##_type>(gmock_p5)), \
- p6(::std::forward<p6##_type>(gmock_p6)), \
- p7(::std::forward<p7##_type>(gmock_p7)), \
- p8(::std::forward<p8##_type>(gmock_p8)) {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4, p5##_type gmock_p5, \
- p6##_type gmock_p6, p7##_type gmock_p7, \
- p8##_type gmock_p8) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)), \
- p5(::std::forward<p5##_type>(gmock_p5)), \
- p6(::std::forward<p6##_type>(gmock_p6)), \
- p7(::std::forward<p7##_type>(gmock_p7)), \
- p8(::std::forward<p8##_type>(gmock_p8)) {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- p5##_type p5;\
- p6##_type p6;\
- p7##_type p7;\
- p8##_type p8;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>(p0, p1, p2, p3, p4, p5, \
- p6, p7, p8));\
- }\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- p5##_type p5;\
- p6##_type p6;\
- p7##_type p7;\
- p8##_type p8;\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##ActionP9);\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type>\
- inline name##ActionP9<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, \
- p8##_type> name(p0##_type p0, p1##_type p1, p2##_type p2, p3##_type p3, \
- p4##_type p4, p5##_type p5, p6##_type p6, p7##_type p7, \
- p8##_type p8) {\
- return name##ActionP9<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, p8##_type>(p0, p1, p2, \
- p3, p4, p5, p6, p7, p8);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##ActionP9<p0##_type, p1##_type, p2##_type, p3##_type, p4##_type, \
- p5##_type, p6##_type, p7##_type, \
- p8##_type>::gmock_Impl<F>::gmock_PerformImpl(\
- GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
-
-#define ACTION_P10(name, p0, p1, p2, p3, p4, p5, p6, p7, p8, p9)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type, \
- typename p9##_type>\
- class name##ActionP10 {\
- public:\
- name##ActionP10(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5, p6##_type gmock_p6, p7##_type gmock_p7, \
- p8##_type gmock_p8, \
- p9##_type gmock_p9) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)), \
- p5(::std::forward<p5##_type>(gmock_p5)), \
- p6(::std::forward<p6##_type>(gmock_p6)), \
- p7(::std::forward<p7##_type>(gmock_p7)), \
- p8(::std::forward<p8##_type>(gmock_p8)), \
- p9(::std::forward<p9##_type>(gmock_p9)) {}\
- template <typename F>\
- class gmock_Impl : public ::testing::ActionInterface<F> {\
- public:\
- typedef F function_type;\
- typedef typename ::testing::internal::Function<F>::Result return_type;\
- typedef typename ::testing::internal::Function<F>::ArgumentTuple\
- args_type;\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4, p5##_type gmock_p5, \
- p6##_type gmock_p6, p7##_type gmock_p7, p8##_type gmock_p8, \
- p9##_type gmock_p9) : p0(::std::forward<p0##_type>(gmock_p0)), \
- p1(::std::forward<p1##_type>(gmock_p1)), \
- p2(::std::forward<p2##_type>(gmock_p2)), \
- p3(::std::forward<p3##_type>(gmock_p3)), \
- p4(::std::forward<p4##_type>(gmock_p4)), \
- p5(::std::forward<p5##_type>(gmock_p5)), \
- p6(::std::forward<p6##_type>(gmock_p6)), \
- p7(::std::forward<p7##_type>(gmock_p7)), \
- p8(::std::forward<p8##_type>(gmock_p8)), \
- p9(::std::forward<p9##_type>(gmock_p9)) {}\
- virtual return_type Perform(const args_type& args) {\
- return ::testing::internal::ActionHelper<return_type, gmock_Impl>::\
- Perform(this, args);\
- }\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- return_type gmock_PerformImpl(const args_type& args, arg0_type arg0, \
- arg1_type arg1, arg2_type arg2, arg3_type arg3, arg4_type arg4, \
- arg5_type arg5, arg6_type arg6, arg7_type arg7, arg8_type arg8, \
- arg9_type arg9) const;\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- p5##_type p5;\
- p6##_type p6;\
- p7##_type p7;\
- p8##_type p8;\
- p9##_type p9;\
- private:\
- GTEST_DISALLOW_ASSIGN_(gmock_Impl);\
- };\
- template <typename F> operator ::testing::Action<F>() const {\
- return ::testing::Action<F>(new gmock_Impl<F>(p0, p1, p2, p3, p4, p5, \
- p6, p7, p8, p9));\
- }\
- p0##_type p0;\
- p1##_type p1;\
- p2##_type p2;\
- p3##_type p3;\
- p4##_type p4;\
- p5##_type p5;\
- p6##_type p6;\
- p7##_type p7;\
- p8##_type p8;\
- p9##_type p9;\
- private:\
- GTEST_DISALLOW_ASSIGN_(name##ActionP10);\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type, \
- typename p9##_type>\
- inline name##ActionP10<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, p8##_type, \
- p9##_type> name(p0##_type p0, p1##_type p1, p2##_type p2, p3##_type p3, \
- p4##_type p4, p5##_type p5, p6##_type p6, p7##_type p7, p8##_type p8, \
- p9##_type p9) {\
- return name##ActionP10<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, p8##_type, p9##_type>(p0, \
- p1, p2, p3, p4, p5, p6, p7, p8, p9);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type, \
- typename p9##_type>\
- template <typename F>\
- template <typename arg0_type, typename arg1_type, typename arg2_type, \
- typename arg3_type, typename arg4_type, typename arg5_type, \
- typename arg6_type, typename arg7_type, typename arg8_type, \
- typename arg9_type>\
- typename ::testing::internal::Function<F>::Result\
- name##ActionP10<p0##_type, p1##_type, p2##_type, p3##_type, p4##_type, \
- p5##_type, p6##_type, p7##_type, p8##_type, \
- p9##_type>::gmock_Impl<F>::gmock_PerformImpl(\
+#define ACTION_TEMPLATE(name, template_params, value_params) \
+ template <GMOCK_INTERNAL_DECL_##template_params \
+ GMOCK_INTERNAL_DECL_TYPE_##value_params> \
+ class GMOCK_ACTION_CLASS_(name, value_params) { \
+ public: \
+ explicit GMOCK_ACTION_CLASS_(name, value_params)( \
+ GMOCK_INTERNAL_DECL_##value_params) \
+ GMOCK_PP_IF(GMOCK_PP_IS_EMPTY(GMOCK_INTERNAL_COUNT_##value_params), \
+ = default; , \
+ : impl_(std::make_shared<gmock_Impl>( \
+ GMOCK_INTERNAL_LIST_##value_params)) { }) \
+ GMOCK_ACTION_CLASS_(name, value_params)( \
+ const GMOCK_ACTION_CLASS_(name, value_params)&) noexcept \
+ GMOCK_INTERNAL_DEFN_COPY_##value_params \
+ GMOCK_ACTION_CLASS_(name, value_params)( \
+ GMOCK_ACTION_CLASS_(name, value_params)&&) noexcept \
+ GMOCK_INTERNAL_DEFN_COPY_##value_params \
+ template <typename F> \
+ operator ::testing::Action<F>() const { \
+ return GMOCK_PP_IF( \
+ GMOCK_PP_IS_EMPTY(GMOCK_INTERNAL_COUNT_##value_params), \
+ (::testing::internal::MakeAction<F, gmock_Impl>()), \
+ (::testing::internal::MakeAction<F>(impl_))); \
+ } \
+ private: \
+ class gmock_Impl { \
+ public: \
+ explicit gmock_Impl GMOCK_INTERNAL_INIT_##value_params {} \
+ template <typename function_type, typename return_type, \
+ typename args_type, GMOCK_ACTION_TEMPLATE_ARGS_NAMES_> \
+ return_type gmock_PerformImpl(GMOCK_ACTION_ARG_TYPES_AND_NAMES_) const; \
+ GMOCK_INTERNAL_DEFN_##value_params \
+ }; \
+ GMOCK_PP_IF(GMOCK_PP_IS_EMPTY(GMOCK_INTERNAL_COUNT_##value_params), \
+ , std::shared_ptr<const gmock_Impl> impl_;) \
+ }; \
+ template <GMOCK_INTERNAL_DECL_##template_params \
+ GMOCK_INTERNAL_DECL_TYPE_##value_params> \
+ GMOCK_ACTION_CLASS_(name, value_params)< \
+ GMOCK_INTERNAL_LIST_##template_params \
+ GMOCK_INTERNAL_LIST_TYPE_##value_params> name( \
+ GMOCK_INTERNAL_DECL_##value_params) GTEST_MUST_USE_RESULT_; \
+ template <GMOCK_INTERNAL_DECL_##template_params \
+ GMOCK_INTERNAL_DECL_TYPE_##value_params> \
+ inline GMOCK_ACTION_CLASS_(name, value_params)< \
+ GMOCK_INTERNAL_LIST_##template_params \
+ GMOCK_INTERNAL_LIST_TYPE_##value_params> name( \
+ GMOCK_INTERNAL_DECL_##value_params) { \
+ return GMOCK_ACTION_CLASS_(name, value_params)< \
+ GMOCK_INTERNAL_LIST_##template_params \
+ GMOCK_INTERNAL_LIST_TYPE_##value_params>( \
+ GMOCK_INTERNAL_LIST_##value_params); \
+ } \
+ template <GMOCK_INTERNAL_DECL_##template_params \
+ GMOCK_INTERNAL_DECL_TYPE_##value_params> \
+ template <typename function_type, typename return_type, typename args_type, \
+ GMOCK_ACTION_TEMPLATE_ARGS_NAMES_> \
+ return_type GMOCK_ACTION_CLASS_(name, value_params)< \
+ GMOCK_INTERNAL_LIST_##template_params \
+ GMOCK_INTERNAL_LIST_TYPE_##value_params>::gmock_Impl::gmock_PerformImpl( \
GMOCK_ACTION_ARG_TYPES_AND_NAMES_UNUSED_) const
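// A short usage sketch (editor illustration, not part of the header):
// ACTION_TEMPLATE lets an action take explicit template parameters in
// addition to value parameters. Here k selects which mock-call argument to
// copy and T is the type it is converted to before assignment through the
// output pointer.
ACTION_TEMPLATE(DuplicateArg,
                HAS_2_TEMPLATE_PARAMS(int, k, typename, T),
                AND_1_VALUE_PARAMS(output)) {
  *output = T(::std::get<k>(args));
}
// Typical use in an expectation (mock object and Foo are hypothetical):
//   unsigned char n;
//   EXPECT_CALL(mock, Foo(_, _))
//       .WillOnce(DuplicateArg<1, unsigned char>(&n));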
namespace testing {
-
// The ACTION*() macros trigger warning C4100 (unreferenced formal
// parameter) in MSVC with -W4. Unfortunately they cannot be fixed in
// the macro definition, as the warnings are generated when the macro
@@ -11529,8 +11191,37 @@
# pragma warning(disable:4100)
#endif
-// Various overloads for InvokeArgument<N>().
-//
+namespace internal {
+
+// internal::InvokeArgument - a helper for InvokeArgument action.
+// The basic overloads are provided here for generic functors.
+// Overloads for other custom-callables are provided in the
+// internal/custom/gmock-generated-actions.h header.
+template <typename F, typename... Args>
+auto InvokeArgument(F f, Args... args) -> decltype(f(args...)) {
+ return f(args...);
+}
+
+template <std::size_t index, typename... Params>
+struct InvokeArgumentAction {
+ template <typename... Args>
+ auto operator()(Args&&... args) const -> decltype(internal::InvokeArgument(
+ std::get<index>(std::forward_as_tuple(std::forward<Args>(args)...)),
+ std::declval<const Params&>()...)) {
+ internal::FlatTuple<Args&&...> args_tuple(FlatTupleConstructTag{},
+ std::forward<Args>(args)...);
+ return params.Apply([&](const Params&... unpacked_params) {
+ auto&& callable = args_tuple.template Get<index>();
+ return internal::InvokeArgument(
+ std::forward<decltype(callable)>(callable), unpacked_params...);
+ });
+ }
+
+ internal::FlatTuple<Params...> params;
+};
+
+} // namespace internal
+
// The InvokeArgument<N>(a1, a2, ..., a_k) action invokes the N-th
// (0-based) argument, which must be a k-ary callable, of the mock
// function, with arguments a1, a2, ..., a_k.
@@ -11538,15 +11229,15 @@
// Notes:
//
// 1. The arguments are passed by value by default. If you need to
-// pass an argument by reference, wrap it inside ByRef(). For
+// pass an argument by reference, wrap it inside std::ref(). For
// example,
//
-// InvokeArgument<1>(5, string("Hello"), ByRef(foo))
+// InvokeArgument<1>(5, string("Hello"), std::ref(foo))
//
// passes 5 and string("Hello") by value, and passes foo by
// reference.
//
-// 2. If the callable takes an argument by reference but ByRef() is
+// 2. If the callable takes an argument by reference but std::ref() is
// not used, it will receive the reference to a copy of the value,
// instead of the original value. For example, when the 0-th
// argument of the mock function takes a const string&, the action
@@ -11558,247 +11249,11 @@
// to the callable. This makes it easy for a user to define an
// InvokeArgument action from temporary values and have it performed
// later.
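// A minimal sketch (editor illustration; MockDownloader and Fetch are
// hypothetical): InvokeArgument<1> runs the mock call's second argument,
// here a completion callback, passing it the supplied status code.
#include <functional>
#include <string>
#include <gmock/gmock.h>

struct MockDownloader {
  MOCK_METHOD(void, Fetch,
              (const std::string& url, std::function<void(int)> done), ());
};

void UseInvokeArgument() {
  MockDownloader d;
  EXPECT_CALL(d, Fetch(::testing::_, ::testing::_))
      .WillOnce(::testing::InvokeArgument<1>(200));  // invokes done(200)
  d.Fetch("http://example.com", [](int status) { (void)status; /* 200 */ });
}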
-
-namespace internal {
-namespace invoke_argument {
-
-// Appears in InvokeArgumentAdl's argument list to help avoid
-// accidental calls to user functions of the same name.
-struct AdlTag {};
-
-// InvokeArgumentAdl - a helper for InvokeArgument.
-// The basic overloads are provided here for generic functors.
-// Overloads for other custom-callables are provided in the
-// internal/custom/callback-actions.h header.
-
-template <typename R, typename F>
-R InvokeArgumentAdl(AdlTag, F f) {
- return f();
-}
-template <typename R, typename F, typename A1>
-R InvokeArgumentAdl(AdlTag, F f, A1 a1) {
- return f(a1);
-}
-template <typename R, typename F, typename A1, typename A2>
-R InvokeArgumentAdl(AdlTag, F f, A1 a1, A2 a2) {
- return f(a1, a2);
-}
-template <typename R, typename F, typename A1, typename A2, typename A3>
-R InvokeArgumentAdl(AdlTag, F f, A1 a1, A2 a2, A3 a3) {
- return f(a1, a2, a3);
-}
-template <typename R, typename F, typename A1, typename A2, typename A3,
- typename A4>
-R InvokeArgumentAdl(AdlTag, F f, A1 a1, A2 a2, A3 a3, A4 a4) {
- return f(a1, a2, a3, a4);
-}
-template <typename R, typename F, typename A1, typename A2, typename A3,
- typename A4, typename A5>
-R InvokeArgumentAdl(AdlTag, F f, A1 a1, A2 a2, A3 a3, A4 a4, A5 a5) {
- return f(a1, a2, a3, a4, a5);
-}
-template <typename R, typename F, typename A1, typename A2, typename A3,
- typename A4, typename A5, typename A6>
-R InvokeArgumentAdl(AdlTag, F f, A1 a1, A2 a2, A3 a3, A4 a4, A5 a5, A6 a6) {
- return f(a1, a2, a3, a4, a5, a6);
-}
-template <typename R, typename F, typename A1, typename A2, typename A3,
- typename A4, typename A5, typename A6, typename A7>
-R InvokeArgumentAdl(AdlTag, F f, A1 a1, A2 a2, A3 a3, A4 a4, A5 a5, A6 a6,
- A7 a7) {
- return f(a1, a2, a3, a4, a5, a6, a7);
-}
-template <typename R, typename F, typename A1, typename A2, typename A3,
- typename A4, typename A5, typename A6, typename A7, typename A8>
-R InvokeArgumentAdl(AdlTag, F f, A1 a1, A2 a2, A3 a3, A4 a4, A5 a5, A6 a6,
- A7 a7, A8 a8) {
- return f(a1, a2, a3, a4, a5, a6, a7, a8);
-}
-template <typename R, typename F, typename A1, typename A2, typename A3,
- typename A4, typename A5, typename A6, typename A7, typename A8,
- typename A9>
-R InvokeArgumentAdl(AdlTag, F f, A1 a1, A2 a2, A3 a3, A4 a4, A5 a5, A6 a6,
- A7 a7, A8 a8, A9 a9) {
- return f(a1, a2, a3, a4, a5, a6, a7, a8, a9);
-}
-template <typename R, typename F, typename A1, typename A2, typename A3,
- typename A4, typename A5, typename A6, typename A7, typename A8,
- typename A9, typename A10>
-R InvokeArgumentAdl(AdlTag, F f, A1 a1, A2 a2, A3 a3, A4 a4, A5 a5, A6 a6,
- A7 a7, A8 a8, A9 a9, A10 a10) {
- return f(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10);
-}
-} // namespace invoke_argument
-} // namespace internal
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_0_VALUE_PARAMS()) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args));
-}
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_1_VALUE_PARAMS(p0)) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args), p0);
-}
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_2_VALUE_PARAMS(p0, p1)) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args), p0, p1);
-}
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_3_VALUE_PARAMS(p0, p1, p2)) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args), p0, p1, p2);
-}
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_4_VALUE_PARAMS(p0, p1, p2, p3)) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args), p0, p1, p2, p3);
-}
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_5_VALUE_PARAMS(p0, p1, p2, p3, p4)) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args), p0, p1, p2, p3, p4);
-}
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_6_VALUE_PARAMS(p0, p1, p2, p3, p4, p5)) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args), p0, p1, p2, p3, p4, p5);
-}
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_7_VALUE_PARAMS(p0, p1, p2, p3, p4, p5, p6)) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args), p0, p1, p2, p3, p4, p5, p6);
-}
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_8_VALUE_PARAMS(p0, p1, p2, p3, p4, p5, p6, p7)) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args), p0, p1, p2, p3, p4, p5, p6, p7);
-}
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_9_VALUE_PARAMS(p0, p1, p2, p3, p4, p5, p6, p7, p8)) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args), p0, p1, p2, p3, p4, p5, p6, p7, p8);
-}
-
-ACTION_TEMPLATE(InvokeArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_10_VALUE_PARAMS(p0, p1, p2, p3, p4, p5, p6, p7, p8, p9)) {
- using internal::invoke_argument::InvokeArgumentAdl;
- return InvokeArgumentAdl<return_type>(
- internal::invoke_argument::AdlTag(),
- ::std::get<k>(args), p0, p1, p2, p3, p4, p5, p6, p7, p8, p9);
-}
-
-// Various overloads for ReturnNew<T>().
-//
-// The ReturnNew<T>(a1, a2, ..., a_k) action returns a pointer to a new
-// instance of type T, constructed on the heap with constructor arguments
-// a1, a2, ..., and a_k. The caller assumes ownership of the returned value.
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_0_VALUE_PARAMS()) {
- return new T();
-}
-
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_1_VALUE_PARAMS(p0)) {
- return new T(p0);
-}
-
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_2_VALUE_PARAMS(p0, p1)) {
- return new T(p0, p1);
-}
-
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_3_VALUE_PARAMS(p0, p1, p2)) {
- return new T(p0, p1, p2);
-}
-
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_4_VALUE_PARAMS(p0, p1, p2, p3)) {
- return new T(p0, p1, p2, p3);
-}
-
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_5_VALUE_PARAMS(p0, p1, p2, p3, p4)) {
- return new T(p0, p1, p2, p3, p4);
-}
-
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_6_VALUE_PARAMS(p0, p1, p2, p3, p4, p5)) {
- return new T(p0, p1, p2, p3, p4, p5);
-}
-
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_7_VALUE_PARAMS(p0, p1, p2, p3, p4, p5, p6)) {
- return new T(p0, p1, p2, p3, p4, p5, p6);
-}
-
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_8_VALUE_PARAMS(p0, p1, p2, p3, p4, p5, p6, p7)) {
- return new T(p0, p1, p2, p3, p4, p5, p6, p7);
-}
-
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_9_VALUE_PARAMS(p0, p1, p2, p3, p4, p5, p6, p7, p8)) {
- return new T(p0, p1, p2, p3, p4, p5, p6, p7, p8);
-}
-
-ACTION_TEMPLATE(ReturnNew,
- HAS_1_TEMPLATE_PARAMS(typename, T),
- AND_10_VALUE_PARAMS(p0, p1, p2, p3, p4, p5, p6, p7, p8, p9)) {
- return new T(p0, p1, p2, p3, p4, p5, p6, p7, p8, p9);
+template <std::size_t index, typename... Params>
+internal::InvokeArgumentAction<index, typename std::decay<Params>::type...>
+InvokeArgument(Params&&... params) {
+ return {internal::FlatTuple<typename std::decay<Params>::type...>(
+ internal::FlatTupleConstructTag{}, std::forward<Params>(params)...)};
}
#ifdef _MSC_VER
@@ -11807,1281 +11262,7 @@
} // namespace testing
-// Include any custom callback actions added by the local installation.
-// We must include this header at the end to make sure it can use the
-// declarations from this file.
-// This file was GENERATED by command:
-// pump.py gmock-generated-actions.h.pump
-// DO NOT EDIT BY HAND!!!
-
-// GOOGLETEST_CM0002 DO NOT DELETE
-
-#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_GENERATED_ACTIONS_H_
-#define GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_GENERATED_ACTIONS_H_
-
-#endif // GMOCK_INCLUDE_GMOCK_INTERNAL_CUSTOM_GMOCK_GENERATED_ACTIONS_H_
-
-#endif // GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_ACTIONS_H_
-// This file was GENERATED by command:
-// pump.py gmock-generated-matchers.h.pump
-// DO NOT EDIT BY HAND!!!
-
-// Copyright 2008, Google Inc.
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are
-// met:
-//
-// * Redistributions of source code must retain the above copyright
-// notice, this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following disclaimer
-// in the documentation and/or other materials provided with the
-// distribution.
-// * Neither the name of Google Inc. nor the names of its
-// contributors may be used to endorse or promote products derived from
-// this software without specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-// Google Mock - a framework for writing C++ mock classes.
-//
-// This file implements some commonly used variadic matchers.
-
-// GOOGLETEST_CM0002 DO NOT DELETE
-
-#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_MATCHERS_H_
-#define GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_MATCHERS_H_
-
-#include <iterator>
-#include <sstream>
-#include <string>
-#include <utility>
-#include <vector>
-
-// The MATCHER* family of macros can be used in a namespace scope to
-// define custom matchers easily.
-//
-// Basic Usage
-// ===========
-//
-// The syntax
-//
-// MATCHER(name, description_string) { statements; }
-//
-// defines a matcher with the given name that executes the statements,
-// which must return a bool to indicate if the match succeeds. Inside
-// the statements, you can refer to the value being matched by 'arg',
-// and refer to its type by 'arg_type'.
-//
-// The description string documents what the matcher does, and is used
-// to generate the failure message when the match fails. Since a
-// MATCHER() is usually defined in a header file shared by multiple
-// C++ source files, we require the description to be a C-string
-// literal to avoid possible side effects. It can be empty, in which
-// case we'll use the sequence of words in the matcher name as the
-// description.
-//
-// For example:
-//
-// MATCHER(IsEven, "") { return (arg % 2) == 0; }
-//
-// allows you to write
-//
-// // Expects mock_foo.Bar(n) to be called where n is even.
-// EXPECT_CALL(mock_foo, Bar(IsEven()));
-//
-// or,
-//
-// // Verifies that the value of some_expression is even.
-// EXPECT_THAT(some_expression, IsEven());
-//
-// If the above assertion fails, it will print something like:
-//
-// Value of: some_expression
-// Expected: is even
-// Actual: 7
-//
-// where the description "is even" is automatically calculated from the
-// matcher name IsEven.
-//
-// Argument Type
-// =============
-//
-// Note that the type of the value being matched (arg_type) is
-// determined by the context in which you use the matcher and is
-// supplied to you by the compiler, so you don't need to worry about
-// declaring it (nor can you). This allows the matcher to be
-// polymorphic. For example, IsEven() can be used to match any type
-// where the value of "(arg % 2) == 0" can be implicitly converted to
-// a bool. In the "Bar(IsEven())" example above, if method Bar()
-// takes an int, 'arg_type' will be int; if it takes an unsigned long,
-// 'arg_type' will be unsigned long; and so on.
-//
-// Parameterizing Matchers
-// =======================
-//
-// Sometimes you'll want to parameterize the matcher. For that you
-// can use another macro:
-//
-// MATCHER_P(name, param_name, description_string) { statements; }
-//
-// For example:
-//
-// MATCHER_P(HasAbsoluteValue, value, "") { return abs(arg) == value; }
-//
-// will allow you to write:
-//
-// EXPECT_THAT(Blah("a"), HasAbsoluteValue(n));
-//
-// which may lead to this message (assuming n is 10):
-//
-// Value of: Blah("a")
-// Expected: has absolute value 10
-// Actual: -9
-//
-// Note that both the matcher description and its parameter are
-// printed, making the message human-friendly.
-//
-// In the matcher definition body, you can write 'foo_type' to
-// reference the type of a parameter named 'foo'. For example, in the
-// body of MATCHER_P(HasAbsoluteValue, value) above, you can write
-// 'value_type' to refer to the type of 'value'.
-//
-// We also provide MATCHER_P2, MATCHER_P3, ..., up to MATCHER_P10 to
-// support multi-parameter matchers.
-//
-// Describing Parameterized Matchers
-// =================================
-//
-// The last argument to MATCHER*() is a string-typed expression. The
-// expression can reference all of the matcher's parameters and a
-// special bool-typed variable named 'negation'. When 'negation' is
-// false, the expression should evaluate to the matcher's description;
-// otherwise it should evaluate to the description of the negation of
-// the matcher. For example,
-//
-// using testing::PrintToString;
-//
-// MATCHER_P2(InClosedRange, low, hi,
-// std::string(negation ? "is not" : "is") + " in range [" +
-// PrintToString(low) + ", " + PrintToString(hi) + "]") {
-// return low <= arg && arg <= hi;
-// }
-// ...
-// EXPECT_THAT(3, InClosedRange(4, 6));
-// EXPECT_THAT(3, Not(InClosedRange(2, 4)));
-//
-// would generate two failures that contain the text:
-//
-// Expected: is in range [4, 6]
-// ...
-// Expected: is not in range [2, 4]
-//
-// If you specify "" as the description, the failure message will
-// contain the sequence of words in the matcher name followed by the
-// parameter values printed as a tuple. For example,
-//
-// MATCHER_P2(InClosedRange, low, hi, "") { ... }
-// ...
-// EXPECT_THAT(3, InClosedRange(4, 6));
-// EXPECT_THAT(3, Not(InClosedRange(2, 4)));
-//
-// would generate two failures that contain the text:
-//
-// Expected: in closed range (4, 6)
-// ...
-// Expected: not (in closed range (2, 4))
-//
-// Types of Matcher Parameters
-// ===========================
-//
-// For the purpose of typing, you can view
-//
-// MATCHER_Pk(Foo, p1, ..., pk, description_string) { ... }
-//
-// as shorthand for
-//
-// template <typename p1_type, ..., typename pk_type>
-// FooMatcherPk<p1_type, ..., pk_type>
-// Foo(p1_type p1, ..., pk_type pk) { ... }
-//
-// When you write Foo(v1, ..., vk), the compiler infers the types of
-// the parameters v1, ..., and vk for you. If you are not happy with
-// the result of the type inference, you can specify the types by
-// explicitly instantiating the template, as in Foo<long, bool>(5,
-// false). As said earlier, you don't get to (or need to) specify
-// 'arg_type' as that's determined by the context in which the matcher
-// is used. You can assign the result of expression Foo(p1, ..., pk)
-// to a variable of type FooMatcherPk<p1_type, ..., pk_type>. This
-// can be useful when composing matchers.
-//
-// While you can instantiate a matcher template with reference types,
-// passing the parameters by pointer usually makes your code more
-// readable. If, however, you still want to pass a parameter by
-// reference, be aware that in the failure message generated by the
-// matcher you will see the value of the referenced object but not its
-// address.
-//
-// Explaining Match Results
-// ========================
-//
-// Sometimes the matcher description alone isn't enough to explain why
-// the match has failed or succeeded. For example, when expecting a
-// long string, it can be very helpful to also print the diff between
-// the expected string and the actual one. To achieve that, you can
-// optionally stream additional information to a special variable
-// named result_listener, whose type is a pointer to class
-// MatchResultListener:
-//
-// MATCHER_P(EqualsLongString, str, "") {
-// if (arg == str) return true;
-//
-// *result_listener << "the difference: "
-//                      << DiffStrings(str, arg);
-// return false;
-// }
-//
-// Overloading Matchers
-// ====================
-//
-// You can overload matchers with different numbers of parameters:
-//
-// MATCHER_P(Blah, a, description_string1) { ... }
-// MATCHER_P2(Blah, a, b, description_string2) { ... }
-//
-// Caveats
-// =======
-//
-// When defining a new matcher, you should also consider implementing
-// MatcherInterface or using MakePolymorphicMatcher(). These
-// approaches require more work than the MATCHER* macros, but also
-// give you more control on the types of the value being matched and
-// the matcher parameters, which may lead to better compiler error
-// messages when the matcher is used wrong. They also allow
-// overloading matchers based on parameter types (as opposed to just
-// based on the number of parameters).
-//
-// MATCHER*() can only be used in a namespace scope. The reason is
-// that C++ doesn't yet allow function-local types to be used to
-// instantiate templates. The up-coming C++0x standard will fix this.
-// Once that's done, we'll consider supporting using MATCHER*() inside
-// a function.
-//
-// More Information
-// ================
-//
-// To learn more about using these macros, please search for 'MATCHER'
-// on
-// https://github.com/google/googletest/blob/master/googlemock/docs/CookBook.md
-
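The removed block above is the long-form documentation for the MATCHER* macro family; the macros themselves now live in gmock-matchers.h with unchanged usage. A short sketch of the MATCHER_P pattern described there, reusing the HasAbsoluteValue example from that comment:

#include <cstdlib>
#include "gmock/gmock.h"

// Parameterized matcher as documented above: 'arg' is the value being
// matched and 'value' is the single matcher parameter.
MATCHER_P(HasAbsoluteValue, value, "") { return std::abs(arg) == value; }

TEST(MatcherMacroSketch, ParameterAppearsInTheDescription) {
  EXPECT_THAT(-10, HasAbsoluteValue(10));
  EXPECT_THAT(9, testing::Not(HasAbsoluteValue(10)));
}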
-#define MATCHER(name, description)\
- class name##Matcher {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- gmock_Impl()\
- {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<>()));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>());\
- }\
- name##Matcher() {\
- }\
- private:\
- };\
- inline name##Matcher name() {\
- return name##Matcher();\
- }\
- template <typename arg_type>\
- bool name##Matcher::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#define MATCHER_P(name, p0, description)\
- template <typename p0##_type>\
- class name##MatcherP {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- explicit gmock_Impl(p0##_type gmock_p0)\
- : p0(::std::move(gmock_p0)) {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- p0##_type const p0;\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<p0##_type>(p0)));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>(p0));\
- }\
- explicit name##MatcherP(p0##_type gmock_p0) : p0(::std::move(gmock_p0)) {\
- }\
- p0##_type const p0;\
- private:\
- };\
- template <typename p0##_type>\
- inline name##MatcherP<p0##_type> name(p0##_type p0) {\
- return name##MatcherP<p0##_type>(p0);\
- }\
- template <typename p0##_type>\
- template <typename arg_type>\
- bool name##MatcherP<p0##_type>::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#define MATCHER_P2(name, p0, p1, description)\
- template <typename p0##_type, typename p1##_type>\
- class name##MatcherP2 {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1)\
- : p0(::std::move(gmock_p0)), p1(::std::move(gmock_p1)) {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<p0##_type, p1##_type>(p0, p1)));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>(p0, p1));\
- }\
- name##MatcherP2(p0##_type gmock_p0, \
- p1##_type gmock_p1) : p0(::std::move(gmock_p0)), \
- p1(::std::move(gmock_p1)) {\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- private:\
- };\
- template <typename p0##_type, typename p1##_type>\
- inline name##MatcherP2<p0##_type, p1##_type> name(p0##_type p0, \
- p1##_type p1) {\
- return name##MatcherP2<p0##_type, p1##_type>(p0, p1);\
- }\
- template <typename p0##_type, typename p1##_type>\
- template <typename arg_type>\
- bool name##MatcherP2<p0##_type, \
- p1##_type>::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#define MATCHER_P3(name, p0, p1, p2, description)\
- template <typename p0##_type, typename p1##_type, typename p2##_type>\
- class name##MatcherP3 {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2)\
- : p0(::std::move(gmock_p0)), p1(::std::move(gmock_p1)), \
- p2(::std::move(gmock_p2)) {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<p0##_type, p1##_type, p2##_type>(p0, p1, p2)));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>(p0, p1, p2));\
- }\
- name##MatcherP3(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2) : p0(::std::move(gmock_p0)), \
- p1(::std::move(gmock_p1)), p2(::std::move(gmock_p2)) {\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- private:\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type>\
- inline name##MatcherP3<p0##_type, p1##_type, p2##_type> name(p0##_type p0, \
- p1##_type p1, p2##_type p2) {\
- return name##MatcherP3<p0##_type, p1##_type, p2##_type>(p0, p1, p2);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type>\
- template <typename arg_type>\
- bool name##MatcherP3<p0##_type, p1##_type, \
- p2##_type>::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#define MATCHER_P4(name, p0, p1, p2, p3, description)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type>\
- class name##MatcherP4 {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3)\
- : p0(::std::move(gmock_p0)), p1(::std::move(gmock_p1)), \
- p2(::std::move(gmock_p2)), p3(::std::move(gmock_p3)) {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<p0##_type, p1##_type, p2##_type, p3##_type>(p0, \
- p1, p2, p3)));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>(p0, p1, p2, p3));\
- }\
- name##MatcherP4(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3) : p0(::std::move(gmock_p0)), \
- p1(::std::move(gmock_p1)), p2(::std::move(gmock_p2)), \
- p3(::std::move(gmock_p3)) {\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- private:\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type>\
- inline name##MatcherP4<p0##_type, p1##_type, p2##_type, \
- p3##_type> name(p0##_type p0, p1##_type p1, p2##_type p2, \
- p3##_type p3) {\
- return name##MatcherP4<p0##_type, p1##_type, p2##_type, p3##_type>(p0, \
- p1, p2, p3);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type>\
- template <typename arg_type>\
- bool name##MatcherP4<p0##_type, p1##_type, p2##_type, \
- p3##_type>::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#define MATCHER_P5(name, p0, p1, p2, p3, p4, description)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type>\
- class name##MatcherP5 {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4)\
- : p0(::std::move(gmock_p0)), p1(::std::move(gmock_p1)), \
- p2(::std::move(gmock_p2)), p3(::std::move(gmock_p3)), \
- p4(::std::move(gmock_p4)) {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type>(p0, p1, p2, p3, p4)));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>(p0, p1, p2, p3, p4));\
- }\
- name##MatcherP5(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, \
- p4##_type gmock_p4) : p0(::std::move(gmock_p0)), \
- p1(::std::move(gmock_p1)), p2(::std::move(gmock_p2)), \
- p3(::std::move(gmock_p3)), p4(::std::move(gmock_p4)) {\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- private:\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type>\
- inline name##MatcherP5<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type> name(p0##_type p0, p1##_type p1, p2##_type p2, p3##_type p3, \
- p4##_type p4) {\
- return name##MatcherP5<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type>(p0, p1, p2, p3, p4);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type>\
- template <typename arg_type>\
- bool name##MatcherP5<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type>::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#define MATCHER_P6(name, p0, p1, p2, p3, p4, p5, description)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type>\
- class name##MatcherP6 {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4, p5##_type gmock_p5)\
- : p0(::std::move(gmock_p0)), p1(::std::move(gmock_p1)), \
- p2(::std::move(gmock_p2)), p3(::std::move(gmock_p3)), \
- p4(::std::move(gmock_p4)), p5(::std::move(gmock_p5)) {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- p5##_type const p5;\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type>(p0, p1, p2, p3, p4, p5)));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>(p0, p1, p2, p3, p4, p5));\
- }\
- name##MatcherP6(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5) : p0(::std::move(gmock_p0)), \
- p1(::std::move(gmock_p1)), p2(::std::move(gmock_p2)), \
- p3(::std::move(gmock_p3)), p4(::std::move(gmock_p4)), \
- p5(::std::move(gmock_p5)) {\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- p5##_type const p5;\
- private:\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type>\
- inline name##MatcherP6<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type> name(p0##_type p0, p1##_type p1, p2##_type p2, \
- p3##_type p3, p4##_type p4, p5##_type p5) {\
- return name##MatcherP6<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type>(p0, p1, p2, p3, p4, p5);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type>\
- template <typename arg_type>\
- bool name##MatcherP6<p0##_type, p1##_type, p2##_type, p3##_type, p4##_type, \
- p5##_type>::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#define MATCHER_P7(name, p0, p1, p2, p3, p4, p5, p6, description)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type>\
- class name##MatcherP7 {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4, p5##_type gmock_p5, \
- p6##_type gmock_p6)\
- : p0(::std::move(gmock_p0)), p1(::std::move(gmock_p1)), \
- p2(::std::move(gmock_p2)), p3(::std::move(gmock_p3)), \
- p4(::std::move(gmock_p4)), p5(::std::move(gmock_p5)), \
- p6(::std::move(gmock_p6)) {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- p5##_type const p5;\
- p6##_type const p6;\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type>(p0, p1, p2, p3, p4, p5, \
- p6)));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>(p0, p1, p2, p3, p4, p5, p6));\
- }\
- name##MatcherP7(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5, p6##_type gmock_p6) : p0(::std::move(gmock_p0)), \
- p1(::std::move(gmock_p1)), p2(::std::move(gmock_p2)), \
- p3(::std::move(gmock_p3)), p4(::std::move(gmock_p4)), \
- p5(::std::move(gmock_p5)), p6(::std::move(gmock_p6)) {\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- p5##_type const p5;\
- p6##_type const p6;\
- private:\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type>\
- inline name##MatcherP7<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type> name(p0##_type p0, p1##_type p1, \
- p2##_type p2, p3##_type p3, p4##_type p4, p5##_type p5, \
- p6##_type p6) {\
- return name##MatcherP7<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type>(p0, p1, p2, p3, p4, p5, p6);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type>\
- template <typename arg_type>\
- bool name##MatcherP7<p0##_type, p1##_type, p2##_type, p3##_type, p4##_type, \
- p5##_type, p6##_type>::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#define MATCHER_P8(name, p0, p1, p2, p3, p4, p5, p6, p7, description)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type>\
- class name##MatcherP8 {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4, p5##_type gmock_p5, \
- p6##_type gmock_p6, p7##_type gmock_p7)\
- : p0(::std::move(gmock_p0)), p1(::std::move(gmock_p1)), \
- p2(::std::move(gmock_p2)), p3(::std::move(gmock_p3)), \
- p4(::std::move(gmock_p4)), p5(::std::move(gmock_p5)), \
- p6(::std::move(gmock_p6)), p7(::std::move(gmock_p7)) {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- p5##_type const p5;\
- p6##_type const p6;\
- p7##_type const p7;\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type>(p0, p1, p2, \
- p3, p4, p5, p6, p7)));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>(p0, p1, p2, p3, p4, p5, p6, p7));\
- }\
- name##MatcherP8(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5, p6##_type gmock_p6, \
- p7##_type gmock_p7) : p0(::std::move(gmock_p0)), \
- p1(::std::move(gmock_p1)), p2(::std::move(gmock_p2)), \
- p3(::std::move(gmock_p3)), p4(::std::move(gmock_p4)), \
- p5(::std::move(gmock_p5)), p6(::std::move(gmock_p6)), \
- p7(::std::move(gmock_p7)) {\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- p5##_type const p5;\
- p6##_type const p6;\
- p7##_type const p7;\
- private:\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type>\
- inline name##MatcherP8<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type> name(p0##_type p0, \
- p1##_type p1, p2##_type p2, p3##_type p3, p4##_type p4, p5##_type p5, \
- p6##_type p6, p7##_type p7) {\
- return name##MatcherP8<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type>(p0, p1, p2, p3, p4, p5, \
- p6, p7);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type>\
- template <typename arg_type>\
- bool name##MatcherP8<p0##_type, p1##_type, p2##_type, p3##_type, p4##_type, \
- p5##_type, p6##_type, \
- p7##_type>::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#define MATCHER_P9(name, p0, p1, p2, p3, p4, p5, p6, p7, p8, description)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type>\
- class name##MatcherP9 {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4, p5##_type gmock_p5, \
- p6##_type gmock_p6, p7##_type gmock_p7, p8##_type gmock_p8)\
- : p0(::std::move(gmock_p0)), p1(::std::move(gmock_p1)), \
- p2(::std::move(gmock_p2)), p3(::std::move(gmock_p3)), \
- p4(::std::move(gmock_p4)), p5(::std::move(gmock_p5)), \
- p6(::std::move(gmock_p6)), p7(::std::move(gmock_p7)), \
- p8(::std::move(gmock_p8)) {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- p5##_type const p5;\
- p6##_type const p6;\
- p7##_type const p7;\
- p8##_type const p8;\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, \
- p8##_type>(p0, p1, p2, p3, p4, p5, p6, p7, p8)));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>(p0, p1, p2, p3, p4, p5, p6, p7, p8));\
- }\
- name##MatcherP9(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5, p6##_type gmock_p6, p7##_type gmock_p7, \
- p8##_type gmock_p8) : p0(::std::move(gmock_p0)), \
- p1(::std::move(gmock_p1)), p2(::std::move(gmock_p2)), \
- p3(::std::move(gmock_p3)), p4(::std::move(gmock_p4)), \
- p5(::std::move(gmock_p5)), p6(::std::move(gmock_p6)), \
- p7(::std::move(gmock_p7)), p8(::std::move(gmock_p8)) {\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- p5##_type const p5;\
- p6##_type const p6;\
- p7##_type const p7;\
- p8##_type const p8;\
- private:\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type>\
- inline name##MatcherP9<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, \
- p8##_type> name(p0##_type p0, p1##_type p1, p2##_type p2, p3##_type p3, \
- p4##_type p4, p5##_type p5, p6##_type p6, p7##_type p7, \
- p8##_type p8) {\
- return name##MatcherP9<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, p8##_type>(p0, p1, p2, \
- p3, p4, p5, p6, p7, p8);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type>\
- template <typename arg_type>\
- bool name##MatcherP9<p0##_type, p1##_type, p2##_type, p3##_type, p4##_type, \
- p5##_type, p6##_type, p7##_type, \
- p8##_type>::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#define MATCHER_P10(name, p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, description)\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type, \
- typename p9##_type>\
- class name##MatcherP10 {\
- public:\
- template <typename arg_type>\
- class gmock_Impl : public ::testing::MatcherInterface<\
- GTEST_REFERENCE_TO_CONST_(arg_type)> {\
- public:\
- gmock_Impl(p0##_type gmock_p0, p1##_type gmock_p1, p2##_type gmock_p2, \
- p3##_type gmock_p3, p4##_type gmock_p4, p5##_type gmock_p5, \
- p6##_type gmock_p6, p7##_type gmock_p7, p8##_type gmock_p8, \
- p9##_type gmock_p9)\
- : p0(::std::move(gmock_p0)), p1(::std::move(gmock_p1)), \
- p2(::std::move(gmock_p2)), p3(::std::move(gmock_p3)), \
- p4(::std::move(gmock_p4)), p5(::std::move(gmock_p5)), \
- p6(::std::move(gmock_p6)), p7(::std::move(gmock_p7)), \
- p8(::std::move(gmock_p8)), p9(::std::move(gmock_p9)) {}\
- virtual bool MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener) const;\
- virtual void DescribeTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(false);\
- }\
- virtual void DescribeNegationTo(::std::ostream* gmock_os) const {\
- *gmock_os << FormatDescription(true);\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- p5##_type const p5;\
- p6##_type const p6;\
- p7##_type const p7;\
- p8##_type const p8;\
- p9##_type const p9;\
- private:\
- ::std::string FormatDescription(bool negation) const {\
- ::std::string gmock_description = (description);\
- if (!gmock_description.empty()) {\
- return gmock_description;\
- }\
- return ::testing::internal::FormatMatcherDescription(\
- negation, #name, \
- ::testing::internal::UniversalTersePrintTupleFieldsToStrings(\
- ::std::tuple<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, p8##_type, \
- p9##_type>(p0, p1, p2, p3, p4, p5, p6, p7, p8, p9)));\
- }\
- };\
- template <typename arg_type>\
- operator ::testing::Matcher<arg_type>() const {\
- return ::testing::Matcher<arg_type>(\
- new gmock_Impl<arg_type>(p0, p1, p2, p3, p4, p5, p6, p7, p8, p9));\
- }\
- name##MatcherP10(p0##_type gmock_p0, p1##_type gmock_p1, \
- p2##_type gmock_p2, p3##_type gmock_p3, p4##_type gmock_p4, \
- p5##_type gmock_p5, p6##_type gmock_p6, p7##_type gmock_p7, \
- p8##_type gmock_p8, p9##_type gmock_p9) : p0(::std::move(gmock_p0)), \
- p1(::std::move(gmock_p1)), p2(::std::move(gmock_p2)), \
- p3(::std::move(gmock_p3)), p4(::std::move(gmock_p4)), \
- p5(::std::move(gmock_p5)), p6(::std::move(gmock_p6)), \
- p7(::std::move(gmock_p7)), p8(::std::move(gmock_p8)), \
- p9(::std::move(gmock_p9)) {\
- }\
- p0##_type const p0;\
- p1##_type const p1;\
- p2##_type const p2;\
- p3##_type const p3;\
- p4##_type const p4;\
- p5##_type const p5;\
- p6##_type const p6;\
- p7##_type const p7;\
- p8##_type const p8;\
- p9##_type const p9;\
- private:\
- };\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type, \
- typename p9##_type>\
- inline name##MatcherP10<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, p8##_type, \
- p9##_type> name(p0##_type p0, p1##_type p1, p2##_type p2, p3##_type p3, \
- p4##_type p4, p5##_type p5, p6##_type p6, p7##_type p7, p8##_type p8, \
- p9##_type p9) {\
- return name##MatcherP10<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, p8##_type, p9##_type>(p0, \
- p1, p2, p3, p4, p5, p6, p7, p8, p9);\
- }\
- template <typename p0##_type, typename p1##_type, typename p2##_type, \
- typename p3##_type, typename p4##_type, typename p5##_type, \
- typename p6##_type, typename p7##_type, typename p8##_type, \
- typename p9##_type>\
- template <typename arg_type>\
- bool name##MatcherP10<p0##_type, p1##_type, p2##_type, p3##_type, \
- p4##_type, p5##_type, p6##_type, p7##_type, p8##_type, \
- p9##_type>::gmock_Impl<arg_type>::MatchAndExplain(\
- GTEST_REFERENCE_TO_CONST_(arg_type) arg,\
- ::testing::MatchResultListener* result_listener GTEST_ATTRIBUTE_UNUSED_)\
- const
-
-#endif // GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_MATCHERS_H_
-// Copyright 2007, Google Inc.
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are
-// met:
-//
-// * Redistributions of source code must retain the above copyright
-// notice, this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following disclaimer
-// in the documentation and/or other materials provided with the
-// distribution.
-// * Neither the name of Google Inc. nor the names of its
-// contributors may be used to endorse or promote products derived from
-// this software without specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-// Google Mock - a framework for writing C++ mock classes.
-//
-// This file implements some actions that depend on gmock-generated-actions.h.
-
-// GOOGLETEST_CM0002 DO NOT DELETE
-
-#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
-#define GMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
-
-#include <algorithm>
-#include <type_traits>
-
-
-namespace testing {
-namespace internal {
-
-// An internal replacement for std::copy which mimics its behavior. This is
-// necessary because Visual Studio deprecates ::std::copy, issuing warning 4996.
-// However Visual Studio 2010 and later do not honor #pragmas which disable that
-// warning.
-template<typename InputIterator, typename OutputIterator>
-inline OutputIterator CopyElements(InputIterator first,
- InputIterator last,
- OutputIterator output) {
- for (; first != last; ++first, ++output) {
- *output = *first;
- }
- return output;
-}
-
-} // namespace internal
-
-// Various overloads for Invoke().
-
-// The ACTION*() macros trigger warning C4100 (unreferenced formal
-// parameter) in MSVC with -W4. Unfortunately they cannot be fixed in
-// the macro definition, as the warnings are generated when the macro
-// is expanded and macro expansion cannot contain #pragma. Therefore
-// we suppress them here.
-#ifdef _MSC_VER
-# pragma warning(push)
-# pragma warning(disable:4100)
-#endif
-
-// Action ReturnArg<k>() returns the k-th argument of the mock function.
-ACTION_TEMPLATE(ReturnArg,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_0_VALUE_PARAMS()) {
- return ::std::get<k>(args);
-}
-
-// Action SaveArg<k>(pointer) saves the k-th (0-based) argument of the
-// mock function to *pointer.
-ACTION_TEMPLATE(SaveArg,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_1_VALUE_PARAMS(pointer)) {
- *pointer = ::std::get<k>(args);
-}
-
-// Action SaveArgPointee<k>(pointer) saves the value pointed to
-// by the k-th (0-based) argument of the mock function to *pointer.
-ACTION_TEMPLATE(SaveArgPointee,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_1_VALUE_PARAMS(pointer)) {
- *pointer = *::std::get<k>(args);
-}
-
-// Action SetArgReferee<k>(value) assigns 'value' to the variable
-// referenced by the k-th (0-based) argument of the mock function.
-ACTION_TEMPLATE(SetArgReferee,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_1_VALUE_PARAMS(value)) {
- typedef typename ::std::tuple_element<k, args_type>::type argk_type;
- // Ensures that argument #k is a reference. If you get a compiler
- // error on the next line, you are using SetArgReferee<k>(value) in
- // a mock function whose k-th (0-based) argument is not a reference.
- GTEST_COMPILE_ASSERT_(internal::is_reference<argk_type>::value,
- SetArgReferee_must_be_used_with_a_reference_argument);
- ::std::get<k>(args) = value;
-}
-
-// Action SetArrayArgument<k>(first, last) copies the elements in
-// source range [first, last) to the array pointed to by the k-th
-// (0-based) argument, which can be either a pointer or an
-// iterator. The action does not take ownership of the elements in the
-// source range.
-ACTION_TEMPLATE(SetArrayArgument,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_2_VALUE_PARAMS(first, last)) {
- // Visual Studio deprecates ::std::copy, so we use our own copy in that case.
-#ifdef _MSC_VER
- internal::CopyElements(first, last, ::std::get<k>(args));
-#else
- ::std::copy(first, last, ::std::get<k>(args));
-#endif
-}
-
-// Action DeleteArg<k>() deletes the k-th (0-based) argument of the mock
-// function.
-ACTION_TEMPLATE(DeleteArg,
- HAS_1_TEMPLATE_PARAMS(int, k),
- AND_0_VALUE_PARAMS()) {
- delete ::std::get<k>(args);
-}
-
-// This action returns the value pointed to by 'pointer'.
-ACTION_P(ReturnPointee, pointer) { return *pointer; }
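The argument actions documented above (ReturnArg, SaveArg, SaveArgPointee, SetArgReferee, SetArrayArgument, DeleteArg, ReturnPointee) keep the same user-facing behavior in the updated gmock-more-actions.h. A hedged sketch with a hypothetical MockSink showing SaveArg and ReturnPointee:

#include "gmock/gmock.h"

class MockSink {
 public:
  // Hypothetical mock used only for illustration.
  MOCK_METHOD(void, Record, (int value));
  MOCK_METHOD(int, NextId, ());
};

TEST(ArgumentActionSketch, SaveArgAndReturnPointee) {
  MockSink sink;

  int seen = 0;
  // SaveArg<0> stores argument #0 (0-based) of every matching call in 'seen'.
  EXPECT_CALL(sink, Record(testing::_))
      .WillRepeatedly(testing::SaveArg<0>(&seen));

  int next_id = 5;
  // ReturnPointee dereferences the pointer at call time, not when the
  // expectation is set.
  EXPECT_CALL(sink, NextId()).WillRepeatedly(testing::ReturnPointee(&next_id));

  sink.Record(42);
  EXPECT_EQ(seen, 42);
  next_id = 6;
  EXPECT_EQ(sink.NextId(), 6);
}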
-
-// Action Throw(exception) can be used in a mock function of any type
-// to throw the given exception. Any copyable value can be thrown.
-#if GTEST_HAS_EXCEPTIONS
-
-// Suppresses the 'unreachable code' warning that VC generates in opt modes.
-# ifdef _MSC_VER
-# pragma warning(push) // Saves the current warning state.
-# pragma warning(disable:4702) // Temporarily disables warning 4702.
-# endif
-ACTION_P(Throw, exception) { throw exception; }
-# ifdef _MSC_VER
-# pragma warning(pop) // Restores the warning state.
-# endif
-
-#endif // GTEST_HAS_EXCEPTIONS
-
-#ifdef _MSC_VER
-# pragma warning(pop)
-#endif
-
-} // namespace testing
-
-#endif // GMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
// Copyright 2013, Google Inc.
// All rights reserved.
//
@@ -13114,15 +11295,15 @@
// Google Mock - a framework for writing C++ mock classes.
//
-// This file implements some matchers that depend on gmock-generated-matchers.h.
+// This file implements some matchers that depend on gmock-matchers.h.
//
// Note that tests are implemented in gmock-matchers_test.cc rather than
// gmock-more-matchers-test.cc.
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_MORE_MATCHERS_H_
-#define GMOCK_INCLUDE_GMOCK_MORE_MATCHERS_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_MORE_MATCHERS_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_MORE_MATCHERS_H_
namespace testing {
@@ -13172,7 +11353,7 @@
} // namespace testing
-#endif // GMOCK_INCLUDE_GMOCK_MORE_MATCHERS_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_MORE_MATCHERS_H_
// Copyright 2008, Google Inc.
// All rights reserved.
//
@@ -13235,18 +11416,89 @@
// GOOGLETEST_CM0002 DO NOT DELETE
-#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_NICE_STRICT_H_
-#define GMOCK_INCLUDE_GMOCK_GMOCK_NICE_STRICT_H_
+#ifndef GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_NICE_STRICT_H_
+#define GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_NICE_STRICT_H_
+
+#include <type_traits>
namespace testing {
+template <class MockClass>
+class NiceMock;
+template <class MockClass>
+class NaggyMock;
+template <class MockClass>
+class StrictMock;
+
+namespace internal {
+template <typename T>
+std::true_type StrictnessModifierProbe(const NiceMock<T>&);
+template <typename T>
+std::true_type StrictnessModifierProbe(const NaggyMock<T>&);
+template <typename T>
+std::true_type StrictnessModifierProbe(const StrictMock<T>&);
+std::false_type StrictnessModifierProbe(...);
+
+template <typename T>
+constexpr bool HasStrictnessModifier() {
+ return decltype(StrictnessModifierProbe(std::declval<const T&>()))::value;
+}
+
+// Base classes that register and deregister with testing::Mock to alter the
+// default behavior around uninteresting calls. Inheriting from one of these
+// classes first and then MockClass ensures the MockClass constructor is run
+// after registration, and that the MockClass destructor runs before
+// deregistration. This guarantees that MockClass's constructor and destructor
+// run with the same level of strictness as its instance methods.
+
+#if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MINGW && \
+ (defined(_MSC_VER) || defined(__clang__))
+// We need to mark these classes with this declspec to ensure that
+// the empty base class optimization is performed.
+#define GTEST_INTERNAL_EMPTY_BASE_CLASS __declspec(empty_bases)
+#else
+#define GTEST_INTERNAL_EMPTY_BASE_CLASS
+#endif
+
+template <typename Base>
+class NiceMockImpl {
+ public:
+ NiceMockImpl() { ::testing::Mock::AllowUninterestingCalls(this); }
+
+ ~NiceMockImpl() { ::testing::Mock::UnregisterCallReaction(this); }
+};
+
+template <typename Base>
+class NaggyMockImpl {
+ public:
+ NaggyMockImpl() { ::testing::Mock::WarnUninterestingCalls(this); }
+
+ ~NaggyMockImpl() { ::testing::Mock::UnregisterCallReaction(this); }
+};
+
+template <typename Base>
+class StrictMockImpl {
+ public:
+ StrictMockImpl() { ::testing::Mock::FailUninterestingCalls(this); }
+
+ ~StrictMockImpl() { ::testing::Mock::UnregisterCallReaction(this); }
+};
+
+} // namespace internal
template <class MockClass>
-class NiceMock : public MockClass {
+class GTEST_INTERNAL_EMPTY_BASE_CLASS NiceMock
+ : private internal::NiceMockImpl<MockClass>,
+ public MockClass {
public:
+ static_assert(!internal::HasStrictnessModifier<MockClass>(),
+ "Can't apply NiceMock to a class hierarchy that already has a "
+ "strictness modifier. See "
+ "https://google.github.io/googletest/"
+ "gmock_cook_book.html#NiceStrictNaggy");
NiceMock() : MockClass() {
- ::testing::Mock::AllowUninterestingCalls(
- internal::ImplicitCast_<MockClass*>(this));
+ static_assert(sizeof(*this) == sizeof(MockClass),
+ "The impl subclass shouldn't introduce any padding");
}
// Ideally, we would inherit base class's constructors through a using
@@ -13258,21 +11510,16 @@
// made explicit.
template <typename A>
explicit NiceMock(A&& arg) : MockClass(std::forward<A>(arg)) {
- ::testing::Mock::AllowUninterestingCalls(
- internal::ImplicitCast_<MockClass*>(this));
+ static_assert(sizeof(*this) == sizeof(MockClass),
+ "The impl subclass shouldn't introduce any padding");
}
- template <typename A1, typename A2, typename... An>
- NiceMock(A1&& arg1, A2&& arg2, An&&... args)
- : MockClass(std::forward<A1>(arg1), std::forward<A2>(arg2),
+ template <typename TArg1, typename TArg2, typename... An>
+ NiceMock(TArg1&& arg1, TArg2&& arg2, An&&... args)
+ : MockClass(std::forward<TArg1>(arg1), std::forward<TArg2>(arg2),
std::forward<An>(args)...) {
- ::testing::Mock::AllowUninterestingCalls(
- internal::ImplicitCast_<MockClass*>(this));
- }
-
- ~NiceMock() { // NOLINT
- ::testing::Mock::UnregisterCallReaction(
- internal::ImplicitCast_<MockClass*>(this));
+ static_assert(sizeof(*this) == sizeof(MockClass),
+ "The impl subclass shouldn't introduce any padding");
}
private:
@@ -13280,11 +11527,19 @@
};
template <class MockClass>
-class NaggyMock : public MockClass {
+class GTEST_INTERNAL_EMPTY_BASE_CLASS NaggyMock
+ : private internal::NaggyMockImpl<MockClass>,
+ public MockClass {
+ static_assert(!internal::HasStrictnessModifier<MockClass>(),
+ "Can't apply NaggyMock to a class hierarchy that already has a "
+ "strictness modifier. See "
+ "https://google.github.io/googletest/"
+ "gmock_cook_book.html#NiceStrictNaggy");
+
public:
NaggyMock() : MockClass() {
- ::testing::Mock::WarnUninterestingCalls(
- internal::ImplicitCast_<MockClass*>(this));
+ static_assert(sizeof(*this) == sizeof(MockClass),
+ "The impl subclass shouldn't introduce any padding");
}
// Ideally, we would inherit base class's constructors through a using
@@ -13296,21 +11551,16 @@
// made explicit.
template <typename A>
explicit NaggyMock(A&& arg) : MockClass(std::forward<A>(arg)) {
- ::testing::Mock::WarnUninterestingCalls(
- internal::ImplicitCast_<MockClass*>(this));
+ static_assert(sizeof(*this) == sizeof(MockClass),
+ "The impl subclass shouldn't introduce any padding");
}
- template <typename A1, typename A2, typename... An>
- NaggyMock(A1&& arg1, A2&& arg2, An&&... args)
- : MockClass(std::forward<A1>(arg1), std::forward<A2>(arg2),
+ template <typename TArg1, typename TArg2, typename... An>
+ NaggyMock(TArg1&& arg1, TArg2&& arg2, An&&... args)
+ : MockClass(std::forward<TArg1>(arg1), std::forward<TArg2>(arg2),
std::forward<An>(args)...) {
- ::testing::Mock::WarnUninterestingCalls(
- internal::ImplicitCast_<MockClass*>(this));
- }
-
- ~NaggyMock() { // NOLINT
- ::testing::Mock::UnregisterCallReaction(
- internal::ImplicitCast_<MockClass*>(this));
+ static_assert(sizeof(*this) == sizeof(MockClass),
+ "The impl subclass shouldn't introduce any padding");
}
private:
@@ -13318,11 +11568,19 @@
};
template <class MockClass>
-class StrictMock : public MockClass {
+class GTEST_INTERNAL_EMPTY_BASE_CLASS StrictMock
+ : private internal::StrictMockImpl<MockClass>,
+ public MockClass {
public:
+ static_assert(
+ !internal::HasStrictnessModifier<MockClass>(),
+ "Can't apply StrictMock to a class hierarchy that already has a "
+ "strictness modifier. See "
+ "https://google.github.io/googletest/"
+ "gmock_cook_book.html#NiceStrictNaggy");
StrictMock() : MockClass() {
- ::testing::Mock::FailUninterestingCalls(
- internal::ImplicitCast_<MockClass*>(this));
+ static_assert(sizeof(*this) == sizeof(MockClass),
+ "The impl subclass shouldn't introduce any padding");
}
// Ideally, we would inherit base class's constructors through a using
@@ -13334,58 +11592,27 @@
// made explicit.
template <typename A>
explicit StrictMock(A&& arg) : MockClass(std::forward<A>(arg)) {
- ::testing::Mock::FailUninterestingCalls(
- internal::ImplicitCast_<MockClass*>(this));
+ static_assert(sizeof(*this) == sizeof(MockClass),
+ "The impl subclass shouldn't introduce any padding");
}
- template <typename A1, typename A2, typename... An>
- StrictMock(A1&& arg1, A2&& arg2, An&&... args)
- : MockClass(std::forward<A1>(arg1), std::forward<A2>(arg2),
+ template <typename TArg1, typename TArg2, typename... An>
+ StrictMock(TArg1&& arg1, TArg2&& arg2, An&&... args)
+ : MockClass(std::forward<TArg1>(arg1), std::forward<TArg2>(arg2),
std::forward<An>(args)...) {
- ::testing::Mock::FailUninterestingCalls(
- internal::ImplicitCast_<MockClass*>(this));
- }
-
- ~StrictMock() { // NOLINT
- ::testing::Mock::UnregisterCallReaction(
- internal::ImplicitCast_<MockClass*>(this));
+ static_assert(sizeof(*this) == sizeof(MockClass),
+ "The impl subclass shouldn't introduce any padding");
}
private:
GTEST_DISALLOW_COPY_AND_ASSIGN_(StrictMock);
};
-// The following specializations catch some (relatively more common)
-// user errors of nesting nice and strict mocks. They do NOT catch
-// all possible errors.
-
-// These specializations are declared but not defined, as NiceMock,
-// NaggyMock, and StrictMock cannot be nested.
-
-template <typename MockClass>
-class NiceMock<NiceMock<MockClass> >;
-template <typename MockClass>
-class NiceMock<NaggyMock<MockClass> >;
-template <typename MockClass>
-class NiceMock<StrictMock<MockClass> >;
-
-template <typename MockClass>
-class NaggyMock<NiceMock<MockClass> >;
-template <typename MockClass>
-class NaggyMock<NaggyMock<MockClass> >;
-template <typename MockClass>
-class NaggyMock<StrictMock<MockClass> >;
-
-template <typename MockClass>
-class StrictMock<NiceMock<MockClass> >;
-template <typename MockClass>
-class StrictMock<NaggyMock<MockClass> >;
-template <typename MockClass>
-class StrictMock<StrictMock<MockClass> >;
+#undef GTEST_INTERNAL_EMPTY_BASE_CLASS
} // namespace testing
-#endif // GMOCK_INCLUDE_GMOCK_GMOCK_NICE_STRICT_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_NICE_STRICT_H_
namespace testing {
@@ -13417,4 +11644,4 @@
} // namespace testing
-#endif // GMOCK_INCLUDE_GMOCK_GMOCK_H_
+#endif // GOOGLEMOCK_INCLUDE_GMOCK_GMOCK_H_
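
For illustration, a minimal sketch of how the three wrappers refactored above behave (the Turtle/MockTurtle names are hypothetical and used purely as an example, not part of this diff):

    #include "gmock/gmock.h"
    #include "gtest/gtest.h"

    class Turtle {
     public:
      virtual ~Turtle() = default;
      virtual void PenDown() = 0;
    };

    class MockTurtle : public Turtle {
     public:
      MOCK_METHOD(void, PenDown, (), (override));
    };

    TEST(WrapperDemo, Strictness) {
      ::testing::NiceMock<MockTurtle> nice;      // uninteresting calls are ignored
      ::testing::NaggyMock<MockTurtle> naggy;    // uninteresting calls only warn
      ::testing::StrictMock<MockTurtle> strict;  // uninteresting calls fail the test
      nice.PenDown();   // silently allowed
      naggy.PenDown();  // allowed, but prints a warning
      (void)strict;     // calling strict.PenDown() here would fail the test
      // Nesting the wrappers (e.g. NiceMock<StrictMock<MockTurtle>>) is now
      // rejected at compile time by the HasStrictnessModifier static_assert.
    }
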
diff --git a/internal/ceres/gmock/mock-log.h b/internal/ceres/gmock/mock-log.h
index 54669b7..91b5939 100644
--- a/internal/ceres/gmock/mock-log.h
+++ b/internal/ceres/gmock/mock-log.h
@@ -71,7 +71,7 @@
ScopedMockLog() { AddLogSink(this); }
// When the object is destructed, it stops intercepting logs.
- virtual ~ScopedMockLog() { RemoveLogSink(this); }
+ ~ScopedMockLog() override { RemoveLogSink(this); }
// Implements the mock method:
//
@@ -112,10 +112,10 @@
// be running simultaneously, we ensure thread-safety of the exchange between
// send() and WaitTillSent(), and that for each message, LOG(), send(),
// WaitTillSent() and Log() are executed in the same thread.
- virtual void send(google::LogSeverity severity,
+ void send(google::LogSeverity severity,
const char* full_filename,
const char* base_filename, int line, const tm* tm_time,
- const char* message, size_t message_len) {
+ const char* message, size_t message_len) override {
// We are only interested in the log severity, full file name, and
// log message.
message_info_.severity = severity;
@@ -130,7 +130,7 @@
//
// LOG(), send(), WaitTillSent() and Log() will occur in the same thread for
// a given log message.
- virtual void WaitTillSent() {
+ void WaitTillSent() override {
// First, and very importantly, we save a copy of the message being
// processed before calling Log(), since Log() may indirectly call send()
// and WaitTillSent() in the same thread again.
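
The ScopedMockLog changes above only add override specifiers; for context, a hedged sketch of how the class is typically used. The include path, enclosing namespace, and the three-argument Log(severity, file_path, message) mock method are assumptions based on this vendored header and are not shown in the hunk:

    #include "glog/logging.h"
    #include "gmock/gmock.h"
    #include "gtest/gtest.h"
    #include "mock-log.h"  // adjust to the actual vendored location

    using ::testing::_;
    using ::testing::HasSubstr;

    TEST(MyComponent, WarnsOnBadInput) {
      testing::ScopedMockLog log;  // constructor starts intercepting LOG() output
      EXPECT_CALL(log, Log(_, _, HasSubstr("bad input")));
      LOG(WARNING) << "bad input: 42";
    }  // destructor stops the interception (RemoveLogSink above)
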
diff --git a/internal/ceres/gmock_gtest_all.cc b/internal/ceres/gmock_gtest_all.cc
index dd43444..9ea7029 100644
--- a/internal/ceres/gmock_gtest_all.cc
+++ b/internal/ceres/gmock_gtest_all.cc
@@ -105,8 +105,8 @@
// GOOGLETEST_CM0004 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_GTEST_SPI_H_
-#define GTEST_INCLUDE_GTEST_GTEST_SPI_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_SPI_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_SPI_H_
GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
@@ -306,10 +306,9 @@
}\
} while (::testing::internal::AlwaysFalse())
-#endif // GTEST_INCLUDE_GTEST_GTEST_SPI_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_SPI_H_
#include <ctype.h>
-#include <math.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
@@ -318,6 +317,9 @@
#include <wctype.h>
#include <algorithm>
+#include <chrono> // NOLINT
+#include <cmath>
+#include <cstdint>
#include <iomanip>
#include <limits>
#include <list>
@@ -328,8 +330,6 @@
#if GTEST_OS_LINUX
-# define GTEST_HAS_GETTIMEOFDAY_ 1
-
# include <fcntl.h> // NOLINT
# include <limits.h> // NOLINT
# include <sched.h> // NOLINT
@@ -341,7 +341,6 @@
# include <string>
#elif GTEST_OS_ZOS
-# define GTEST_HAS_GETTIMEOFDAY_ 1
# include <sys/time.h> // NOLINT
// On z/OS we additionally need strings.h for strcasecmp.
@@ -354,27 +353,24 @@
#elif GTEST_OS_WINDOWS // We are on Windows proper.
+# include <windows.h> // NOLINT
+# undef min
+
+#ifdef _MSC_VER
+# include <crtdbg.h> // NOLINT
+#endif
+
# include <io.h> // NOLINT
# include <sys/timeb.h> // NOLINT
# include <sys/types.h> // NOLINT
# include <sys/stat.h> // NOLINT
# if GTEST_OS_WINDOWS_MINGW
-// MinGW has gettimeofday() but not _ftime64().
-# define GTEST_HAS_GETTIMEOFDAY_ 1
# include <sys/time.h> // NOLINT
# endif // GTEST_OS_WINDOWS_MINGW
-// cpplint thinks that the header is already included, so we want to
-// silence it.
-# include <windows.h> // NOLINT
-# undef min
-
#else
-// Assume other platforms have gettimeofday().
-# define GTEST_HAS_GETTIMEOFDAY_ 1
-
// cpplint thinks that the header is already included, so we want to
// silence it.
# include <sys/time.h> // NOLINT
@@ -426,8 +422,8 @@
// This file contains purely Google Test's internal implementation. Please
// DO NOT #INCLUDE IT IN A USER PROGRAM.
-#ifndef GTEST_SRC_GTEST_INTERNAL_INL_H_
-#define GTEST_SRC_GTEST_INTERNAL_INL_H_
+#ifndef GOOGLETEST_SRC_GTEST_INTERNAL_INL_H_
+#define GOOGLETEST_SRC_GTEST_INTERNAL_INL_H_
#ifndef _WIN32_WCE
# include <errno.h>
@@ -437,6 +433,7 @@
#include <string.h> // For memmove.
#include <algorithm>
+#include <cstdint>
#include <memory>
#include <string>
#include <vector>
@@ -475,9 +472,11 @@
const char kBreakOnFailureFlag[] = "break_on_failure";
const char kCatchExceptionsFlag[] = "catch_exceptions";
const char kColorFlag[] = "color";
+const char kFailFast[] = "fail_fast";
const char kFilterFlag[] = "filter";
const char kListTestsFlag[] = "list_tests";
const char kOutputFlag[] = "output";
+const char kBriefFlag[] = "brief";
const char kPrintTimeFlag[] = "print_time";
const char kPrintUTF8Flag[] = "print_utf8";
const char kRandomSeedFlag[] = "random_seed";
@@ -491,14 +490,14 @@
// A valid random seed must be in [1, kMaxRandomSeed].
const int kMaxRandomSeed = 99999;
-// g_help_flag is true iff the --help flag or an equivalent form is
-// specified on the command line.
+// g_help_flag is true if and only if the --help flag or an equivalent form
+// is specified on the command line.
GTEST_API_ extern bool g_help_flag;
// Returns the current time in milliseconds.
GTEST_API_ TimeInMillis GetTimeInMillis();
-// Returns true iff Google Test should use colors in the output.
+// Returns true if and only if Google Test should use colors in the output.
GTEST_API_ bool ShouldUseColor(bool stdout_is_tty);
// Formats the given time in milliseconds as seconds.
@@ -515,11 +514,11 @@
// On success, stores the value of the flag in *value, and returns
// true. On failure, returns false without changing *value.
GTEST_API_ bool ParseInt32Flag(
- const char* str, const char* flag, Int32* value);
+ const char* str, const char* flag, int32_t* value);
// Returns a random seed in range [1, kMaxRandomSeed] based on the
// given --gtest_random_seed flag value.
-inline int GetRandomSeedFromFlag(Int32 random_seed_flag) {
+inline int GetRandomSeedFromFlag(int32_t random_seed_flag) {
const unsigned int raw_seed = (random_seed_flag == 0) ?
static_cast<unsigned int>(GetTimeInMillis()) :
static_cast<unsigned int>(random_seed_flag);
@@ -555,10 +554,12 @@
color_ = GTEST_FLAG(color);
death_test_style_ = GTEST_FLAG(death_test_style);
death_test_use_fork_ = GTEST_FLAG(death_test_use_fork);
+ fail_fast_ = GTEST_FLAG(fail_fast);
filter_ = GTEST_FLAG(filter);
internal_run_death_test_ = GTEST_FLAG(internal_run_death_test);
list_tests_ = GTEST_FLAG(list_tests);
output_ = GTEST_FLAG(output);
+ brief_ = GTEST_FLAG(brief);
print_time_ = GTEST_FLAG(print_time);
print_utf8_ = GTEST_FLAG(print_utf8);
random_seed_ = GTEST_FLAG(random_seed);
@@ -578,9 +579,11 @@
GTEST_FLAG(death_test_style) = death_test_style_;
GTEST_FLAG(death_test_use_fork) = death_test_use_fork_;
GTEST_FLAG(filter) = filter_;
+ GTEST_FLAG(fail_fast) = fail_fast_;
GTEST_FLAG(internal_run_death_test) = internal_run_death_test_;
GTEST_FLAG(list_tests) = list_tests_;
GTEST_FLAG(output) = output_;
+ GTEST_FLAG(brief) = brief_;
GTEST_FLAG(print_time) = print_time_;
GTEST_FLAG(print_utf8) = print_utf8_;
GTEST_FLAG(random_seed) = random_seed_;
@@ -599,16 +602,18 @@
std::string color_;
std::string death_test_style_;
bool death_test_use_fork_;
+ bool fail_fast_;
std::string filter_;
std::string internal_run_death_test_;
bool list_tests_;
std::string output_;
+ bool brief_;
bool print_time_;
bool print_utf8_;
- internal::Int32 random_seed_;
- internal::Int32 repeat_;
+ int32_t random_seed_;
+ int32_t repeat_;
bool shuffle_;
- internal::Int32 stack_trace_depth_;
+ int32_t stack_trace_depth_;
std::string stream_result_to_;
bool throw_on_failure_;
} GTEST_ATTRIBUTE_UNUSED_;
@@ -619,7 +624,7 @@
// If the code_point is not a valid Unicode code point
// (i.e. outside of Unicode range U+0 to U+10FFFF) it will be converted
// to "(Invalid Unicode 0xXXXXXXXX)".
-GTEST_API_ std::string CodePointToUtf8(UInt32 code_point);
+GTEST_API_ std::string CodePointToUtf8(uint32_t code_point);
// Converts a wide string to a narrow string in UTF-8 encoding.
// The wide string is assumed to have the following encoding:
@@ -652,14 +657,14 @@
const char* shard_index_str,
bool in_subprocess_for_death_test);
-// Parses the environment variable var as an Int32. If it is unset,
-// returns default_val. If it is not an Int32, prints an error and
+// Parses the environment variable var as a 32-bit integer. If it is unset,
+// returns default_val. If it is not a 32-bit integer, prints an error
// and aborts.
-GTEST_API_ Int32 Int32FromEnvOrDie(const char* env_var, Int32 default_val);
+GTEST_API_ int32_t Int32FromEnvOrDie(const char* env_var, int32_t default_val);
// Given the total number of shards, the shard index, and the test id,
-// returns true iff the test should be run on this shard. The test id is
-// some arbitrary but unique non-negative integer assigned to each test
+// returns true if and only if the test should be run on this shard. The test id
+// is some arbitrary but unique non-negative integer assigned to each test
// method. Assumes that 0 <= shard_index < total_shards.
GTEST_API_ bool ShouldRunTestOnShard(
int total_shards, int shard_index, int test_id);
@@ -690,7 +695,8 @@
// in range [0, v.size()).
template <typename E>
inline E GetElementOr(const std::vector<E>& v, int i, E default_value) {
- return (i < 0 || i >= static_cast<int>(v.size())) ? default_value : v[i];
+ return (i < 0 || i >= static_cast<int>(v.size())) ? default_value
+ : v[static_cast<size_t>(i)];
}
// Performs an in-place shuffle of a range of the vector's elements.
@@ -712,8 +718,11 @@
// http://en.wikipedia.org/wiki/Fisher-Yates_shuffle
for (int range_width = end - begin; range_width >= 2; range_width--) {
const int last_in_range = begin + range_width - 1;
- const int selected = begin + random->Generate(range_width);
- std::swap((*v)[selected], (*v)[last_in_range]);
+ const int selected =
+ begin +
+ static_cast<int>(random->Generate(static_cast<uint32_t>(range_width)));
+ std::swap((*v)[static_cast<size_t>(selected)],
+ (*v)[static_cast<size_t>(last_in_range)]);
}
}
@@ -740,7 +749,7 @@
// TestPropertyKeyIs has NO default constructor.
explicit TestPropertyKeyIs(const std::string& key) : key_(key) {}
- // Returns true iff the test name of test property matches on key_.
+ // Returns true if and only if the test name of test property matches on key_.
bool operator()(const TestProperty& test_property) const {
return test_property.key() == key_;
}
@@ -773,15 +782,8 @@
// Functions for processing the gtest_filter flag.
- // Returns true iff the wildcard pattern matches the string. The
- // first ':' or '\0' character in pattern marks the end of it.
- //
- // This recursive algorithm isn't very efficient, but is clear and
- // works well enough for matching test names, which are short.
- static bool PatternMatchesString(const char *pattern, const char *str);
-
- // Returns true iff the user-specified filter matches the test suite
- // name and the test name.
+ // Returns true if and only if the user-specified filter matches the test
+ // suite name and the test name.
static bool FilterMatchesTest(const std::string& test_suite_name,
const std::string& test_name);
@@ -965,11 +967,12 @@
// Gets the elapsed time, in milliseconds.
TimeInMillis elapsed_time() const { return elapsed_time_; }
- // Returns true iff the unit test passed (i.e. all test suites passed).
+ // Returns true if and only if the unit test passed (i.e. all test suites
+ // passed).
bool Passed() const { return !Failed(); }
- // Returns true iff the unit test failed (i.e. some test suite failed
- // or something outside of all tests failed).
+ // Returns true if and only if the unit test failed (i.e. some test suite
+ // failed or something outside of all tests failed).
bool Failed() const {
return failed_test_suite_count() > 0 || ad_hoc_test_result()->Failed();
}
@@ -978,7 +981,7 @@
// total_test_suite_count() - 1. If i is not in that range, returns NULL.
const TestSuite* GetTestSuite(int i) const {
const int index = GetElementOr(test_suite_indices_, i, -1);
- return index < 0 ? nullptr : test_suites_[i];
+ return index < 0 ? nullptr : test_suites_[static_cast<size_t>(i)];
}
// Legacy API is deprecated but still available
@@ -990,7 +993,7 @@
// total_test_suite_count() - 1. If i is not in that range, returns NULL.
TestSuite* GetMutableSuiteCase(int i) {
const int index = GetElementOr(test_suite_indices_, i, -1);
- return index < 0 ? nullptr : test_suites_[index];
+ return index < 0 ? nullptr : test_suites_[static_cast<size_t>(index)];
}
// Provides access to the event listener list.
@@ -1033,10 +1036,10 @@
// Arguments:
//
// test_suite_name: name of the test suite
- // type_param: the name of the test's type parameter, or NULL if
- // this is not a typed or a type-parameterized test.
- // set_up_tc: pointer to the function that sets up the test suite
- // tear_down_tc: pointer to the function that tears down the test suite
+ // type_param: the name of the test's type parameter, or NULL if
+ // this is not a typed or a type-parameterized test.
+ // set_up_tc: pointer to the function that sets up the test suite
+ // tear_down_tc: pointer to the function that tears down the test suite
TestSuite* GetTestSuite(const char* test_suite_name, const char* type_param,
internal::SetUpTestSuiteFunc set_up_tc,
internal::TearDownTestSuiteFunc tear_down_tc);
@@ -1060,6 +1063,7 @@
void AddTestInfo(internal::SetUpTestSuiteFunc set_up_tc,
internal::TearDownTestSuiteFunc tear_down_tc,
TestInfo* test_info) {
+#if GTEST_HAS_DEATH_TEST
// In order to support thread-safe death tests, we need to
// remember the original working directory when the test program
// was first invoked. We cannot do this in RUN_ALL_TESTS(), as
@@ -1072,6 +1076,7 @@
GTEST_CHECK_(!original_working_dir_.IsEmpty())
<< "Failed to get the current working directory.";
}
+#endif // GTEST_HAS_DEATH_TEST
GetTestSuite(test_info->test_suite_name(), test_info->type_param(),
set_up_tc, tear_down_tc)
@@ -1084,6 +1089,17 @@
return parameterized_test_registry_;
}
+ std::set<std::string>* ignored_parameterized_test_suites() {
+ return &ignored_parameterized_test_suites_;
+ }
+
+ // Returns TypeParameterizedTestSuiteRegistry object used to keep track of
+ // type-parameterized tests and instantiations of them.
+ internal::TypeParameterizedTestSuiteRegistry&
+ type_parameterized_test_registry() {
+ return type_parameterized_test_registry_;
+ }
+
// Sets the TestSuite object for the test that's currently running.
void set_current_test_suite(TestSuite* a_current_test_suite) {
current_test_suite_ = a_current_test_suite;
@@ -1260,6 +1276,12 @@
// ParameterizedTestRegistry object used to register value-parameterized
// tests.
internal::ParameterizedTestSuiteRegistry parameterized_test_registry_;
+ internal::TypeParameterizedTestSuiteRegistry
+ type_parameterized_test_registry_;
+
+  // The set holding the names of parameterized
+ // test suites that may go uninstantiated.
+ std::set<std::string> ignored_parameterized_test_suites_;
// Indicates whether RegisterParameterizedTests() has been called already.
bool parameterized_tests_registered_;
@@ -1299,7 +1321,7 @@
// desired.
OsStackTraceGetterInterface* os_stack_trace_getter_;
- // True iff PostFlagParsingInit() has been called.
+ // True if and only if PostFlagParsingInit() has been called.
bool post_flag_parse_init_performed_;
// The random number seed used at the beginning of the test run.
@@ -1386,20 +1408,9 @@
char* end;
// BiggestConvertible is the largest integer type that system-provided
// string-to-number conversion routines can return.
+ using BiggestConvertible = unsigned long long; // NOLINT
-# if GTEST_OS_WINDOWS && !defined(__GNUC__)
-
- // MSVC and C++ Builder define __int64 instead of the standard long long.
- typedef unsigned __int64 BiggestConvertible;
- const BiggestConvertible parsed = _strtoui64(str.c_str(), &end, 10);
-
-# else
-
- typedef unsigned long long BiggestConvertible; // NOLINT
- const BiggestConvertible parsed = strtoull(str.c_str(), &end, 10);
-
-# endif // GTEST_OS_WINDOWS && !defined(__GNUC__)
-
+ const BiggestConvertible parsed = strtoull(str.c_str(), &end, 10); // NOLINT
const bool parse_success = *end == '\0' && errno == 0;
GTEST_CHECK_(sizeof(Integer) <= sizeof(parsed));
@@ -1475,8 +1486,8 @@
GTEST_CHECK_(sockfd_ != -1)
<< "Send() can be called only when there is a connection.";
- const int len = static_cast<int>(message.length());
- if (write(sockfd_, message.c_str(), len) != len) {
+ const auto len = static_cast<size_t>(message.length());
+ if (write(sockfd_, message.c_str(), len) != static_cast<ssize_t>(len)) {
GTEST_LOG_(WARNING)
<< "stream_result_to: failed to stream to "
<< host_name_ << ":" << port_num_;
@@ -1541,13 +1552,13 @@
}
// Note that "event=TestCaseStart" is a wire format and has to remain
- // "case" for compatibilty
+ // "case" for compatibility
void OnTestCaseStart(const TestCase& test_case) override {
SendLn(std::string("event=TestCaseStart&name=") + test_case.name());
}
// Note that "event=TestCaseEnd" is a wire format and has to remain
- // "case" for compatibilty
+ // "case" for compatibility
void OnTestCaseEnd(const TestCase& test_case) override {
SendLn("event=TestCaseEnd&passed=" + FormatBool(test_case.Passed()) +
"&elapsed_time=" + StreamableToString(test_case.elapsed_time()) +
@@ -1595,7 +1606,7 @@
GTEST_DISABLE_MSC_WARNINGS_POP_() // 4251
-#endif // GTEST_SRC_GTEST_INTERNAL_INL_H_
+#endif // GOOGLETEST_SRC_GTEST_INTERNAL_INL_H_
#if GTEST_OS_WINDOWS
# define vsnprintf _vsnprintf
@@ -1653,8 +1664,8 @@
// stack trace.
const char kStackTraceMarker[] = "\nStack trace:\n";
-// g_help_flag is true iff the --help flag or an equivalent form is
-// specified on the command line.
+// g_help_flag is true if and only if the --help flag or an equivalent form
+// is specified on the command line.
bool g_help_flag = false;
// Utility function to Open File for Writing
@@ -1685,21 +1696,35 @@
return kUniversalFilter;
}
+// Bazel passes in the argument to '--test_runner_fail_fast' via the
+// TESTBRIDGE_TEST_RUNNER_FAIL_FAST environment variable.
+static bool GetDefaultFailFast() {
+ const char* const testbridge_test_runner_fail_fast =
+ internal::posix::GetEnv("TESTBRIDGE_TEST_RUNNER_FAIL_FAST");
+ if (testbridge_test_runner_fail_fast != nullptr) {
+ return strcmp(testbridge_test_runner_fail_fast, "1") == 0;
+ }
+ return false;
+}
+
+GTEST_DEFINE_bool_(
+ fail_fast, internal::BoolFromGTestEnv("fail_fast", GetDefaultFailFast()),
+ "True if and only if a test failure should stop further test execution.");
+
GTEST_DEFINE_bool_(
also_run_disabled_tests,
internal::BoolFromGTestEnv("also_run_disabled_tests", false),
"Run disabled tests too, in addition to the tests normally being run.");
GTEST_DEFINE_bool_(
- break_on_failure,
- internal::BoolFromGTestEnv("break_on_failure", false),
- "True iff a failed assertion should be a debugger break-point.");
+ break_on_failure, internal::BoolFromGTestEnv("break_on_failure", false),
+ "True if and only if a failed assertion should be a debugger "
+ "break-point.");
-GTEST_DEFINE_bool_(
- catch_exceptions,
- internal::BoolFromGTestEnv("catch_exceptions", true),
- "True iff " GTEST_NAME_
- " should catch exceptions and treat them as test failures.");
+GTEST_DEFINE_bool_(catch_exceptions,
+ internal::BoolFromGTestEnv("catch_exceptions", true),
+ "True if and only if " GTEST_NAME_
+ " should catch exceptions and treat them as test failures.");
GTEST_DEFINE_string_(
color,
@@ -1747,16 +1772,16 @@
"digits.");
GTEST_DEFINE_bool_(
- print_time,
- internal::BoolFromGTestEnv("print_time", true),
- "True iff " GTEST_NAME_
- " should display elapsed time in text output.");
+ brief, internal::BoolFromGTestEnv("brief", false),
+ "True if only test failures should be displayed in text output.");
-GTEST_DEFINE_bool_(
- print_utf8,
- internal::BoolFromGTestEnv("print_utf8", true),
- "True iff " GTEST_NAME_
- " prints UTF8 characters as text.");
+GTEST_DEFINE_bool_(print_time, internal::BoolFromGTestEnv("print_time", true),
+ "True if and only if " GTEST_NAME_
+ " should display elapsed time in text output.");
+
+GTEST_DEFINE_bool_(print_utf8, internal::BoolFromGTestEnv("print_utf8", true),
+ "True if and only if " GTEST_NAME_
+ " prints UTF8 characters as text.");
GTEST_DEFINE_int32_(
random_seed,
@@ -1770,16 +1795,14 @@
"How many times to repeat each test. Specify a negative number "
"for repeating forever. Useful for shaking out flaky tests.");
-GTEST_DEFINE_bool_(
- show_internal_stack_frames, false,
- "True iff " GTEST_NAME_ " should include internal stack frames when "
- "printing test failure stack traces.");
+GTEST_DEFINE_bool_(show_internal_stack_frames, false,
+ "True if and only if " GTEST_NAME_
+ " should include internal stack frames when "
+ "printing test failure stack traces.");
-GTEST_DEFINE_bool_(
- shuffle,
- internal::BoolFromGTestEnv("shuffle", false),
- "True iff " GTEST_NAME_
- " should randomize tests' order on every run.");
+GTEST_DEFINE_bool_(shuffle, internal::BoolFromGTestEnv("shuffle", false),
+ "True if and only if " GTEST_NAME_
+ " should randomize tests' order on every run.");
GTEST_DEFINE_int32_(
stack_trace_depth,
@@ -1813,10 +1836,10 @@
// Generates a random number from [0, range), using a Linear
// Congruential Generator (LCG). Crashes if 'range' is 0 or greater
// than kMaxRange.
-UInt32 Random::Generate(UInt32 range) {
+uint32_t Random::Generate(uint32_t range) {
// These constants are the same as are used in glibc's rand(3).
// Use wider types than necessary to prevent unsigned overflow diagnostics.
- state_ = static_cast<UInt32>(1103515245ULL*state_ + 12345U) % kMaxRange;
+ state_ = static_cast<uint32_t>(1103515245ULL*state_ + 12345U) % kMaxRange;
GTEST_CHECK_(range > 0)
<< "Cannot generate a number in the range [0, 0).";
@@ -1830,7 +1853,7 @@
return state_ % range;
}
-// GTestIsInitialized() returns true iff the user has initialized
+// GTestIsInitialized() returns true if and only if the user has initialized
// Google Test. Useful for catching the user mistake of not initializing
// Google Test before calling RUN_ALL_TESTS().
static bool GTestIsInitialized() { return GetArgvs().size() > 0; }
@@ -1847,18 +1870,18 @@
return sum;
}
-// Returns true iff the test suite passed.
+// Returns true if and only if the test suite passed.
static bool TestSuitePassed(const TestSuite* test_suite) {
return test_suite->should_run() && test_suite->Passed();
}
-// Returns true iff the test suite failed.
+// Returns true if and only if the test suite failed.
static bool TestSuiteFailed(const TestSuite* test_suite) {
return test_suite->should_run() && test_suite->Failed();
}
-// Returns true iff test_suite contains at least one test that should
-// run.
+// Returns true if and only if test_suite contains at least one test that
+// should run.
static bool ShouldRunTestSuite(const TestSuite* test_suite) {
return test_suite->should_run();
}
@@ -1886,6 +1909,162 @@
); // NOLINT
}
+namespace {
+
+// When TEST_P is found without a matching INSTANTIATE_TEST_SUITE_P
+// to create test cases for it, a synthetic test case is
+// inserted to report either an error or a log message.
+//
+// This configuration bit will likely be removed at some point.
+constexpr bool kErrorOnUninstantiatedParameterizedTest = true;
+constexpr bool kErrorOnUninstantiatedTypeParameterizedTest = true;
+
+// A test that fails at a given file/line location with a given message.
+class FailureTest : public Test {
+ public:
+ explicit FailureTest(const CodeLocation& loc, std::string error_message,
+ bool as_error)
+ : loc_(loc),
+ error_message_(std::move(error_message)),
+ as_error_(as_error) {}
+
+ void TestBody() override {
+ if (as_error_) {
+ AssertHelper(TestPartResult::kNonFatalFailure, loc_.file.c_str(),
+ loc_.line, "") = Message() << error_message_;
+ } else {
+ std::cout << error_message_ << std::endl;
+ }
+ }
+
+ private:
+ const CodeLocation loc_;
+ const std::string error_message_;
+ const bool as_error_;
+};
+
+
+} // namespace
+
+std::set<std::string>* GetIgnoredParameterizedTestSuites() {
+ return UnitTest::GetInstance()->impl()->ignored_parameterized_test_suites();
+}
+
+// Adds a given test_suite to the list of those allowed to go uninstantiated.
+MarkAsIgnored::MarkAsIgnored(const char* test_suite) {
+ GetIgnoredParameterizedTestSuites()->insert(test_suite);
+}
+
+// If this parameterized test suite has no instantiations (and that
+// has not been marked as okay), emit a test case reporting that.
+void InsertSyntheticTestCase(const std::string& name, CodeLocation location,
+ bool has_test_p) {
+ const auto& ignored = *GetIgnoredParameterizedTestSuites();
+ if (ignored.find(name) != ignored.end()) return;
+
+ const char kMissingInstantiation[] = //
+ " is defined via TEST_P, but never instantiated. None of the test cases "
+ "will run. Either no INSTANTIATE_TEST_SUITE_P is provided or the only "
+ "ones provided expand to nothing."
+ "\n\n"
+ "Ideally, TEST_P definitions should only ever be included as part of "
+ "binaries that intend to use them. (As opposed to, for example, being "
+ "placed in a library that may be linked in to get other utilities.)";
+
+ const char kMissingTestCase[] = //
+ " is instantiated via INSTANTIATE_TEST_SUITE_P, but no tests are "
+ "defined via TEST_P . No test cases will run."
+ "\n\n"
+ "Ideally, INSTANTIATE_TEST_SUITE_P should only ever be invoked from "
+ "code that always depend on code that provides TEST_P. Failing to do "
+ "so is often an indication of dead code, e.g. the last TEST_P was "
+ "removed but the rest got left behind.";
+
+ std::string message =
+ "Parameterized test suite " + name +
+ (has_test_p ? kMissingInstantiation : kMissingTestCase) +
+ "\n\n"
+ "To suppress this error for this test suite, insert the following line "
+ "(in a non-header) in the namespace it is defined in:"
+ "\n\n"
+ "GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(" + name + ");";
+
+ std::string full_name = "UninstantiatedParameterizedTestSuite<" + name + ">";
+ RegisterTest( //
+ "GoogleTestVerification", full_name.c_str(),
+ nullptr, // No type parameter.
+ nullptr, // No value parameter.
+ location.file.c_str(), location.line, [message, location] {
+ return new FailureTest(location, message,
+ kErrorOnUninstantiatedParameterizedTest);
+ });
+}
+
+void RegisterTypeParameterizedTestSuite(const char* test_suite_name,
+ CodeLocation code_location) {
+ GetUnitTestImpl()->type_parameterized_test_registry().RegisterTestSuite(
+ test_suite_name, code_location);
+}
+
+void RegisterTypeParameterizedTestSuiteInstantiation(const char* case_name) {
+ GetUnitTestImpl()
+ ->type_parameterized_test_registry()
+ .RegisterInstantiation(case_name);
+}
+
+void TypeParameterizedTestSuiteRegistry::RegisterTestSuite(
+ const char* test_suite_name, CodeLocation code_location) {
+ suites_.emplace(std::string(test_suite_name),
+ TypeParameterizedTestSuiteInfo(code_location));
+}
+
+void TypeParameterizedTestSuiteRegistry::RegisterInstantiation(
+ const char* test_suite_name) {
+ auto it = suites_.find(std::string(test_suite_name));
+ if (it != suites_.end()) {
+ it->second.instantiated = true;
+ } else {
+    GTEST_LOG_(ERROR) << "Unknown type parameterized test suite '"
+ << test_suite_name << "'";
+ }
+}
+
+void TypeParameterizedTestSuiteRegistry::CheckForInstantiations() {
+ const auto& ignored = *GetIgnoredParameterizedTestSuites();
+ for (const auto& testcase : suites_) {
+ if (testcase.second.instantiated) continue;
+ if (ignored.find(testcase.first) != ignored.end()) continue;
+
+ std::string message =
+ "Type parameterized test suite " + testcase.first +
+ " is defined via REGISTER_TYPED_TEST_SUITE_P, but never instantiated "
+ "via INSTANTIATE_TYPED_TEST_SUITE_P. None of the test cases will run."
+ "\n\n"
+ "Ideally, TYPED_TEST_P definitions should only ever be included as "
+ "part of binaries that intend to use them. (As opposed to, for "
+ "example, being placed in a library that may be linked in to get other "
+ "utilities.)"
+ "\n\n"
+ "To suppress this error for this test suite, insert the following line "
+ "(in a non-header) in the namespace it is defined in:"
+ "\n\n"
+ "GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(" +
+ testcase.first + ");";
+
+ std::string full_name =
+ "UninstantiatedTypeParameterizedTestSuite<" + testcase.first + ">";
+ RegisterTest( //
+ "GoogleTestVerification", full_name.c_str(),
+ nullptr, // No type parameter.
+ nullptr, // No value parameter.
+ testcase.second.code_location.file.c_str(),
+ testcase.second.code_location.line, [message, testcase] {
+ return new FailureTest(testcase.second.code_location, message,
+ kErrorOnUninstantiatedTypeParameterizedTest);
+ });
+ }
+}
+
// A copy of all command line arguments. Set by InitGoogleTest().
static ::std::vector<std::string> g_argvs;
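
The GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST line named in the message strings above is used like this (FooTest is a hypothetical suite; a sketch, not part of the diff):

    #include "gtest/gtest.h"

    class FooTest : public ::testing::TestWithParam<int> {};

    // Defined via TEST_P but (intentionally) never instantiated in this binary.
    TEST_P(FooTest, IsNonNegative) { EXPECT_GE(GetParam(), 0); }

    // Without this line, the synthetic GoogleTestVerification test registered
    // above would fail (or log, per kErrorOnUninstantiatedParameterizedTest).
    GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(FooTest);
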
@@ -1922,7 +2101,8 @@
const char* const colon = strchr(gtest_output_flag, ':');
return (colon == nullptr)
? std::string(gtest_output_flag)
- : std::string(gtest_output_flag, colon - gtest_output_flag);
+ : std::string(gtest_output_flag,
+ static_cast<size_t>(colon - gtest_output_flag));
}
// Returns the name of the requested output file, or the default if none
@@ -1957,51 +2137,86 @@
return result.string();
}
-// Returns true iff the wildcard pattern matches the string. The
-// first ':' or '\0' character in pattern marks the end of it.
+// Returns true if and only if the wildcard pattern matches the string. Each
+// pattern consists of regular characters, single-character wildcards (?), and
+// multi-character wildcards (*).
//
-// This recursive algorithm isn't very efficient, but is clear and
-// works well enough for matching test names, which are short.
-bool UnitTestOptions::PatternMatchesString(const char *pattern,
- const char *str) {
- switch (*pattern) {
- case '\0':
- case ':': // Either ':' or '\0' marks the end of the pattern.
- return *str == '\0';
- case '?': // Matches any single character.
- return *str != '\0' && PatternMatchesString(pattern + 1, str + 1);
- case '*': // Matches any string (possibly empty) of characters.
- return (*str != '\0' && PatternMatchesString(pattern, str + 1)) ||
- PatternMatchesString(pattern + 1, str);
- default: // Non-special character. Matches itself.
- return *pattern == *str &&
- PatternMatchesString(pattern + 1, str + 1);
+// This function implements a linear-time string globbing algorithm based on
+// https://research.swtch.com/glob.
+static bool PatternMatchesString(const std::string& name_str,
+ const char* pattern, const char* pattern_end) {
+ const char* name = name_str.c_str();
+ const char* const name_begin = name;
+ const char* const name_end = name + name_str.size();
+
+ const char* pattern_next = pattern;
+ const char* name_next = name;
+
+ while (pattern < pattern_end || name < name_end) {
+ if (pattern < pattern_end) {
+ switch (*pattern) {
+ default: // Match an ordinary character.
+ if (name < name_end && *name == *pattern) {
+ ++pattern;
+ ++name;
+ continue;
+ }
+ break;
+ case '?': // Match any single character.
+ if (name < name_end) {
+ ++pattern;
+ ++name;
+ continue;
+ }
+ break;
+ case '*':
+ // Match zero or more characters. Start by skipping over the wildcard
+ // and matching zero characters from name. If that fails, restart and
+ // match one more character than the last attempt.
+ pattern_next = pattern;
+ name_next = name + 1;
+ ++pattern;
+ continue;
+ }
+ }
+ // Failed to match a character. Restart if possible.
+ if (name_begin < name_next && name_next <= name_end) {
+ pattern = pattern_next;
+ name = name_next;
+ continue;
+ }
+ return false;
}
+ return true;
}
-bool UnitTestOptions::MatchesFilter(
- const std::string& name, const char* filter) {
- const char *cur_pattern = filter;
- for (;;) {
- if (PatternMatchesString(cur_pattern, name.c_str())) {
+bool UnitTestOptions::MatchesFilter(const std::string& name_str,
+ const char* filter) {
+ // The filter is a list of patterns separated by colons (:).
+ const char* pattern = filter;
+ while (true) {
+ // Find the bounds of this pattern.
+ const char* const next_sep = strchr(pattern, ':');
+ const char* const pattern_end =
+ next_sep != nullptr ? next_sep : pattern + strlen(pattern);
+
+ // Check if this pattern matches name_str.
+ if (PatternMatchesString(name_str, pattern, pattern_end)) {
return true;
}
- // Finds the next pattern in the filter.
- cur_pattern = strchr(cur_pattern, ':');
-
- // Returns if no more pattern can be found.
- if (cur_pattern == nullptr) {
+ // Give up on this pattern. However, if we found a pattern separator (:),
+ // advance to the next pattern (skipping over the separator) and restart.
+ if (next_sep == nullptr) {
return false;
}
-
- // Skips the pattern separater (the ':' character).
- cur_pattern++;
+ pattern = next_sep + 1;
}
+ return true;
}
-// Returns true iff the user-specified filter matches the test suite
-// name and the test name.
+// Returns true if and only if the user-specified filter matches the test
+// suite name and the test name.
bool UnitTestOptions::FilterMatchesTest(const std::string& test_suite_name,
const std::string& test_name) {
const std::string& full_name = test_suite_name + "." + test_name.c_str();
@@ -2307,44 +2522,30 @@
); // NOLINT
}
-// Returns the current time in milliseconds.
-TimeInMillis GetTimeInMillis() {
-#if GTEST_OS_WINDOWS_MOBILE || defined(__BORLANDC__)
- // Difference between 1970-01-01 and 1601-01-01 in milliseconds.
- // http://analogous.blogspot.com/2005/04/epoch.html
- const TimeInMillis kJavaEpochToWinFileTimeDelta =
- static_cast<TimeInMillis>(116444736UL) * 100000UL;
- const DWORD kTenthMicrosInMilliSecond = 10000;
+// A helper class for measuring elapsed times.
+class Timer {
+ public:
+ Timer() : start_(std::chrono::steady_clock::now()) {}
- SYSTEMTIME now_systime;
- FILETIME now_filetime;
- ULARGE_INTEGER now_int64;
- GetSystemTime(&now_systime);
- if (SystemTimeToFileTime(&now_systime, &now_filetime)) {
- now_int64.LowPart = now_filetime.dwLowDateTime;
- now_int64.HighPart = now_filetime.dwHighDateTime;
- now_int64.QuadPart = (now_int64.QuadPart / kTenthMicrosInMilliSecond) -
- kJavaEpochToWinFileTimeDelta;
- return now_int64.QuadPart;
+ // Return time elapsed in milliseconds since the timer was created.
+ TimeInMillis Elapsed() {
+ return std::chrono::duration_cast<std::chrono::milliseconds>(
+ std::chrono::steady_clock::now() - start_)
+ .count();
}
- return 0;
-#elif GTEST_OS_WINDOWS && !GTEST_HAS_GETTIMEOFDAY_
- __timeb64 now;
- // MSVC 8 deprecates _ftime64(), so we want to suppress warning 4996
- // (deprecated function) there.
- GTEST_DISABLE_MSC_DEPRECATED_PUSH_()
- _ftime64(&now);
- GTEST_DISABLE_MSC_DEPRECATED_POP_()
+ private:
+ std::chrono::steady_clock::time_point start_;
+};
- return static_cast<TimeInMillis>(now.time) * 1000 + now.millitm;
-#elif GTEST_HAS_GETTIMEOFDAY_
- struct timeval now;
- gettimeofday(&now, nullptr);
- return static_cast<TimeInMillis>(now.tv_sec) * 1000 + now.tv_usec / 1000;
-#else
-# error "Don't know how to get the current time on your system."
-#endif
+// Returns a timestamp as milliseconds since the epoch. Note that this time may
+// jump around subject to adjustments by the system; to measure elapsed time,
+// use Timer instead.
+TimeInMillis GetTimeInMillis() {
+ return std::chrono::duration_cast<std::chrono::milliseconds>(
+ std::chrono::system_clock::now() -
+ std::chrono::system_clock::from_time_t(0))
+ .count();
}
// Utilities
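
A minimal standalone sketch of the distinction the new code draws between a monotonic clock for elapsed time and a wall clock for timestamps (assumes only the C++ standard library):

    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <thread>

    int64_t WallClockMillis() {
      // Wall-clock timestamp: may jump if the system clock is adjusted.
      return std::chrono::duration_cast<std::chrono::milliseconds>(
                 std::chrono::system_clock::now().time_since_epoch())
          .count();
    }

    int main() {
      // Monotonic clock: suitable for measuring elapsed time.
      const auto start = std::chrono::steady_clock::now();
      std::this_thread::sleep_for(std::chrono::milliseconds(50));
      const auto elapsed_ms =
          std::chrono::duration_cast<std::chrono::milliseconds>(
              std::chrono::steady_clock::now() - start)
              .count();
      std::cout << "elapsed: " << elapsed_ms << " ms, timestamp: "
                << WallClockMillis() << "\n";
      return 0;
    }
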
@@ -2385,7 +2586,8 @@
#endif // GTEST_OS_WINDOWS_MOBILE
-// Compares two C strings. Returns true iff they have the same content.
+// Compares two C strings. Returns true if and only if they have the same
+// content.
//
// Unlike strcmp(), this function can handle NULL argument(s). A NULL
// C string is considered different to any non-NULL C string,
@@ -2398,7 +2600,7 @@
return strcmp(lhs, rhs) == 0;
}
-#if GTEST_HAS_STD_WSTRING || GTEST_HAS_GLOBAL_WSTRING
+#if GTEST_HAS_STD_WSTRING
// Converts an array of wide chars to a narrow string using the UTF-8
// encoding, and streams the result to the given Message object.
@@ -2416,7 +2618,7 @@
}
}
-#endif // GTEST_HAS_STD_WSTRING || GTEST_HAS_GLOBAL_WSTRING
+#endif // GTEST_HAS_STD_WSTRING
void SplitString(const ::std::string& str, char delimiter,
::std::vector< ::std::string>* dest) {
@@ -2466,15 +2668,6 @@
}
#endif // GTEST_HAS_STD_WSTRING
-#if GTEST_HAS_GLOBAL_WSTRING
-// Converts the given wide string to a narrow string using the UTF-8
-// encoding, and streams the result to this Message object.
-Message& Message::operator <<(const ::wstring& wstr) {
- internal::StreamWideCharsToMessage(wstr.c_str(), wstr.length(), this);
- return *this;
-}
-#endif // GTEST_HAS_GLOBAL_WSTRING
-
// Gets the text streamed to this object so far as an std::string.
// Each '\0' character in the buffer is replaced with "\\0".
std::string Message::GetString() const {
@@ -2725,9 +2918,10 @@
for (; edit_i < edits.size(); ++edit_i) {
if (n_suffix >= context) {
// Continue only if the next hunk is very close.
- std::vector<EditType>::const_iterator it = edits.begin() + edit_i;
+ auto it = edits.begin() + static_cast<int>(edit_i);
while (it != edits.end() && *it == kMatch) ++it;
- if (it == edits.end() || (it - edits.begin()) - edit_i >= context) {
+ if (it == edits.end() ||
+ static_cast<size_t>(it - edits.begin()) - edit_i >= context) {
// There is no next edit or it is too far away.
break;
}
@@ -2803,7 +2997,7 @@
// lhs_value: "5"
// rhs_value: "6"
//
-// The ignoring_case parameter is true iff the assertion is a
+// The ignoring_case parameter is true if and only if the assertion is a
// *_STRCASEEQ*. When it's true, the string "Ignoring case" will
// be inserted into the message.
AssertionResult EqFailure(const char* lhs_expression,
@@ -2866,6 +3060,31 @@
const double diff = fabs(val1 - val2);
if (diff <= abs_error) return AssertionSuccess();
+ // Find the value which is closest to zero.
+ const double min_abs = std::min(fabs(val1), fabs(val2));
+ // Find the distance to the next double from that value.
+ const double epsilon =
+ nextafter(min_abs, std::numeric_limits<double>::infinity()) - min_abs;
+ // Detect the case where abs_error is so small that EXPECT_NEAR is
+ // effectively the same as EXPECT_EQUAL, and give an informative error
+ // message so that the situation can be more easily understood without
+ // requiring exotic floating-point knowledge.
+ // Don't do an epsilon check if abs_error is zero because that implies
+ // that an equality check was actually intended.
+ if (!(std::isnan)(val1) && !(std::isnan)(val2) && abs_error > 0 &&
+ abs_error < epsilon) {
+ return AssertionFailure()
+ << "The difference between " << expr1 << " and " << expr2 << " is "
+ << diff << ", where\n"
+ << expr1 << " evaluates to " << val1 << ",\n"
+ << expr2 << " evaluates to " << val2 << ".\nThe abs_error parameter "
+ << abs_error_expr << " evaluates to " << abs_error
+ << " which is smaller than the minimum distance between doubles for "
+ "numbers of this magnitude which is "
+ << epsilon
+ << ", thus making this EXPECT_NEAR check equivalent to "
+ "EXPECT_EQUAL. Consider using EXPECT_DOUBLE_EQ instead.";
+ }
return AssertionFailure()
<< "The difference between " << expr1 << " and " << expr2
<< " is " << diff << ", which exceeds " << abs_error_expr << ", where\n"
@@ -2928,57 +3147,6 @@
namespace internal {
-// The helper function for {ASSERT|EXPECT}_EQ with int or enum
-// arguments.
-AssertionResult CmpHelperEQ(const char* lhs_expression,
- const char* rhs_expression,
- BiggestInt lhs,
- BiggestInt rhs) {
- if (lhs == rhs) {
- return AssertionSuccess();
- }
-
- return EqFailure(lhs_expression,
- rhs_expression,
- FormatForComparisonFailureMessage(lhs, rhs),
- FormatForComparisonFailureMessage(rhs, lhs),
- false);
-}
-
-// A macro for implementing the helper functions needed to implement
-// ASSERT_?? and EXPECT_?? with integer or enum arguments. It is here
-// just to avoid copy-and-paste of similar code.
-#define GTEST_IMPL_CMP_HELPER_(op_name, op)\
-AssertionResult CmpHelper##op_name(const char* expr1, const char* expr2, \
- BiggestInt val1, BiggestInt val2) {\
- if (val1 op val2) {\
- return AssertionSuccess();\
- } else {\
- return AssertionFailure() \
- << "Expected: (" << expr1 << ") " #op " (" << expr2\
- << "), actual: " << FormatForComparisonFailureMessage(val1, val2)\
- << " vs " << FormatForComparisonFailureMessage(val2, val1);\
- }\
-}
-
-// Implements the helper function for {ASSERT|EXPECT}_NE with int or
-// enum arguments.
-GTEST_IMPL_CMP_HELPER_(NE, !=)
-// Implements the helper function for {ASSERT|EXPECT}_LE with int or
-// enum arguments.
-GTEST_IMPL_CMP_HELPER_(LE, <=)
-// Implements the helper function for {ASSERT|EXPECT}_LT with int or
-// enum arguments.
-GTEST_IMPL_CMP_HELPER_(LT, < )
-// Implements the helper function for {ASSERT|EXPECT}_GE with int or
-// enum arguments.
-GTEST_IMPL_CMP_HELPER_(GE, >=)
-// Implements the helper function for {ASSERT|EXPECT}_GT with int or
-// enum arguments.
-GTEST_IMPL_CMP_HELPER_(GT, > )
-
-#undef GTEST_IMPL_CMP_HELPER_
-
// The helper function for {ASSERT|EXPECT}_STREQ.
AssertionResult CmpHelperSTREQ(const char* lhs_expression,
const char* rhs_expression,
@@ -3046,9 +3214,9 @@
// Helper functions for implementing IsSubString() and IsNotSubstring().
-// This group of overloaded functions return true iff needle is a
-// substring of haystack. NULL is considered a substring of itself
-// only.
+// This group of overloaded functions return true if and only if needle
+// is a substring of haystack. NULL is considered a substring of
+// itself only.
bool IsSubstringPred(const char* needle, const char* haystack) {
if (needle == nullptr || haystack == nullptr) return needle == haystack;
@@ -3174,7 +3342,7 @@
char error_text[kBufSize] = { '\0' };
DWORD message_length = ::FormatMessageA(kFlags,
0, // no source, we're asking system
- hr, // the error
+ static_cast<DWORD>(hr), // the error
0, // no line width restrictions
error_text, // output buffer
kBufSize, // buf size
@@ -3224,35 +3392,35 @@
// 17 - 21 bits 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
// The maximum code-point a one-byte UTF-8 sequence can represent.
-const UInt32 kMaxCodePoint1 = (static_cast<UInt32>(1) << 7) - 1;
+constexpr uint32_t kMaxCodePoint1 = (static_cast<uint32_t>(1) << 7) - 1;
// The maximum code-point a two-byte UTF-8 sequence can represent.
-const UInt32 kMaxCodePoint2 = (static_cast<UInt32>(1) << (5 + 6)) - 1;
+constexpr uint32_t kMaxCodePoint2 = (static_cast<uint32_t>(1) << (5 + 6)) - 1;
// The maximum code-point a three-byte UTF-8 sequence can represent.
-const UInt32 kMaxCodePoint3 = (static_cast<UInt32>(1) << (4 + 2*6)) - 1;
+constexpr uint32_t kMaxCodePoint3 = (static_cast<uint32_t>(1) << (4 + 2*6)) - 1;
// The maximum code-point a four-byte UTF-8 sequence can represent.
-const UInt32 kMaxCodePoint4 = (static_cast<UInt32>(1) << (3 + 3*6)) - 1;
+constexpr uint32_t kMaxCodePoint4 = (static_cast<uint32_t>(1) << (3 + 3*6)) - 1;
// Chops off the n lowest bits from a bit pattern. Returns the n
// lowest bits. As a side effect, the original bit pattern will be
// shifted to the right by n bits.
-inline UInt32 ChopLowBits(UInt32* bits, int n) {
- const UInt32 low_bits = *bits & ((static_cast<UInt32>(1) << n) - 1);
+inline uint32_t ChopLowBits(uint32_t* bits, int n) {
+ const uint32_t low_bits = *bits & ((static_cast<uint32_t>(1) << n) - 1);
*bits >>= n;
return low_bits;
}
// Converts a Unicode code point to a narrow string in UTF-8 encoding.
-// code_point parameter is of type UInt32 because wchar_t may not be
+// code_point parameter is of type uint32_t because wchar_t may not be
// wide enough to contain a code point.
// If the code_point is not a valid Unicode code point
// (i.e. outside of Unicode range U+0 to U+10FFFF) it will be converted
// to "(Invalid Unicode 0xXXXXXXXX)".
-std::string CodePointToUtf8(UInt32 code_point) {
+std::string CodePointToUtf8(uint32_t code_point) {
if (code_point > kMaxCodePoint4) {
- return "(Invalid Unicode 0x" + String::FormatHexInt(code_point) + ")";
+ return "(Invalid Unicode 0x" + String::FormatHexUInt32(code_point) + ")";
}
char str[5]; // Big enough for the largest valid code point.
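
For reference, a standalone sketch of the byte layouts listed above with two worked code points (not the gtest implementation; valid input is assumed):

    #include <cassert>
    #include <cstdint>
    #include <string>

    // Encodes a Unicode code point as UTF-8. Unlike gtest's CodePointToUtf8,
    // this sketch does not report out-of-range values.
    std::string EncodeUtf8(uint32_t cp) {
      std::string out;
      if (cp <= 0x7F) {            // 1 byte:  0xxxxxxx
        out += static_cast<char>(cp);
      } else if (cp <= 0x7FF) {    // 2 bytes: 110xxxxx 10xxxxxx
        out += static_cast<char>(0xC0 | (cp >> 6));
        out += static_cast<char>(0x80 | (cp & 0x3F));
      } else if (cp <= 0xFFFF) {   // 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
        out += static_cast<char>(0xE0 | (cp >> 12));
        out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
        out += static_cast<char>(0x80 | (cp & 0x3F));
      } else {                     // 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
        out += static_cast<char>(0xF0 | (cp >> 18));
        out += static_cast<char>(0x80 | ((cp >> 12) & 0x3F));
        out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
        out += static_cast<char>(0x80 | (cp & 0x3F));
      }
      return out;
    }

    int main() {
      assert(EncodeUtf8(0x00E9) == "\xC3\xA9");           // U+00E9, 2 bytes
      assert(EncodeUtf8(0x1F600) == "\xF0\x9F\x98\x80");  // U+1F600, 4 bytes
      return 0;
    }
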
@@ -3291,14 +3459,17 @@
}
// Creates a Unicode code point from UTF16 surrogate pair.
-inline UInt32 CreateCodePointFromUtf16SurrogatePair(wchar_t first,
- wchar_t second) {
- const UInt32 mask = (1 << 10) - 1;
- return (sizeof(wchar_t) == 2) ?
- (((first & mask) << 10) | (second & mask)) + 0x10000 :
- // This function should not be called when the condition is
- // false, but we provide a sensible default in case it is.
- static_cast<UInt32>(first);
+inline uint32_t CreateCodePointFromUtf16SurrogatePair(wchar_t first,
+ wchar_t second) {
+ const auto first_u = static_cast<uint32_t>(first);
+ const auto second_u = static_cast<uint32_t>(second);
+ const uint32_t mask = (1 << 10) - 1;
+ return (sizeof(wchar_t) == 2)
+ ? (((first_u & mask) << 10) | (second_u & mask)) + 0x10000
+ :
+ // This function should not be called when the condition is
+ // false, but we provide a sensible default in case it is.
+ first_u;
}
// Converts a wide string to a narrow string in UTF-8 encoding.
@@ -3320,7 +3491,7 @@
::std::stringstream stream;
for (int i = 0; i < num_chars; ++i) {
- UInt32 unicode_code_point;
+ uint32_t unicode_code_point;
if (str[i] == L'\0') {
break;
@@ -3329,7 +3500,7 @@
str[i + 1]);
i++;
} else {
- unicode_code_point = static_cast<UInt32>(str[i]);
+ unicode_code_point = static_cast<uint32_t>(str[i]);
}
stream << CodePointToUtf8(unicode_code_point);
@@ -3345,8 +3516,8 @@
return internal::WideStringToUtf8(wide_c_str, -1);
}
-// Compares two wide C strings. Returns true iff they have the same
-// content.
+// Compares two wide C strings. Returns true if and only if they have the
+// same content.
//
// Unlike wcscmp(), this function can handle NULL argument(s). A NULL
// C string is considered different to any non-NULL C string,
@@ -3390,7 +3561,7 @@
<< " vs " << PrintToString(s2);
}
-// Compares two C strings, ignoring case. Returns true iff they have
+// Compares two C strings, ignoring case. Returns true if and only if they have
// the same content.
//
// Unlike strcasecmp(), this function can handle NULL argument(s). A
@@ -3402,18 +3573,18 @@
return posix::StrCaseCmp(lhs, rhs) == 0;
}
- // Compares two wide C strings, ignoring case. Returns true iff they
- // have the same content.
- //
- // Unlike wcscasecmp(), this function can handle NULL argument(s).
- // A NULL C string is considered different to any non-NULL wide C string,
- // including the empty string.
- // NB: The implementations on different platforms slightly differ.
- // On windows, this method uses _wcsicmp which compares according to LC_CTYPE
- // environment variable. On GNU platform this method uses wcscasecmp
- // which compares according to LC_CTYPE category of the current locale.
- // On MacOS X, it uses towlower, which also uses LC_CTYPE category of the
- // current locale.
+// Compares two wide C strings, ignoring case. Returns true if and only if they
+// have the same content.
+//
+// Unlike wcscasecmp(), this function can handle NULL argument(s).
+// A NULL C string is considered different to any non-NULL wide C string,
+// including the empty string.
+// NB: The implementations on different platforms slightly differ.
+// On windows, this method uses _wcsicmp which compares according to LC_CTYPE
+// environment variable. On GNU platform this method uses wcscasecmp
+// which compares according to LC_CTYPE category of the current locale.
+// On MacOS X, it uses towlower, which also uses LC_CTYPE category of the
+// current locale.
bool String::CaseInsensitiveWideCStringEquals(const wchar_t* lhs,
const wchar_t* rhs) {
if (lhs == nullptr) return rhs == nullptr;
@@ -3429,14 +3600,14 @@
// Other unknown OSes may not define it either.
wint_t left, right;
do {
- left = towlower(*lhs++);
- right = towlower(*rhs++);
+ left = towlower(static_cast<wint_t>(*lhs++));
+ right = towlower(static_cast<wint_t>(*rhs++));
} while (left && left == right);
return left == right;
#endif // OS selector
}
-// Returns true iff str ends with the given suffix, ignoring case.
+// Returns true if and only if str ends with the given suffix, ignoring case.
// Any string is considered to end with an empty suffix.
bool String::EndsWithCaseInsensitive(
const std::string& str, const std::string& suffix) {
@@ -3449,16 +3620,26 @@
// Formats an int value as "%02d".
std::string String::FormatIntWidth2(int value) {
+ return FormatIntWidthN(value, 2);
+}
+
+// Formats an int value to given width with leading zeros.
+std::string String::FormatIntWidthN(int value, int width) {
std::stringstream ss;
- ss << std::setfill('0') << std::setw(2) << value;
+ ss << std::setfill('0') << std::setw(width) << value;
+ return ss.str();
+}
+
+// Formats an int value as "%X".
+std::string String::FormatHexUInt32(uint32_t value) {
+ std::stringstream ss;
+ ss << std::hex << std::uppercase << value;
return ss.str();
}
// Formats an int value as "%X".
std::string String::FormatHexInt(int value) {
- std::stringstream ss;
- ss << std::hex << std::uppercase << value;
- return ss.str();
+ return FormatHexUInt32(static_cast<uint32_t>(value));
}
// Formats a byte as "%02X".
@@ -3477,7 +3658,7 @@
const char* const end = start + str.length();
std::string result;
- result.reserve(2 * (end - start));
+ result.reserve(static_cast<size_t>(2 * (end - start)));
for (const char* ch = start; ch != end; ++ch) {
if (*ch == '\0') {
result += "\\0"; // Replaces NUL with "\\0";
@@ -3497,7 +3678,9 @@
if (user_msg_string.empty()) {
return gtest_msg;
}
-
+ if (gtest_msg.empty()) {
+ return user_msg_string;
+ }
return gtest_msg + "\n" + user_msg_string;
}
@@ -3507,9 +3690,7 @@
// Creates an empty TestResult.
TestResult::TestResult()
- : death_test_count_(0),
- elapsed_time_(0) {
-}
+ : death_test_count_(0), start_timestamp_(0), elapsed_time_(0) {}
// D'tor.
TestResult::~TestResult() {
@@ -3521,7 +3702,7 @@
const TestPartResult& TestResult::GetTestPartResult(int i) const {
if (i < 0 || i >= total_part_count())
internal::posix::Abort();
- return test_part_results_.at(i);
+ return test_part_results_.at(static_cast<size_t>(i));
}
// Returns the i-th test property. i can range from 0 to
@@ -3530,7 +3711,7 @@
const TestProperty& TestResult::GetTestProperty(int i) const {
if (i < 0 || i >= test_property_count())
internal::posix::Abort();
- return test_properties_.at(i);
+ return test_properties_.at(static_cast<size_t>(i));
}
// Clears the test part results.
@@ -3551,7 +3732,7 @@
if (!ValidateTestProperty(xml_element, test_property)) {
return;
}
- internal::MutexLock lock(&test_properites_mutex_);
+ internal::MutexLock lock(&test_properties_mutex_);
const std::vector<TestProperty>::iterator property_with_matching_key =
std::find_if(test_properties_.begin(), test_properties_.end(),
internal::TestPropertyKeyIs(test_property.key()));
@@ -3578,20 +3759,21 @@
// The list of reserved attributes used in the <testsuite> element of XML
// output.
static const char* const kReservedTestSuiteAttributes[] = {
- "disabled",
- "errors",
- "failures",
- "name",
- "tests",
- "time"
-};
+ "disabled", "errors", "failures", "name",
+ "tests", "time", "timestamp", "skipped"};
// The list of reserved attributes used in the <testcase> element of XML output.
static const char* const kReservedTestCaseAttributes[] = {
- "classname", "name", "status", "time",
- "type_param", "value_param", "file", "line"};
+ "classname", "name", "status", "time", "type_param",
+ "value_param", "file", "line"};
-template <int kSize>
+// Use a slightly different set for allowed output to ensure existing tests can
+// still RecordProperty("result") or RecordProperty("timestamp").
+static const char* const kReservedOutputTestCaseAttributes[] = {
+ "classname", "name", "status", "time", "type_param",
+ "value_param", "file", "line", "result", "timestamp"};
+
+template <size_t kSize>
std::vector<std::string> ArrayAsVector(const char* const (&array)[kSize]) {
return std::vector<std::string>(array, array + kSize);
}
@@ -3611,6 +3793,22 @@
return std::vector<std::string>();
}
+// TODO(jdesprez): Merge the two getReserved attributes once skip is improved
+static std::vector<std::string> GetReservedOutputAttributesForElement(
+ const std::string& xml_element) {
+ if (xml_element == "testsuites") {
+ return ArrayAsVector(kReservedTestSuitesAttributes);
+ } else if (xml_element == "testsuite") {
+ return ArrayAsVector(kReservedTestSuiteAttributes);
+ } else if (xml_element == "testcase") {
+ return ArrayAsVector(kReservedOutputTestCaseAttributes);
+ } else {
+ GTEST_CHECK_(false) << "Unrecognized xml_element provided: " << xml_element;
+ }
+  // This code is unreachable, but some compilers may not realize that.
+ return std::vector<std::string>();
+}
+
static std::string FormatWordList(const std::vector<std::string>& words) {
Message word_list;
for (size_t i = 0; i < words.size(); ++i) {
@@ -3659,12 +3857,12 @@
return result.skipped();
}
-// Returns true iff the test was skipped.
+// Returns true if and only if the test was skipped.
bool TestResult::Skipped() const {
return !Failed() && CountIf(test_part_results_, TestPartSkipped) > 0;
}
-// Returns true iff the test failed.
+// Returns true if and only if the test failed.
bool TestResult::Failed() const {
for (int i = 0; i < total_part_count(); ++i) {
if (GetTestPartResult(i).failed())
@@ -3673,22 +3871,22 @@
return false;
}
-// Returns true iff the test part fatally failed.
+// Returns true if and only if the test part fatally failed.
static bool TestPartFatallyFailed(const TestPartResult& result) {
return result.fatally_failed();
}
-// Returns true iff the test fatally failed.
+// Returns true if and only if the test fatally failed.
bool TestResult::HasFatalFailure() const {
return CountIf(test_part_results_, TestPartFatallyFailed) > 0;
}
-// Returns true iff the test part non-fatally failed.
+// Returns true if and only if the test part non-fatally failed.
static bool TestPartNonfatallyFailed(const TestPartResult& result) {
return result.nonfatally_failed();
}
-// Returns true iff the test has a non-fatal failure.
+// Returns true if and only if the test has a non-fatal failure.
bool TestResult::HasNonfatalFailure() const {
return CountIf(test_part_results_, TestPartNonfatallyFailed) > 0;
}
@@ -3984,18 +4182,18 @@
this, &Test::TearDown, "TearDown()");
}
-// Returns true iff the current test has a fatal failure.
+// Returns true if and only if the current test has a fatal failure.
bool Test::HasFatalFailure() {
return internal::GetUnitTestImpl()->current_test_result()->HasFatalFailure();
}
-// Returns true iff the current test has a non-fatal failure.
+// Returns true if and only if the current test has a non-fatal failure.
bool Test::HasNonfatalFailure() {
return internal::GetUnitTestImpl()->current_test_result()->
HasNonfatalFailure();
}
-// Returns true iff the current test was skipped.
+// Returns true if and only if the current test was skipped.
bool Test::IsSkipped() {
return internal::GetUnitTestImpl()->current_test_result()->Skipped();
}
@@ -4019,6 +4217,7 @@
should_run_(false),
is_disabled_(false),
matches_filter_(false),
+ is_in_another_shard_(false),
factory_(factory),
result_() {}
@@ -4032,7 +4231,7 @@
//
// Arguments:
//
-// test_suite_name: name of the test suite
+// test_suite_name: name of the test suite
// name: name of the test
// type_param: the name of the test's type parameter, or NULL if
// this is not a typed or a type-parameterized test.
@@ -4094,7 +4293,7 @@
explicit TestNameIs(const char* name)
: name_(name) {}
- // Returns true iff the test name of test_info matches name_.
+ // Returns true if and only if the test name of test_info matches name_.
bool operator()(const TestInfo * test_info) const {
return test_info && test_info->name() == name_;
}
@@ -4113,6 +4312,7 @@
void UnitTestImpl::RegisterParameterizedTests() {
if (!parameterized_tests_registered_) {
parameterized_test_registry_.RegisterTests();
+ type_parameterized_test_registry_.CheckForInstantiations();
parameterized_tests_registered_ = true;
}
}
@@ -4133,7 +4333,8 @@
// Notifies the unit test event listeners that a test is about to start.
repeater->OnTestStart(*this);
- const TimeInMillis start = internal::GetTimeInMillis();
+ result_.set_start_timestamp(internal::GetTimeInMillis());
+ internal::Timer timer;
impl->os_stack_trace_getter()->UponLeavingGTest();
@@ -4158,7 +4359,7 @@
test, &Test::DeleteSelf_, "the test fixture's destructor");
}
- result_.set_elapsed_time(internal::GetTimeInMillis() - start);
+ result_.set_elapsed_time(timer.Elapsed());
// Notifies the unit test event listener that a test has just finished.
repeater->OnTestEnd(*this);
@@ -4168,6 +4369,28 @@
impl->set_current_test_info(nullptr);
}
+// Skips and records a skipped test result for this object.
+void TestInfo::Skip() {
+ if (!should_run_) return;
+
+ internal::UnitTestImpl* const impl = internal::GetUnitTestImpl();
+ impl->set_current_test_info(this);
+
+ TestEventListener* repeater = UnitTest::GetInstance()->listeners().repeater();
+
+ // Notifies the unit test event listeners that a test is about to start.
+ repeater->OnTestStart(*this);
+
+ const TestPartResult test_part_result =
+ TestPartResult(TestPartResult::kSkip, this->file(), this->line(), "");
+ impl->GetTestPartResultReporterForCurrentThread()->ReportTestPartResult(
+ test_part_result);
+
+ // Notifies the unit test event listener that a test has just finished.
+ repeater->OnTestEnd(*this);
+ impl->set_current_test_info(nullptr);
+}
+
// class TestSuite
// Gets the number of successful tests in this test suite.
@@ -4214,7 +4437,7 @@
//
// Arguments:
//
-// name: name of the test suite
+// a_name: name of the test suite
// a_type_param: the name of the test suite's type parameter, or NULL if
// this is not a typed or a type-parameterized test suite.
// set_up_tc: pointer to the function that sets up the test suite
@@ -4227,6 +4450,7 @@
set_up_tc_(set_up_tc),
tear_down_tc_(tear_down_tc),
should_run_(false),
+ start_timestamp_(0),
elapsed_time_(0) {}
// Destructor of TestSuite.
@@ -4239,14 +4463,14 @@
// total_test_count() - 1. If i is not in that range, returns NULL.
const TestInfo* TestSuite::GetTestInfo(int i) const {
const int index = GetElementOr(test_indices_, i, -1);
- return index < 0 ? nullptr : test_info_list_[index];
+ return index < 0 ? nullptr : test_info_list_[static_cast<size_t>(index)];
}
// Returns the i-th test among all the tests. i can range from 0 to
// total_test_count() - 1. If i is not in that range, returns NULL.
TestInfo* TestSuite::GetMutableTestInfo(int i) {
const int index = GetElementOr(test_indices_, i, -1);
- return index < 0 ? nullptr : test_info_list_[index];
+ return index < 0 ? nullptr : test_info_list_[static_cast<size_t>(index)];
}
// Adds a test to this test suite. Will delete the test upon
@@ -4268,19 +4492,26 @@
// Call both legacy and the new API
repeater->OnTestSuiteStart(*this);
// Legacy API is deprecated but still available
-#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
repeater->OnTestCaseStart(*this);
-#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
impl->os_stack_trace_getter()->UponLeavingGTest();
internal::HandleExceptionsInMethodIfSupported(
this, &TestSuite::RunSetUpTestSuite, "SetUpTestSuite()");
- const internal::TimeInMillis start = internal::GetTimeInMillis();
+ start_timestamp_ = internal::GetTimeInMillis();
+ internal::Timer timer;
for (int i = 0; i < total_test_count(); i++) {
GetMutableTestInfo(i)->Run();
+ if (GTEST_FLAG(fail_fast) && GetMutableTestInfo(i)->result()->Failed()) {
+ for (int j = i + 1; j < total_test_count(); j++) {
+ GetMutableTestInfo(j)->Skip();
+ }
+ break;
+ }
}
- elapsed_time_ = internal::GetTimeInMillis() - start;
+ elapsed_time_ = timer.Elapsed();
impl->os_stack_trace_getter()->UponLeavingGTest();
internal::HandleExceptionsInMethodIfSupported(
@@ -4289,9 +4520,39 @@
// Call both legacy and the new API
repeater->OnTestSuiteEnd(*this);
// Legacy API is deprecated but still available
-#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
repeater->OnTestCaseEnd(*this);
-#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+
+ impl->set_current_test_suite(nullptr);
+}
+
+// Skips all tests under this TestSuite.
+void TestSuite::Skip() {
+ if (!should_run_) return;
+
+ internal::UnitTestImpl* const impl = internal::GetUnitTestImpl();
+ impl->set_current_test_suite(this);
+
+ TestEventListener* repeater = UnitTest::GetInstance()->listeners().repeater();
+
+ // Call both legacy and the new API
+ repeater->OnTestSuiteStart(*this);
+// Legacy API is deprecated but still available
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+ repeater->OnTestCaseStart(*this);
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+
+ for (int i = 0; i < total_test_count(); i++) {
+ GetMutableTestInfo(i)->Skip();
+ }
+
+ // Call both legacy and the new API
+ repeater->OnTestSuiteEnd(*this);
+ // Legacy API is deprecated but still available
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+ repeater->OnTestCaseEnd(*this);
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
impl->set_current_test_suite(nullptr);
}
@@ -4343,7 +4604,7 @@
static const char * TestPartResultTypeToString(TestPartResult::Type type) {
switch (type) {
case TestPartResult::kSkip:
- return "Skipped";
+ return "Skipped\n";
case TestPartResult::kSuccess:
return "Success";
@@ -4360,6 +4621,9 @@
}
namespace internal {
+namespace {
+enum class GTestColor { kDefault, kRed, kGreen, kYellow };
+} // namespace
// Prints a TestPartResult to an std::string.
static std::string PrintTestPartResultToString(
@@ -4397,9 +4661,12 @@
// Returns the character attribute for the given color.
static WORD GetColorAttribute(GTestColor color) {
switch (color) {
- case COLOR_RED: return FOREGROUND_RED;
- case COLOR_GREEN: return FOREGROUND_GREEN;
- case COLOR_YELLOW: return FOREGROUND_RED | FOREGROUND_GREEN;
+ case GTestColor::kRed:
+ return FOREGROUND_RED;
+ case GTestColor::kGreen:
+ return FOREGROUND_GREEN;
+ case GTestColor::kYellow:
+ return FOREGROUND_RED | FOREGROUND_GREEN;
default: return 0;
}
}
@@ -4437,21 +4704,24 @@
#else
-// Returns the ANSI color code for the given color. COLOR_DEFAULT is
+// Returns the ANSI color code for the given color. GTestColor::kDefault is
// an invalid input.
static const char* GetAnsiColorCode(GTestColor color) {
switch (color) {
- case COLOR_RED: return "1";
- case COLOR_GREEN: return "2";
- case COLOR_YELLOW: return "3";
+ case GTestColor::kRed:
+ return "1";
+ case GTestColor::kGreen:
+ return "2";
+ case GTestColor::kYellow:
+ return "3";
default:
return nullptr;
- };
+ }
}
#endif // GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE
-// Returns true iff Google Test should use colors in the output.
+// Returns true if and only if Google Test should use colors in the output.
bool ShouldUseColor(bool stdout_is_tty) {
const char* const gtest_color = GTEST_FLAG(color).c_str();
@@ -4492,17 +4762,19 @@
// cannot simply emit special characters and have the terminal change colors.
// This routine must actually emit the characters rather than return a string
// that would be colored when printed, as can be done on Linux.
-void ColoredPrintf(GTestColor color, const char* fmt, ...) {
+
+GTEST_ATTRIBUTE_PRINTF_(2, 3)
+static void ColoredPrintf(GTestColor color, const char *fmt, ...) {
va_list args;
va_start(args, fmt);
#if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_ZOS || GTEST_OS_IOS || \
- GTEST_OS_WINDOWS_PHONE || GTEST_OS_WINDOWS_RT
+ GTEST_OS_WINDOWS_PHONE || GTEST_OS_WINDOWS_RT || defined(ESP_PLATFORM)
const bool use_color = AlwaysFalse();
#else
static const bool in_color_mode =
ShouldUseColor(posix::IsATTY(posix::FileNo(stdout)) != 0);
- const bool use_color = in_color_mode && (color != COLOR_DEFAULT);
+ const bool use_color = in_color_mode && (color != GTestColor::kDefault);
#endif // GTEST_OS_WINDOWS_MOBILE || GTEST_OS_ZOS
if (!use_color) {
@@ -4576,11 +4848,22 @@
void OnTestIterationStart(const UnitTest& unit_test, int iteration) override;
void OnEnvironmentsSetUpStart(const UnitTest& unit_test) override;
void OnEnvironmentsSetUpEnd(const UnitTest& /*unit_test*/) override {}
- void OnTestCaseStart(const TestSuite& test_suite) override;
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+ void OnTestCaseStart(const TestCase& test_case) override;
+#else
+ void OnTestSuiteStart(const TestSuite& test_suite) override;
+#endif // OnTestCaseStart
+
void OnTestStart(const TestInfo& test_info) override;
+
void OnTestPartResult(const TestPartResult& result) override;
void OnTestEnd(const TestInfo& test_info) override;
- void OnTestCaseEnd(const TestSuite& test_suite) override;
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+ void OnTestCaseEnd(const TestCase& test_case) override;
+#else
+ void OnTestSuiteEnd(const TestSuite& test_suite) override;
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+
void OnEnvironmentsTearDownStart(const UnitTest& unit_test) override;
void OnEnvironmentsTearDownEnd(const UnitTest& /*unit_test*/) override {}
void OnTestIterationEnd(const UnitTest& unit_test, int iteration) override;
@@ -4588,6 +4871,7 @@
private:
static void PrintFailedTests(const UnitTest& unit_test);
+ static void PrintFailedTestSuites(const UnitTest& unit_test);
static void PrintSkippedTests(const UnitTest& unit_test);
};
@@ -4602,25 +4886,24 @@
// Prints the filter if it's not *. This reminds the user that some
// tests may be skipped.
if (!String::CStringEquals(filter, kUniversalFilter)) {
- ColoredPrintf(COLOR_YELLOW,
- "Note: %s filter = %s\n", GTEST_NAME_, filter);
+ ColoredPrintf(GTestColor::kYellow, "Note: %s filter = %s\n", GTEST_NAME_,
+ filter);
}
if (internal::ShouldShard(kTestTotalShards, kTestShardIndex, false)) {
- const Int32 shard_index = Int32FromEnvOrDie(kTestShardIndex, -1);
- ColoredPrintf(COLOR_YELLOW,
- "Note: This is test shard %d of %s.\n",
+ const int32_t shard_index = Int32FromEnvOrDie(kTestShardIndex, -1);
+ ColoredPrintf(GTestColor::kYellow, "Note: This is test shard %d of %s.\n",
static_cast<int>(shard_index) + 1,
internal::posix::GetEnv(kTestTotalShards));
}
if (GTEST_FLAG(shuffle)) {
- ColoredPrintf(COLOR_YELLOW,
+ ColoredPrintf(GTestColor::kYellow,
"Note: Randomizing tests' orders with a seed of %d .\n",
unit_test.random_seed());
}
- ColoredPrintf(COLOR_GREEN, "[==========] ");
+ ColoredPrintf(GTestColor::kGreen, "[==========] ");
printf("Running %s from %s.\n",
FormatTestCount(unit_test.test_to_run_count()).c_str(),
FormatTestSuiteCount(unit_test.test_suite_to_run_count()).c_str());
@@ -4629,15 +4912,30 @@
void PrettyUnitTestResultPrinter::OnEnvironmentsSetUpStart(
const UnitTest& /*unit_test*/) {
- ColoredPrintf(COLOR_GREEN, "[----------] ");
+ ColoredPrintf(GTestColor::kGreen, "[----------] ");
printf("Global test environment set-up.\n");
fflush(stdout);
}
-void PrettyUnitTestResultPrinter::OnTestCaseStart(const TestSuite& test_suite) {
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+void PrettyUnitTestResultPrinter::OnTestCaseStart(const TestCase& test_case) {
+ const std::string counts =
+ FormatCountableNoun(test_case.test_to_run_count(), "test", "tests");
+ ColoredPrintf(GTestColor::kGreen, "[----------] ");
+ printf("%s from %s", counts.c_str(), test_case.name());
+ if (test_case.type_param() == nullptr) {
+ printf("\n");
+ } else {
+ printf(", where %s = %s\n", kTypeParamLabel, test_case.type_param());
+ }
+ fflush(stdout);
+}
+#else
+void PrettyUnitTestResultPrinter::OnTestSuiteStart(
+ const TestSuite& test_suite) {
const std::string counts =
FormatCountableNoun(test_suite.test_to_run_count(), "test", "tests");
- ColoredPrintf(COLOR_GREEN, "[----------] ");
+ ColoredPrintf(GTestColor::kGreen, "[----------] ");
printf("%s from %s", counts.c_str(), test_suite.name());
if (test_suite.type_param() == nullptr) {
printf("\n");
@@ -4646,9 +4944,10 @@
}
fflush(stdout);
}
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
void PrettyUnitTestResultPrinter::OnTestStart(const TestInfo& test_info) {
- ColoredPrintf(COLOR_GREEN, "[ RUN ] ");
+ ColoredPrintf(GTestColor::kGreen, "[ RUN ] ");
PrintTestName(test_info.test_suite_name(), test_info.name());
printf("\n");
fflush(stdout);
@@ -4658,9 +4957,7 @@
void PrettyUnitTestResultPrinter::OnTestPartResult(
const TestPartResult& result) {
switch (result.type()) {
- // If the test part succeeded, or was skipped,
- // we don't need to do anything.
- case TestPartResult::kSkip:
+ // If the test part succeeded, we don't need to do anything.
case TestPartResult::kSuccess:
return;
default:
@@ -4673,11 +4970,11 @@
void PrettyUnitTestResultPrinter::OnTestEnd(const TestInfo& test_info) {
if (test_info.result()->Passed()) {
- ColoredPrintf(COLOR_GREEN, "[ OK ] ");
+ ColoredPrintf(GTestColor::kGreen, "[ OK ] ");
} else if (test_info.result()->Skipped()) {
- ColoredPrintf(COLOR_GREEN, "[ SKIPPED ] ");
+ ColoredPrintf(GTestColor::kGreen, "[ SKIPPED ] ");
} else {
- ColoredPrintf(COLOR_RED, "[ FAILED ] ");
+ ColoredPrintf(GTestColor::kRed, "[ FAILED ] ");
}
PrintTestName(test_info.test_suite_name(), test_info.name());
if (test_info.result()->Failed())
@@ -4692,20 +4989,33 @@
fflush(stdout);
}
-void PrettyUnitTestResultPrinter::OnTestCaseEnd(const TestSuite& test_suite) {
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+void PrettyUnitTestResultPrinter::OnTestCaseEnd(const TestCase& test_case) {
+ if (!GTEST_FLAG(print_time)) return;
+
+ const std::string counts =
+ FormatCountableNoun(test_case.test_to_run_count(), "test", "tests");
+ ColoredPrintf(GTestColor::kGreen, "[----------] ");
+ printf("%s from %s (%s ms total)\n\n", counts.c_str(), test_case.name(),
+ internal::StreamableToString(test_case.elapsed_time()).c_str());
+ fflush(stdout);
+}
+#else
+void PrettyUnitTestResultPrinter::OnTestSuiteEnd(const TestSuite& test_suite) {
if (!GTEST_FLAG(print_time)) return;
const std::string counts =
FormatCountableNoun(test_suite.test_to_run_count(), "test", "tests");
- ColoredPrintf(COLOR_GREEN, "[----------] ");
+ ColoredPrintf(GTestColor::kGreen, "[----------] ");
printf("%s from %s (%s ms total)\n\n", counts.c_str(), test_suite.name(),
internal::StreamableToString(test_suite.elapsed_time()).c_str());
fflush(stdout);
}
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
void PrettyUnitTestResultPrinter::OnEnvironmentsTearDownStart(
const UnitTest& /*unit_test*/) {
- ColoredPrintf(COLOR_GREEN, "[----------] ");
+ ColoredPrintf(GTestColor::kGreen, "[----------] ");
printf("Global test environment tear-down\n");
fflush(stdout);
}
@@ -4713,9 +5023,8 @@
// Internal helper for printing the list of failed tests.
void PrettyUnitTestResultPrinter::PrintFailedTests(const UnitTest& unit_test) {
const int failed_test_count = unit_test.failed_test_count();
- if (failed_test_count == 0) {
- return;
- }
+ ColoredPrintf(GTestColor::kRed, "[ FAILED ] ");
+ printf("%s, listed below:\n", FormatTestCount(failed_test_count).c_str());
for (int i = 0; i < unit_test.total_test_suite_count(); ++i) {
const TestSuite& test_suite = *unit_test.GetTestSuite(i);
@@ -4727,12 +5036,36 @@
if (!test_info.should_run() || !test_info.result()->Failed()) {
continue;
}
- ColoredPrintf(COLOR_RED, "[ FAILED ] ");
+ ColoredPrintf(GTestColor::kRed, "[ FAILED ] ");
printf("%s.%s", test_suite.name(), test_info.name());
PrintFullTestCommentIfPresent(test_info);
printf("\n");
}
}
+ printf("\n%2d FAILED %s\n", failed_test_count,
+ failed_test_count == 1 ? "TEST" : "TESTS");
+}
+
+// Internal helper for printing the list of test suite failures not covered by
+// PrintFailedTests.
+void PrettyUnitTestResultPrinter::PrintFailedTestSuites(
+ const UnitTest& unit_test) {
+ int suite_failure_count = 0;
+ for (int i = 0; i < unit_test.total_test_suite_count(); ++i) {
+ const TestSuite& test_suite = *unit_test.GetTestSuite(i);
+ if (!test_suite.should_run()) {
+ continue;
+ }
+ if (test_suite.ad_hoc_test_result().Failed()) {
+ ColoredPrintf(GTestColor::kRed, "[ FAILED ] ");
+ printf("%s: SetUpTestSuite or TearDownTestSuite\n", test_suite.name());
+ ++suite_failure_count;
+ }
+ }
+ if (suite_failure_count > 0) {
+ printf("\n%2d FAILED TEST %s\n", suite_failure_count,
+ suite_failure_count == 1 ? "SUITE" : "SUITES");
+ }
}
// Internal helper for printing the list of skipped tests.
@@ -4752,7 +5085,7 @@
if (!test_info.should_run() || !test_info.result()->Skipped()) {
continue;
}
- ColoredPrintf(COLOR_GREEN, "[ SKIPPED ] ");
+ ColoredPrintf(GTestColor::kGreen, "[ SKIPPED ] ");
printf("%s.%s", test_suite.name(), test_info.name());
printf("\n");
}
@@ -4761,7 +5094,7 @@
void PrettyUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test,
int /*iteration*/) {
- ColoredPrintf(COLOR_GREEN, "[==========] ");
+ ColoredPrintf(GTestColor::kGreen, "[==========] ");
printf("%s from %s ran.",
FormatTestCount(unit_test.test_to_run_count()).c_str(),
FormatTestSuiteCount(unit_test.test_suite_to_run_count()).c_str());
@@ -4770,35 +5103,28 @@
internal::StreamableToString(unit_test.elapsed_time()).c_str());
}
printf("\n");
- ColoredPrintf(COLOR_GREEN, "[ PASSED ] ");
+ ColoredPrintf(GTestColor::kGreen, "[ PASSED ] ");
printf("%s.\n", FormatTestCount(unit_test.successful_test_count()).c_str());
const int skipped_test_count = unit_test.skipped_test_count();
if (skipped_test_count > 0) {
- ColoredPrintf(COLOR_GREEN, "[ SKIPPED ] ");
+ ColoredPrintf(GTestColor::kGreen, "[ SKIPPED ] ");
printf("%s, listed below:\n", FormatTestCount(skipped_test_count).c_str());
PrintSkippedTests(unit_test);
}
- int num_failures = unit_test.failed_test_count();
if (!unit_test.Passed()) {
- const int failed_test_count = unit_test.failed_test_count();
- ColoredPrintf(COLOR_RED, "[ FAILED ] ");
- printf("%s, listed below:\n", FormatTestCount(failed_test_count).c_str());
PrintFailedTests(unit_test);
- printf("\n%2d FAILED %s\n", num_failures,
- num_failures == 1 ? "TEST" : "TESTS");
+ PrintFailedTestSuites(unit_test);
}
int num_disabled = unit_test.reportable_disabled_test_count();
if (num_disabled && !GTEST_FLAG(also_run_disabled_tests)) {
- if (!num_failures) {
+ if (unit_test.Passed()) {
printf("\n"); // Add a spacer if no FAILURE banner is displayed.
}
- ColoredPrintf(COLOR_YELLOW,
- " YOU HAVE %d DISABLED %s\n\n",
- num_disabled,
- num_disabled == 1 ? "TEST" : "TESTS");
+ ColoredPrintf(GTestColor::kYellow, " YOU HAVE %d DISABLED %s\n\n",
+ num_disabled, num_disabled == 1 ? "TEST" : "TESTS");
}
// Ensure that Google Test output is printed before, e.g., heapchecker output.
fflush(stdout);
@@ -4806,6 +5132,110 @@
// End PrettyUnitTestResultPrinter
+// This class implements the TestEventListener interface.
+//
+// Class BriefUnitTestResultPrinter is copyable.
+class BriefUnitTestResultPrinter : public TestEventListener {
+ public:
+ BriefUnitTestResultPrinter() {}
+ static void PrintTestName(const char* test_suite, const char* test) {
+ printf("%s.%s", test_suite, test);
+ }
+
+ // The following methods override what's in the TestEventListener class.
+ void OnTestProgramStart(const UnitTest& /*unit_test*/) override {}
+ void OnTestIterationStart(const UnitTest& /*unit_test*/,
+ int /*iteration*/) override {}
+ void OnEnvironmentsSetUpStart(const UnitTest& /*unit_test*/) override {}
+ void OnEnvironmentsSetUpEnd(const UnitTest& /*unit_test*/) override {}
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+ void OnTestCaseStart(const TestCase& /*test_case*/) override {}
+#else
+ void OnTestSuiteStart(const TestSuite& /*test_suite*/) override {}
+#endif // OnTestCaseStart
+
+ void OnTestStart(const TestInfo& /*test_info*/) override {}
+
+ void OnTestPartResult(const TestPartResult& result) override;
+ void OnTestEnd(const TestInfo& test_info) override;
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+ void OnTestCaseEnd(const TestCase& /*test_case*/) override {}
+#else
+ void OnTestSuiteEnd(const TestSuite& /*test_suite*/) override {}
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+
+ void OnEnvironmentsTearDownStart(const UnitTest& /*unit_test*/) override {}
+ void OnEnvironmentsTearDownEnd(const UnitTest& /*unit_test*/) override {}
+ void OnTestIterationEnd(const UnitTest& unit_test, int iteration) override;
+ void OnTestProgramEnd(const UnitTest& /*unit_test*/) override {}
+};
+
+// Called after an assertion failure.
+void BriefUnitTestResultPrinter::OnTestPartResult(
+ const TestPartResult& result) {
+ switch (result.type()) {
+ // If the test part succeeded, we don't need to do anything.
+ case TestPartResult::kSuccess:
+ return;
+ default:
+ // Print failure message from the assertion
+ // (e.g. expected this and got that).
+ PrintTestPartResult(result);
+ fflush(stdout);
+ }
+}
+
+void BriefUnitTestResultPrinter::OnTestEnd(const TestInfo& test_info) {
+ if (test_info.result()->Failed()) {
+ ColoredPrintf(GTestColor::kRed, "[ FAILED ] ");
+ PrintTestName(test_info.test_suite_name(), test_info.name());
+ PrintFullTestCommentIfPresent(test_info);
+
+ if (GTEST_FLAG(print_time)) {
+ printf(" (%s ms)\n",
+ internal::StreamableToString(test_info.result()->elapsed_time())
+ .c_str());
+ } else {
+ printf("\n");
+ }
+ fflush(stdout);
+ }
+}
+
+void BriefUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test,
+ int /*iteration*/) {
+ ColoredPrintf(GTestColor::kGreen, "[==========] ");
+ printf("%s from %s ran.",
+ FormatTestCount(unit_test.test_to_run_count()).c_str(),
+ FormatTestSuiteCount(unit_test.test_suite_to_run_count()).c_str());
+ if (GTEST_FLAG(print_time)) {
+ printf(" (%s ms total)",
+ internal::StreamableToString(unit_test.elapsed_time()).c_str());
+ }
+ printf("\n");
+ ColoredPrintf(GTestColor::kGreen, "[ PASSED ] ");
+ printf("%s.\n", FormatTestCount(unit_test.successful_test_count()).c_str());
+
+ const int skipped_test_count = unit_test.skipped_test_count();
+ if (skipped_test_count > 0) {
+ ColoredPrintf(GTestColor::kGreen, "[ SKIPPED ] ");
+ printf("%s.\n", FormatTestCount(skipped_test_count).c_str());
+ }
+
+ int num_disabled = unit_test.reportable_disabled_test_count();
+ if (num_disabled && !GTEST_FLAG(also_run_disabled_tests)) {
+ if (unit_test.Passed()) {
+ printf("\n"); // Add a spacer if no FAILURE banner is displayed.
+ }
+ ColoredPrintf(GTestColor::kYellow, " YOU HAVE %d DISABLED %s\n\n",
+ num_disabled, num_disabled == 1 ? "TEST" : "TESTS");
+ }
+ // Ensure that Google Test output is printed before, e.g., heapchecker output.
+ fflush(stdout);
+}
+
+// End BriefUnitTestResultPrinter
+
// class TestEventRepeater
//
// This class forwards events to other event listeners.
@@ -4826,17 +5256,17 @@
void OnEnvironmentsSetUpStart(const UnitTest& unit_test) override;
void OnEnvironmentsSetUpEnd(const UnitTest& unit_test) override;
// Legacy API is deprecated but still available
-#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
void OnTestCaseStart(const TestSuite& parameter) override;
-#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
void OnTestSuiteStart(const TestSuite& parameter) override;
void OnTestStart(const TestInfo& test_info) override;
void OnTestPartResult(const TestPartResult& result) override;
void OnTestEnd(const TestInfo& test_info) override;
// Legacy API is deprecated but still available
-#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI
- void OnTestCaseEnd(const TestSuite& parameter) override;
-#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
+ void OnTestCaseEnd(const TestCase& parameter) override;
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
void OnTestSuiteEnd(const TestSuite& parameter) override;
void OnEnvironmentsTearDownStart(const UnitTest& unit_test) override;
void OnEnvironmentsTearDownEnd(const UnitTest& unit_test) override;
@@ -4864,7 +5294,7 @@
TestEventListener* TestEventRepeater::Release(TestEventListener *listener) {
for (size_t i = 0; i < listeners_.size(); ++i) {
if (listeners_[i] == listener) {
- listeners_.erase(listeners_.begin() + i);
+ listeners_.erase(listeners_.begin() + static_cast<int>(i));
return listener;
}
}
@@ -4884,14 +5314,14 @@
}
// This defines a member that forwards the call to all listeners in reverse
// order.
-#define GTEST_REVERSE_REPEATER_METHOD_(Name, Type) \
-void TestEventRepeater::Name(const Type& parameter) { \
- if (forwarding_enabled_) { \
- for (int i = static_cast<int>(listeners_.size()) - 1; i >= 0; i--) { \
- listeners_[i]->Name(parameter); \
- } \
- } \
-}
+#define GTEST_REVERSE_REPEATER_METHOD_(Name, Type) \
+ void TestEventRepeater::Name(const Type& parameter) { \
+ if (forwarding_enabled_) { \
+ for (size_t i = listeners_.size(); i != 0; i--) { \
+ listeners_[i - 1]->Name(parameter); \
+ } \
+ } \
+ }
GTEST_REPEATER_METHOD_(OnTestProgramStart, UnitTest)
GTEST_REPEATER_METHOD_(OnEnvironmentsSetUpStart, UnitTest)
@@ -4928,8 +5358,8 @@
void TestEventRepeater::OnTestIterationEnd(const UnitTest& unit_test,
int iteration) {
if (forwarding_enabled_) {
- for (int i = static_cast<int>(listeners_.size()) - 1; i >= 0; i--) {
- listeners_[i]->OnTestIterationEnd(unit_test, iteration);
+ for (size_t i = listeners_.size(); i > 0; i--) {
+ listeners_[i - 1]->OnTestIterationEnd(unit_test, iteration);
}
}
}
@@ -4989,6 +5419,16 @@
// Streams an XML CDATA section, escaping invalid CDATA sequences as needed.
static void OutputXmlCDataSection(::std::ostream* stream, const char* data);
+ // Streams a test suite XML stanza containing the given test result.
+ //
+ // Requires: result.Failed()
+ static void OutputXmlTestSuiteForTestResult(::std::ostream* stream,
+ const TestResult& result);
+
+ // Streams an XML representation of a TestResult object.
+ static void OutputXmlTestResult(::std::ostream* stream,
+ const TestResult& result);
+
// Streams an XML representation of a TestInfo object.
static void OutputXmlTestInfo(::std::ostream* stream,
const char* test_suite_name,
@@ -5147,6 +5587,10 @@
if (tm_ptr == nullptr) return false;
*out = *tm_ptr;
return true;
+#elif defined(__STDC_LIB_EXT1__)
+ // Uses localtime_s when available, as localtime_r is only available from
+ // the C23 standard.
+ return localtime_s(&seconds, out) != nullptr;
#else
return localtime_r(&seconds, out) != nullptr;
#endif
@@ -5158,13 +5602,14 @@
struct tm time_struct;
if (!PortableLocaltime(static_cast<time_t>(ms / 1000), &time_struct))
return "";
- // YYYY-MM-DDThh:mm:ss
+ // YYYY-MM-DDThh:mm:ss.sss
return StreamableToString(time_struct.tm_year + 1900) + "-" +
String::FormatIntWidth2(time_struct.tm_mon + 1) + "-" +
String::FormatIntWidth2(time_struct.tm_mday) + "T" +
String::FormatIntWidth2(time_struct.tm_hour) + ":" +
String::FormatIntWidth2(time_struct.tm_min) + ":" +
- String::FormatIntWidth2(time_struct.tm_sec);
+ String::FormatIntWidth2(time_struct.tm_sec) + "." +
+ String::FormatIntWidthN(static_cast<int>(ms % 1000), 3);
}
// Streams an XML CDATA section, escaping invalid CDATA sequences as needed.
@@ -5193,7 +5638,7 @@
const std::string& name,
const std::string& value) {
const std::vector<std::string>& allowed_names =
- GetReservedAttributesForElement(element_name);
+ GetReservedOutputAttributesForElement(element_name);
GTEST_CHECK_(std::find(allowed_names.begin(), allowed_names.end(), name) !=
allowed_names.end())
@@ -5203,6 +5648,43 @@
*stream << " " << name << "=\"" << EscapeXmlAttribute(value) << "\"";
}
+// Streams a test suite XML stanza containing the given test result.
+void XmlUnitTestResultPrinter::OutputXmlTestSuiteForTestResult(
+ ::std::ostream* stream, const TestResult& result) {
+ // Output the boilerplate for a minimal test suite with one test.
+ *stream << " <testsuite";
+ OutputXmlAttribute(stream, "testsuite", "name", "NonTestSuiteFailure");
+ OutputXmlAttribute(stream, "testsuite", "tests", "1");
+ OutputXmlAttribute(stream, "testsuite", "failures", "1");
+ OutputXmlAttribute(stream, "testsuite", "disabled", "0");
+ OutputXmlAttribute(stream, "testsuite", "skipped", "0");
+ OutputXmlAttribute(stream, "testsuite", "errors", "0");
+ OutputXmlAttribute(stream, "testsuite", "time",
+ FormatTimeInMillisAsSeconds(result.elapsed_time()));
+ OutputXmlAttribute(
+ stream, "testsuite", "timestamp",
+ FormatEpochTimeInMillisAsIso8601(result.start_timestamp()));
+ *stream << ">";
+
+ // Output the boilerplate for a minimal test case with a single test.
+ *stream << " <testcase";
+ OutputXmlAttribute(stream, "testcase", "name", "");
+ OutputXmlAttribute(stream, "testcase", "status", "run");
+ OutputXmlAttribute(stream, "testcase", "result", "completed");
+ OutputXmlAttribute(stream, "testcase", "classname", "");
+ OutputXmlAttribute(stream, "testcase", "time",
+ FormatTimeInMillisAsSeconds(result.elapsed_time()));
+ OutputXmlAttribute(
+ stream, "testcase", "timestamp",
+ FormatEpochTimeInMillisAsIso8601(result.start_timestamp()));
+
+ // Output the actual test result.
+ OutputXmlTestResult(stream, result);
+
+ // Complete the test suite.
+ *stream << " </testsuite>\n";
+}
+
// Prints an XML representation of a TestInfo object.
void XmlUnitTestResultPrinter::OutputXmlTestInfo(::std::ostream* stream,
const char* test_suite_name,
@@ -5233,18 +5715,30 @@
return;
}
- OutputXmlAttribute(
- stream, kTestsuite, "status",
- result.Skipped() ? "skipped" : test_info.should_run() ? "run" : "notrun");
+ OutputXmlAttribute(stream, kTestsuite, "status",
+ test_info.should_run() ? "run" : "notrun");
+ OutputXmlAttribute(stream, kTestsuite, "result",
+ test_info.should_run()
+ ? (result.Skipped() ? "skipped" : "completed")
+ : "suppressed");
OutputXmlAttribute(stream, kTestsuite, "time",
FormatTimeInMillisAsSeconds(result.elapsed_time()));
+ OutputXmlAttribute(
+ stream, kTestsuite, "timestamp",
+ FormatEpochTimeInMillisAsIso8601(result.start_timestamp()));
OutputXmlAttribute(stream, kTestsuite, "classname", test_suite_name);
+ OutputXmlTestResult(stream, result);
+}
+
+void XmlUnitTestResultPrinter::OutputXmlTestResult(::std::ostream* stream,
+ const TestResult& result) {
int failures = 0;
+ int skips = 0;
for (int i = 0; i < result.total_part_count(); ++i) {
const TestPartResult& part = result.GetTestPartResult(i);
if (part.failed()) {
- if (++failures == 1) {
+ if (++failures == 1 && skips == 0) {
*stream << ">\n";
}
const std::string location =
@@ -5252,18 +5746,31 @@
part.line_number());
const std::string summary = location + "\n" + part.summary();
*stream << " <failure message=\""
- << EscapeXmlAttribute(summary.c_str())
+ << EscapeXmlAttribute(summary)
<< "\" type=\"\">";
const std::string detail = location + "\n" + part.message();
OutputXmlCDataSection(stream, RemoveInvalidXmlCharacters(detail).c_str());
*stream << "</failure>\n";
+ } else if (part.skipped()) {
+ if (++skips == 1 && failures == 0) {
+ *stream << ">\n";
+ }
+ const std::string location =
+ internal::FormatCompilerIndependentFileLocation(part.file_name(),
+ part.line_number());
+ const std::string summary = location + "\n" + part.summary();
+ *stream << " <skipped message=\""
+ << EscapeXmlAttribute(summary.c_str()) << "\">";
+ const std::string detail = location + "\n" + part.message();
+ OutputXmlCDataSection(stream, RemoveInvalidXmlCharacters(detail).c_str());
+ *stream << "</skipped>\n";
}
}
- if (failures == 0 && result.test_property_count() == 0) {
+ if (failures == 0 && skips == 0 && result.test_property_count() == 0) {
*stream << " />\n";
} else {
- if (failures == 0) {
+ if (failures == 0 && skips == 0) {
*stream << ">\n";
}
OutputXmlTestProperties(stream, result);
@@ -5285,9 +5792,16 @@
OutputXmlAttribute(
stream, kTestsuite, "disabled",
StreamableToString(test_suite.reportable_disabled_test_count()));
+ OutputXmlAttribute(stream, kTestsuite, "skipped",
+ StreamableToString(test_suite.skipped_test_count()));
+
OutputXmlAttribute(stream, kTestsuite, "errors", "0");
+
OutputXmlAttribute(stream, kTestsuite, "time",
FormatTimeInMillisAsSeconds(test_suite.elapsed_time()));
+ OutputXmlAttribute(
+ stream, kTestsuite, "timestamp",
+ FormatEpochTimeInMillisAsIso8601(test_suite.start_timestamp()));
*stream << TestPropertiesAsXmlAttributes(test_suite.ad_hoc_test_result());
}
*stream << ">\n";
@@ -5314,11 +5828,11 @@
stream, kTestsuites, "disabled",
StreamableToString(unit_test.reportable_disabled_test_count()));
OutputXmlAttribute(stream, kTestsuites, "errors", "0");
+ OutputXmlAttribute(stream, kTestsuites, "time",
+ FormatTimeInMillisAsSeconds(unit_test.elapsed_time()));
OutputXmlAttribute(
stream, kTestsuites, "timestamp",
FormatEpochTimeInMillisAsIso8601(unit_test.start_timestamp()));
- OutputXmlAttribute(stream, kTestsuites, "time",
- FormatTimeInMillisAsSeconds(unit_test.elapsed_time()));
if (GTEST_FLAG(shuffle)) {
OutputXmlAttribute(stream, kTestsuites, "random_seed",
@@ -5333,6 +5847,13 @@
if (unit_test.GetTestSuite(i)->reportable_test_count() > 0)
PrintXmlTestSuite(stream, *unit_test.GetTestSuite(i));
}
+
+ // If there was a test failure outside of one of the test suites (like in a
+ // test environment), include that in the output.
+ if (unit_test.ad_hoc_test_result().Failed()) {
+ OutputXmlTestSuiteForTestResult(stream, unit_test.ad_hoc_test_result());
+ }
+
*stream << "</" << kTestsuites << ">\n";
}
@@ -5423,6 +5944,16 @@
const std::string& indent,
bool comma = true);
+ // Streams a test suite JSON stanza containing the given test result.
+ //
+ // Requires: result.Failed()
+ static void OutputJsonTestSuiteForTestResult(::std::ostream* stream,
+ const TestResult& result);
+
+ // Streams a JSON representation of a TestResult object.
+ static void OutputJsonTestResult(::std::ostream* stream,
+ const TestResult& result);
+
// Streams a JSON representation of a TestInfo object.
static void OutputJsonTestInfo(::std::ostream* stream,
const char* test_suite_name,
@@ -5529,7 +6060,7 @@
String::FormatIntWidth2(time_struct.tm_sec) + "Z";
}
-static inline std::string Indent(int width) {
+static inline std::string Indent(size_t width) {
return std::string(width, ' ');
}
@@ -5541,7 +6072,7 @@
const std::string& indent,
bool comma) {
const std::vector<std::string>& allowed_names =
- GetReservedAttributesForElement(element_name);
+ GetReservedOutputAttributesForElement(element_name);
GTEST_CHECK_(std::find(allowed_names.begin(), allowed_names.end(), name) !=
allowed_names.end())
@@ -5561,7 +6092,7 @@
const std::string& indent,
bool comma) {
const std::vector<std::string>& allowed_names =
- GetReservedAttributesForElement(element_name);
+ GetReservedOutputAttributesForElement(element_name);
GTEST_CHECK_(std::find(allowed_names.begin(), allowed_names.end(), name) !=
allowed_names.end())
@@ -5573,6 +6104,48 @@
*stream << ",\n";
}
+// Streams a test suite JSON stanza containing the given test result.
+void JsonUnitTestResultPrinter::OutputJsonTestSuiteForTestResult(
+ ::std::ostream* stream, const TestResult& result) {
+ // Output the boilerplate for a new test suite.
+ *stream << Indent(4) << "{\n";
+ OutputJsonKey(stream, "testsuite", "name", "NonTestSuiteFailure", Indent(6));
+ OutputJsonKey(stream, "testsuite", "tests", 1, Indent(6));
+ if (!GTEST_FLAG(list_tests)) {
+ OutputJsonKey(stream, "testsuite", "failures", 1, Indent(6));
+ OutputJsonKey(stream, "testsuite", "disabled", 0, Indent(6));
+ OutputJsonKey(stream, "testsuite", "skipped", 0, Indent(6));
+ OutputJsonKey(stream, "testsuite", "errors", 0, Indent(6));
+ OutputJsonKey(stream, "testsuite", "time",
+ FormatTimeInMillisAsDuration(result.elapsed_time()),
+ Indent(6));
+ OutputJsonKey(stream, "testsuite", "timestamp",
+ FormatEpochTimeInMillisAsRFC3339(result.start_timestamp()),
+ Indent(6));
+ }
+ *stream << Indent(6) << "\"testsuite\": [\n";
+
+ // Output the boilerplate for a new test case.
+ *stream << Indent(8) << "{\n";
+ OutputJsonKey(stream, "testcase", "name", "", Indent(10));
+ OutputJsonKey(stream, "testcase", "status", "RUN", Indent(10));
+ OutputJsonKey(stream, "testcase", "result", "COMPLETED", Indent(10));
+ OutputJsonKey(stream, "testcase", "timestamp",
+ FormatEpochTimeInMillisAsRFC3339(result.start_timestamp()),
+ Indent(10));
+ OutputJsonKey(stream, "testcase", "time",
+ FormatTimeInMillisAsDuration(result.elapsed_time()),
+ Indent(10));
+ OutputJsonKey(stream, "testcase", "classname", "", Indent(10), false);
+ *stream << TestPropertiesAsJson(result, Indent(10));
+
+ // Output the actual test result.
+ OutputJsonTestResult(stream, result);
+
+ // Finish the test suite.
+ *stream << "\n" << Indent(6) << "]\n" << Indent(4) << "}";
+}
+
// Prints a JSON representation of a TestInfo object.
void JsonUnitTestResultPrinter::OutputJsonTestInfo(::std::ostream* stream,
const char* test_suite_name,
@@ -5599,16 +6172,29 @@
return;
}
- OutputJsonKey(
- stream, kTestsuite, "status",
- result.Skipped() ? "SKIPPED" : test_info.should_run() ? "RUN" : "NOTRUN",
- kIndent);
+ OutputJsonKey(stream, kTestsuite, "status",
+ test_info.should_run() ? "RUN" : "NOTRUN", kIndent);
+ OutputJsonKey(stream, kTestsuite, "result",
+ test_info.should_run()
+ ? (result.Skipped() ? "SKIPPED" : "COMPLETED")
+ : "SUPPRESSED",
+ kIndent);
+ OutputJsonKey(stream, kTestsuite, "timestamp",
+ FormatEpochTimeInMillisAsRFC3339(result.start_timestamp()),
+ kIndent);
OutputJsonKey(stream, kTestsuite, "time",
FormatTimeInMillisAsDuration(result.elapsed_time()), kIndent);
OutputJsonKey(stream, kTestsuite, "classname", test_suite_name, kIndent,
false);
*stream << TestPropertiesAsJson(result, kIndent);
+ OutputJsonTestResult(stream, result);
+}
+
+void JsonUnitTestResultPrinter::OutputJsonTestResult(::std::ostream* stream,
+ const TestResult& result) {
+ const std::string kIndent = Indent(10);
+
int failures = 0;
for (int i = 0; i < result.total_part_count(); ++i) {
const TestPartResult& part = result.GetTestPartResult(i);
@@ -5649,6 +6235,10 @@
OutputJsonKey(stream, kTestsuite, "disabled",
test_suite.reportable_disabled_test_count(), kIndent);
OutputJsonKey(stream, kTestsuite, "errors", 0, kIndent);
+ OutputJsonKey(
+ stream, kTestsuite, "timestamp",
+ FormatEpochTimeInMillisAsRFC3339(test_suite.start_timestamp()),
+ kIndent);
OutputJsonKey(stream, kTestsuite, "time",
FormatTimeInMillisAsDuration(test_suite.elapsed_time()),
kIndent, false);
@@ -5715,6 +6305,12 @@
}
}
+ // If there was a test failure outside of one of the test suites (like in a
+ // test environment), include that in the output.
+ if (unit_test.ad_hoc_test_result().Failed()) {
+ OutputJsonTestSuiteForTestResult(stream, unit_test.ad_hoc_test_result());
+ }
+
*stream << "\n" << kIndent << "]\n" << "}\n";
}
@@ -5914,6 +6510,7 @@
}
~ScopedPrematureExitFile() {
+#if !defined GTEST_OS_ESP8266
if (!premature_exit_filepath_.empty()) {
int retval = remove(premature_exit_filepath_.c_str());
if (retval) {
@@ -5922,6 +6519,7 @@
<< retval;
}
}
+#endif
}
private:
@@ -6109,11 +6707,12 @@
return impl()->elapsed_time();
}
-// Returns true iff the unit test passed (i.e. all test suites passed).
+// Returns true if and only if the unit test passed (i.e. all test suites
+// passed).
bool UnitTest::Passed() const { return impl()->Passed(); }
-// Returns true iff the unit test failed (i.e. some test suite failed
-// or something outside of all tests failed).
+// Returns true if and only if the unit test failed (i.e. some test suite
+// failed or something outside of all tests failed).
bool UnitTest::Failed() const { return impl()->Failed(); }
// Gets the i-th test suite among all the test suites. i can range from 0 to
@@ -6183,8 +6782,7 @@
if (impl_->gtest_trace_stack().size() > 0) {
msg << "\n" << GTEST_NAME_ << " trace:";
- for (int i = static_cast<int>(impl_->gtest_trace_stack().size());
- i > 0; --i) {
+ for (size_t i = impl_->gtest_trace_stack().size(); i > 0; --i) {
const internal::TraceInfo& trace = impl_->gtest_trace_stack()[i - 1];
msg << "\n" << internal::FormatFileLocation(trace.file, trace.line)
<< " " << trace.message;
@@ -6314,6 +6912,16 @@
_set_abort_behavior(
0x0, // Clear the following flags:
_WRITE_ABORT_MSG | _CALL_REPORTFAULT); // pop-up window, core dump.
+
+ // In debug mode, the Windows CRT can crash with an assertion over invalid
+ // input (e.g. passing an invalid file descriptor). The default handling
+ // for these assertions is to pop up a dialog and wait for user input.
+ // Instead ask the CRT to dump such assertions to stderr non-interactively.
+ if (!IsDebuggerPresent()) {
+ (void)_CrtSetReportMode(_CRT_ASSERT,
+ _CRTDBG_MODE_FILE | _CRTDBG_MODE_DEBUG);
+ (void)_CrtSetReportFile(_CRT_ASSERT, _CRTDBG_FILE_STDERR);
+ }
# endif
}
#endif // GTEST_OS_WINDOWS
@@ -6525,6 +7133,10 @@
// to shut down the default XML output before invoking RUN_ALL_TESTS.
ConfigureXmlOutput();
+ if (GTEST_FLAG(brief)) {
+ listeners()->SetDefaultResultPrinter(new BriefUnitTestResultPrinter);
+ }
+
#if GTEST_CAN_STREAM_RESULTS_
// Configures listeners for streaming test results to the specified server.
ConfigureStreamingOutput();
@@ -6552,7 +7164,7 @@
// Constructor.
explicit TestSuiteNameIs(const std::string& name) : name_(name) {}
- // Returns true iff the name of test_suite matches name_.
+ // Returns true if and only if the name of test_suite matches name_.
bool operator()(const TestSuite* test_suite) const {
return test_suite != nullptr &&
strcmp(test_suite->name(), name_.c_str()) == 0;
@@ -6570,10 +7182,10 @@
// Arguments:
//
// test_suite_name: name of the test suite
-// type_param: the name of the test suite's type parameter, or NULL if
-// this is not a typed or a type-parameterized test suite.
-// set_up_tc: pointer to the function that sets up the test suite
-// tear_down_tc: pointer to the function that tears down the test suite
+// type_param: the name of the test suite's type parameter, or NULL if
+// this is not a typed or a type-parameterized test suite.
+// set_up_tc: pointer to the function that sets up the test suite
+// tear_down_tc: pointer to the function that tears down the test suite
TestSuite* UnitTestImpl::GetTestSuite(
const char* test_suite_name, const char* type_param,
internal::SetUpTestSuiteFunc set_up_tc,
@@ -6623,7 +7235,8 @@
// All other functions called from RunAllTests() may safely assume that
// parameterized tests are ready to be counted and run.
bool UnitTestImpl::RunAllTests() {
- // True iff Google Test is initialized before RUN_ALL_TESTS() is called.
+ // True if and only if Google Test is initialized before RUN_ALL_TESTS() is
+ // called.
const bool gtest_is_initialized_before_run_all_tests = GTestIsInitialized();
// Do not run any test if the --help flag was specified.
@@ -6639,7 +7252,7 @@
// protocol.
internal::WriteToShardStatusFileIfNeeded();
- // True iff we are in a subprocess for running a thread-safe-style
+ // True if and only if we are in a subprocess for running a thread-safe-style
// death test.
bool in_subprocess_for_death_test = false;
@@ -6672,7 +7285,7 @@
random_seed_ = GTEST_FLAG(shuffle) ?
GetRandomSeedFromFlag(GTEST_FLAG(random_seed)) : 0;
- // True iff at least one test has failed.
+ // True if and only if at least one test has failed.
bool failed = false;
TestEventListener* repeater = listeners()->repeater();
@@ -6684,17 +7297,17 @@
// when we are inside the subprocess of a death test.
const int repeat = in_subprocess_for_death_test ? 1 : GTEST_FLAG(repeat);
// Repeats forever if the repeat count is negative.
- const bool forever = repeat < 0;
- for (int i = 0; forever || i != repeat; i++) {
+ const bool gtest_repeat_forever = repeat < 0;
+ for (int i = 0; gtest_repeat_forever || i != repeat; i++) {
// We want to preserve failures generated by ad-hoc test
// assertions executed before RUN_ALL_TESTS().
ClearNonAdHocTestResult();
- const TimeInMillis start = GetTimeInMillis();
+ Timer timer;
// Shuffles test suites and tests if requested.
if (has_tests_to_run && GTEST_FLAG(shuffle)) {
- random()->Reseed(random_seed_);
+ random()->Reseed(static_cast<uint32_t>(random_seed_));
// This should be done before calling OnTestIterationStart(),
// such that a test event listener can see the actual test order
// in the event.
@@ -6711,12 +7324,41 @@
ForEach(environments_, SetUpEnvironment);
repeater->OnEnvironmentsSetUpEnd(*parent_);
- // Runs the tests only if there was no fatal failure during global
- // set-up.
- if (!Test::HasFatalFailure()) {
+ // Runs the tests only if there was no fatal failure or skip triggered
+ // during global set-up.
+ if (Test::IsSkipped()) {
+ // Emit diagnostics when global set-up calls skip, as it will not be
+ // emitted by default.
+ TestResult& test_result =
+ *internal::GetUnitTestImpl()->current_test_result();
+ for (int j = 0; j < test_result.total_part_count(); ++j) {
+ const TestPartResult& test_part_result =
+ test_result.GetTestPartResult(j);
+ if (test_part_result.type() == TestPartResult::kSkip) {
+ const std::string& result = test_part_result.message();
+ printf("%s\n", result.c_str());
+ }
+ }
+ fflush(stdout);
+ } else if (!Test::HasFatalFailure()) {
for (int test_index = 0; test_index < total_test_suite_count();
test_index++) {
GetMutableSuiteCase(test_index)->Run();
+ if (GTEST_FLAG(fail_fast) &&
+ GetMutableSuiteCase(test_index)->Failed()) {
+ for (int j = test_index + 1; j < total_test_suite_count(); j++) {
+ GetMutableSuiteCase(j)->Skip();
+ }
+ break;
+ }
+ }
+ } else if (Test::HasFatalFailure()) {
+ // If there was a fatal failure during the global setup then we know we
+ // aren't going to run any tests. Explicitly mark all of the tests as
+ // skipped to make this obvious in the output.
+ for (int test_index = 0; test_index < total_test_suite_count();
+ test_index++) {
+ GetMutableSuiteCase(test_index)->Skip();
}
}
@@ -6727,7 +7369,7 @@
repeater->OnEnvironmentsTearDownEnd(*parent_);
}
- elapsed_time_ = GetTimeInMillis() - start;
+ elapsed_time_ = timer.Elapsed();
// Tells the unit test event listener that the tests have just finished.
repeater->OnTestIterationEnd(*parent_, i);
@@ -6755,14 +7397,14 @@
if (!gtest_is_initialized_before_run_all_tests) {
ColoredPrintf(
- COLOR_RED,
+ GTestColor::kRed,
"\nIMPORTANT NOTICE - DO NOT IGNORE:\n"
"This test program did NOT call " GTEST_INIT_GOOGLE_TEST_NAME_
"() before calling RUN_ALL_TESTS(). This is INVALID. Soon " GTEST_NAME_
" will start to enforce the valid usage. "
"Please fix it ASAP, or IT WILL START TO FAIL.\n"); // NOLINT
#if GTEST_FOR_GOOGLE_
- ColoredPrintf(COLOR_RED,
+ ColoredPrintf(GTestColor::kRed,
"For more details, see http://wiki/Main/ValidGUnitMain.\n");
#endif // GTEST_FOR_GOOGLE_
}
@@ -6779,7 +7421,7 @@
if (test_shard_file != nullptr) {
FILE* const file = posix::FOpen(test_shard_file, "w");
if (file == nullptr) {
- ColoredPrintf(COLOR_RED,
+ ColoredPrintf(GTestColor::kRed,
"Could not write to the test shard status file \"%s\" "
"specified by the %s environment variable.\n",
test_shard_file, kTestShardStatusFile);
@@ -6803,8 +7445,8 @@
return false;
}
- const Int32 total_shards = Int32FromEnvOrDie(total_shards_env, -1);
- const Int32 shard_index = Int32FromEnvOrDie(shard_index_env, -1);
+ const int32_t total_shards = Int32FromEnvOrDie(total_shards_env, -1);
+ const int32_t shard_index = Int32FromEnvOrDie(shard_index_env, -1);
if (total_shards == -1 && shard_index == -1) {
return false;
@@ -6813,7 +7455,7 @@
<< "Invalid environment variables: you have "
<< kTestShardIndex << " = " << shard_index
<< ", but have left " << kTestTotalShards << " unset.\n";
- ColoredPrintf(COLOR_RED, "%s", msg.GetString().c_str());
+ ColoredPrintf(GTestColor::kRed, "%s", msg.GetString().c_str());
fflush(stdout);
exit(EXIT_FAILURE);
} else if (total_shards != -1 && shard_index == -1) {
@@ -6821,7 +7463,7 @@
<< "Invalid environment variables: you have "
<< kTestTotalShards << " = " << total_shards
<< ", but have left " << kTestShardIndex << " unset.\n";
- ColoredPrintf(COLOR_RED, "%s", msg.GetString().c_str());
+ ColoredPrintf(GTestColor::kRed, "%s", msg.GetString().c_str());
fflush(stdout);
exit(EXIT_FAILURE);
} else if (shard_index < 0 || shard_index >= total_shards) {
@@ -6830,7 +7472,7 @@
<< kTestShardIndex << " < " << kTestTotalShards
<< ", but you have " << kTestShardIndex << "=" << shard_index
<< ", " << kTestTotalShards << "=" << total_shards << ".\n";
- ColoredPrintf(COLOR_RED, "%s", msg.GetString().c_str());
+ ColoredPrintf(GTestColor::kRed, "%s", msg.GetString().c_str());
fflush(stdout);
exit(EXIT_FAILURE);
}
@@ -6841,13 +7483,13 @@
// Parses the environment variable var as an Int32. If it is unset,
// returns default_val. If it is not an Int32, prints an error
// and aborts.
-Int32 Int32FromEnvOrDie(const char* var, Int32 default_val) {
+int32_t Int32FromEnvOrDie(const char* var, int32_t default_val) {
const char* str_val = posix::GetEnv(var);
if (str_val == nullptr) {
return default_val;
}
- Int32 result;
+ int32_t result;
if (!ParseInt32(Message() << "The value of environment variable " << var,
str_val, &result)) {
exit(EXIT_FAILURE);
@@ -6856,8 +7498,8 @@
}
// Given the total number of shards, the shard index, and the test id,
-// returns true iff the test should be run on this shard. The test id is
-// some arbitrary but unique non-negative integer assigned to each test
+// returns true if and only if the test should be run on this shard. The test id
+// is some arbitrary but unique non-negative integer assigned to each test
// method. Assumes that 0 <= shard_index < total_shards.
bool ShouldRunTestOnShard(int total_shards, int shard_index, int test_id) {
return (test_id % total_shards) == shard_index;
@@ -6871,9 +7513,9 @@
// https://github.com/google/googletest/blob/master/googletest/docs/advanced.md
// . Returns the number of tests that should run.
int UnitTestImpl::FilterTests(ReactionToSharding shard_tests) {
- const Int32 total_shards = shard_tests == HONOR_SHARDING_PROTOCOL ?
+ const int32_t total_shards = shard_tests == HONOR_SHARDING_PROTOCOL ?
Int32FromEnvOrDie(kTestTotalShards, -1) : -1;
- const Int32 shard_index = shard_tests == HONOR_SHARDING_PROTOCOL ?
+ const int32_t shard_index = shard_tests == HONOR_SHARDING_PROTOCOL ?
Int32FromEnvOrDie(kTestShardIndex, -1) : -1;
// num_runnable_tests are the number of tests that will
@@ -7162,12 +7804,11 @@
return true;
}
-// Parses a string for an Int32 flag, in the form of
-// "--flag=value".
+// Parses a string for an int32_t flag, in the form of "--flag=value".
//
// On success, stores the value of the flag in *value, and returns
// true. On failure, returns false without changing *value.
-bool ParseInt32Flag(const char* str, const char* flag, Int32* value) {
+bool ParseInt32Flag(const char* str, const char* flag, int32_t* value) {
// Gets the value of the flag as a string.
const char* const value_str = ParseFlagValue(str, flag, false);
@@ -7179,8 +7820,7 @@
value_str, value);
}
-// Parses a string for a string flag, in the form of
-// "--flag=value".
+// Parses a string for a string flag, in the form of "--flag=value".
//
// On success, stores the value of the flag in *value, and returns
// true. On failure, returns false without changing *value.
@@ -7222,7 +7862,7 @@
// @D changes to the default terminal text color.
//
static void PrintColorEncoded(const char* str) {
- GTestColor color = COLOR_DEFAULT; // The current color.
+ GTestColor color = GTestColor::kDefault; // The current color.
// Conceptually, we split the string into segments divided by escape
// sequences. Then we print one segment at a time. At the end of
@@ -7242,13 +7882,13 @@
if (ch == '@') {
ColoredPrintf(color, "@");
} else if (ch == 'D') {
- color = COLOR_DEFAULT;
+ color = GTestColor::kDefault;
} else if (ch == 'R') {
- color = COLOR_RED;
+ color = GTestColor::kRed;
} else if (ch == 'G') {
- color = COLOR_GREEN;
+ color = GTestColor::kGreen;
} else if (ch == 'Y') {
- color = COLOR_YELLOW;
+ color = GTestColor::kYellow;
} else {
--str;
}
@@ -7256,98 +7896,126 @@
}
static const char kColorEncodedHelpMessage[] =
-"This program contains tests written using " GTEST_NAME_ ". You can use the\n"
-"following command line flags to control its behavior:\n"
-"\n"
-"Test Selection:\n"
-" @G--" GTEST_FLAG_PREFIX_ "list_tests@D\n"
-" List the names of all tests instead of running them. The name of\n"
-" TEST(Foo, Bar) is \"Foo.Bar\".\n"
-" @G--" GTEST_FLAG_PREFIX_ "filter=@YPOSTIVE_PATTERNS"
+ "This program contains tests written using " GTEST_NAME_
+ ". You can use the\n"
+ "following command line flags to control its behavior:\n"
+ "\n"
+ "Test Selection:\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "list_tests@D\n"
+ " List the names of all tests instead of running them. The name of\n"
+ " TEST(Foo, Bar) is \"Foo.Bar\".\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "filter=@YPOSITIVE_PATTERNS"
"[@G-@YNEGATIVE_PATTERNS]@D\n"
-" Run only the tests whose name matches one of the positive patterns but\n"
-" none of the negative patterns. '?' matches any single character; '*'\n"
-" matches any substring; ':' separates two patterns.\n"
-" @G--" GTEST_FLAG_PREFIX_ "also_run_disabled_tests@D\n"
-" Run all disabled tests too.\n"
-"\n"
-"Test Execution:\n"
-" @G--" GTEST_FLAG_PREFIX_ "repeat=@Y[COUNT]@D\n"
-" Run the tests repeatedly; use a negative count to repeat forever.\n"
-" @G--" GTEST_FLAG_PREFIX_ "shuffle@D\n"
-" Randomize tests' orders on every iteration.\n"
-" @G--" GTEST_FLAG_PREFIX_ "random_seed=@Y[NUMBER]@D\n"
-" Random number seed to use for shuffling test orders (between 1 and\n"
-" 99999, or 0 to use a seed based on the current time).\n"
-"\n"
-"Test Output:\n"
-" @G--" GTEST_FLAG_PREFIX_ "color=@Y(@Gyes@Y|@Gno@Y|@Gauto@Y)@D\n"
-" Enable/disable colored output. The default is @Gauto@D.\n"
-" -@G-" GTEST_FLAG_PREFIX_ "print_time=0@D\n"
-" Don't print the elapsed time of each test.\n"
-" @G--" GTEST_FLAG_PREFIX_ "output=@Y(@Gjson@Y|@Gxml@Y)[@G:@YDIRECTORY_PATH@G"
- GTEST_PATH_SEP_ "@Y|@G:@YFILE_PATH]@D\n"
-" Generate a JSON or XML report in the given directory or with the given\n"
-" file name. @YFILE_PATH@D defaults to @Gtest_detail.xml@D.\n"
+ " Run only the tests whose name matches one of the positive patterns "
+ "but\n"
+ " none of the negative patterns. '?' matches any single character; "
+ "'*'\n"
+ " matches any substring; ':' separates two patterns.\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "also_run_disabled_tests@D\n"
+ " Run all disabled tests too.\n"
+ "\n"
+ "Test Execution:\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "repeat=@Y[COUNT]@D\n"
+ " Run the tests repeatedly; use a negative count to repeat forever.\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "shuffle@D\n"
+ " Randomize tests' orders on every iteration.\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "random_seed=@Y[NUMBER]@D\n"
+ " Random number seed to use for shuffling test orders (between 1 and\n"
+ " 99999, or 0 to use a seed based on the current time).\n"
+ "\n"
+ "Test Output:\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "color=@Y(@Gyes@Y|@Gno@Y|@Gauto@Y)@D\n"
+ " Enable/disable colored output. The default is @Gauto@D.\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "brief=1@D\n"
+ " Only print test failures.\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "print_time=0@D\n"
+ " Don't print the elapsed time of each test.\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "output=@Y(@Gjson@Y|@Gxml@Y)[@G:@YDIRECTORY_PATH@G" GTEST_PATH_SEP_
+ "@Y|@G:@YFILE_PATH]@D\n"
+ " Generate a JSON or XML report in the given directory or with the "
+ "given\n"
+ " file name. @YFILE_PATH@D defaults to @Gtest_detail.xml@D.\n"
# if GTEST_CAN_STREAM_RESULTS_
-" @G--" GTEST_FLAG_PREFIX_ "stream_result_to=@YHOST@G:@YPORT@D\n"
-" Stream test results to the given server.\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "stream_result_to=@YHOST@G:@YPORT@D\n"
+ " Stream test results to the given server.\n"
# endif // GTEST_CAN_STREAM_RESULTS_
-"\n"
-"Assertion Behavior:\n"
+ "\n"
+ "Assertion Behavior:\n"
# if GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS
-" @G--" GTEST_FLAG_PREFIX_ "death_test_style=@Y(@Gfast@Y|@Gthreadsafe@Y)@D\n"
-" Set the default death test style.\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "death_test_style=@Y(@Gfast@Y|@Gthreadsafe@Y)@D\n"
+ " Set the default death test style.\n"
# endif // GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS
-" @G--" GTEST_FLAG_PREFIX_ "break_on_failure@D\n"
-" Turn assertion failures into debugger break-points.\n"
-" @G--" GTEST_FLAG_PREFIX_ "throw_on_failure@D\n"
-" Turn assertion failures into C++ exceptions for use by an external\n"
-" test framework.\n"
-" @G--" GTEST_FLAG_PREFIX_ "catch_exceptions=0@D\n"
-" Do not report exceptions as test failures. Instead, allow them\n"
-" to crash the program or throw a pop-up (on Windows).\n"
-"\n"
-"Except for @G--" GTEST_FLAG_PREFIX_ "list_tests@D, you can alternatively set "
+ " @G--" GTEST_FLAG_PREFIX_
+ "break_on_failure@D\n"
+ " Turn assertion failures into debugger break-points.\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "throw_on_failure@D\n"
+ " Turn assertion failures into C++ exceptions for use by an external\n"
+ " test framework.\n"
+ " @G--" GTEST_FLAG_PREFIX_
+ "catch_exceptions=0@D\n"
+ " Do not report exceptions as test failures. Instead, allow them\n"
+ " to crash the program or throw a pop-up (on Windows).\n"
+ "\n"
+ "Except for @G--" GTEST_FLAG_PREFIX_
+ "list_tests@D, you can alternatively set "
"the corresponding\n"
-"environment variable of a flag (all letters in upper-case). For example, to\n"
-"disable colored text output, you can either specify @G--" GTEST_FLAG_PREFIX_
+ "environment variable of a flag (all letters in upper-case). For example, "
+ "to\n"
+ "disable colored text output, you can either specify "
+ "@G--" GTEST_FLAG_PREFIX_
"color=no@D or set\n"
-"the @G" GTEST_FLAG_PREFIX_UPPER_ "COLOR@D environment variable to @Gno@D.\n"
-"\n"
-"For more information, please read the " GTEST_NAME_ " documentation at\n"
-"@G" GTEST_PROJECT_URL_ "@D. If you find a bug in " GTEST_NAME_ "\n"
-"(not one in your own code or tests), please report it to\n"
-"@G<" GTEST_DEV_EMAIL_ ">@D.\n";
+ "the @G" GTEST_FLAG_PREFIX_UPPER_
+ "COLOR@D environment variable to @Gno@D.\n"
+ "\n"
+ "For more information, please read the " GTEST_NAME_
+ " documentation at\n"
+ "@G" GTEST_PROJECT_URL_ "@D. If you find a bug in " GTEST_NAME_
+ "\n"
+ "(not one in your own code or tests), please report it to\n"
+ "@G<" GTEST_DEV_EMAIL_ ">@D.\n";
static bool ParseGoogleTestFlag(const char* const arg) {
return ParseBoolFlag(arg, kAlsoRunDisabledTestsFlag,
                       &GTEST_FLAG(also_run_disabled_tests)) ||
-      ParseBoolFlag(arg, kBreakOnFailureFlag,
-                    &GTEST_FLAG(break_on_failure)) ||
-      ParseBoolFlag(arg, kCatchExceptionsFlag,
-                    &GTEST_FLAG(catch_exceptions)) ||
-      ParseStringFlag(arg, kColorFlag, &GTEST_FLAG(color)) ||
-      ParseStringFlag(arg, kDeathTestStyleFlag,
-                      &GTEST_FLAG(death_test_style)) ||
-      ParseBoolFlag(arg, kDeathTestUseFork,
-                    &GTEST_FLAG(death_test_use_fork)) ||
-      ParseStringFlag(arg, kFilterFlag, &GTEST_FLAG(filter)) ||
-      ParseStringFlag(arg, kInternalRunDeathTestFlag,
-                      &GTEST_FLAG(internal_run_death_test)) ||
-      ParseBoolFlag(arg, kListTestsFlag, &GTEST_FLAG(list_tests)) ||
-      ParseStringFlag(arg, kOutputFlag, &GTEST_FLAG(output)) ||
-      ParseBoolFlag(arg, kPrintTimeFlag, &GTEST_FLAG(print_time)) ||
-      ParseBoolFlag(arg, kPrintUTF8Flag, &GTEST_FLAG(print_utf8)) ||
-      ParseInt32Flag(arg, kRandomSeedFlag, &GTEST_FLAG(random_seed)) ||
-      ParseInt32Flag(arg, kRepeatFlag, &GTEST_FLAG(repeat)) ||
-      ParseBoolFlag(arg, kShuffleFlag, &GTEST_FLAG(shuffle)) ||
-      ParseInt32Flag(arg, kStackTraceDepthFlag,
-                     &GTEST_FLAG(stack_trace_depth)) ||
-      ParseStringFlag(arg, kStreamResultToFlag,
-                      &GTEST_FLAG(stream_result_to)) ||
-      ParseBoolFlag(arg, kThrowOnFailureFlag,
-                    &GTEST_FLAG(throw_on_failure));
+         ParseBoolFlag(arg, kBreakOnFailureFlag,
+                       &GTEST_FLAG(break_on_failure)) ||
+         ParseBoolFlag(arg, kCatchExceptionsFlag,
+                       &GTEST_FLAG(catch_exceptions)) ||
+         ParseStringFlag(arg, kColorFlag, &GTEST_FLAG(color)) ||
+         ParseStringFlag(arg, kDeathTestStyleFlag,
+                         &GTEST_FLAG(death_test_style)) ||
+         ParseBoolFlag(arg, kDeathTestUseFork,
+                       &GTEST_FLAG(death_test_use_fork)) ||
+         ParseBoolFlag(arg, kFailFast, &GTEST_FLAG(fail_fast)) ||
+         ParseStringFlag(arg, kFilterFlag, &GTEST_FLAG(filter)) ||
+         ParseStringFlag(arg, kInternalRunDeathTestFlag,
+                         &GTEST_FLAG(internal_run_death_test)) ||
+         ParseBoolFlag(arg, kListTestsFlag, &GTEST_FLAG(list_tests)) ||
+         ParseStringFlag(arg, kOutputFlag, &GTEST_FLAG(output)) ||
+         ParseBoolFlag(arg, kBriefFlag, &GTEST_FLAG(brief)) ||
+         ParseBoolFlag(arg, kPrintTimeFlag, &GTEST_FLAG(print_time)) ||
+         ParseBoolFlag(arg, kPrintUTF8Flag, &GTEST_FLAG(print_utf8)) ||
+         ParseInt32Flag(arg, kRandomSeedFlag, &GTEST_FLAG(random_seed)) ||
+         ParseInt32Flag(arg, kRepeatFlag, &GTEST_FLAG(repeat)) ||
+         ParseBoolFlag(arg, kShuffleFlag, &GTEST_FLAG(shuffle)) ||
+         ParseInt32Flag(arg, kStackTraceDepthFlag,
+                        &GTEST_FLAG(stack_trace_depth)) ||
+         ParseStringFlag(arg, kStreamResultToFlag,
+                         &GTEST_FLAG(stream_result_to)) ||
+         ParseBoolFlag(arg, kThrowOnFailureFlag, &GTEST_FLAG(throw_on_failure));
}
#if GTEST_USE_OWN_FLAGFILE_FLAG_
@@ -7430,7 +8098,7 @@
void ParseGoogleTestFlagsOnly(int* argc, char** argv) {
ParseGoogleTestFlagsOnlyImpl(argc, argv);
- // Fix the value of *_NSGetArgc() on macOS, but iff
+ // Fix the value of *_NSGetArgc() on macOS, but if and only if
// *_NSGetArgv() == argv
// Only applicable to char** version of argv
#if GTEST_OS_MAC
@@ -7517,20 +8185,31 @@
std::string TempDir() {
#if defined(GTEST_CUSTOM_TEMPDIR_FUNCTION_)
return GTEST_CUSTOM_TEMPDIR_FUNCTION_();
-#endif
-
-#if GTEST_OS_WINDOWS_MOBILE
+#elif GTEST_OS_WINDOWS_MOBILE
return "\\temp\\";
#elif GTEST_OS_WINDOWS
const char* temp_dir = internal::posix::GetEnv("TEMP");
- if (temp_dir == nullptr || temp_dir[0] == '\0')
+ if (temp_dir == nullptr || temp_dir[0] == '\0') {
return "\\temp\\";
- else if (temp_dir[strlen(temp_dir) - 1] == '\\')
+ } else if (temp_dir[strlen(temp_dir) - 1] == '\\') {
return temp_dir;
- else
+ } else {
return std::string(temp_dir) + "\\";
+ }
#elif GTEST_OS_LINUX_ANDROID
- return "/sdcard/";
+ const char* temp_dir = internal::posix::GetEnv("TEST_TMPDIR");
+ if (temp_dir == nullptr || temp_dir[0] == '\0') {
+ return "/data/local/tmp/";
+ } else {
+ return temp_dir;
+ }
+#elif GTEST_OS_LINUX
+ const char* temp_dir = internal::posix::GetEnv("TEST_TMPDIR");
+ if (temp_dir == nullptr || temp_dir[0] == '\0') {
+ return "/tmp/";
+ } else {
+ return temp_dir;
+ }
#else
return "/tmp/";
#endif // GTEST_OS_WINDOWS_MOBILE
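The hunk above makes TempDir() honor the TEST_TMPDIR environment variable on Linux and Android before falling back to a platform default. A minimal usage sketch follows; the file name is made up, and note that the built-in defaults ("/tmp/", "\\temp\\", "/data/local/tmp/") end with a separator while a TEST_TMPDIR override may not, so production code may want to normalize.

#include <cstdio>
#include <string>
#include "gtest/gtest.h"

TEST(TempDirSketch, WritesScratchFile) {
  // Hypothetical scratch file inside the framework-provided temp directory.
  const std::string path = ::testing::TempDir() + "scratch.txt";
  std::FILE* f = std::fopen(path.c_str(), "w");
  ASSERT_NE(f, nullptr);
  std::fputs("hello", f);
  std::fclose(f);
  std::remove(path.c_str());
}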
@@ -7589,6 +8268,7 @@
// This file implements death tests.
+#include <functional>
#include <utility>
@@ -7623,6 +8303,7 @@
# include <lib/fdio/fd.h>
# include <lib/fdio/io.h>
# include <lib/fdio/spawn.h>
+# include <lib/zx/channel.h>
# include <lib/zx/port.h>
# include <lib/zx/process.h>
# include <lib/zx/socket.h>
@@ -7673,8 +8354,8 @@
"Indicates the file, line number, temporal index of "
"the single death test to run, and a file descriptor to "
"which a success code may be sent, all separated by "
- "the '|' characters. This flag is specified if and only if the current "
- "process is a sub-process launched for running a thread-safe "
+ "the '|' characters. This flag is specified if and only if the "
+ "current process is a sub-process launched for running a thread-safe "
"death test. FOR INTERNAL USE ONLY.");
} // namespace internal
@@ -7798,7 +8479,7 @@
msg << "detected " << thread_count << " threads.";
}
msg << " See "
- "https://github.com/google/googletest/blob/master/googletest/docs/"
+ "https://github.com/google/googletest/blob/master/docs/"
"advanced.md#death-tests-and-threads"
<< " for more explanation and suggested solutions, especially if"
<< " this is the last message you see before your test times out.";
@@ -8114,8 +8795,8 @@
// status_ok: true if exit_status is acceptable in the context of
// this particular death test, which fails if it is false
//
-// Returns true iff all of the above conditions are met. Otherwise, the
-// first failing condition, in the order given above, is the one that is
+// Returns true if and only if all of the above conditions are met. Otherwise,
+// the first failing condition, in the order given above, is the one that is
// reported. Also sets the last death test message string.
bool DeathTestImpl::Passed(bool status_ok) {
if (!spawned())
@@ -8383,7 +9064,7 @@
std::string captured_stderr_;
zx::process child_process_;
- zx::port port_;
+ zx::channel exception_channel_;
zx::socket stderr_socket_;
};
@@ -8415,7 +9096,7 @@
}
int size() {
- return args_.size() - 1;
+ return static_cast<int>(args_.size()) - 1;
}
private:
@@ -8428,43 +9109,52 @@
int FuchsiaDeathTest::Wait() {
const int kProcessKey = 0;
const int kSocketKey = 1;
+ const int kExceptionKey = 2;
if (!spawned())
return 0;
- // Register to wait for the child process to terminate.
+ // Create a port to wait for socket/task/exception events.
zx_status_t status_zx;
- status_zx = child_process_.wait_async(
- port_, kProcessKey, ZX_PROCESS_TERMINATED, ZX_WAIT_ASYNC_ONCE);
+ zx::port port;
+ status_zx = zx::port::create(0, &port);
GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
+
+ // Register to wait for the child process to terminate.
+ status_zx = child_process_.wait_async(
+ port, kProcessKey, ZX_PROCESS_TERMINATED, 0);
+ GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
+
// Register to wait for the socket to be readable or closed.
status_zx = stderr_socket_.wait_async(
- port_, kSocketKey, ZX_SOCKET_READABLE | ZX_SOCKET_PEER_CLOSED,
- ZX_WAIT_ASYNC_REPEATING);
+ port, kSocketKey, ZX_SOCKET_READABLE | ZX_SOCKET_PEER_CLOSED, 0);
+ GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
+
+ // Register to wait for an exception.
+ status_zx = exception_channel_.wait_async(
+ port, kExceptionKey, ZX_CHANNEL_READABLE, 0);
GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
bool process_terminated = false;
bool socket_closed = false;
do {
zx_port_packet_t packet = {};
- status_zx = port_.wait(zx::time::infinite(), &packet);
+ status_zx = port.wait(zx::time::infinite(), &packet);
GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
- if (packet.key == kProcessKey) {
- if (ZX_PKT_IS_EXCEPTION(packet.type)) {
- // Process encountered an exception. Kill it directly rather than
- // letting other handlers process the event. We will get a second
- // kProcessKey event when the process actually terminates.
- status_zx = child_process_.kill();
- GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
- } else {
- // Process terminated.
- GTEST_DEATH_TEST_CHECK_(ZX_PKT_IS_SIGNAL_ONE(packet.type));
- GTEST_DEATH_TEST_CHECK_(packet.signal.observed & ZX_PROCESS_TERMINATED);
- process_terminated = true;
- }
+ if (packet.key == kExceptionKey) {
+ // Process encountered an exception. Kill it directly rather than
+ // letting other handlers process the event. We will get a kProcessKey
+ // event when the process actually terminates.
+ status_zx = child_process_.kill();
+ GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
+ } else if (packet.key == kProcessKey) {
+ // Process terminated.
+ GTEST_DEATH_TEST_CHECK_(ZX_PKT_IS_SIGNAL_ONE(packet.type));
+ GTEST_DEATH_TEST_CHECK_(packet.signal.observed & ZX_PROCESS_TERMINATED);
+ process_terminated = true;
} else if (packet.key == kSocketKey) {
- GTEST_DEATH_TEST_CHECK_(ZX_PKT_IS_SIGNAL_REP(packet.type));
+ GTEST_DEATH_TEST_CHECK_(ZX_PKT_IS_SIGNAL_ONE(packet.type));
if (packet.signal.observed & ZX_SOCKET_READABLE) {
// Read data from the socket.
constexpr size_t kBufferSize = 1024;
@@ -8481,6 +9171,9 @@
socket_closed = true;
} else {
GTEST_DEATH_TEST_CHECK_(status_zx == ZX_ERR_SHOULD_WAIT);
+ status_zx = stderr_socket_.wait_async(
+ port, kSocketKey, ZX_SOCKET_READABLE | ZX_SOCKET_PEER_CLOSED, 0);
+ GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
}
} else {
GTEST_DEATH_TEST_CHECK_(packet.signal.observed & ZX_SOCKET_PEER_CLOSED);
@@ -8492,12 +9185,12 @@
ReadAndInterpretStatusByte();
zx_info_process_t buffer;
- status_zx = child_process_.get_info(
- ZX_INFO_PROCESS, &buffer, sizeof(buffer), nullptr, nullptr);
+ status_zx = child_process_.get_info(ZX_INFO_PROCESS, &buffer, sizeof(buffer),
+ nullptr, nullptr);
GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
- GTEST_DEATH_TEST_CHECK_(buffer.exited);
- set_status(buffer.return_code);
+ GTEST_DEATH_TEST_CHECK_(buffer.flags & ZX_INFO_PROCESS_FLAG_EXITED);
+ set_status(static_cast<int>(buffer.return_code));
return status();
}
@@ -8540,16 +9233,16 @@
// Build the pipe for communication with the child.
zx_status_t status;
zx_handle_t child_pipe_handle;
- uint32_t type;
- status = fdio_pipe_half(&child_pipe_handle, &type);
- GTEST_DEATH_TEST_CHECK_(status >= 0);
- set_read_fd(status);
+ int child_pipe_fd;
+ status = fdio_pipe_half(&child_pipe_fd, &child_pipe_handle);
+ GTEST_DEATH_TEST_CHECK_(status == ZX_OK);
+ set_read_fd(child_pipe_fd);
// Set the pipe handle for the child.
fdio_spawn_action_t spawn_actions[2] = {};
fdio_spawn_action_t* add_handle_action = &spawn_actions[0];
add_handle_action->action = FDIO_SPAWN_ACTION_ADD_HANDLE;
- add_handle_action->h.id = PA_HND(type, kFuchsiaReadPipeFd);
+ add_handle_action->h.id = PA_HND(PA_FD, kFuchsiaReadPipeFd);
add_handle_action->h.handle = child_pipe_handle;
// Create a socket pair will be used to receive the child process' stderr.
@@ -8581,12 +9274,11 @@
child_job, ZX_JOB_POL_RELATIVE, ZX_JOB_POL_BASIC, &policy, 1);
GTEST_DEATH_TEST_CHECK_(status == ZX_OK);
- // Create an exception port and attach it to the |child_job|, to allow
+ // Create an exception channel attached to the |child_job|, to allow
// us to suppress the system default exception handler from firing.
- status = zx::port::create(0, &port_);
- GTEST_DEATH_TEST_CHECK_(status == ZX_OK);
- status = zx_task_bind_exception_port(
- child_job, port_.get(), 0 /* key */, 0 /*options */);
+ status =
+ zx_task_create_exception_channel(
+ child_job, 0, exception_channel_.reset_and_get_address());
GTEST_DEATH_TEST_CHECK_(status == ZX_OK);
// Spawn the child process.
@@ -8763,21 +9455,9 @@
int close_fd; // File descriptor to close; the read end of a pipe
};
-# if GTEST_OS_MAC
-inline char** GetEnviron() {
- // When Google Test is built as a framework on MacOS X, the environ variable
- // is unavailable. Apple's documentation (man environ) recommends using
- // _NSGetEnviron() instead.
- return *_NSGetEnviron();
-}
-# else
-// Some POSIX platforms expect you to declare environ. extern "C" makes
-// it reside in the global namespace.
+# if GTEST_OS_QNX
extern "C" char** environ;
-inline char** GetEnviron() { return environ; }
-# endif // GTEST_OS_MAC
-
-# if !GTEST_OS_QNX
+# else // GTEST_OS_QNX
// The main function for a threadsafe-style death test child process.
// This function is called in a clone()-ed process and thus must avoid
// any potentially unsafe operations like malloc or libc functions.
@@ -8797,18 +9477,18 @@
return EXIT_FAILURE;
}
- // We can safely call execve() as it's a direct system call. We
+ // We can safely call execv() as it's almost a direct system call. We
// cannot use execvp() as it's a libc function and thus potentially
- // unsafe. Since execve() doesn't search the PATH, the user must
+ // unsafe. Since execv() doesn't search the PATH, the user must
// invoke the test program via a valid path that contains at least
// one path separator.
- execve(args->argv[0], args->argv, GetEnviron());
- DeathTestAbort(std::string("execve(") + args->argv[0] + ", ...) in " +
+ execv(args->argv[0], args->argv);
+ DeathTestAbort(std::string("execv(") + args->argv[0] + ", ...) in " +
original_dir + " failed: " +
GetLastErrnoDescription());
return EXIT_FAILURE;
}
-# endif // !GTEST_OS_QNX
+# endif // GTEST_OS_QNX
# if GTEST_HAS_CLONE
// Two utility routines that together determine the direction the stack
@@ -8822,15 +9502,24 @@
// correct answer.
static void StackLowerThanAddress(const void* ptr,
bool* result) GTEST_NO_INLINE_;
+// Make sure sanitizers do not tamper with the stack here.
+// Ideally, we want to use `__builtin_frame_address` instead of a local variable
+// address with sanitizer disabled, but it does not work when the
+// compiler optimizes the stack frame out, which happens on PowerPC targets.
+// HWAddressSanitizer add a random tag to the MSB of the local variable address,
+// making comparison result unpredictable.
+GTEST_ATTRIBUTE_NO_SANITIZE_ADDRESS_
+GTEST_ATTRIBUTE_NO_SANITIZE_HWADDRESS_
static void StackLowerThanAddress(const void* ptr, bool* result) {
- int dummy;
- *result = (&dummy < ptr);
+ int dummy = 0;
+ *result = std::less<const void*>()(&dummy, ptr);
}
// Make sure AddressSanitizer does not tamper with the stack here.
GTEST_ATTRIBUTE_NO_SANITIZE_ADDRESS_
+GTEST_ATTRIBUTE_NO_SANITIZE_HWADDRESS_
static bool StackGrowsDown() {
- int dummy;
+ int dummy = 0;
bool result;
StackLowerThanAddress(&dummy, &result);
return result;
@@ -8873,8 +9562,7 @@
fd_flags | FD_CLOEXEC));
struct inheritance inherit = {0};
// spawn is a system call.
- child_pid =
- spawn(args.argv[0], 0, nullptr, &inherit, args.argv, GetEnviron());
+ child_pid = spawn(args.argv[0], 0, nullptr, &inherit, args.argv, environ);
// Restores the current working directory.
GTEST_DEATH_TEST_CHECK_(fchdir(cwd_fd) != -1);
GTEST_DEATH_TEST_CHECK_SYSCALL_(close(cwd_fd));
@@ -8898,7 +9586,7 @@
if (!use_fork) {
static const bool stack_grows_down = StackGrowsDown();
- const size_t stack_size = getpagesize();
+ const auto stack_size = static_cast<size_t>(getpagesize() * 2);
// MMAP_ANONYMOUS is not defined on Mac, so we use MAP_ANON instead.
void* const stack = mmap(nullptr, stack_size, PROT_READ | PROT_WRITE,
MAP_ANON | MAP_PRIVATE, -1, 0);
@@ -8914,8 +9602,9 @@
void* const stack_top =
static_cast<char*>(stack) +
(stack_grows_down ? stack_size - kMaxStackAlignment : 0);
- GTEST_DEATH_TEST_CHECK_(stack_size > kMaxStackAlignment &&
- reinterpret_cast<intptr_t>(stack_top) % kMaxStackAlignment == 0);
+ GTEST_DEATH_TEST_CHECK_(
+ static_cast<size_t>(stack_size) > kMaxStackAlignment &&
+ reinterpret_cast<uintptr_t>(stack_top) % kMaxStackAlignment == 0);
child_pid = clone(&ExecDeathTestChildMain, stack_top, SIGCHLD, &args);
@@ -9274,9 +9963,10 @@
// Returns the current working directory, or "" if unsuccessful.
FilePath FilePath::GetCurrentDir() {
-#if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_WINDOWS_PHONE || \
- GTEST_OS_WINDOWS_RT || ARDUINO
- // Windows CE and Arduino don't have a current directory, so we just return
+#if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_WINDOWS_PHONE || \
+ GTEST_OS_WINDOWS_RT || GTEST_OS_ESP8266 || GTEST_OS_ESP32 || \
+ GTEST_OS_XTENSA
+ // These platforms do not have a current directory, so we just return
// something reasonable.
return FilePath(kCurrentDirectoryString);
#elif GTEST_OS_WINDOWS
@@ -9345,7 +10035,7 @@
const char* const last_sep = FindLastPathSeparator();
std::string dir;
if (last_sep) {
- dir = std::string(c_str(), last_sep + 1 - c_str());
+ dir = std::string(c_str(), static_cast<size_t>(last_sep + 1 - c_str()));
} else {
dir = kCurrentDirectoryString;
}
@@ -9391,7 +10081,7 @@
delete [] unicode;
return attributes != kInvalidFileAttributes;
#else
- posix::StatStruct file_stat;
+ posix::StatStruct file_stat{};
return posix::Stat(pathname_.c_str(), &file_stat) == 0;
#endif // GTEST_OS_WINDOWS_MOBILE
}
@@ -9418,7 +10108,7 @@
result = true;
}
#else
- posix::StatStruct file_stat;
+ posix::StatStruct file_stat{};
result = posix::Stat(path.c_str(), &file_stat) == 0 &&
posix::IsDir(file_stat);
#endif // GTEST_OS_WINDOWS_MOBILE
@@ -9505,6 +10195,9 @@
delete [] unicode;
#elif GTEST_OS_WINDOWS
int result = _mkdir(pathname_.c_str());
+#elif GTEST_OS_ESP8266 || GTEST_OS_XTENSA
+ // do nothing
+ int result = 0;
#else
int result = mkdir(pathname_.c_str(), 0777);
#endif // GTEST_OS_WINDOWS_MOBILE
@@ -9528,33 +10221,19 @@
// For example, "bar///foo" becomes "bar/foo". Does not eliminate other
// redundancies that might be in a pathname involving "." or "..".
void FilePath::Normalize() {
- if (pathname_.c_str() == nullptr) {
- pathname_ = "";
- return;
- }
- const char* src = pathname_.c_str();
- char* const dest = new char[pathname_.length() + 1];
- char* dest_ptr = dest;
- memset(dest_ptr, 0, pathname_.length() + 1);
+ auto out = pathname_.begin();
- while (*src != '\0') {
- *dest_ptr = *src;
- if (!IsPathSeparator(*src)) {
- src++;
+ for (const char character : pathname_) {
+ if (!IsPathSeparator(character)) {
+ *(out++) = character;
+ } else if (out == pathname_.begin() || *std::prev(out) != kPathSeparator) {
+ *(out++) = kPathSeparator;
} else {
-#if GTEST_HAS_ALT_PATH_SEP_
- if (*dest_ptr == kAlternatePathSeparator) {
- *dest_ptr = kPathSeparator;
- }
-#endif
- while (IsPathSeparator(*src))
- src++;
+ continue;
}
- dest_ptr++;
}
- *dest_ptr = '\0';
- pathname_ = dest;
- delete[] dest;
+
+ pathname_.erase(out, pathname_.end());
}
} // namespace internal
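The rewritten Normalize() above collapses runs of separators in a single in-place forward pass instead of copying through a heap buffer. A standalone sketch of the same idea (not the gtest-internal class itself) makes the before/after behavior easy to see.

#include <iterator>
#include <string>

// "bar///foo" -> "bar/foo"; redundancies such as "." or ".." are left alone,
// matching the behavior described in the comment above Normalize().
std::string CollapseSeparators(std::string path, char sep = '/') {
  auto out = path.begin();
  for (const char c : path) {
    if (c != sep) {
      *out++ = c;  // keep ordinary characters
    } else if (out == path.begin() || *std::prev(out) != sep) {
      *out++ = sep;  // keep only the first separator of each run
    }                // otherwise drop the duplicate separator
  }
  path.erase(out, path.end());
  return path;
}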
@@ -9602,14 +10281,6 @@
// equal to s.
Matcher<const std::string&>::Matcher(const std::string& s) { *this = Eq(s); }
-#if GTEST_HAS_GLOBAL_STRING
-// Constructs a matcher that matches a const std::string& whose value is
-// equal to s.
-Matcher<const std::string&>::Matcher(const ::string& s) {
- *this = Eq(static_cast<std::string>(s));
-}
-#endif // GTEST_HAS_GLOBAL_STRING
-
// Constructs a matcher that matches a const std::string& whose value is
// equal to s.
Matcher<const std::string&>::Matcher(const char* s) {
@@ -9620,92 +10291,45 @@
// s.
Matcher<std::string>::Matcher(const std::string& s) { *this = Eq(s); }
-#if GTEST_HAS_GLOBAL_STRING
-// Constructs a matcher that matches a std::string whose value is equal to
-// s.
-Matcher<std::string>::Matcher(const ::string& s) {
- *this = Eq(static_cast<std::string>(s));
-}
-#endif // GTEST_HAS_GLOBAL_STRING
-
// Constructs a matcher that matches a std::string whose value is equal to
// s.
Matcher<std::string>::Matcher(const char* s) { *this = Eq(std::string(s)); }
-#if GTEST_HAS_GLOBAL_STRING
-// Constructs a matcher that matches a const ::string& whose value is
+#if GTEST_INTERNAL_HAS_STRING_VIEW
+// Constructs a matcher that matches a const StringView& whose value is
// equal to s.
-Matcher<const ::string&>::Matcher(const std::string& s) {
- *this = Eq(static_cast<::string>(s));
-}
-
-// Constructs a matcher that matches a const ::string& whose value is
-// equal to s.
-Matcher<const ::string&>::Matcher(const ::string& s) { *this = Eq(s); }
-
-// Constructs a matcher that matches a const ::string& whose value is
-// equal to s.
-Matcher<const ::string&>::Matcher(const char* s) { *this = Eq(::string(s)); }
-
-// Constructs a matcher that matches a ::string whose value is equal to s.
-Matcher<::string>::Matcher(const std::string& s) {
- *this = Eq(static_cast<::string>(s));
-}
-
-// Constructs a matcher that matches a ::string whose value is equal to s.
-Matcher<::string>::Matcher(const ::string& s) { *this = Eq(s); }
-
-// Constructs a matcher that matches a string whose value is equal to s.
-Matcher<::string>::Matcher(const char* s) { *this = Eq(::string(s)); }
-#endif // GTEST_HAS_GLOBAL_STRING
-
-#if GTEST_HAS_ABSL
-// Constructs a matcher that matches a const absl::string_view& whose value is
-// equal to s.
-Matcher<const absl::string_view&>::Matcher(const std::string& s) {
+Matcher<const internal::StringView&>::Matcher(const std::string& s) {
*this = Eq(s);
}
-#if GTEST_HAS_GLOBAL_STRING
-// Constructs a matcher that matches a const absl::string_view& whose value is
+// Constructs a matcher that matches a const StringView& whose value is
// equal to s.
-Matcher<const absl::string_view&>::Matcher(const ::string& s) { *this = Eq(s); }
-#endif // GTEST_HAS_GLOBAL_STRING
+Matcher<const internal::StringView&>::Matcher(const char* s) {
+ *this = Eq(std::string(s));
+}
-// Constructs a matcher that matches a const absl::string_view& whose value is
+// Constructs a matcher that matches a const StringView& whose value is
// equal to s.
-Matcher<const absl::string_view&>::Matcher(const char* s) {
+Matcher<const internal::StringView&>::Matcher(internal::StringView s) {
*this = Eq(std::string(s));
}
-// Constructs a matcher that matches a const absl::string_view& whose value is
-// equal to s.
-Matcher<const absl::string_view&>::Matcher(absl::string_view s) {
+// Constructs a matcher that matches a StringView whose value is equal to
+// s.
+Matcher<internal::StringView>::Matcher(const std::string& s) { *this = Eq(s); }
+
+// Constructs a matcher that matches a StringView whose value is equal to
+// s.
+Matcher<internal::StringView>::Matcher(const char* s) {
*this = Eq(std::string(s));
}
-// Constructs a matcher that matches a absl::string_view whose value is equal to
+// Constructs a matcher that matches a StringView whose value is equal to
// s.
-Matcher<absl::string_view>::Matcher(const std::string& s) { *this = Eq(s); }
-
-#if GTEST_HAS_GLOBAL_STRING
-// Constructs a matcher that matches a absl::string_view whose value is equal to
-// s.
-Matcher<absl::string_view>::Matcher(const ::string& s) { *this = Eq(s); }
-#endif // GTEST_HAS_GLOBAL_STRING
-
-// Constructs a matcher that matches a absl::string_view whose value is equal to
-// s.
-Matcher<absl::string_view>::Matcher(const char* s) {
+Matcher<internal::StringView>::Matcher(internal::StringView s) {
*this = Eq(std::string(s));
}
-
-// Constructs a matcher that matches a absl::string_view whose value is equal to
-// s.
-Matcher<absl::string_view>::Matcher(absl::string_view s) {
- *this = Eq(std::string(s));
-}
-#endif // GTEST_HAS_ABSL
+#endif // GTEST_INTERNAL_HAS_STRING_VIEW
} // namespace testing
// Copyright 2008, Google Inc.
@@ -9743,6 +10367,7 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
+#include <cstdint>
#include <fstream>
#include <memory>
@@ -9821,7 +10446,7 @@
size_t GetThreadCount() {
const std::string filename =
(Message() << "/proc/" << getpid() << "/stat").GetString();
- return ReadProcFileField<int>(filename, 19);
+ return ReadProcFileField<size_t>(filename, 19);
}
#elif GTEST_OS_MAC
@@ -9879,7 +10504,7 @@
if (sysctl(mib, miblen, &info, &size, NULL, 0)) {
return 0;
}
- return KP_NLWP(info);
+ return static_cast<size_t>(KP_NLWP(info));
}
#elif GTEST_OS_OPENBSD
@@ -9901,7 +10526,8 @@
if (sysctl(mib, miblen, NULL, &size, NULL, 0)) {
return 0;
}
- mib[5] = size / mib[4];
+
+ mib[5] = static_cast<int>(size / static_cast<size_t>(mib[4]));
// populate array of structs
struct kinfo_proc info[mib[5]];
@@ -9910,8 +10536,8 @@
}
// exclude empty members
- int nthreads = 0;
- for (int i = 0; i < size / mib[4]; i++) {
+ size_t nthreads = 0;
+ for (size_t i = 0; i < size / static_cast<size_t>(mib[4]); i++) {
if (info[i].p_tid != -1)
nthreads++;
}
@@ -9983,7 +10609,7 @@
#if GTEST_IS_THREADSAFE && GTEST_OS_WINDOWS
void SleepMilliseconds(int n) {
- ::Sleep(n);
+ ::Sleep(static_cast<DWORD>(n));
}
AutoHandle::AutoHandle()
@@ -10084,6 +10710,7 @@
namespace {
+#ifdef _MSC_VER
// Use the RAII idiom to flag mem allocs that are intentionally never
// deallocated. The motivation is to silence the false positive mem leaks
// that are reported by the debug version of MS's CRT which can only detect
@@ -10096,19 +10723,15 @@
{
public:
MemoryIsNotDeallocated() : old_crtdbg_flag_(0) {
-#ifdef _MSC_VER
old_crtdbg_flag_ = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);
// Set heap allocation block type to _IGNORE_BLOCK so that MS debug CRT
// doesn't report mem leak if there's no matching deallocation.
_CrtSetDbgFlag(old_crtdbg_flag_ & ~_CRTDBG_ALLOC_MEM_DF);
-#endif // _MSC_VER
}
~MemoryIsNotDeallocated() {
-#ifdef _MSC_VER
// Restore the original _CRTDBG_ALLOC_MEM_DF flag
_CrtSetDbgFlag(old_crtdbg_flag_);
-#endif // _MSC_VER
}
private:
@@ -10116,6 +10739,7 @@
GTEST_DISALLOW_COPY_AND_ASSIGN_(MemoryIsNotDeallocated);
};
+#endif // _MSC_VER
} // namespace
@@ -10131,7 +10755,9 @@
owner_thread_id_ = 0;
{
// Use RAII to flag that following mem alloc is never deallocated.
+#ifdef _MSC_VER
MemoryIsNotDeallocated memory_is_not_deallocated;
+#endif // _MSC_VER
critical_section_ = new CRITICAL_SECTION;
}
::InitializeCriticalSection(critical_section_);
@@ -10240,6 +10866,9 @@
// Returns a value that can be used to identify the thread from other threads.
static ThreadLocalValueHolderBase* GetValueOnCurrentThread(
const ThreadLocalBase* thread_local_instance) {
+#ifdef _MSC_VER
+ MemoryIsNotDeallocated memory_is_not_deallocated;
+#endif // _MSC_VER
DWORD current_thread = ::GetCurrentThreadId();
MutexLock lock(&mutex_);
ThreadIdToThreadLocals* const thread_to_thread_locals =
@@ -10374,7 +11003,9 @@
// Returns map of thread local instances.
static ThreadIdToThreadLocals* GetThreadLocalsMapLocked() {
mutex_.AssertHeld();
+#ifdef _MSC_VER
MemoryIsNotDeallocated memory_is_not_deallocated;
+#endif // _MSC_VER
static ThreadIdToThreadLocals* map = new ThreadIdToThreadLocals();
return map;
}
@@ -10385,8 +11016,8 @@
static Mutex thread_map_mutex_;
};
-Mutex ThreadLocalRegistryImpl::mutex_(Mutex::kStaticMutex);
-Mutex ThreadLocalRegistryImpl::thread_map_mutex_(Mutex::kStaticMutex);
+Mutex ThreadLocalRegistryImpl::mutex_(Mutex::kStaticMutex); // NOLINT
+Mutex ThreadLocalRegistryImpl::thread_map_mutex_(Mutex::kStaticMutex); // NOLINT
ThreadLocalValueHolderBase* ThreadLocalRegistry::GetValueOnCurrentThread(
const ThreadLocalBase* thread_local_instance) {
@@ -10417,7 +11048,7 @@
free(const_cast<char*>(pattern_));
}
-// Returns true iff regular expression re matches the entire str.
+// Returns true if and only if regular expression re matches the entire str.
bool RE::FullMatch(const char* str, const RE& re) {
if (!re.is_valid_) return false;
@@ -10425,8 +11056,8 @@
return regexec(&re.full_regex_, str, 1, &match, 0) == 0;
}
-// Returns true iff regular expression re matches a substring of str
-// (including str itself).
+// Returns true if and only if regular expression re matches a substring of
+// str (including str itself).
bool RE::PartialMatch(const char* str, const RE& re) {
if (!re.is_valid_) return false;
@@ -10466,14 +11097,14 @@
#elif GTEST_USES_SIMPLE_RE
-// Returns true iff ch appears anywhere in str (excluding the
+// Returns true if and only if ch appears anywhere in str (excluding the
// terminating '\0' character).
bool IsInSet(char ch, const char* str) {
return ch != '\0' && strchr(str, ch) != nullptr;
}
-// Returns true iff ch belongs to the given classification. Unlike
-// similar functions in <ctype.h>, these aren't affected by the
+// Returns true if and only if ch belongs to the given classification.
+// Unlike similar functions in <ctype.h>, these aren't affected by the
// current locale.
bool IsAsciiDigit(char ch) { return '0' <= ch && ch <= '9'; }
bool IsAsciiPunct(char ch) {
@@ -10486,13 +11117,13 @@
('0' <= ch && ch <= '9') || ch == '_';
}
-// Returns true iff "\\c" is a supported escape sequence.
+// Returns true if and only if "\\c" is a supported escape sequence.
bool IsValidEscape(char c) {
return (IsAsciiPunct(c) || IsInSet(c, "dDfnrsStvwW"));
}
-// Returns true iff the given atom (specified by escaped and pattern)
-// matches ch. The result is undefined if the atom is invalid.
+// Returns true if and only if the given atom (specified by escaped and
+// pattern) matches ch. The result is undefined if the atom is invalid.
bool AtomMatchesChar(bool escaped, char pattern_char, char ch) {
if (escaped) { // "\\p" where p is pattern_char.
switch (pattern_char) {
@@ -10530,7 +11161,7 @@
bool is_valid = true;
- // True iff ?, *, or + can follow the previous atom.
+ // True if and only if ?, *, or + can follow the previous atom.
bool prev_repeatable = false;
for (int i = 0; regex[i]; i++) {
if (regex[i] == '\\') { // An escape sequence
@@ -10606,8 +11237,8 @@
return false;
}
-// Returns true iff regex matches a prefix of str. regex must be a
-// valid simple regular expression and not start with "^", or the
+// Returns true if and only if regex matches a prefix of str. regex must
+// be a valid simple regular expression and not start with "^", or the
// result is undefined.
bool MatchRegexAtHead(const char* regex, const char* str) {
if (*regex == '\0') // An empty regex matches a prefix of anything.
@@ -10637,8 +11268,8 @@
}
}
-// Returns true iff regex matches any substring of str. regex must be
-// a valid simple regular expression, or the result is undefined.
+// Returns true if and only if regex matches any substring of str. regex must
+// be a valid simple regular expression, or the result is undefined.
//
// The algorithm is recursive, but the recursion depth doesn't exceed
// the regex length, so we won't need to worry about running out of
@@ -10666,13 +11297,13 @@
free(const_cast<char*>(full_pattern_));
}
-// Returns true iff regular expression re matches the entire str.
+// Returns true if and only if regular expression re matches the entire str.
bool RE::FullMatch(const char* str, const RE& re) {
return re.is_valid_ && MatchRegexAnywhere(re.full_pattern_, str);
}
-// Returns true iff regular expression re matches a substring of str
-// (including str itself).
+// Returns true if and only if regular expression re matches a substring of
+// str (including str itself).
bool RE::PartialMatch(const char* str, const RE& re) {
return re.is_valid_ && MatchRegexAnywhere(re.pattern_, str);
}
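For orientation, the FullMatch/PartialMatch pair above backs the regex matching done by death-test assertions, which use partial-match semantics: the pattern only needs to match a substring of the child's stderr. A hedged sketch at the assertion level (message text is made up; EXPECT_DEATH_IF_SUPPORTED degrades gracefully on platforms without death tests):

#include <cstdio>
#include <cstdlib>
#include "gtest/gtest.h"

TEST(RegexSketch, DeathMessageIsPartiallyMatched) {
  // "out of range" only needs to appear somewhere in the child's stderr.
  EXPECT_DEATH_IF_SUPPORTED(
      {
        std::fprintf(stderr, "fatal: index out of range\n");
        std::abort();
      },
      "out of range");
}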
@@ -10792,9 +11423,9 @@
filename_ = temp_file_path;
# else
// There's no guarantee that a test has write access to the current
- // directory, so we create the temporary file in the /tmp directory
- // instead. We use /tmp on most systems, and /sdcard on Android.
- // That's because Android doesn't have /tmp.
+ // directory, so we create the temporary file in a temporary directory.
+ std::string name_template;
+
# if GTEST_OS_LINUX_ANDROID
// Note: Android applications are expected to call the framework's
// Context.getExternalStorageDirectory() method through JNI to get
@@ -10804,18 +11435,49 @@
// code as part of a regular standalone executable, which doesn't
// run in a Dalvik process (e.g. when running it through 'adb shell').
//
- // The location /sdcard is directly accessible from native code
- // and is the only location (unofficially) supported by the Android
- // team. It's generally a symlink to the real SD Card mount point
- // which can be /mnt/sdcard, /mnt/sdcard0, /system/media/sdcard, or
- // other OEM-customized locations. Never rely on these, and always
- // use /sdcard.
- char name_template[] = "/sdcard/gtest_captured_stream.XXXXXX";
+ // The location /data/local/tmp is directly accessible from native code.
+ // '/sdcard' and other variants cannot be relied on, as they are not
+ // guaranteed to be mounted, or may have a delay in mounting.
+ name_template = "/data/local/tmp/";
+# elif GTEST_OS_IOS
+ char user_temp_dir[PATH_MAX + 1];
+
+ // Documented alternative to NSTemporaryDirectory() (for obtaining creating
+ // a temporary directory) at
+ // https://developer.apple.com/library/archive/documentation/Security/Conceptual/SecureCodingGuide/Articles/RaceConditions.html#//apple_ref/doc/uid/TP40002585-SW10
+ //
+ // _CS_DARWIN_USER_TEMP_DIR (as well as _CS_DARWIN_USER_CACHE_DIR) is not
+ // documented in the confstr() man page at
+ // https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man3/confstr.3.html#//apple_ref/doc/man/3/confstr
+ // but are still available, according to the WebKit patches at
+ // https://trac.webkit.org/changeset/262004/webkit
+ // https://trac.webkit.org/changeset/263705/webkit
+ //
+ // The confstr() implementation falls back to getenv("TMPDIR"). See
+ // https://opensource.apple.com/source/Libc/Libc-1439.100.3/gen/confstr.c.auto.html
+ ::confstr(_CS_DARWIN_USER_TEMP_DIR, user_temp_dir, sizeof(user_temp_dir));
+
+ name_template = user_temp_dir;
+ if (name_template.back() != GTEST_PATH_SEP_[0])
+ name_template.push_back(GTEST_PATH_SEP_[0]);
# else
- char name_template[] = "/tmp/captured_stream.XXXXXX";
-# endif // GTEST_OS_LINUX_ANDROID
- const int captured_fd = mkstemp(name_template);
- filename_ = name_template;
+ name_template = "/tmp/";
+# endif
+ name_template.append("gtest_captured_stream.XXXXXX");
+
+ // mkstemp() modifies the string bytes in place, and does not go beyond the
+ // string's length. This results in well-defined behavior in C++17.
+ //
+ // The const_cast is needed below C++17. The constraints on std::string
+ // implementations in C++11 and above make assumption behind the const_cast
+ // fairly safe.
+ const int captured_fd = ::mkstemp(const_cast<char*>(name_template.data()));
+ if (captured_fd == -1) {
+ GTEST_LOG_(WARNING)
+ << "Failed to create tmp file " << name_template
+ << " for test; does the test have access to the /tmp directory?";
+ }
+ filename_ = std::move(name_template);
# endif // GTEST_OS_WINDOWS
fflush(nullptr);
dup2(captured_fd, fd_);
@@ -10836,6 +11498,10 @@
}
FILE* const file = posix::FOpen(filename_.c_str(), "r");
+ if (file == nullptr) {
+ GTEST_LOG_(FATAL) << "Failed to open tmp file " << filename_
+ << " for capturing stream.";
+ }
const std::string content = ReadEntireFile(file);
posix::FClose(file);
return content;
@@ -10949,13 +11615,6 @@
new std::vector<std::string>(new_argvs.begin(), new_argvs.end()));
}
-#if GTEST_HAS_GLOBAL_STRING
-void SetInjectableArgvs(const std::vector< ::string>& new_argvs) {
- SetInjectableArgvs(
- new std::vector<std::string>(new_argvs.begin(), new_argvs.end()));
-}
-#endif // GTEST_HAS_GLOBAL_STRING
-
void ClearInjectableArgvs() {
delete g_injected_test_argvs;
g_injected_test_argvs = nullptr;
@@ -10989,7 +11648,7 @@
// Parses 'str' for a 32-bit signed integer. If successful, writes
// the result to *value and returns true; otherwise leaves *value
// unchanged and returns false.
-bool ParseInt32(const Message& src_text, const char* str, Int32* value) {
+bool ParseInt32(const Message& src_text, const char* str, int32_t* value) {
// Parses the environment variable as a decimal integer.
char* end = nullptr;
const long long_value = strtol(str, &end, 10); // NOLINT
@@ -11006,13 +11665,13 @@
return false;
}
- // Is the parsed value in the range of an Int32?
- const Int32 result = static_cast<Int32>(long_value);
+ // Is the parsed value in the range of an int32_t?
+ const auto result = static_cast<int32_t>(long_value);
if (long_value == LONG_MAX || long_value == LONG_MIN ||
// The parsed value overflows as a long. (strtol() returns
// LONG_MAX or LONG_MIN when the input overflows.)
result != long_value
- // The parsed value overflows as an Int32.
+ // The parsed value overflows as an int32_t.
) {
Message msg;
msg << "WARNING: " << src_text
@@ -11030,7 +11689,7 @@
// Reads and returns the Boolean environment variable corresponding to
// the given flag; if it's not set, returns default_value.
//
-// The value is considered true iff it's not "0".
+// The value is considered true if and only if it's not "0".
bool BoolFromGTestEnv(const char* flag, bool default_value) {
#if defined(GTEST_GET_BOOL_FROM_ENV_)
return GTEST_GET_BOOL_FROM_ENV_(flag, default_value);
@@ -11045,7 +11704,7 @@
// Reads and returns a 32-bit integer stored in the environment
// variable corresponding to the given flag; if it isn't set or
// doesn't represent a valid 32-bit integer, returns default_value.
-Int32 Int32FromGTestEnv(const char* flag, Int32 default_value) {
+int32_t Int32FromGTestEnv(const char* flag, int32_t default_value) {
#if defined(GTEST_GET_INT32_FROM_ENV_)
return GTEST_GET_INT32_FROM_ENV_(flag, default_value);
#else
@@ -11056,7 +11715,7 @@
return default_value;
}
- Int32 result = default_value;
+ int32_t result = default_value;
if (!ParseInt32(Message() << "Environment variable " << env_var,
string_value, &result)) {
printf("The default value %s is used.\n",
@@ -11143,11 +11802,16 @@
// or void PrintTo(const Foo&, ::std::ostream*) in the namespace that
// defines Foo.
+
#include <stdio.h>
+
#include <cctype>
+#include <cstdint>
#include <cwchar>
#include <ostream> // NOLINT
#include <string>
+#include <type_traits>
+
namespace testing {
@@ -11158,6 +11822,7 @@
// Prints a segment of bytes in the given object.
GTEST_ATTRIBUTE_NO_SANITIZE_MEMORY_
GTEST_ATTRIBUTE_NO_SANITIZE_ADDRESS_
+GTEST_ATTRIBUTE_NO_SANITIZE_HWADDRESS_
GTEST_ATTRIBUTE_NO_SANITIZE_THREAD_
void PrintByteSegmentInObjectTo(const unsigned char* obj_bytes, size_t start,
size_t count, ostream* os) {
@@ -11200,9 +11865,19 @@
*os << ">";
}
+// Helpers for widening a character to char32_t. Since the standard does not
+// specify if char / wchar_t is signed or unsigned, it is important to first
+// convert it to the unsigned type of the same width before widening it to
+// char32_t.
+template <typename CharType>
+char32_t ToChar32(CharType in) {
+ return static_cast<char32_t>(
+ static_cast<typename std::make_unsigned<CharType>::type>(in));
+}
+
} // namespace
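The unsigned hop in ToChar32() above matters because plain char may be signed on a given platform; widening a negative char directly to char32_t sign-extends. A minimal sketch of the difference:

#include <cstdint>

// With a signed char, '\xFF' has value -1; converting it straight to char32_t
// produces 0xFFFFFFFF, whereas going through unsigned char yields 0x000000FF.
char32_t WidenNaively(char c) { return static_cast<char32_t>(c); }
char32_t WidenViaUnsigned(char c) {
  return static_cast<char32_t>(static_cast<unsigned char>(c));
}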
-namespace internal2 {
+namespace internal {
// Delegates to PrintBytesInObjectToImpl() to print the bytes in the
// given object. The delegation simplifies the implementation, which
@@ -11214,10 +11889,6 @@
PrintBytesInObjectToImpl(obj_bytes, count, os);
}
-} // namespace internal2
-
-namespace internal {
-
// Depending on the value of a char (or wchar_t), we print it in one
// of three formats:
// - as is if it's a printable ASCII (e.g. 'a', '2', ' '),
@@ -11232,17 +11903,15 @@
// Returns true if c is a printable ASCII character. We test the
// value of c directly instead of calling isprint(), which is buggy on
// Windows Mobile.
-inline bool IsPrintableAscii(wchar_t c) {
- return 0x20 <= c && c <= 0x7E;
-}
+inline bool IsPrintableAscii(char32_t c) { return 0x20 <= c && c <= 0x7E; }
-// Prints a wide or narrow char c as a character literal without the
-// quotes, escaping it when necessary; returns how c was formatted.
-// The template argument UnsignedChar is the unsigned version of Char,
-// which is the type of c.
-template <typename UnsignedChar, typename Char>
+// Prints c (of type char, char8_t, char16_t, char32_t, or wchar_t) as a
+// character literal without the quotes, escaping it when necessary; returns how
+// c was formatted.
+template <typename Char>
static CharFormat PrintAsCharLiteralTo(Char c, ostream* os) {
- switch (static_cast<wchar_t>(c)) {
+ const char32_t u_c = ToChar32(c);
+ switch (u_c) {
case L'\0':
*os << "\\0";
break;
@@ -11274,13 +11943,12 @@
*os << "\\v";
break;
default:
- if (IsPrintableAscii(c)) {
+ if (IsPrintableAscii(u_c)) {
*os << static_cast<char>(c);
return kAsIs;
} else {
ostream::fmtflags flags = os->flags();
- *os << "\\x" << std::hex << std::uppercase
- << static_cast<int>(static_cast<UnsignedChar>(c));
+ *os << "\\x" << std::hex << std::uppercase << static_cast<int>(u_c);
os->flags(flags);
return kHexEscape;
}
@@ -11288,9 +11956,9 @@
return kSpecialEscape;
}
-// Prints a wchar_t c as if it's part of a string literal, escaping it when
+// Prints a char32_t c as if it's part of a string literal, escaping it when
// necessary; returns how c was formatted.
-static CharFormat PrintAsStringLiteralTo(wchar_t c, ostream* os) {
+static CharFormat PrintAsStringLiteralTo(char32_t c, ostream* os) {
switch (c) {
case L'\'':
*os << "'";
@@ -11299,26 +11967,68 @@
*os << "\\\"";
return kSpecialEscape;
default:
- return PrintAsCharLiteralTo<wchar_t>(c, os);
+ return PrintAsCharLiteralTo(c, os);
}
}
+static const char* GetCharWidthPrefix(char) {
+ return "";
+}
+
+static const char* GetCharWidthPrefix(signed char) {
+ return "";
+}
+
+static const char* GetCharWidthPrefix(unsigned char) {
+ return "";
+}
+
+#ifdef __cpp_char8_t
+static const char* GetCharWidthPrefix(char8_t) {
+ return "u8";
+}
+#endif
+
+static const char* GetCharWidthPrefix(char16_t) {
+ return "u";
+}
+
+static const char* GetCharWidthPrefix(char32_t) {
+ return "U";
+}
+
+static const char* GetCharWidthPrefix(wchar_t) {
+ return "L";
+}
+
// Prints a char c as if it's part of a string literal, escaping it when
// necessary; returns how c was formatted.
static CharFormat PrintAsStringLiteralTo(char c, ostream* os) {
- return PrintAsStringLiteralTo(
- static_cast<wchar_t>(static_cast<unsigned char>(c)), os);
+ return PrintAsStringLiteralTo(ToChar32(c), os);
}
-// Prints a wide or narrow character c and its code. '\0' is printed
-// as "'\\0'", other unprintable characters are also properly escaped
-// using the standard C++ escape sequence. The template argument
-// UnsignedChar is the unsigned version of Char, which is the type of c.
-template <typename UnsignedChar, typename Char>
+#ifdef __cpp_char8_t
+static CharFormat PrintAsStringLiteralTo(char8_t c, ostream* os) {
+ return PrintAsStringLiteralTo(ToChar32(c), os);
+}
+#endif
+
+static CharFormat PrintAsStringLiteralTo(char16_t c, ostream* os) {
+ return PrintAsStringLiteralTo(ToChar32(c), os);
+}
+
+static CharFormat PrintAsStringLiteralTo(wchar_t c, ostream* os) {
+ return PrintAsStringLiteralTo(ToChar32(c), os);
+}
+
+// Prints a character c (of type char, char8_t, char16_t, char32_t, or wchar_t)
+// and its code. '\0' is printed as "'\\0'", other unprintable characters are
+// also properly escaped using the standard C++ escape sequence.
+template <typename Char>
void PrintCharAndCodeTo(Char c, ostream* os) {
// First, print c as a literal in the most readable form we can find.
- *os << ((sizeof(c) > 1) ? "L'" : "'");
- const CharFormat format = PrintAsCharLiteralTo<UnsignedChar>(c, os);
+ *os << GetCharWidthPrefix(c) << "'";
+ const CharFormat format = PrintAsCharLiteralTo(c, os);
*os << "'";
// To aid user debugging, we also print c's code in decimal, unless
@@ -11334,36 +12044,37 @@
if (format == kHexEscape || (1 <= c && c <= 9)) {
// Do nothing.
} else {
- *os << ", 0x" << String::FormatHexInt(static_cast<UnsignedChar>(c));
+ *os << ", 0x" << String::FormatHexInt(static_cast<int>(c));
}
*os << ")";
}
-void PrintTo(unsigned char c, ::std::ostream* os) {
- PrintCharAndCodeTo<unsigned char>(c, os);
-}
-void PrintTo(signed char c, ::std::ostream* os) {
- PrintCharAndCodeTo<unsigned char>(c, os);
-}
+void PrintTo(unsigned char c, ::std::ostream* os) { PrintCharAndCodeTo(c, os); }
+void PrintTo(signed char c, ::std::ostream* os) { PrintCharAndCodeTo(c, os); }
// Prints a wchar_t as a symbol if it is printable or as its internal
// code otherwise and also as its code. L'\0' is printed as "L'\\0'".
-void PrintTo(wchar_t wc, ostream* os) {
- PrintCharAndCodeTo<wchar_t>(wc, os);
+void PrintTo(wchar_t wc, ostream* os) { PrintCharAndCodeTo(wc, os); }
+
+// TODO(dcheng): Consider making this delegate to PrintCharAndCodeTo() as well.
+void PrintTo(char32_t c, ::std::ostream* os) {
+ *os << std::hex << "U+" << std::uppercase << std::setfill('0') << std::setw(4)
+ << static_cast<uint32_t>(c);
}
// Prints the given array of characters to the ostream. CharType must be either
-// char or wchar_t.
+// char, char8_t, char16_t, char32_t, or wchar_t.
// The array starts at begin, the length is len, it may include '\0' characters
// and may not be NUL-terminated.
template <typename CharType>
GTEST_ATTRIBUTE_NO_SANITIZE_MEMORY_
GTEST_ATTRIBUTE_NO_SANITIZE_ADDRESS_
+GTEST_ATTRIBUTE_NO_SANITIZE_HWADDRESS_
GTEST_ATTRIBUTE_NO_SANITIZE_THREAD_
static CharFormat PrintCharsAsStringTo(
const CharType* begin, size_t len, ostream* os) {
- const char* const kQuoteBegin = sizeof(CharType) == 1 ? "\"" : "L\"";
- *os << kQuoteBegin;
+ const char* const quote_prefix = GetCharWidthPrefix(*begin);
+ *os << quote_prefix << "\"";
bool is_previous_hex = false;
CharFormat print_format = kAsIs;
for (size_t index = 0; index < len; ++index) {
@@ -11372,7 +12083,7 @@
// Previous character is of '\x..' form and this character can be
// interpreted as another hexadecimal digit in its number. Break string to
// disambiguate.
- *os << "\" " << kQuoteBegin;
+ *os << "\" " << quote_prefix << "\"";
}
is_previous_hex = PrintAsStringLiteralTo(cur, os) == kHexEscape;
// Remember if any characters required hex escaping.
@@ -11389,6 +12100,7 @@
template <typename CharType>
GTEST_ATTRIBUTE_NO_SANITIZE_MEMORY_
GTEST_ATTRIBUTE_NO_SANITIZE_ADDRESS_
+GTEST_ATTRIBUTE_NO_SANITIZE_HWADDRESS_
GTEST_ATTRIBUTE_NO_SANITIZE_THREAD_
static void UniversalPrintCharArray(
const CharType* begin, size_t len, ostream* os) {
@@ -11417,22 +12129,57 @@
UniversalPrintCharArray(begin, len, os);
}
+#ifdef __cpp_char8_t
+// Prints a (const) char8_t array of 'len' elements, starting at address
+// 'begin'.
+void UniversalPrintArray(const char8_t* begin, size_t len, ostream* os) {
+ UniversalPrintCharArray(begin, len, os);
+}
+#endif
+
+// Prints a (const) char16_t array of 'len' elements, starting at address
+// 'begin'.
+void UniversalPrintArray(const char16_t* begin, size_t len, ostream* os) {
+ UniversalPrintCharArray(begin, len, os);
+}
+
+// Prints a (const) char32_t array of 'len' elements, starting at address
+// 'begin'.
+void UniversalPrintArray(const char32_t* begin, size_t len, ostream* os) {
+ UniversalPrintCharArray(begin, len, os);
+}
+
// Prints a (const) wchar_t array of 'len' elements, starting at address
// 'begin'.
void UniversalPrintArray(const wchar_t* begin, size_t len, ostream* os) {
UniversalPrintCharArray(begin, len, os);
}
-// Prints the given C string to the ostream.
-void PrintTo(const char* s, ostream* os) {
+namespace {
+
+// Prints a null-terminated C-style string to the ostream.
+template <typename Char>
+void PrintCStringTo(const Char* s, ostream* os) {
if (s == nullptr) {
*os << "NULL";
} else {
*os << ImplicitCast_<const void*>(s) << " pointing to ";
- PrintCharsAsStringTo(s, strlen(s), os);
+ PrintCharsAsStringTo(s, std::char_traits<Char>::length(s), os);
}
}
+} // anonymous namespace
+
+void PrintTo(const char* s, ostream* os) { PrintCStringTo(s, os); }
+
+#ifdef __cpp_char8_t
+void PrintTo(const char8_t* s, ostream* os) { PrintCStringTo(s, os); }
+#endif
+
+void PrintTo(const char16_t* s, ostream* os) { PrintCStringTo(s, os); }
+
+void PrintTo(const char32_t* s, ostream* os) { PrintCStringTo(s, os); }
+
// MSVC compiler can be configured to define whar_t as a typedef
// of unsigned short. Defining an overload for const wchar_t* in that case
// would cause pointers to unsigned shorts be printed as wide strings,
@@ -11441,14 +12188,7 @@
// wchar_t is implemented as a native type.
#if !defined(_MSC_VER) || defined(_NATIVE_WCHAR_T_DEFINED)
// Prints the given wide C string to the ostream.
-void PrintTo(const wchar_t* s, ostream* os) {
- if (s == nullptr) {
- *os << "NULL";
- } else {
- *os << ImplicitCast_<const void*>(s) << " pointing to ";
- PrintCharsAsStringTo(s, wcslen(s), os);
- }
-}
+void PrintTo(const wchar_t* s, ostream* os) { PrintCStringTo(s, os); }
#endif // wchar_t is native
namespace {
@@ -11518,17 +12258,6 @@
} // anonymous namespace
-// Prints a ::string object.
-#if GTEST_HAS_GLOBAL_STRING
-void PrintStringTo(const ::string& s, ostream* os) {
- if (PrintCharsAsStringTo(s.data(), s.size(), os) == kHexEscape) {
- if (GTEST_FLAG(print_utf8)) {
- ConditionalPrintAsText(s.data(), s.size(), os);
- }
- }
-}
-#endif // GTEST_HAS_GLOBAL_STRING
-
void PrintStringTo(const ::std::string& s, ostream* os) {
if (PrintCharsAsStringTo(s.data(), s.size(), os) == kHexEscape) {
if (GTEST_FLAG(print_utf8)) {
@@ -11537,12 +12266,19 @@
}
}
-// Prints a ::wstring object.
-#if GTEST_HAS_GLOBAL_WSTRING
-void PrintWideStringTo(const ::wstring& s, ostream* os) {
+#ifdef __cpp_char8_t
+void PrintU8StringTo(const ::std::u8string& s, ostream* os) {
PrintCharsAsStringTo(s.data(), s.size(), os);
}
-#endif // GTEST_HAS_GLOBAL_WSTRING
+#endif
+
+void PrintU16StringTo(const ::std::u16string& s, ostream* os) {
+ PrintCharsAsStringTo(s.data(), s.size(), os);
+}
+
+void PrintU32StringTo(const ::std::u32string& s, ostream* os) {
+ PrintCharsAsStringTo(s.data(), s.size(), os);
+}
#if GTEST_HAS_STD_WSTRING
void PrintWideStringTo(const ::std::wstring& s, ostream* os) {
@@ -11586,6 +12322,7 @@
// The Google C++ Testing and Mocking Framework (Google Test)
+
namespace testing {
using internal::GetUnitTestImpl;
@@ -11599,7 +12336,9 @@
// Prints a TestPartResult object.
std::ostream& operator<<(std::ostream& os, const TestPartResult& result) {
- return os << result.file_name() << ":" << result.line_number() << ": "
+ return os << internal::FormatFileLocation(result.file_name(),
+ result.line_number())
+ << " "
<< (result.type() == TestPartResult::kSuccess
? "Success"
: result.type() == TestPartResult::kSkip
@@ -11623,7 +12362,7 @@
internal::posix::Abort();
}
- return array_[index];
+ return array_[static_cast<size_t>(index)];
}
// Returns the number of TestPartResult objects in the array.
@@ -11655,7 +12394,7 @@
} // namespace internal
} // namespace testing
-// Copyright 2008 Google Inc.
+// Copyright 2023 Google Inc.
// All Rights Reserved.
//
// Redistribution and use in source and binary forms, with or without
@@ -11690,8 +12429,6 @@
namespace testing {
namespace internal {
-#if GTEST_HAS_TYPED_TEST_P
-
// Skips to the first non-space char in str. Returns an empty string if str
// contains only whitespace characters.
static const char* SkipSpaces(const char* str) {
@@ -11713,7 +12450,10 @@
// registered_tests_; returns registered_tests if successful, or
// aborts the program otherwise.
const char* TypedTestSuitePState::VerifyRegisteredTestNames(
- const char* file, int line, const char* registered_tests) {
+ const char* test_suite_name, const char* file, int line,
+ const char* registered_tests) {
+ RegisterTypeParameterizedTestSuite(test_suite_name, CodeLocation(file, line));
+
typedef RegisteredTestsMap::const_iterator RegisteredTestIter;
registered_ = true;
@@ -11730,17 +12470,7 @@
continue;
}
- bool found = false;
- for (RegisteredTestIter it = registered_tests_.begin();
- it != registered_tests_.end();
- ++it) {
- if (name == it->first) {
- found = true;
- break;
- }
- }
-
- if (found) {
+ if (registered_tests_.count(name) != 0) {
tests.insert(name);
} else {
errors << "No test named " << name
@@ -11767,8 +12497,6 @@
return registered_tests;
}
-#endif // GTEST_HAS_TYPED_TEST_P
-
} // namespace internal
} // namespace testing
// Copyright 2008, Google Inc.
@@ -12085,8 +12813,8 @@
// Protects global resources (stdout in particular) used by Log().
static GTEST_DEFINE_STATIC_MUTEX_(g_log_mutex);
-// Returns true iff a log with the given severity is visible according
-// to the --gmock_verbose flag.
+// Returns true if and only if a log with the given severity is visible
+// according to the --gmock_verbose flag.
GTEST_API_ bool LogIsVisible(LogSeverity severity) {
if (GMOCK_FLAG(verbose) == kInfoVerbosity) {
// Always show the log if --gmock_verbose=info.
@@ -12101,7 +12829,7 @@
}
}
-// Prints the given message to stdout iff 'severity' >= the level
+// Prints the given message to stdout if and only if 'severity' >= the level
// specified by the --gmock_verbose flag. If stack_frames_to_skip >=
// 0, also prints the stack trace excluding the top
// stack_frames_to_skip frames. In opt mode, any positive
@@ -12379,8 +13107,6 @@
// right_[left_[i]] = i.
::std::vector<size_t> left_;
::std::vector<size_t> right_;
-
- GTEST_DISALLOW_ASSIGN_(MaxBipartiteMatchState);
};
const size_t MaxBipartiteMatchState::kUnused;
@@ -12657,6 +13383,7 @@
#include <stdlib.h>
+
#include <iostream> // NOLINT
#include <map>
#include <memory>
@@ -12664,6 +13391,7 @@
#include <string>
#include <vector>
+
#if GTEST_OS_CYGWIN || GTEST_OS_LINUX || GTEST_OS_MAC
# include <unistd.h> // NOLINT
#endif
@@ -12689,7 +13417,8 @@
const char* file, int line,
const std::string& message) {
::std::ostringstream s;
- s << file << ":" << line << ": " << message << ::std::endl;
+ s << internal::FormatFileLocation(file, line) << " " << message
+ << ::std::endl;
Log(severity, s.str(), 0);
}
@@ -12745,8 +13474,8 @@
}
}
-// Returns true iff all pre-requisites of this expectation have been
-// satisfied.
+// Returns true if and only if all pre-requisites of this expectation
+// have been satisfied.
bool ExpectationBase::AllPrerequisitesAreSatisfied() const
GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex) {
g_gmock_mutex.AssertHeld();
@@ -12910,8 +13639,8 @@
"call should not happen. Do not suppress it by blindly adding "
"an EXPECT_CALL() if you don't mean to enforce the call. "
"See "
- "https://github.com/google/googletest/blob/master/googlemock/"
- "docs/CookBook.md#"
+ "https://github.com/google/googletest/blob/master/docs/"
+ "gmock_cook_book.md#"
"knowing-when-to-expect for details.\n",
stack_frames_to_skip);
break;
@@ -13003,7 +13732,7 @@
const CallReaction reaction =
Mock::GetReactionOnUninterestingCalls(MockObject());
- // True iff we need to print this call's arguments and return
+ // True if and only if we need to print this call's arguments and return
// value. This definition must be kept in sync with
// the behavior of ReportUninterestingCall().
const bool need_to_report_uninteresting_call =
@@ -13048,13 +13777,14 @@
// The UntypedFindMatchingExpectation() function acquires and
// releases g_gmock_mutex.
+
const ExpectationBase* const untyped_expectation =
- this->UntypedFindMatchingExpectation(
- untyped_args, &untyped_action, &is_excessive,
- &ss, &why);
+ this->UntypedFindMatchingExpectation(untyped_args, &untyped_action,
+ &is_excessive, &ss, &why);
const bool found = untyped_expectation != nullptr;
- // True iff we need to print the call's arguments and return value.
+ // True if and only if we need to print the call's arguments
+ // and return value.
// This definition must be kept in sync with the uses of Expect()
// and Log() in this function.
const bool need_to_report_call =
@@ -13075,26 +13805,42 @@
untyped_expectation->DescribeLocationTo(&loc);
}
- UntypedActionResultHolderBase* const result =
- untyped_action == nullptr
- ? this->UntypedPerformDefaultAction(untyped_args, ss.str())
- : this->UntypedPerformAction(untyped_action, untyped_args);
- if (result != nullptr) result->PrintAsActionResult(&ss);
- ss << "\n" << why.str();
+ UntypedActionResultHolderBase* result = nullptr;
- if (!found) {
- // No expectation matches this call - reports a failure.
- Expect(false, nullptr, -1, ss.str());
- } else if (is_excessive) {
- // We had an upper-bound violation and the failure message is in ss.
- Expect(false, untyped_expectation->file(),
- untyped_expectation->line(), ss.str());
- } else {
- // We had an expected call and the matching expectation is
- // described in ss.
- Log(kInfo, loc.str() + ss.str(), 2);
+ auto perform_action = [&] {
+ return untyped_action == nullptr
+ ? this->UntypedPerformDefaultAction(untyped_args, ss.str())
+ : this->UntypedPerformAction(untyped_action, untyped_args);
+ };
+ auto handle_failures = [&] {
+ ss << "\n" << why.str();
+
+ if (!found) {
+ // No expectation matches this call - reports a failure.
+ Expect(false, nullptr, -1, ss.str());
+ } else if (is_excessive) {
+ // We had an upper-bound violation and the failure message is in ss.
+ Expect(false, untyped_expectation->file(), untyped_expectation->line(),
+ ss.str());
+ } else {
+ // We had an expected call and the matching expectation is
+ // described in ss.
+ Log(kInfo, loc.str() + ss.str(), 2);
+ }
+ };
+#if GTEST_HAS_EXCEPTIONS
+ try {
+ result = perform_action();
+ } catch (...) {
+ handle_failures();
+ throw;
}
+#else
+ result = perform_action();
+#endif
+ if (result != nullptr) result->PrintAsActionResult(&ss);
+ handle_failures();
return result;
}
@@ -13193,7 +13939,7 @@
int first_used_line;
::std::string first_used_test_suite;
::std::string first_used_test;
- bool leakable; // true iff it's OK to leak the object.
+ bool leakable; // true if and only if it's OK to leak the object.
FunctionMockers function_mockers; // All registered methods of the object.
};
@@ -13238,7 +13984,7 @@
if (leaked_count > 0) {
std::cout << "\nERROR: " << leaked_count << " leaked mock "
<< (leaked_count == 1 ? "object" : "objects")
- << " found at program exit. Expectations on a mock object is "
+ << " found at program exit. Expectations on a mock object are "
"verified when the object is destructed. Leaking a mock "
"means that its expectations aren't verified, which is "
"usually a test bug. If you really intend to leak a mock, "
@@ -13337,7 +14083,7 @@
}
// Verifies all expectations on the given mock object and clears its
-// default actions and expectations. Returns true iff the
+// default actions and expectations. Returns true if and only if the
// verification was successful.
bool Mock::VerifyAndClear(void* mock_obj)
GTEST_LOCK_EXCLUDED_(internal::g_gmock_mutex) {
@@ -13538,8 +14284,8 @@
namespace testing {
GMOCK_DEFINE_bool_(catch_leaked_mocks, true,
- "true iff Google Mock should report leaked mock objects "
- "as failures.");
+ "true if and only if Google Mock should report leaked "
+ "mock objects as failures.");
GMOCK_DEFINE_string_(verbose, internal::kWarningVerbosity,
"Controls how verbose Google Mock's output is."
@@ -13628,7 +14374,7 @@
}
static bool ParseGoogleMockIntFlag(const char* str, const char* flag,
- int* value) {
+ int32_t* value) {
// Gets the value of the flag as a string.
const char* const value_str = ParseGoogleMockFlagValue(str, flag, true);
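The perform_action/handle_failures split in the hunk above changes the exception path of the mock call dispatch: when an action throws, the failure reporting (or the informational log for an expected call) still runs before the exception is rethrown. A small hedged sketch of a test that exercises a throwing action; MockCounter and the test name are hypothetical and not part of this patch:

#include <stdexcept>

#include "gmock/gmock.h"
#include "gtest/gtest.h"

class MockCounter {
 public:
  MOCK_METHOD(int, Increment, (int step));
};

TEST(ThrowingActionSketch, ExceptionStillPropagates) {
  MockCounter counter;
  // The call below matches this expectation, and its action throws instead of
  // returning a value.
  EXPECT_CALL(counter, Increment(1))
      .WillOnce(::testing::Throw(std::runtime_error("boom")));
  // With the refactored dispatch, the call is still logged/accounted for via
  // handle_failures() before the exception reaches the test body.
  EXPECT_THROW(counter.Increment(1), std::runtime_error);
}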
diff --git a/internal/ceres/gradient_checker.cc b/internal/ceres/gradient_checker.cc
index dadaaa0..f49803c 100644
--- a/internal/ceres/gradient_checker.cc
+++ b/internal/ceres/gradient_checker.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -48,43 +48,39 @@
using internal::IsClose;
using internal::StringAppendF;
using internal::StringPrintf;
-using std::string;
-using std::vector;
namespace {
// Evaluate the cost function and transform the returned Jacobians to
-// the local space of the respective local parameterizations.
-bool EvaluateCostFunction(
- const ceres::CostFunction* function,
- double const* const* parameters,
- const std::vector<const ceres::LocalParameterization*>&
- local_parameterizations,
- Vector* residuals,
- std::vector<Matrix>* jacobians,
- std::vector<Matrix>* local_jacobians) {
+// the tangent space of the respective manifolds.
+bool EvaluateCostFunction(const CostFunction* function,
+ double const* const* parameters,
+ const std::vector<const Manifold*>& manifolds,
+ Vector* residuals,
+ std::vector<Matrix>* jacobians,
+ std::vector<Matrix>* local_jacobians) {
CHECK(residuals != nullptr);
CHECK(jacobians != nullptr);
CHECK(local_jacobians != nullptr);
- const vector<int32_t>& block_sizes = function->parameter_block_sizes();
+ const std::vector<int32_t>& block_sizes = function->parameter_block_sizes();
const int num_parameter_blocks = block_sizes.size();
- // Allocate Jacobian matrices in local space.
+ // Allocate Jacobian matrices in tangent space.
local_jacobians->resize(num_parameter_blocks);
- vector<double*> local_jacobian_data(num_parameter_blocks);
+ std::vector<double*> local_jacobian_data(num_parameter_blocks);
for (int i = 0; i < num_parameter_blocks; ++i) {
int block_size = block_sizes.at(i);
- if (local_parameterizations.at(i) != NULL) {
- block_size = local_parameterizations.at(i)->LocalSize();
+ if (manifolds.at(i) != nullptr) {
+ block_size = manifolds.at(i)->TangentSize();
}
local_jacobians->at(i).resize(function->num_residuals(), block_size);
local_jacobians->at(i).setZero();
local_jacobian_data.at(i) = local_jacobians->at(i).data();
}
- // Allocate Jacobian matrices in global space.
+ // Allocate Jacobian matrices in ambient space.
jacobians->resize(num_parameter_blocks);
- vector<double*> jacobian_data(num_parameter_blocks);
+ std::vector<double*> jacobian_data(num_parameter_blocks);
for (int i = 0; i < num_parameter_blocks; ++i) {
jacobians->at(i).resize(function->num_residuals(), block_sizes.at(i));
jacobians->at(i).setZero();
@@ -100,49 +96,46 @@
return false;
}
- // Convert Jacobians from global to local space.
+ // Convert Jacobians from ambient to tangent space.
for (size_t i = 0; i < local_jacobians->size(); ++i) {
- if (local_parameterizations.at(i) == NULL) {
+ if (manifolds.at(i) == nullptr) {
local_jacobians->at(i) = jacobians->at(i);
} else {
- int global_size = local_parameterizations.at(i)->GlobalSize();
- int local_size = local_parameterizations.at(i)->LocalSize();
- CHECK_EQ(jacobians->at(i).cols(), global_size);
- Matrix global_J_local(global_size, local_size);
- local_parameterizations.at(i)->ComputeJacobian(parameters[i],
- global_J_local.data());
- local_jacobians->at(i).noalias() = jacobians->at(i) * global_J_local;
+ int ambient_size = manifolds.at(i)->AmbientSize();
+ int tangent_size = manifolds.at(i)->TangentSize();
+ CHECK_EQ(jacobians->at(i).cols(), ambient_size);
+ Matrix ambient_J_tangent(ambient_size, tangent_size);
+ manifolds.at(i)->PlusJacobian(parameters[i], ambient_J_tangent.data());
+ local_jacobians->at(i).noalias() = jacobians->at(i) * ambient_J_tangent;
}
}
return true;
}
} // namespace
-GradientChecker::GradientChecker(
- const CostFunction* function,
- const vector<const LocalParameterization*>* local_parameterizations,
- const NumericDiffOptions& options)
+GradientChecker::GradientChecker(const CostFunction* function,
+ const std::vector<const Manifold*>* manifolds,
+ const NumericDiffOptions& options)
: function_(function) {
CHECK(function != nullptr);
- if (local_parameterizations != NULL) {
- local_parameterizations_ = *local_parameterizations;
+ if (manifolds != nullptr) {
+ manifolds_ = *manifolds;
} else {
- local_parameterizations_.resize(function->parameter_block_sizes().size(),
- NULL);
+ manifolds_.resize(function->parameter_block_sizes().size(), nullptr);
}
- DynamicNumericDiffCostFunction<CostFunction, RIDDERS>*
- finite_diff_cost_function =
- new DynamicNumericDiffCostFunction<CostFunction, RIDDERS>(
- function, DO_NOT_TAKE_OWNERSHIP, options);
- finite_diff_cost_function_.reset(finite_diff_cost_function);
- const vector<int32_t>& parameter_block_sizes =
+ auto finite_diff_cost_function =
+ std::make_unique<DynamicNumericDiffCostFunction<CostFunction, RIDDERS>>(
+ function, DO_NOT_TAKE_OWNERSHIP, options);
+ const std::vector<int32_t>& parameter_block_sizes =
function->parameter_block_sizes();
const int num_parameter_blocks = parameter_block_sizes.size();
for (int i = 0; i < num_parameter_blocks; ++i) {
finite_diff_cost_function->AddParameterBlock(parameter_block_sizes[i]);
}
finite_diff_cost_function->SetNumResiduals(function->num_residuals());
+
+ finite_diff_cost_function_ = std::move(finite_diff_cost_function);
}
bool GradientChecker::Probe(double const* const* parameters,
@@ -154,7 +147,7 @@
// provided an output argument.
ProbeResults* results;
ProbeResults results_local;
- if (results_param != NULL) {
+ if (results_param != nullptr) {
results = results_param;
results->residuals.resize(0);
results->jacobians.clear();
@@ -169,11 +162,11 @@
results->return_value = true;
// Evaluate the derivative using the user supplied code.
- vector<Matrix>& jacobians = results->jacobians;
- vector<Matrix>& local_jacobians = results->local_jacobians;
+ std::vector<Matrix>& jacobians = results->jacobians;
+ std::vector<Matrix>& local_jacobians = results->local_jacobians;
if (!EvaluateCostFunction(function_,
parameters,
- local_parameterizations_,
+ manifolds_,
&results->residuals,
&jacobians,
&local_jacobians)) {
@@ -182,12 +175,13 @@
}
// Evaluate the derivative using numeric derivatives.
- vector<Matrix>& numeric_jacobians = results->numeric_jacobians;
- vector<Matrix>& local_numeric_jacobians = results->local_numeric_jacobians;
+ std::vector<Matrix>& numeric_jacobians = results->numeric_jacobians;
+ std::vector<Matrix>& local_numeric_jacobians =
+ results->local_numeric_jacobians;
Vector finite_diff_residuals;
if (!EvaluateCostFunction(finite_diff_cost_function_.get(),
parameters,
- local_parameterizations_,
+ manifolds_,
&finite_diff_residuals,
&numeric_jacobians,
&local_numeric_jacobians)) {
@@ -205,8 +199,8 @@
if (!IsClose(results->residuals[i],
finite_diff_residuals[i],
relative_precision,
- NULL,
- NULL)) {
+ nullptr,
+ nullptr)) {
results->error_log =
"Function evaluation with and without Jacobians "
"resulted in different residuals.";
@@ -223,7 +217,7 @@
// Accumulate the error message for all the jacobians, since it won't get
// output if there are no bad jacobian components.
- string error_log;
+ std::string error_log;
for (int k = 0; k < function_->parameter_block_sizes().size(); k++) {
StringAppendF(&error_log,
"========== "
@@ -277,7 +271,7 @@
// Since there were some bad errors, dump comprehensive debug info.
if (num_bad_jacobian_components) {
- string header = StringPrintf(
+ std::string header = StringPrintf(
"\nDetected %d bad Jacobian component(s). "
"Worst relative error was %g.\n",
num_bad_jacobian_components,
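The loop at the end of EvaluateCostFunction above converts each ambient-space Jacobian into the tangent space by right-multiplying with the manifold's PlusJacobian. In isolation the product looks as follows; this is a stand-alone Eigen sketch with illustrative dimensions, not Ceres code:

#include <iostream>

#include "Eigen/Core"

int main() {
  // Hypothetical sizes: 3 residuals, ambient size 3, tangent size 2.
  Eigen::MatrixXd jacobian(3, 3);       // d(residual)/dx in ambient coordinates.
  jacobian.setRandom();
  Eigen::MatrixXd plus_jacobian(3, 2);  // d Plus(x, delta)/d delta at delta = 0.
  plus_jacobian.setRandom();

  // Same product as local_jacobians->at(i) = jacobians->at(i) * ambient_J_tangent.
  const Eigen::MatrixXd tangent_jacobian = jacobian * plus_jacobian;  // 3 x 2.
  std::cout << tangent_jacobian << "\n";
  return 0;
}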
diff --git a/internal/ceres/gradient_checker_test.cc b/internal/ceres/gradient_checker_test.cc
index 31dc97b..2a20470 100644
--- a/internal/ceres/gradient_checker_test.cc
+++ b/internal/ceres/gradient_checker_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,21 +33,19 @@
#include "ceres/gradient_checker.h"
#include <cmath>
-#include <cstdlib>
+#include <random>
+#include <utility>
#include <vector>
#include "ceres/cost_function.h"
#include "ceres/problem.h"
-#include "ceres/random.h"
#include "ceres/solver.h"
#include "ceres/test_util.h"
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::vector;
const double kTolerance = 1e-12;
// We pick a (non-quadratic) function whose derivative are easy:
@@ -59,13 +57,16 @@
// version, they are both block vectors, of course.
class GoodTestTerm : public CostFunction {
public:
- GoodTestTerm(int arity, int const* dim) : arity_(arity), return_value_(true) {
+ template <class UniformRandomFunctor>
+ GoodTestTerm(int arity, int const* dim, UniformRandomFunctor&& randu)
+ : arity_(arity), return_value_(true) {
+ std::uniform_real_distribution<double> distribution(-1.0, 1.0);
// Make 'arity' random vectors.
a_.resize(arity_);
for (int j = 0; j < arity_; ++j) {
a_[j].resize(dim[j]);
for (int u = 0; u < dim[j]; ++u) {
- a_[j][u] = 2.0 * RandDouble() - 1.0;
+ a_[j][u] = randu();
}
}
@@ -77,7 +78,7 @@
bool Evaluate(double const* const* parameters,
double* residuals,
- double** jacobians) const {
+ double** jacobians) const override {
if (!return_value_) {
return false;
}
@@ -113,18 +114,20 @@
private:
int arity_;
bool return_value_;
- vector<vector<double>> a_; // our vectors.
+ std::vector<std::vector<double>> a_; // our vectors.
};
class BadTestTerm : public CostFunction {
public:
- BadTestTerm(int arity, int const* dim) : arity_(arity) {
+ template <class UniformRandomFunctor>
+ BadTestTerm(int arity, int const* dim, UniformRandomFunctor&& randu)
+ : arity_(arity) {
// Make 'arity' random vectors.
a_.resize(arity_);
for (int j = 0; j < arity_; ++j) {
a_[j].resize(dim[j]);
for (int u = 0; u < dim[j]; ++u) {
- a_[j][u] = 2.0 * RandDouble() - 1.0;
+ a_[j][u] = randu();
}
}
@@ -136,7 +139,7 @@
bool Evaluate(double const* const* parameters,
double* residuals,
- double** jacobians) const {
+ double** jacobians) const override {
// Compute a . x.
double ax = 0;
for (int j = 0; j < arity_; ++j) {
@@ -166,7 +169,7 @@
private:
int arity_;
- vector<vector<double>> a_; // our vectors.
+ std::vector<std::vector<double>> a_; // our vectors.
};
static void CheckDimensions(const GradientChecker::ProbeResults& results,
@@ -194,8 +197,6 @@
}
TEST(GradientChecker, SmokeTest) {
- srand(5);
-
// Test with 3 blocks of size 2, 3 and 4.
int const num_parameters = 3;
std::vector<int> parameter_sizes(3);
@@ -205,10 +206,13 @@
// Make a random set of blocks.
FixedArray<double*> parameters(num_parameters);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> distribution(-1.0, 1.0);
+ auto randu = [&prng, &distribution] { return distribution(prng); };
for (int j = 0; j < num_parameters; ++j) {
parameters[j] = new double[parameter_sizes[j]];
for (int u = 0; u < parameter_sizes[j]; ++u) {
- parameters[j][u] = 2.0 * RandDouble() - 1.0;
+ parameters[j][u] = randu();
}
}
@@ -216,9 +220,12 @@
GradientChecker::ProbeResults results;
// Test that Probe returns true for correct Jacobians.
- GoodTestTerm good_term(num_parameters, parameter_sizes.data());
- GradientChecker good_gradient_checker(&good_term, NULL, numeric_diff_options);
- EXPECT_TRUE(good_gradient_checker.Probe(parameters.data(), kTolerance, NULL));
+ GoodTestTerm good_term(num_parameters, parameter_sizes.data(), randu);
+ std::vector<const Manifold*>* manifolds = nullptr;
+ GradientChecker good_gradient_checker(
+ &good_term, manifolds, numeric_diff_options);
+ EXPECT_TRUE(
+ good_gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
EXPECT_TRUE(
good_gradient_checker.Probe(parameters.data(), kTolerance, &results))
<< results.error_log;
@@ -233,7 +240,7 @@
// Test that if the cost function return false, Probe should return false.
good_term.SetReturnValue(false);
EXPECT_FALSE(
- good_gradient_checker.Probe(parameters.data(), kTolerance, NULL));
+ good_gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
EXPECT_FALSE(
good_gradient_checker.Probe(parameters.data(), kTolerance, &results))
<< results.error_log;
@@ -250,9 +257,11 @@
EXPECT_FALSE(results.error_log.empty());
// Test that Probe returns false for incorrect Jacobians.
- BadTestTerm bad_term(num_parameters, parameter_sizes.data());
- GradientChecker bad_gradient_checker(&bad_term, NULL, numeric_diff_options);
- EXPECT_FALSE(bad_gradient_checker.Probe(parameters.data(), kTolerance, NULL));
+ BadTestTerm bad_term(num_parameters, parameter_sizes.data(), randu);
+ GradientChecker bad_gradient_checker(
+ &bad_term, manifolds, numeric_diff_options);
+ EXPECT_FALSE(
+ bad_gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
EXPECT_FALSE(
bad_gradient_checker.Probe(parameters.data(), kTolerance, &results));
@@ -284,8 +293,8 @@
*/
class LinearCostFunction : public CostFunction {
public:
- explicit LinearCostFunction(const Vector& residuals_offset)
- : residuals_offset_(residuals_offset) {
+ explicit LinearCostFunction(Vector residuals_offset)
+ : residuals_offset_(std::move(residuals_offset)) {
set_num_residuals(residuals_offset_.size());
}
@@ -305,7 +314,7 @@
residuals += residual_J_param * param;
// Return Jacobian.
- if (residual_J_params != NULL && residual_J_params[i] != NULL) {
+ if (residual_J_params != nullptr && residual_J_params[i] != nullptr) {
Eigen::Map<Matrix> residual_J_param_out(residual_J_params[i],
residual_J_param.rows(),
residual_J_param.cols());
@@ -326,7 +335,7 @@
}
/// Add offset to the given Jacobian before returning it from Evaluate(),
- /// thus introducing an error in the comutation.
+ /// thus introducing an error in the computation.
void SetJacobianOffset(size_t index, Matrix offset) {
CHECK_LT(index, residual_J_params_.size());
CHECK_EQ(residual_J_params_[index].rows(), offset.rows());
@@ -340,32 +349,6 @@
Vector residuals_offset_;
};
-/**
- * Helper local parameterization that multiplies the delta vector by the given
- * jacobian and adds it to the parameter.
- */
-class MatrixParameterization : public LocalParameterization {
- public:
- bool Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const final {
- VectorRef(x_plus_delta, GlobalSize()) =
- ConstVectorRef(x, GlobalSize()) +
- (global_J_local * ConstVectorRef(delta, LocalSize()));
- return true;
- }
-
- bool ComputeJacobian(const double* /*x*/, double* jacobian) const final {
- MatrixRef(jacobian, GlobalSize(), LocalSize()) = global_J_local;
- return true;
- }
-
- int GlobalSize() const final { return global_J_local.rows(); }
- int LocalSize() const final { return global_J_local.cols(); }
-
- Matrix global_J_local;
-};
-
// Helper function to compare two Eigen matrices (used in the test below).
static void ExpectMatricesClose(Matrix p, Matrix q, double tolerance) {
ASSERT_EQ(p.rows(), q.rows());
@@ -373,7 +356,41 @@
ExpectArraysClose(p.size(), p.data(), q.data(), tolerance);
}
-TEST(GradientChecker, TestCorrectnessWithLocalParameterizations) {
+// Helper manifold that multiplies the delta vector by the given
+// jacobian and adds it to the parameter.
+class MatrixManifold : public Manifold {
+ public:
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const final {
+ VectorRef(x_plus_delta, AmbientSize()) =
+ ConstVectorRef(x, AmbientSize()) +
+ (global_to_local_ * ConstVectorRef(delta, TangentSize()));
+ return true;
+ }
+
+ bool PlusJacobian(const double* /*x*/, double* jacobian) const final {
+ MatrixRef(jacobian, AmbientSize(), TangentSize()) = global_to_local_;
+ return true;
+ }
+
+ bool Minus(const double* y, const double* x, double* y_minus_x) const final {
+ LOG(FATAL) << "Should not be called";
+ return true;
+ }
+
+ bool MinusJacobian(const double* x, double* jacobian) const final {
+ LOG(FATAL) << "Should not be called";
+ return true;
+ }
+
+ int AmbientSize() const final { return global_to_local_.rows(); }
+ int TangentSize() const final { return global_to_local_.cols(); }
+
+ Matrix global_to_local_;
+};
+
+TEST(GradientChecker, TestCorrectnessWithManifolds) {
// Create cost function.
Eigen::Vector3d residual_offset(100.0, 200.0, 300.0);
LinearCostFunction cost_function(residual_offset);
@@ -395,9 +412,9 @@
std::vector<int> parameter_sizes(2);
parameter_sizes[0] = 3;
parameter_sizes[1] = 2;
- std::vector<int> local_parameter_sizes(2);
- local_parameter_sizes[0] = 2;
- local_parameter_sizes[1] = 2;
+ std::vector<int> tangent_sizes(2);
+ tangent_sizes[0] = 2;
+ tangent_sizes[1] = 2;
// Test cost function for correctness.
Eigen::Matrix<double, 3, 3, Eigen::RowMajor> j1_out;
@@ -417,57 +434,52 @@
ExpectMatricesClose(j2_out, j1, std::numeric_limits<double>::epsilon());
ExpectMatricesClose(residual, residual_expected, kTolerance);
- // Create local parameterization.
- Eigen::Matrix<double, 3, 2, Eigen::RowMajor> global_J_local;
- global_J_local.row(0) << 1.5, 2.5;
- global_J_local.row(1) << 3.5, 4.5;
- global_J_local.row(2) << 5.5, 6.5;
+ // Create manifold.
+ Eigen::Matrix<double, 3, 2, Eigen::RowMajor> global_to_local;
+ global_to_local.row(0) << 1.5, 2.5;
+ global_to_local.row(1) << 3.5, 4.5;
+ global_to_local.row(2) << 5.5, 6.5;
- MatrixParameterization parameterization;
- parameterization.global_J_local = global_J_local;
+ MatrixManifold manifold;
+ manifold.global_to_local_ = global_to_local;
- // Test local parameterization for correctness.
+ // Test manifold for correctness.
Eigen::Vector3d x(7.0, 8.0, 9.0);
Eigen::Vector2d delta(10.0, 11.0);
- Eigen::Matrix<double, 3, 2, Eigen::RowMajor> global_J_local_out;
- parameterization.ComputeJacobian(x.data(), global_J_local_out.data());
- ExpectMatricesClose(global_J_local_out,
- global_J_local,
+ Eigen::Matrix<double, 3, 2, Eigen::RowMajor> global_to_local_out;
+ manifold.PlusJacobian(x.data(), global_to_local_out.data());
+ ExpectMatricesClose(global_to_local_out,
+ global_to_local,
std::numeric_limits<double>::epsilon());
Eigen::Vector3d x_plus_delta;
- parameterization.Plus(x.data(), delta.data(), x_plus_delta.data());
- Eigen::Vector3d x_plus_delta_expected = x + (global_J_local * delta);
+ manifold.Plus(x.data(), delta.data(), x_plus_delta.data());
+ Eigen::Vector3d x_plus_delta_expected = x + (global_to_local * delta);
ExpectMatricesClose(x_plus_delta, x_plus_delta_expected, kTolerance);
// Now test GradientChecker.
- std::vector<const LocalParameterization*> parameterizations(2);
- parameterizations[0] = &parameterization;
- parameterizations[1] = NULL;
+ std::vector<const Manifold*> manifolds(2);
+ manifolds[0] = &manifold;
+ manifolds[1] = nullptr;
NumericDiffOptions numeric_diff_options;
GradientChecker::ProbeResults results;
GradientChecker gradient_checker(
- &cost_function, &parameterizations, numeric_diff_options);
+ &cost_function, &manifolds, numeric_diff_options);
Problem::Options problem_options;
problem_options.cost_function_ownership = DO_NOT_TAKE_OWNERSHIP;
- problem_options.local_parameterization_ownership = DO_NOT_TAKE_OWNERSHIP;
+ problem_options.manifold_ownership = DO_NOT_TAKE_OWNERSHIP;
Problem problem(problem_options);
Eigen::Vector3d param0_solver;
Eigen::Vector2d param1_solver;
- problem.AddParameterBlock(param0_solver.data(), 3, &parameterization);
+ problem.AddParameterBlock(param0_solver.data(), 3, &manifold);
problem.AddParameterBlock(param1_solver.data(), 2);
problem.AddResidualBlock(
- &cost_function, NULL, param0_solver.data(), param1_solver.data());
- Solver::Options solver_options;
- solver_options.check_gradients = true;
- solver_options.initial_trust_region_radius = 1e10;
- Solver solver;
- Solver::Summary summary;
+ &cost_function, nullptr, param0_solver.data(), param1_solver.data());
// First test case: everything is correct.
- EXPECT_TRUE(gradient_checker.Probe(parameters.data(), kTolerance, NULL));
+ EXPECT_TRUE(gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
EXPECT_TRUE(gradient_checker.Probe(parameters.data(), kTolerance, &results))
<< results.error_log;
@@ -475,14 +487,14 @@
ASSERT_EQ(results.return_value, true);
ExpectMatricesClose(
results.residuals, residual, std::numeric_limits<double>::epsilon());
- CheckDimensions(results, parameter_sizes, local_parameter_sizes, 3);
+ CheckDimensions(results, parameter_sizes, tangent_sizes, 3);
ExpectMatricesClose(
- results.local_jacobians.at(0), j0 * global_J_local, kTolerance);
+ results.local_jacobians.at(0), j0 * global_to_local, kTolerance);
ExpectMatricesClose(results.local_jacobians.at(1),
j1,
std::numeric_limits<double>::epsilon());
ExpectMatricesClose(
- results.local_numeric_jacobians.at(0), j0 * global_J_local, kTolerance);
+ results.local_numeric_jacobians.at(0), j0 * global_to_local, kTolerance);
ExpectMatricesClose(results.local_numeric_jacobians.at(1), j1, kTolerance);
ExpectMatricesClose(
results.jacobians.at(0), j0, std::numeric_limits<double>::epsilon());
@@ -494,6 +506,13 @@
EXPECT_TRUE(results.error_log.empty());
// Test interaction with the 'check_gradients' option in Solver.
+ Solver::Options solver_options;
+ solver_options.linear_solver_type = DENSE_QR;
+ solver_options.check_gradients = true;
+ solver_options.initial_trust_region_radius = 1e10;
+ Solver solver;
+ Solver::Summary summary;
+
param0_solver = param0;
param1_solver = param1;
solver.Solve(solver_options, &problem, &summary);
@@ -506,7 +525,7 @@
j0_offset.setZero();
j0_offset.col(2).setConstant(0.001);
cost_function.SetJacobianOffset(0, j0_offset);
- EXPECT_FALSE(gradient_checker.Probe(parameters.data(), kTolerance, NULL));
+ EXPECT_FALSE(gradient_checker.Probe(parameters.data(), kTolerance, nullptr));
EXPECT_FALSE(gradient_checker.Probe(parameters.data(), kTolerance, &results))
<< results.error_log;
@@ -514,17 +533,17 @@
ASSERT_EQ(results.return_value, true);
ExpectMatricesClose(
results.residuals, residual, std::numeric_limits<double>::epsilon());
- CheckDimensions(results, parameter_sizes, local_parameter_sizes, 3);
+ CheckDimensions(results, parameter_sizes, tangent_sizes, 3);
ASSERT_EQ(results.local_jacobians.size(), 2);
ASSERT_EQ(results.local_numeric_jacobians.size(), 2);
ExpectMatricesClose(results.local_jacobians.at(0),
- (j0 + j0_offset) * global_J_local,
+ (j0 + j0_offset) * global_to_local,
kTolerance);
ExpectMatricesClose(results.local_jacobians.at(1),
j1,
std::numeric_limits<double>::epsilon());
ExpectMatricesClose(
- results.local_numeric_jacobians.at(0), j0 * global_J_local, kTolerance);
+ results.local_numeric_jacobians.at(0), j0 * global_to_local, kTolerance);
ExpectMatricesClose(results.local_numeric_jacobians.at(1), j1, kTolerance);
ExpectMatricesClose(results.jacobians.at(0), j0 + j0_offset, kTolerance);
ExpectMatricesClose(
@@ -540,10 +559,10 @@
solver.Solve(solver_options, &problem, &summary);
EXPECT_EQ(FAILURE, summary.termination_type);
- // Now, zero out the local parameterization Jacobian of the 1st parameter
- // with respect to the 3rd component. This makes the combination of
- // cost function and local parameterization return correct values again.
- parameterization.global_J_local.row(2).setZero();
+ // Now, zero out the manifold Jacobian with respect to the 3rd component of
+ // the 1st parameter. This makes the combination of cost function and manifold
+ // return correct values again.
+ manifold.global_to_local_.row(2).setZero();
// Verify that the gradient checker does not treat this as an error.
EXPECT_TRUE(gradient_checker.Probe(parameters.data(), kTolerance, &results))
@@ -553,17 +572,17 @@
ASSERT_EQ(results.return_value, true);
ExpectMatricesClose(
results.residuals, residual, std::numeric_limits<double>::epsilon());
- CheckDimensions(results, parameter_sizes, local_parameter_sizes, 3);
+ CheckDimensions(results, parameter_sizes, tangent_sizes, 3);
ASSERT_EQ(results.local_jacobians.size(), 2);
ASSERT_EQ(results.local_numeric_jacobians.size(), 2);
ExpectMatricesClose(results.local_jacobians.at(0),
- (j0 + j0_offset) * parameterization.global_J_local,
+ (j0 + j0_offset) * manifold.global_to_local_,
kTolerance);
ExpectMatricesClose(results.local_jacobians.at(1),
j1,
std::numeric_limits<double>::epsilon());
ExpectMatricesClose(results.local_numeric_jacobians.at(0),
- j0 * parameterization.global_J_local,
+ j0 * manifold.global_to_local_,
kTolerance);
ExpectMatricesClose(results.local_numeric_jacobians.at(1), j1, kTolerance);
ExpectMatricesClose(results.jacobians.at(0), j0 + j0_offset, kTolerance);
@@ -581,6 +600,4 @@
EXPECT_EQ(CONVERGENCE, summary.termination_type);
EXPECT_LE(summary.final_cost, 1e-12);
}
-
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
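Outside the test fixture, the Manifold-based GradientChecker is used the same way. A hedged usage sketch follows; my_cost_function stands for any user-defined CostFunction with a single quaternion parameter block, and the precision value is arbitrary:

#include <vector>

#include "ceres/cost_function.h"
#include "ceres/gradient_checker.h"
#include "ceres/manifold.h"
#include "ceres/numeric_diff_options.h"
#include "glog/logging.h"

bool ProbeQuaternionCostFunction(const ceres::CostFunction& my_cost_function,
                                 double const* const* parameters) {
  // One entry per parameter block; a nullptr entry is treated as Euclidean.
  ceres::QuaternionManifold quaternion_manifold;
  std::vector<const ceres::Manifold*> manifolds = {&quaternion_manifold};

  ceres::NumericDiffOptions numeric_diff_options;
  ceres::GradientChecker checker(
      &my_cost_function, &manifolds, numeric_diff_options);

  ceres::GradientChecker::ProbeResults results;
  constexpr double kRelativePrecision = 1e-9;
  if (!checker.Probe(parameters, kRelativePrecision, &results)) {
    LOG(ERROR) << results.error_log;
    return false;
  }
  return true;
}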
diff --git a/internal/ceres/gradient_checking_cost_function.cc b/internal/ceres/gradient_checking_cost_function.cc
index 2eb6d62..8ca449b 100644
--- a/internal/ceres/gradient_checking_cost_function.cc
+++ b/internal/ceres/gradient_checking_cost_function.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,8 +34,10 @@
#include <algorithm>
#include <cmath>
#include <cstdint>
+#include <memory>
#include <numeric>
#include <string>
+#include <utility>
#include <vector>
#include "ceres/dynamic_numeric_diff_cost_function.h"
@@ -50,45 +52,36 @@
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::abs;
-using std::max;
-using std::string;
-using std::vector;
+namespace ceres::internal {
namespace {
-class GradientCheckingCostFunction : public CostFunction {
+class GradientCheckingCostFunction final : public CostFunction {
public:
- GradientCheckingCostFunction(
- const CostFunction* function,
- const std::vector<const LocalParameterization*>* local_parameterizations,
- const NumericDiffOptions& options,
- double relative_precision,
- const string& extra_info,
- GradientCheckingIterationCallback* callback)
+ GradientCheckingCostFunction(const CostFunction* function,
+ const std::vector<const Manifold*>* manifolds,
+ const NumericDiffOptions& options,
+ double relative_precision,
+ std::string extra_info,
+ GradientCheckingIterationCallback* callback)
: function_(function),
- gradient_checker_(function, local_parameterizations, options),
+ gradient_checker_(function, manifolds, options),
relative_precision_(relative_precision),
- extra_info_(extra_info),
+ extra_info_(std::move(extra_info)),
callback_(callback) {
CHECK(callback_ != nullptr);
- const vector<int32_t>& parameter_block_sizes =
+ const std::vector<int32_t>& parameter_block_sizes =
function->parameter_block_sizes();
*mutable_parameter_block_sizes() = parameter_block_sizes;
set_num_residuals(function->num_residuals());
}
- virtual ~GradientCheckingCostFunction() {}
-
bool Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const final {
if (!jacobians) {
// Nothing to check in this case; just forward.
- return function_->Evaluate(parameters, residuals, NULL);
+ return function_->Evaluate(parameters, residuals, nullptr);
}
GradientChecker::ProbeResults results;
@@ -106,9 +99,10 @@
MatrixRef(residuals, num_residuals, 1) = results.residuals;
// Copy the original jacobian blocks into the jacobians array.
- const vector<int32_t>& block_sizes = function_->parameter_block_sizes();
+ const std::vector<int32_t>& block_sizes =
+ function_->parameter_block_sizes();
for (int k = 0; k < block_sizes.size(); k++) {
- if (jacobians[k] != NULL) {
+ if (jacobians[k] != nullptr) {
MatrixRef(jacobians[k],
results.jacobians[k].rows(),
results.jacobians[k].cols()) = results.jacobians[k];
@@ -128,7 +122,7 @@
const CostFunction* function_;
GradientChecker gradient_checker_;
double relative_precision_;
- string extra_info_;
+ std::string extra_info_;
GradientCheckingIterationCallback* callback_;
};
@@ -138,13 +132,14 @@
: gradient_error_detected_(false) {}
CallbackReturnType GradientCheckingIterationCallback::operator()(
- const IterationSummary& summary) {
+ const IterationSummary& /*summary*/) {
if (gradient_error_detected_) {
LOG(ERROR) << "Gradient error detected. Terminating solver.";
return SOLVER_ABORT;
}
return SOLVER_CONTINUE;
}
+
void GradientCheckingIterationCallback::SetGradientErrorDetected(
std::string& error_log) {
std::lock_guard<std::mutex> l(mutex_);
@@ -152,9 +147,9 @@
error_log_ += "\n" + error_log;
}
-CostFunction* CreateGradientCheckingCostFunction(
+std::unique_ptr<CostFunction> CreateGradientCheckingCostFunction(
const CostFunction* cost_function,
- const std::vector<const LocalParameterization*>* local_parameterizations,
+ const std::vector<const Manifold*>* manifolds,
double relative_step_size,
double relative_precision,
const std::string& extra_info,
@@ -162,51 +157,49 @@
NumericDiffOptions numeric_diff_options;
numeric_diff_options.relative_step_size = relative_step_size;
- return new GradientCheckingCostFunction(cost_function,
- local_parameterizations,
- numeric_diff_options,
- relative_precision,
- extra_info,
- callback);
+ return std::make_unique<GradientCheckingCostFunction>(cost_function,
+ manifolds,
+ numeric_diff_options,
+ relative_precision,
+ extra_info,
+ callback);
}
-ProblemImpl* CreateGradientCheckingProblemImpl(
+std::unique_ptr<ProblemImpl> CreateGradientCheckingProblemImpl(
ProblemImpl* problem_impl,
double relative_step_size,
double relative_precision,
GradientCheckingIterationCallback* callback) {
CHECK(callback != nullptr);
- // We create new CostFunctions by wrapping the original CostFunction
- // in a gradient checking CostFunction. So its okay for the
- // ProblemImpl to take ownership of it and destroy it. The
- // LossFunctions and LocalParameterizations are reused and since
- // they are owned by problem_impl, gradient_checking_problem_impl
+ // We create new CostFunctions by wrapping the original CostFunction in a
+ // gradient checking CostFunction. So it's okay for the ProblemImpl to take
+ // ownership of it and destroy it. The LossFunctions and Manifolds are reused
+ // and since they are owned by problem_impl, gradient_checking_problem_impl
// should not take ownership of it.
Problem::Options gradient_checking_problem_options;
gradient_checking_problem_options.cost_function_ownership = TAKE_OWNERSHIP;
gradient_checking_problem_options.loss_function_ownership =
DO_NOT_TAKE_OWNERSHIP;
- gradient_checking_problem_options.local_parameterization_ownership =
- DO_NOT_TAKE_OWNERSHIP;
+ gradient_checking_problem_options.manifold_ownership = DO_NOT_TAKE_OWNERSHIP;
gradient_checking_problem_options.context = problem_impl->context();
NumericDiffOptions numeric_diff_options;
numeric_diff_options.relative_step_size = relative_step_size;
- ProblemImpl* gradient_checking_problem_impl =
- new ProblemImpl(gradient_checking_problem_options);
+ auto gradient_checking_problem_impl =
+ std::make_unique<ProblemImpl>(gradient_checking_problem_options);
Program* program = problem_impl->mutable_program();
- // For every ParameterBlock in problem_impl, create a new parameter
- // block with the same local parameterization and constancy.
- const vector<ParameterBlock*>& parameter_blocks = program->parameter_blocks();
- for (int i = 0; i < parameter_blocks.size(); ++i) {
- ParameterBlock* parameter_block = parameter_blocks[i];
+ // For every ParameterBlock in problem_impl, create a new parameter block with
+ // the same manifold and constancy.
+ const std::vector<ParameterBlock*>& parameter_blocks =
+ program->parameter_blocks();
+ for (auto* parameter_block : parameter_blocks) {
gradient_checking_problem_impl->AddParameterBlock(
parameter_block->mutable_user_state(),
parameter_block->Size(),
- parameter_block->mutable_local_parameterization());
+ parameter_block->mutable_manifold());
if (parameter_block->IsConstant()) {
gradient_checking_problem_impl->SetParameterBlockConstant(
@@ -228,32 +221,33 @@
// For every ResidualBlock in problem_impl, create a new
// ResidualBlock by wrapping its CostFunction inside a
// GradientCheckingCostFunction.
- const vector<ResidualBlock*>& residual_blocks = program->residual_blocks();
+ const std::vector<ResidualBlock*>& residual_blocks =
+ program->residual_blocks();
for (int i = 0; i < residual_blocks.size(); ++i) {
ResidualBlock* residual_block = residual_blocks[i];
// Build a human readable string which identifies the
// ResidualBlock. This is used by the GradientCheckingCostFunction
// when logging debugging information.
- string extra_info =
+ std::string extra_info =
StringPrintf("Residual block id %d; depends on parameters [", i);
- vector<double*> parameter_blocks;
- vector<const LocalParameterization*> local_parameterizations;
+ std::vector<double*> parameter_blocks;
+ std::vector<const Manifold*> manifolds;
parameter_blocks.reserve(residual_block->NumParameterBlocks());
- local_parameterizations.reserve(residual_block->NumParameterBlocks());
+ manifolds.reserve(residual_block->NumParameterBlocks());
for (int j = 0; j < residual_block->NumParameterBlocks(); ++j) {
ParameterBlock* parameter_block = residual_block->parameter_blocks()[j];
parameter_blocks.push_back(parameter_block->mutable_user_state());
StringAppendF(&extra_info, "%p", parameter_block->mutable_user_state());
extra_info += (j < residual_block->NumParameterBlocks() - 1) ? ", " : "]";
- local_parameterizations.push_back(problem_impl->GetParameterization(
- parameter_block->mutable_user_state()));
+ manifolds.push_back(
+ problem_impl->GetManifold(parameter_block->mutable_user_state()));
}
// Wrap the original CostFunction in a GradientCheckingCostFunction.
CostFunction* gradient_checking_cost_function =
new GradientCheckingCostFunction(residual_block->cost_function(),
- &local_parameterizations,
+ &manifolds,
numeric_diff_options,
relative_precision,
extra_info,
@@ -283,5 +277,4 @@
return gradient_checking_problem_impl;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/gradient_checking_cost_function.h b/internal/ceres/gradient_checking_cost_function.h
index ea6e9b3..4ad3b6c 100644
--- a/internal/ceres/gradient_checking_cost_function.h
+++ b/internal/ceres/gradient_checking_cost_function.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,22 +32,23 @@
#ifndef CERES_INTERNAL_GRADIENT_CHECKING_COST_FUNCTION_H_
#define CERES_INTERNAL_GRADIENT_CHECKING_COST_FUNCTION_H_
+#include <memory>
#include <mutex>
#include <string>
#include "ceres/cost_function.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/iteration_callback.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class ProblemImpl;
// Callback that collects information about gradient checking errors, and
// will abort the solve as soon as an error occurs.
-class CERES_EXPORT_INTERNAL GradientCheckingIterationCallback
+class CERES_NO_EXPORT GradientCheckingIterationCallback
: public IterationCallback {
public:
GradientCheckingIterationCallback();
@@ -73,9 +74,10 @@
// with finite differences. This API is only intended for unit tests that intend
// to check the functionality of the GradientCheckingCostFunction
// implementation directly.
-CERES_EXPORT_INTERNAL CostFunction* CreateGradientCheckingCostFunction(
+CERES_NO_EXPORT std::unique_ptr<CostFunction>
+CreateGradientCheckingCostFunction(
const CostFunction* cost_function,
- const std::vector<const LocalParameterization*>* local_parameterizations,
+ const std::vector<const Manifold*>* manifolds,
double relative_step_size,
double relative_precision,
const std::string& extra_info,
@@ -92,8 +94,6 @@
// iteration, the respective cost function will notify the
// GradientCheckingIterationCallback.
//
-// The caller owns the returned ProblemImpl object.
-//
// Note: This is quite inefficient and is intended only for debugging.
//
// relative_step_size and relative_precision are parameters to control
@@ -102,13 +102,14 @@
// jacobians obtained by numerically differentiating them. See the
// documentation of 'numeric_derivative_relative_step_size' in solver.h for a
// better explanation.
-CERES_EXPORT_INTERNAL ProblemImpl* CreateGradientCheckingProblemImpl(
+CERES_NO_EXPORT std::unique_ptr<ProblemImpl> CreateGradientCheckingProblemImpl(
ProblemImpl* problem_impl,
double relative_step_size,
double relative_precision,
GradientCheckingIterationCallback* callback);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_GRADIENT_CHECKING_COST_FUNCTION_H_
diff --git a/internal/ceres/gradient_checking_cost_function_test.cc b/internal/ceres/gradient_checking_cost_function_test.cc
index 9ca51f8..545fcd2 100644
--- a/internal/ceres/gradient_checking_cost_function_test.cc
+++ b/internal/ceres/gradient_checking_cost_function_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,15 +33,15 @@
#include <cmath>
#include <cstdint>
#include <memory>
+#include <random>
#include <vector>
#include "ceres/cost_function.h"
-#include "ceres/local_parameterization.h"
#include "ceres/loss_function.h"
+#include "ceres/manifold.h"
#include "ceres/parameter_block.h"
#include "ceres/problem_impl.h"
#include "ceres/program.h"
-#include "ceres/random.h"
#include "ceres/residual_block.h"
#include "ceres/sized_cost_function.h"
#include "ceres/types.h"
@@ -49,10 +49,8 @@
#include "gmock/gmock.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::vector;
using testing::_;
using testing::AllOf;
using testing::AnyNumber;
@@ -70,13 +68,15 @@
public:
// The constructor of this function needs to know the number
// of blocks desired, and the size of each block.
- TestTerm(int arity, int const* dim) : arity_(arity) {
+ template <class UniformRandomFunctor>
+ TestTerm(int arity, int const* dim, UniformRandomFunctor&& randu)
+ : arity_(arity) {
// Make 'arity' random vectors.
a_.resize(arity_);
for (int j = 0; j < arity_; ++j) {
a_[j].resize(dim[j]);
for (int u = 0; u < dim[j]; ++u) {
- a_[j][u] = 2.0 * RandDouble() - 1.0;
+ a_[j][u] = randu();
}
}
@@ -88,7 +88,7 @@
bool Evaluate(double const* const* parameters,
double* residuals,
- double** jacobians) const {
+ double** jacobians) const override {
// Compute a . x.
double ax = 0;
for (int j = 0; j < arity_; ++j) {
@@ -127,29 +127,30 @@
private:
int arity_;
- vector<vector<double>> a_;
+ std::vector<std::vector<double>> a_;
};
TEST(GradientCheckingCostFunction, ResidualsAndJacobiansArePreservedTest) {
- srand(5);
-
// Test with 3 blocks of size 2, 3 and 4.
int const arity = 3;
int const dim[arity] = {2, 3, 4};
// Make a random set of blocks.
- vector<double*> parameters(arity);
+ std::vector<double*> parameters(arity);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> distribution(-1.0, 1.0);
+ auto randu = [&prng, &distribution] { return distribution(prng); };
for (int j = 0; j < arity; ++j) {
parameters[j] = new double[dim[j]];
for (int u = 0; u < dim[j]; ++u) {
- parameters[j][u] = 2.0 * RandDouble() - 1.0;
+ parameters[j][u] = randu();
}
}
double original_residual;
double residual;
- vector<double*> original_jacobians(arity);
- vector<double*> jacobians(arity);
+ std::vector<double*> original_jacobians(arity);
+ std::vector<double*> jacobians(arity);
for (int j = 0; j < arity; ++j) {
// Since residual is one dimensional the jacobians have the same
@@ -161,15 +162,15 @@
const double kRelativeStepSize = 1e-6;
const double kRelativePrecision = 1e-4;
- TestTerm<-1, -1> term(arity, dim);
+ TestTerm<-1, -1> term(arity, dim, randu);
GradientCheckingIterationCallback callback;
- std::unique_ptr<CostFunction> gradient_checking_cost_function(
+ auto gradient_checking_cost_function =
CreateGradientCheckingCostFunction(&term,
- NULL,
+ nullptr,
kRelativeStepSize,
kRelativePrecision,
"Ignored.",
- &callback));
+ &callback);
term.Evaluate(&parameters[0], &original_residual, &original_jacobians[0]);
gradient_checking_cost_function->Evaluate(
@@ -188,23 +189,24 @@
}
TEST(GradientCheckingCostFunction, SmokeTest) {
- srand(5);
-
// Test with 3 blocks of size 2, 3 and 4.
int const arity = 3;
int const dim[arity] = {2, 3, 4};
// Make a random set of blocks.
- vector<double*> parameters(arity);
+ std::vector<double*> parameters(arity);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> distribution(-1.0, 1.0);
+ auto randu = [&prng, &distribution] { return distribution(prng); };
for (int j = 0; j < arity; ++j) {
parameters[j] = new double[dim[j]];
for (int u = 0; u < dim[j]; ++u) {
- parameters[j][u] = 2.0 * RandDouble() - 1.0;
+ parameters[j][u] = randu();
}
}
double residual;
- vector<double*> jacobians(arity);
+ std::vector<double*> jacobians(arity);
for (int j = 0; j < arity; ++j) {
// Since residual is one dimensional the jacobians have the same size as the
// parameter blocks.
@@ -217,15 +219,15 @@
// Should have one term that's bad, causing everything to get dumped.
LOG(INFO) << "Bad gradient";
{
- TestTerm<1, 2> term(arity, dim);
+ TestTerm<1, 2> term(arity, dim, randu);
GradientCheckingIterationCallback callback;
- std::unique_ptr<CostFunction> gradient_checking_cost_function(
+ auto gradient_checking_cost_function =
CreateGradientCheckingCostFunction(&term,
- NULL,
+ nullptr,
kRelativeStepSize,
kRelativePrecision,
"Fuzzy banana",
- &callback));
+ &callback);
EXPECT_TRUE(gradient_checking_cost_function->Evaluate(
&parameters[0], &residual, &jacobians[0]));
EXPECT_TRUE(callback.gradient_error_detected());
@@ -237,15 +239,15 @@
// The gradient is correct, so no errors are reported.
LOG(INFO) << "Good gradient";
{
- TestTerm<-1, -1> term(arity, dim);
+ TestTerm<-1, -1> term(arity, dim, randu);
GradientCheckingIterationCallback callback;
- std::unique_ptr<CostFunction> gradient_checking_cost_function(
+ auto gradient_checking_cost_function =
CreateGradientCheckingCostFunction(&term,
- NULL,
+ nullptr,
kRelativeStepSize,
kRelativePrecision,
"Fuzzy banana",
- &callback));
+ &callback);
EXPECT_TRUE(gradient_checking_cost_function->Evaluate(
&parameters[0], &residual, &jacobians[0]));
EXPECT_FALSE(callback.gradient_error_detected());
@@ -267,7 +269,6 @@
set_num_residuals(num_residuals);
mutable_parameter_block_sizes()->push_back(parameter_block_size);
}
- virtual ~UnaryCostFunction() {}
bool Evaluate(double const* const* parameters,
double* residuals,
@@ -324,7 +325,7 @@
};
// Verify that the two ParameterBlocks are formed from the same user
-// array and have the same LocalParameterization object.
+// array and have the same Manifold object.
static void ParameterBlocksAreEquivalent(const ParameterBlock* left,
const ParameterBlock* right) {
CHECK(left != nullptr);
@@ -332,8 +333,8 @@
EXPECT_EQ(left->user_state(), right->user_state());
EXPECT_EQ(left->Size(), right->Size());
EXPECT_EQ(left->Size(), right->Size());
- EXPECT_EQ(left->LocalSize(), right->LocalSize());
- EXPECT_EQ(left->local_parameterization(), right->local_parameterization());
+ EXPECT_EQ(left->TangentSize(), right->TangentSize());
+ EXPECT_EQ(left->manifold(), right->manifold());
EXPECT_EQ(left->IsConstant(), right->IsConstant());
}
@@ -349,23 +350,23 @@
problem_impl.AddParameterBlock(y, 4);
problem_impl.SetParameterBlockConstant(y);
problem_impl.AddParameterBlock(z, 5);
- problem_impl.AddParameterBlock(w, 4, new QuaternionParameterization);
+ problem_impl.AddParameterBlock(w, 4, new QuaternionManifold);
// clang-format off
problem_impl.AddResidualBlock(new UnaryCostFunction(2, 3),
- NULL, x);
+ nullptr, x);
problem_impl.AddResidualBlock(new BinaryCostFunction(6, 5, 4),
- NULL, z, y);
+ nullptr, z, y);
problem_impl.AddResidualBlock(new BinaryCostFunction(3, 3, 5),
new TrivialLoss, x, z);
problem_impl.AddResidualBlock(new BinaryCostFunction(7, 5, 3),
- NULL, z, x);
+ nullptr, z, x);
problem_impl.AddResidualBlock(new TernaryCostFunction(1, 5, 3, 4),
- NULL, z, x, y);
+ nullptr, z, x, y);
// clang-format on
GradientCheckingIterationCallback callback;
- std::unique_ptr<ProblemImpl> gradient_checking_problem_impl(
- CreateGradientCheckingProblemImpl(&problem_impl, 1.0, 1.0, &callback));
+ auto gradient_checking_problem_impl =
+ CreateGradientCheckingProblemImpl(&problem_impl, 1.0, 1.0, &callback);
// The dimensions of the two problems match.
EXPECT_EQ(problem_impl.NumParameterBlocks(),
@@ -420,13 +421,13 @@
double x[] = {1.0, 2.0, 3.0};
ProblemImpl problem_impl;
problem_impl.AddParameterBlock(x, 3);
- problem_impl.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
+ problem_impl.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
problem_impl.SetParameterLowerBound(x, 0, 0.9);
problem_impl.SetParameterUpperBound(x, 1, 2.5);
GradientCheckingIterationCallback callback;
- std::unique_ptr<ProblemImpl> gradient_checking_problem_impl(
- CreateGradientCheckingProblemImpl(&problem_impl, 1.0, 1.0, &callback));
+ auto gradient_checking_problem_impl =
+ CreateGradientCheckingProblemImpl(&problem_impl, 1.0, 1.0, &callback);
// The dimensions of the two problems match.
EXPECT_EQ(problem_impl.NumParameterBlocks(),
@@ -447,5 +448,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
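Both test files above replace srand()/RandDouble() with a locally seeded <random> engine wrapped in a lambda. The pattern in isolation, as plain C++ with no Ceres dependency:

#include <iostream>
#include <random>

int main() {
  std::mt19937 prng;  // Default seed; reproducible, like the old srand(5).
  std::uniform_real_distribution<double> distribution(-1.0, 1.0);
  auto randu = [&prng, &distribution] { return distribution(prng); };

  for (int i = 0; i < 3; ++i) {
    std::cout << randu() << "\n";  // Uniform draws in [-1.0, 1.0).
  }
  return 0;
}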
diff --git a/internal/ceres/gradient_problem.cc b/internal/ceres/gradient_problem.cc
index ba33fbc..ee228b8 100644
--- a/internal/ceres/gradient_problem.cc
+++ b/internal/ceres/gradient_problem.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,49 +30,57 @@
#include "ceres/gradient_problem.h"
-#include "ceres/local_parameterization.h"
+#include <memory>
+
#include "glog/logging.h"
namespace ceres {
GradientProblem::GradientProblem(FirstOrderFunction* function)
: function_(function),
- parameterization_(
- new IdentityParameterization(function_->NumParameters())),
- scratch_(new double[function_->NumParameters()]) {}
+ manifold_(std::make_unique<EuclideanManifold<DYNAMIC>>(
+ function_->NumParameters())),
+ scratch_(new double[function_->NumParameters()]) {
+ CHECK(function != nullptr);
+}
GradientProblem::GradientProblem(FirstOrderFunction* function,
- LocalParameterization* parameterization)
- : function_(function),
- parameterization_(parameterization),
- scratch_(new double[function_->NumParameters()]) {
- CHECK_EQ(function_->NumParameters(), parameterization_->GlobalSize());
+ Manifold* manifold)
+ : function_(function), scratch_(new double[function_->NumParameters()]) {
+ CHECK(function != nullptr);
+ if (manifold != nullptr) {
+ manifold_.reset(manifold);
+ } else {
+ manifold_ = std::make_unique<EuclideanManifold<DYNAMIC>>(
+ function_->NumParameters());
+ }
+ CHECK_EQ(function_->NumParameters(), manifold_->AmbientSize());
}
int GradientProblem::NumParameters() const {
return function_->NumParameters();
}
-int GradientProblem::NumLocalParameters() const {
- return parameterization_->LocalSize();
+int GradientProblem::NumTangentParameters() const {
+ return manifold_->TangentSize();
}
bool GradientProblem::Evaluate(const double* parameters,
double* cost,
double* gradient) const {
- if (gradient == NULL) {
- return function_->Evaluate(parameters, cost, NULL);
+ if (gradient == nullptr) {
+ return function_->Evaluate(parameters, cost, nullptr);
}
return (function_->Evaluate(parameters, cost, scratch_.get()) &&
- parameterization_->MultiplyByJacobian(
+ manifold_->RightMultiplyByPlusJacobian(
parameters, 1, scratch_.get(), gradient));
}
bool GradientProblem::Plus(const double* x,
const double* delta,
double* x_plus_delta) const {
- return parameterization_->Plus(x, delta, x_plus_delta);
+ return manifold_->Plus(x, delta, x_plus_delta);
}
} // namespace ceres
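With the constructor changes above, a GradientProblem constructed without a manifold falls back to a dynamically sized EuclideanManifold, so the tangent size equals the ambient size. A hedged end-to-end sketch using the same Rosenbrock function that appears in gradient_problem_solver_test.cc below (header names are the usual public Ceres headers, not introduced by this patch):

#include <iostream>

#include "ceres/gradient_problem.h"

// Same Rosenbrock cost as in gradient_problem_solver_test.cc.
class Rosenbrock final : public ceres::FirstOrderFunction {
 public:
  bool Evaluate(const double* parameters,
                double* cost,
                double* gradient) const override {
    const double x = parameters[0];
    const double y = parameters[1];
    cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
    if (gradient != nullptr) {
      gradient[0] = -2.0 * (1.0 - x) - 200.0 * (y - x * x) * 2.0 * x;
      gradient[1] = 200.0 * (y - x * x);
    }
    return true;
  }
  int NumParameters() const override { return 2; }
};

int main() {
  // The problem takes ownership of the FirstOrderFunction; with no Manifold
  // argument it builds a EuclideanManifold of matching size internally.
  ceres::GradientProblem problem(new Rosenbrock);

  const double parameters[2] = {-1.2, 1.0};
  double cost = 0.0;
  double gradient[2];
  problem.Evaluate(parameters, &cost, gradient);

  std::cout << "cost: " << cost
            << ", tangent parameters: " << problem.NumTangentParameters()
            << "\n";
  return 0;
}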
diff --git a/internal/ceres/gradient_problem_evaluator.h b/internal/ceres/gradient_problem_evaluator.h
index d224dbe..fe99767 100644
--- a/internal/ceres/gradient_problem_evaluator.h
+++ b/internal/ceres/gradient_problem_evaluator.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,30 +32,33 @@
#define CERES_INTERNAL_GRADIENT_PROBLEM_EVALUATOR_H_
#include <map>
+#include <memory>
#include <string>
#include "ceres/evaluator.h"
#include "ceres/execution_summary.h"
#include "ceres/gradient_problem.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
+#include "ceres/sparse_matrix.h"
#include "ceres/wall_time.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-class GradientProblemEvaluator : public Evaluator {
+class CERES_NO_EXPORT GradientProblemEvaluator final : public Evaluator {
public:
explicit GradientProblemEvaluator(const GradientProblem& problem)
: problem_(problem) {}
- virtual ~GradientProblemEvaluator() {}
- SparseMatrix* CreateJacobian() const final { return nullptr; }
- bool Evaluate(const EvaluateOptions& evaluate_options,
+
+ std::unique_ptr<SparseMatrix> CreateJacobian() const final { return nullptr; }
+
+ bool Evaluate(const EvaluateOptions& /*evaluate_options*/,
const double* state,
double* cost,
- double* residuals,
+ double* /*residuals*/,
double* gradient,
SparseMatrix* jacobian) final {
- CHECK(jacobian == NULL);
+ CHECK(jacobian == nullptr);
ScopedExecutionTimer total_timer("Evaluator::Total", &execution_summary_);
// The reason we use Residual and Jacobian here even when we are
// only computing the cost and gradient has to do with the fact
@@ -65,7 +68,7 @@
// to be consistent across the code base for the time accounting
// to work.
ScopedExecutionTimer call_type_timer(
- gradient == NULL ? "Evaluator::Residual" : "Evaluator::Jacobian",
+ gradient == nullptr ? "Evaluator::Residual" : "Evaluator::Jacobian",
&execution_summary_);
return problem_.Evaluate(state, cost, gradient);
}
@@ -79,7 +82,7 @@
int NumParameters() const final { return problem_.NumParameters(); }
int NumEffectiveParameters() const final {
- return problem_.NumLocalParameters();
+ return problem_.NumTangentParameters();
}
int NumResiduals() const final { return 1; }
@@ -93,7 +96,8 @@
::ceres::internal::ExecutionSummary execution_summary_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_GRADIENT_PROBLEM_EVALUATOR_H_
diff --git a/internal/ceres/gradient_problem_solver.cc b/internal/ceres/gradient_problem_solver.cc
index b72fad9..ad2ea13 100644
--- a/internal/ceres/gradient_problem_solver.cc
+++ b/internal/ceres/gradient_problem_solver.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,13 +30,15 @@
#include "ceres/gradient_problem_solver.h"
+#include <map>
#include <memory>
+#include <string>
#include "ceres/callbacks.h"
#include "ceres/gradient_problem.h"
#include "ceres/gradient_problem_evaluator.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/map_util.h"
#include "ceres/minimizer.h"
#include "ceres/solver.h"
@@ -48,7 +50,6 @@
namespace ceres {
using internal::StringAppendF;
using internal::StringPrintf;
-using std::string;
namespace {
@@ -92,7 +93,7 @@
return solver_options.IsValid(error);
}
-GradientProblemSolver::~GradientProblemSolver() {}
+GradientProblemSolver::~GradientProblemSolver() = default;
void GradientProblemSolver::Solve(const GradientProblemSolver::Options& options,
const GradientProblem& problem,
@@ -112,7 +113,7 @@
*summary = Summary();
// clang-format off
summary->num_parameters = problem.NumParameters();
- summary->num_local_parameters = problem.NumLocalParameters();
+ summary->num_tangent_parameters = problem.NumTangentParameters();
summary->line_search_direction_type = options.line_search_direction_type; // NOLINT
summary->line_search_interpolation_type = options.line_search_interpolation_type; // NOLINT
summary->line_search_type = options.line_search_type;
@@ -135,21 +136,22 @@
// now.
Minimizer::Options minimizer_options =
Minimizer::Options(GradientProblemSolverOptionsToSolverOptions(options));
- minimizer_options.evaluator.reset(new GradientProblemEvaluator(problem));
+ minimizer_options.evaluator =
+ std::make_unique<GradientProblemEvaluator>(problem);
std::unique_ptr<IterationCallback> logging_callback;
if (options.logging_type != SILENT) {
- logging_callback.reset(
- new LoggingCallback(LINE_SEARCH, options.minimizer_progress_to_stdout));
+ logging_callback = std::make_unique<LoggingCallback>(
+ LINE_SEARCH, options.minimizer_progress_to_stdout);
minimizer_options.callbacks.insert(minimizer_options.callbacks.begin(),
logging_callback.get());
}
std::unique_ptr<IterationCallback> state_updating_callback;
if (options.update_state_every_iteration) {
- state_updating_callback.reset(
- new GradientProblemSolverStateUpdatingCallback(
- problem.NumParameters(), solution.data(), parameters_ptr));
+ state_updating_callback =
+ std::make_unique<GradientProblemSolverStateUpdatingCallback>(
+ problem.NumParameters(), solution.data(), parameters_ptr);
minimizer_options.callbacks.insert(minimizer_options.callbacks.begin(),
state_updating_callback.get());
}
@@ -179,7 +181,7 @@
SetSummaryFinalCost(summary);
}
- const std::map<string, CallStatistics>& evaluator_statistics =
+ const std::map<std::string, CallStatistics>& evaluator_statistics =
minimizer_options.evaluator->Statistics();
{
const CallStatistics& call_stats = FindWithDefault(
@@ -202,7 +204,7 @@
return internal::IsSolutionUsable(*this);
}
-string GradientProblemSolver::Summary::BriefReport() const {
+std::string GradientProblemSolver::Summary::BriefReport() const {
return StringPrintf(
"Ceres GradientProblemSolver Report: "
"Iterations: %d, "
@@ -215,17 +217,20 @@
TerminationTypeToString(termination_type));
}
-string GradientProblemSolver::Summary::FullReport() const {
+std::string GradientProblemSolver::Summary::FullReport() const {
using internal::VersionString;
- string report = string("\nSolver Summary (v " + VersionString() + ")\n\n");
+ // NOTE operator+ is not usable for concatenating a string and a string_view.
+ std::string report =
+ std::string{"\nSolver Summary (v "}.append(VersionString()) + ")\n\n";
StringAppendF(&report, "Parameters % 25d\n", num_parameters);
- if (num_local_parameters != num_parameters) {
- StringAppendF(&report, "Local parameters % 25d\n", num_local_parameters);
+ if (num_tangent_parameters != num_parameters) {
+ StringAppendF(
+ &report, "Tangent parameters % 25d\n", num_tangent_parameters);
}
- string line_search_direction_string;
+ std::string line_search_direction_string;
if (line_search_direction_type == LBFGS) {
line_search_direction_string = StringPrintf("LBFGS (%d)", max_lbfgs_rank);
} else if (line_search_direction_type == NONLINEAR_CONJUGATE_GRADIENT) {
@@ -240,7 +245,7 @@
"Line search direction %19s\n",
line_search_direction_string.c_str());
- const string line_search_type_string = StringPrintf(
+ const std::string line_search_type_string = StringPrintf(
"%s %s",
LineSearchInterpolationTypeToString(line_search_interpolation_type),
LineSearchTypeToString(line_search_type));
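The hunks above rename Summary::num_local_parameters to num_tangent_parameters and the corresponding "Local parameters" report line to "Tangent parameters". A minimal sketch of a call site that reads the renamed field, assuming the post-change headers; the 1-D quadratic cost function is illustrative and not taken from this diff:

    #include <iostream>
    #include "ceres/gradient_problem.h"
    #include "ceres/gradient_problem_solver.h"

    // Illustrative cost: f(x) = 0.5 * (5 - x)^2, gradient = x - 5.
    class Quadratic1D : public ceres::FirstOrderFunction {
     public:
      bool Evaluate(const double* parameters,
                    double* cost,
                    double* gradient) const final {
        const double x = parameters[0];
        cost[0] = 0.5 * (5.0 - x) * (5.0 - x);
        if (gradient != nullptr) gradient[0] = x - 5.0;
        return true;
      }
      int NumParameters() const final { return 1; }
    };

    int main() {
      ceres::GradientProblem problem(new Quadratic1D);
      double x = 0.0;
      ceres::GradientProblemSolver::Options options;
      ceres::GradientProblemSolver::Summary summary;
      ceres::Solve(options, problem, &x, &summary);
      // Formerly summary.num_local_parameters.
      std::cout << summary.num_tangent_parameters << "\n"
                << summary.BriefReport() << "\n";
      return 0;
    }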
diff --git a/internal/ceres/gradient_problem_solver_test.cc b/internal/ceres/gradient_problem_solver_test.cc
index f01d206..f8eabf6 100644
--- a/internal/ceres/gradient_problem_solver_test.cc
+++ b/internal/ceres/gradient_problem_solver_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,14 +33,11 @@
#include "ceres/gradient_problem.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Rosenbrock function; see http://en.wikipedia.org/wiki/Rosenbrock_function .
class Rosenbrock : public ceres::FirstOrderFunction {
public:
- virtual ~Rosenbrock() {}
-
bool Evaluate(const double* parameters,
double* cost,
double* gradient) const final {
@@ -48,7 +45,7 @@
const double y = parameters[1];
cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
- if (gradient != NULL) {
+ if (gradient != nullptr) {
gradient[0] = -2.0 * (1.0 - x) - 200.0 * (y - x * x) * 2.0 * x;
gradient[1] = 200.0 * (y - x * x);
}
@@ -73,13 +70,12 @@
}
class QuadraticFunction : public ceres::FirstOrderFunction {
- virtual ~QuadraticFunction() {}
bool Evaluate(const double* parameters,
double* cost,
double* gradient) const final {
const double x = parameters[0];
*cost = 0.5 * (5.0 - x) * (5.0 - x);
- if (gradient != NULL) {
+ if (gradient != nullptr) {
gradient[0] = x - 5.0;
}
@@ -90,7 +86,6 @@
struct RememberingCallback : public IterationCallback {
explicit RememberingCallback(double* x) : calls(0), x(x) {}
- virtual ~RememberingCallback() {}
CallbackReturnType operator()(const IterationSummary& summary) final {
x_values.push_back(*x);
return SOLVER_CONTINUE;
@@ -116,8 +111,8 @@
ceres::Solve(options, problem, &x, &summary);
num_iterations = summary.iterations.size() - 1;
EXPECT_GT(num_iterations, 1);
- for (int i = 0; i < callback.x_values.size(); ++i) {
- EXPECT_EQ(50.0, callback.x_values[i]);
+ for (double value : callback.x_values) {
+ EXPECT_EQ(50.0, value);
}
// Second try: with updating
@@ -131,5 +126,4 @@
EXPECT_NE(original_x, callback.x_values[1]);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/gradient_problem_test.cc b/internal/ceres/gradient_problem_test.cc
index 8934138..52757a3 100644
--- a/internal/ceres/gradient_problem_test.cc
+++ b/internal/ceres/gradient_problem_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,15 +32,14 @@
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class QuadraticTestFunction : public ceres::FirstOrderFunction {
public:
- explicit QuadraticTestFunction(bool* flag_to_set_on_destruction = NULL)
+ explicit QuadraticTestFunction(bool* flag_to_set_on_destruction = nullptr)
: flag_to_set_on_destruction_(flag_to_set_on_destruction) {}
- virtual ~QuadraticTestFunction() {
+ ~QuadraticTestFunction() override {
if (flag_to_set_on_destruction_) {
*flag_to_set_on_destruction_ = true;
}
@@ -51,7 +50,7 @@
double* gradient) const final {
const double x = parameters[0];
cost[0] = x * x;
- if (gradient != NULL) {
+ if (gradient != nullptr) {
gradient[0] = 2.0 * x;
}
return true;
@@ -69,24 +68,16 @@
EXPECT_TRUE(is_destructed);
}
-TEST(GradientProblem, EvaluationWithoutParameterizationOrGradient) {
- ceres::GradientProblem problem(new QuadraticTestFunction());
- double x = 7.0;
- double cost = 0;
- problem.Evaluate(&x, &cost, NULL);
- EXPECT_EQ(x * x, cost);
-}
-
-TEST(GradientProblem, EvalutaionWithParameterizationAndNoGradient) {
+TEST(GradientProblem, EvaluationWithManifoldAndNoGradient) {
ceres::GradientProblem problem(new QuadraticTestFunction(),
- new IdentityParameterization(1));
+ new EuclideanManifold<1>);
double x = 7.0;
double cost = 0;
- problem.Evaluate(&x, &cost, NULL);
+ problem.Evaluate(&x, &cost, nullptr);
EXPECT_EQ(x * x, cost);
}
-TEST(GradientProblem, EvaluationWithoutParameterizationAndWithGradient) {
+TEST(GradientProblem, EvaluationWithoutManifoldAndWithGradient) {
ceres::GradientProblem problem(new QuadraticTestFunction());
double x = 7.0;
double cost = 0;
@@ -95,9 +86,9 @@
EXPECT_EQ(2.0 * x, gradient);
}
-TEST(GradientProblem, EvaluationWithParameterizationAndWithGradient) {
+TEST(GradientProblem, EvaluationWithManifoldAndWithGradient) {
ceres::GradientProblem problem(new QuadraticTestFunction(),
- new IdentityParameterization(1));
+ new EuclideanManifold<1>);
double x = 7.0;
double cost = 0;
double gradient = 0;
@@ -105,5 +96,4 @@
EXPECT_EQ(2.0 * x, gradient);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
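The renamed tests above reflect the LocalParameterization-to-Manifold migration: GradientProblem is now constructed with a ceres::Manifold such as EuclideanManifold<1> instead of IdentityParameterization. A minimal usage sketch under that assumption; the quadratic function mirrors the test fixture and is not an excerpt of the diff:

    #include <iostream>
    #include "ceres/gradient_problem.h"
    #include "ceres/manifold.h"

    class Quadratic : public ceres::FirstOrderFunction {
     public:
      bool Evaluate(const double* parameters,
                    double* cost,
                    double* gradient) const final {
        const double x = parameters[0];
        cost[0] = x * x;
        if (gradient != nullptr) gradient[0] = 2.0 * x;
        return true;
      }
      int NumParameters() const final { return 1; }
    };

    int main() {
      // The problem takes ownership of both the function and the manifold.
      ceres::GradientProblem problem(new Quadratic,
                                     new ceres::EuclideanManifold<1>);
      double x = 7.0;
      double cost = 0.0;
      double gradient = 0.0;
      problem.Evaluate(&x, &cost, &gradient);
      std::cout << "cost: " << cost << ", gradient: " << gradient << "\n";
      return 0;
    }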
diff --git a/internal/ceres/graph.h b/internal/ceres/graph.h
index 9b26158..4f8dfb9 100644
--- a/internal/ceres/graph.h
+++ b/internal/ceres/graph.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,21 +36,19 @@
#include <unordered_set>
#include <utility>
+#include "ceres/internal/export.h"
#include "ceres/map_util.h"
#include "ceres/pair_hash.h"
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// An unweighted undirected graph templated over the vertex ids. Vertex
// should be hashable.
template <typename Vertex>
-class Graph {
+class CERES_NO_EXPORT Graph {
public:
- Graph() {}
-
// Add a vertex.
void AddVertex(const Vertex& vertex) {
if (vertices_.insert(vertex).second) {
@@ -106,8 +104,6 @@
template <typename Vertex>
class WeightedGraph {
public:
- WeightedGraph() {}
-
// Add a weighted vertex. If the vertex already exists in the graph,
// its weight is set to the new weight.
void AddVertex(const Vertex& vertex, double weight) {
@@ -209,7 +205,6 @@
edge_weights_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_GRAPH_H_
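The changes above drop the empty user-declared default constructors from Graph and WeightedGraph and mark Graph as CERES_NO_EXPORT; both classes remain default-constructible, so call sites are unchanged. A small illustrative sketch, not taken from the diff:

    #include "ceres/graph.h"

    void BuildChain() {
      ceres::internal::Graph<int> graph;  // implicit default constructor
      graph.AddVertex(0);
      graph.AddVertex(1);
      graph.AddEdge(0, 1);
    }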
diff --git a/internal/ceres/graph_algorithms.h b/internal/ceres/graph_algorithms.h
index 7d63b33..4ebc8b3 100644
--- a/internal/ceres/graph_algorithms.h
+++ b/internal/ceres/graph_algorithms.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,22 +34,23 @@
#define CERES_INTERNAL_GRAPH_ALGORITHMS_H_
#include <algorithm>
+#include <memory>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>
#include "ceres/graph.h"
+#include "ceres/internal/export.h"
#include "ceres/wall_time.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Compare two vertices of a graph by their degrees, if the degrees
// are equal then order them by their ids.
template <typename Vertex>
-class VertexTotalOrdering {
+class CERES_NO_EXPORT VertexTotalOrdering {
public:
explicit VertexTotalOrdering(const Graph<Vertex>& graph) : graph_(graph) {}
@@ -257,11 +258,11 @@
// spanning forest, or a collection of linear paths that span the
// graph G.
template <typename Vertex>
-WeightedGraph<Vertex>* Degree2MaximumSpanningForest(
+std::unique_ptr<WeightedGraph<Vertex>> Degree2MaximumSpanningForest(
const WeightedGraph<Vertex>& graph) {
// Array of edges sorted in decreasing order of their weights.
std::vector<std::pair<double, std::pair<Vertex, Vertex>>> weighted_edges;
- WeightedGraph<Vertex>* forest = new WeightedGraph<Vertex>();
+ auto forest = std::make_unique<WeightedGraph<Vertex>>();
// Disjoint-set to keep track of the connected components in the
// maximum spanning tree.
@@ -338,7 +339,6 @@
return forest;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_GRAPH_ALGORITHMS_H_
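With the hunk above, Degree2MaximumSpanningForest returns a std::unique_ptr instead of an owning raw pointer, so callers no longer delete the forest manually. A hedged call-site sketch under that signature; the graph contents are illustrative only:

    #include <memory>
    #include "ceres/graph.h"
    #include "ceres/graph_algorithms.h"

    void Example() {
      ceres::internal::WeightedGraph<int> graph;
      graph.AddVertex(0, 1.0);
      graph.AddVertex(1, 1.0);
      graph.AddEdge(0, 1, 0.5);
      std::unique_ptr<ceres::internal::WeightedGraph<int>> forest =
          ceres::internal::Degree2MaximumSpanningForest(graph);
      // The forest is destroyed automatically when it goes out of scope.
    }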
diff --git a/internal/ceres/graph_algorithms_test.cc b/internal/ceres/graph_algorithms_test.cc
index d5dd02e..6c86668 100644
--- a/internal/ceres/graph_algorithms_test.cc
+++ b/internal/ceres/graph_algorithms_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,15 +33,13 @@
#include <algorithm>
#include <memory>
#include <unordered_set>
+#include <vector>
#include "ceres/graph.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
TEST(IndependentSetOrdering, Chain) {
Graph<int> graph;
@@ -58,7 +56,7 @@
// 0-1-2-3-4
// 0, 2, 4 should be in the independent set.
- vector<int> ordering;
+ std::vector<int> ordering;
int independent_set_size = IndependentSetOrdering(graph, &ordering);
sort(ordering.begin(), ordering.begin() + 3);
@@ -92,7 +90,7 @@
// |
// 3
// 1, 2, 3, 4 should be in the independent set.
- vector<int> ordering;
+ std::vector<int> ordering;
int independent_set_size = IndependentSetOrdering(graph, &ordering);
EXPECT_EQ(independent_set_size, 4);
EXPECT_EQ(ordering.size(), 5);
@@ -220,7 +218,7 @@
// guarantees that it will always be the first vertex in the
// ordering vector.
{
- vector<int> ordering;
+ std::vector<int> ordering;
ordering.push_back(0);
ordering.push_back(1);
ordering.push_back(2);
@@ -232,7 +230,7 @@
}
{
- vector<int> ordering;
+ std::vector<int> ordering;
ordering.push_back(1);
ordering.push_back(0);
ordering.push_back(2);
@@ -244,5 +242,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/graph_test.cc b/internal/ceres/graph_test.cc
index 2154a06..8c8afc6 100644
--- a/internal/ceres/graph_test.cc
+++ b/internal/ceres/graph_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,8 +34,7 @@
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST(Graph, EmptyGraph) {
Graph<int> graph;
@@ -148,5 +147,4 @@
EXPECT_EQ(graph.EdgeWeight(2, 3), 0);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/gtest/gtest.h b/internal/ceres/gtest/gtest.h
index a8344fe..e749057 100644
--- a/internal/ceres/gtest/gtest.h
+++ b/internal/ceres/gtest/gtest.h
@@ -49,12 +49,14 @@
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_GTEST_H_
-#define GTEST_INCLUDE_GTEST_GTEST_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_H_
+#include <cstddef>
#include <limits>
#include <memory>
#include <ostream>
+#include <type_traits>
#include <vector>
// Copyright 2005, Google Inc.
@@ -93,8 +95,8 @@
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_
-#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_
+#define GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_
// Copyright 2005, Google Inc.
// All rights reserved.
@@ -138,8 +140,8 @@
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_
-#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_
+#define GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_
// Environment-describing macros
// -----------------------------
@@ -170,10 +172,6 @@
// is/isn't available.
// GTEST_HAS_EXCEPTIONS - Define it to 1/0 to indicate that exceptions
// are enabled.
-// GTEST_HAS_GLOBAL_STRING - Define it to 1/0 to indicate that ::string
-// is/isn't available
-// GTEST_HAS_GLOBAL_WSTRING - Define it to 1/0 to indicate that ::wstring
-// is/isn't available
// GTEST_HAS_POSIX_RE - Define it to 1/0 to indicate that POSIX regular
// expressions are/aren't available.
// GTEST_HAS_PTHREAD - Define it to 1/0 to indicate that <pthread.h>
@@ -219,6 +217,7 @@
// GTEST_OS_FREEBSD - FreeBSD
// GTEST_OS_FUCHSIA - Fuchsia
// GTEST_OS_GNU_KFREEBSD - GNU/kFreeBSD
+// GTEST_OS_HAIKU - Haiku
// GTEST_OS_HPUX - HP-UX
// GTEST_OS_LINUX - Linux
// GTEST_OS_LINUX_ANDROID - Google Android
@@ -238,7 +237,7 @@
// GTEST_OS_WINDOWS_RT - Windows Store App/WinRT
// GTEST_OS_ZOS - z/OS
//
-// Among the platforms, Cygwin, Linux, Max OS X, and Windows have the
+// Among the platforms, Cygwin, Linux, Mac OS X, and Windows have the
// most stable support. Since core members of the Google Test project
// don't have access to other platforms, support for them may be less
// stable. If you notice any problems on your platform, please notify
@@ -291,23 +290,32 @@
// GTEST_AMBIGUOUS_ELSE_BLOCKER_ - for disabling a gcc warning.
// GTEST_ATTRIBUTE_UNUSED_ - declares that a class' instances or a
// variable don't have to be used.
-// GTEST_DISALLOW_ASSIGN_ - disables operator=.
+// GTEST_DISALLOW_ASSIGN_ - disables copy operator=.
// GTEST_DISALLOW_COPY_AND_ASSIGN_ - disables copy ctor and operator=.
+// GTEST_DISALLOW_MOVE_ASSIGN_ - disables move operator=.
+// GTEST_DISALLOW_MOVE_AND_ASSIGN_ - disables move ctor and operator=.
// GTEST_MUST_USE_RESULT_ - declares that a function's result must be used.
// GTEST_INTENTIONAL_CONST_COND_PUSH_ - start code section where MSVC C4127 is
// suppressed (constant conditional).
// GTEST_INTENTIONAL_CONST_COND_POP_ - finish code section where MSVC C4127
// is suppressed.
+// GTEST_INTERNAL_HAS_ANY - for enabling UniversalPrinter<std::any> or
+// UniversalPrinter<absl::any> specializations.
+// GTEST_INTERNAL_HAS_OPTIONAL - for enabling UniversalPrinter<std::optional>
+// or
+// UniversalPrinter<absl::optional>
+// specializations.
+// GTEST_INTERNAL_HAS_STRING_VIEW - for enabling Matcher<std::string_view> or
+// Matcher<absl::string_view>
+// specializations.
+// GTEST_INTERNAL_HAS_VARIANT - for enabling UniversalPrinter<std::variant> or
+// UniversalPrinter<absl::variant>
+// specializations.
//
// Synchronization:
// Mutex, MutexLock, ThreadLocal, GetThreadCount()
// - synchronization primitives.
//
-// Template meta programming:
-// IteratorTraits - partial implementation of std::iterator_traits, which
-// is not available in libCstd when compiled with Sun C++.
-//
-//
// Regular expressions:
// RE - a simple regular expression class using the POSIX
// Extended Regular Expression syntax on UNIX-like platforms
@@ -329,8 +337,7 @@
//
// Integer types:
// TypeWithSize - maps an integer to an int type.
-// Int32, UInt32, Int64, UInt64, TimeInMillis
-// - integers of known sizes.
+// TimeInMillis - integers of known sizes.
// BiggestInt - the biggest signed integer type.
//
// Command-line utilities:
@@ -341,7 +348,7 @@
// Environment variable utilities:
// GetEnv() - gets the value of an environment variable.
// BoolFromGTestEnv() - parses a bool environment variable.
-// Int32FromGTestEnv() - parses an Int32 environment variable.
+// Int32FromGTestEnv() - parses an int32_t environment variable.
// StringFromGTestEnv() - parses a string environment variable.
//
// Deprecation warnings:
@@ -354,7 +361,10 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
-#include <memory>
+
+#include <cerrno>
+#include <cstdint>
+#include <limits>
#include <type_traits>
#ifndef _WIN32_WCE
@@ -367,14 +377,11 @@
# include <TargetConditionals.h>
#endif
-// Brings in the definition of HAS_GLOBAL_STRING. This must be done
-// BEFORE we test HAS_GLOBAL_STRING.
-#include <string> // NOLINT
-#include <algorithm> // NOLINT
-#include <iostream> // NOLINT
-#include <sstream> // NOLINT
+#include <iostream> // NOLINT
+#include <locale>
+#include <memory>
+#include <string> // NOLINT
#include <tuple>
-#include <utility>
#include <vector> // NOLINT
// Copyright 2015, Google Inc.
@@ -406,13 +413,50 @@
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
+// Injection point for custom user configurations. See README for details
+//
+// ** Custom implementation starts here **
+
+#ifndef GOOGLETEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PORT_H_
+#define GOOGLETEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PORT_H_
+
+#endif // GOOGLETEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PORT_H_
+// Copyright 2015, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
// The Google C++ Testing and Mocking Framework (Google Test)
//
// This header file defines the GTEST_OS_* macro.
// It is separate from gtest-port.h so that custom/gtest-port.h can include it.
-#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_ARCH_H_
-#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_ARCH_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_ARCH_H_
+#define GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_ARCH_H_
// Determines the platform on which Google Test is compiled.
#ifdef __CYGWIN__
@@ -447,6 +491,7 @@
# define GTEST_OS_OS2 1
#elif defined __APPLE__
# define GTEST_OS_MAC 1
+# include <TargetConditionals.h>
# if TARGET_OS_IPHONE
# define GTEST_OS_IOS 1
# endif
@@ -479,46 +524,17 @@
# define GTEST_OS_OPENBSD 1
#elif defined __QNX__
# define GTEST_OS_QNX 1
+#elif defined(__HAIKU__)
+#define GTEST_OS_HAIKU 1
+#elif defined ESP8266
+#define GTEST_OS_ESP8266 1
+#elif defined ESP32
+#define GTEST_OS_ESP32 1
+#elif defined(__XTENSA__)
+#define GTEST_OS_XTENSA 1
#endif // __CYGWIN__
-#endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_ARCH_H_
-// Copyright 2015, Google Inc.
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are
-// met:
-//
-// * Redistributions of source code must retain the above copyright
-// notice, this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following disclaimer
-// in the documentation and/or other materials provided with the
-// distribution.
-// * Neither the name of Google Inc. nor the names of its
-// contributors may be used to endorse or promote products derived from
-// this software without specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Injection point for custom user configurations. See README for details
-//
-// ** Custom implementation starts here **
-
-#ifndef GTEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PORT_H_
-#define GTEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PORT_H_
-
-#endif // GTEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PORT_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_ARCH_H_
#if !defined(GTEST_DEV_EMAIL_)
# define GTEST_DEV_EMAIL_ "googletestframework@@googlegroups.com"
@@ -592,6 +608,10 @@
// WindowsTypesTest.CRITICAL_SECTIONIs_RTL_CRITICAL_SECTION.
typedef struct _RTL_CRITICAL_SECTION GTEST_CRITICAL_SECTION;
#endif
+#elif GTEST_OS_XTENSA
+#include <unistd.h>
+// Xtensa toolchains define strcasecmp in the string.h header instead of
+// strings.h. string.h is already included.
#else
// This assumes that non-Windows OSes provide unistd.h. For OSes where this
// is not the case, we need to include headers that provide the functions
@@ -605,13 +625,14 @@
# include <android/api-level.h> // NOLINT
#endif
-// Defines this to true iff Google Test can use POSIX regular expressions.
+// Defines this to true if and only if Google Test can use POSIX regular
+// expressions.
#ifndef GTEST_HAS_POSIX_RE
# if GTEST_OS_LINUX_ANDROID
// On Android, <regex.h> is only available starting with Gingerbread.
# define GTEST_HAS_POSIX_RE (__ANDROID_API__ >= 9)
# else
-# define GTEST_HAS_POSIX_RE (!GTEST_OS_WINDOWS)
+#define GTEST_HAS_POSIX_RE (!GTEST_OS_WINDOWS && !GTEST_OS_XTENSA)
# endif
#endif
@@ -646,7 +667,7 @@
// The user didn't tell us whether exceptions are enabled, so we need
// to figure it out.
# if defined(_MSC_VER) && defined(_CPPUNWIND)
-// MSVC defines _CPPUNWIND to 1 iff exceptions are enabled.
+// MSVC defines _CPPUNWIND to 1 if and only if exceptions are enabled.
# define GTEST_HAS_EXCEPTIONS 1
# elif defined(__BORLANDC__)
// C++Builder's implementation of the STL uses the _HAS_EXCEPTIONS
@@ -657,16 +678,17 @@
# endif // _HAS_EXCEPTIONS
# define GTEST_HAS_EXCEPTIONS _HAS_EXCEPTIONS
# elif defined(__clang__)
-// clang defines __EXCEPTIONS iff exceptions are enabled before clang 220714,
-// but iff cleanups are enabled after that. In Obj-C++ files, there can be
-// cleanups for ObjC exceptions which also need cleanups, even if C++ exceptions
-// are disabled. clang has __has_feature(cxx_exceptions) which checks for C++
-// exceptions starting at clang r206352, but which checked for cleanups prior to
-// that. To reliably check for C++ exception availability with clang, check for
+// clang defines __EXCEPTIONS if and only if exceptions are enabled before clang
+// 220714, but if and only if cleanups are enabled after that. In Obj-C++ files,
+// there can be cleanups for ObjC exceptions which also need cleanups, even if
+// C++ exceptions are disabled. clang has __has_feature(cxx_exceptions) which
+// checks for C++ exceptions starting at clang r206352, but which checked for
+// cleanups prior to that. To reliably check for C++ exception availability with
+// clang, check for
// __EXCEPTIONS && __has_feature(cxx_exceptions).
# define GTEST_HAS_EXCEPTIONS (__EXCEPTIONS && __has_feature(cxx_exceptions))
# elif defined(__GNUC__) && __EXCEPTIONS
-// gcc defines __EXCEPTIONS to 1 iff exceptions are enabled.
+// gcc defines __EXCEPTIONS to 1 if and only if exceptions are enabled.
# define GTEST_HAS_EXCEPTIONS 1
# elif defined(__SUNPRO_CC)
// Sun Pro CC supports exceptions. However, there is no compile-time way of
@@ -674,7 +696,7 @@
// they are enabled unless the user tells us otherwise.
# define GTEST_HAS_EXCEPTIONS 1
# elif defined(__IBMCPP__) && __EXCEPTIONS
-// xlC defines __EXCEPTIONS to 1 iff exceptions are enabled.
+// xlC defines __EXCEPTIONS to 1 if and only if exceptions are enabled.
# define GTEST_HAS_EXCEPTIONS 1
# elif defined(__HP_aCC)
// Exception handling is in effect by default in HP aCC compiler. It has to
@@ -687,37 +709,18 @@
# endif // defined(_MSC_VER) || defined(__BORLANDC__)
#endif // GTEST_HAS_EXCEPTIONS
-#if !defined(GTEST_HAS_STD_STRING)
-// Even though we don't use this macro any longer, we keep it in case
-// some clients still depend on it.
-# define GTEST_HAS_STD_STRING 1
-#elif !GTEST_HAS_STD_STRING
-// The user told us that ::std::string isn't available.
-# error "::std::string isn't available."
-#endif // !defined(GTEST_HAS_STD_STRING)
-
-#ifndef GTEST_HAS_GLOBAL_STRING
-# define GTEST_HAS_GLOBAL_STRING 0
-#endif // GTEST_HAS_GLOBAL_STRING
-
#ifndef GTEST_HAS_STD_WSTRING
// The user didn't tell us whether ::std::wstring is available, so we need
// to figure it out.
// Cygwin 1.7 and below doesn't support ::std::wstring.
// Solaris' libc++ doesn't support it either. Android has
// no support for it at least as recent as Froyo (2.2).
-# define GTEST_HAS_STD_WSTRING \
- (!(GTEST_OS_LINUX_ANDROID || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS))
+#define GTEST_HAS_STD_WSTRING \
+ (!(GTEST_OS_LINUX_ANDROID || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS || \
+ GTEST_OS_HAIKU || GTEST_OS_ESP32 || GTEST_OS_ESP8266 || GTEST_OS_XTENSA))
#endif // GTEST_HAS_STD_WSTRING
-#ifndef GTEST_HAS_GLOBAL_WSTRING
-// The user didn't tell us whether ::wstring is available, so we need
-// to figure it out.
-# define GTEST_HAS_GLOBAL_WSTRING \
- (GTEST_HAS_STD_WSTRING && GTEST_HAS_GLOBAL_STRING)
-#endif // GTEST_HAS_GLOBAL_WSTRING
-
// Determines whether RTTI is available.
#ifndef GTEST_HAS_RTTI
// The user didn't tell us whether RTTI is enabled, so we need to
@@ -725,13 +728,14 @@
# ifdef _MSC_VER
-# ifdef _CPPRTTI // MSVC defines this macro iff RTTI is enabled.
+#ifdef _CPPRTTI // MSVC defines this macro if and only if RTTI is enabled.
# define GTEST_HAS_RTTI 1
# else
# define GTEST_HAS_RTTI 0
# endif
-// Starting with version 4.3.2, gcc defines __GXX_RTTI iff RTTI is enabled.
+// Starting with version 4.3.2, gcc defines __GXX_RTTI if and only if RTTI is
+// enabled.
# elif defined(__GNUC__)
# ifdef __GXX_RTTI
@@ -788,10 +792,11 @@
//
// To disable threading support in Google Test, add -DGTEST_HAS_PTHREAD=0
// to your compiler flags.
-#define GTEST_HAS_PTHREAD \
- (GTEST_OS_LINUX || GTEST_OS_MAC || GTEST_OS_HPUX || GTEST_OS_QNX || \
+#define GTEST_HAS_PTHREAD \
+ (GTEST_OS_LINUX || GTEST_OS_MAC || GTEST_OS_HPUX || GTEST_OS_QNX || \
GTEST_OS_FREEBSD || GTEST_OS_NACL || GTEST_OS_NETBSD || GTEST_OS_FUCHSIA || \
- GTEST_OS_DRAGONFLY || GTEST_OS_GNU_KFREEBSD || GTEST_OS_OPENBSD)
+ GTEST_OS_DRAGONFLY || GTEST_OS_GNU_KFREEBSD || GTEST_OS_OPENBSD || \
+ GTEST_OS_HAIKU)
#endif // GTEST_HAS_PTHREAD
#if GTEST_HAS_PTHREAD
@@ -836,7 +841,8 @@
#ifndef GTEST_HAS_STREAM_REDIRECTION
// By default, we assume that stream redirection is supported on all
// platforms except known mobile ones.
-# if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_WINDOWS_PHONE || GTEST_OS_WINDOWS_RT
+#if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_WINDOWS_PHONE || \
+ GTEST_OS_WINDOWS_RT || GTEST_OS_ESP8266 || GTEST_OS_XTENSA
# define GTEST_HAS_STREAM_REDIRECTION 0
# else
# define GTEST_HAS_STREAM_REDIRECTION 1
@@ -845,13 +851,12 @@
// Determines whether to support death tests.
// pops up a dialog window that cannot be suppressed programmatically.
-#if (GTEST_OS_LINUX || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS || \
- (GTEST_OS_MAC && !GTEST_OS_IOS) || \
- (GTEST_OS_WINDOWS_DESKTOP && _MSC_VER) || \
- GTEST_OS_WINDOWS_MINGW || GTEST_OS_AIX || GTEST_OS_HPUX || \
- GTEST_OS_OPENBSD || GTEST_OS_QNX || GTEST_OS_FREEBSD || \
- GTEST_OS_NETBSD || GTEST_OS_FUCHSIA || GTEST_OS_DRAGONFLY || \
- GTEST_OS_GNU_KFREEBSD)
+#if (GTEST_OS_LINUX || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS || \
+ (GTEST_OS_MAC && !GTEST_OS_IOS) || \
+ (GTEST_OS_WINDOWS_DESKTOP && _MSC_VER) || GTEST_OS_WINDOWS_MINGW || \
+ GTEST_OS_AIX || GTEST_OS_HPUX || GTEST_OS_OPENBSD || GTEST_OS_QNX || \
+ GTEST_OS_FREEBSD || GTEST_OS_NETBSD || GTEST_OS_FUCHSIA || \
+ GTEST_OS_DRAGONFLY || GTEST_OS_GNU_KFREEBSD || GTEST_OS_HAIKU)
# define GTEST_HAS_DEATH_TEST 1
#endif
@@ -931,16 +936,27 @@
#endif
-// A macro to disallow operator=
+// A macro to disallow copy operator=
// This should be used in the private: declarations for a class.
#define GTEST_DISALLOW_ASSIGN_(type) \
- void operator=(type const &) = delete
+ type& operator=(type const &) = delete
// A macro to disallow copy constructor and operator=
// This should be used in the private: declarations for a class.
#define GTEST_DISALLOW_COPY_AND_ASSIGN_(type) \
- type(type const &) = delete; \
- GTEST_DISALLOW_ASSIGN_(type)
+ type(type const&) = delete; \
+ type& operator=(type const&) = delete
+
+// A macro to disallow move operator=
+// This should be used in the private: declarations for a class.
+#define GTEST_DISALLOW_MOVE_ASSIGN_(type) \
+ type& operator=(type &&) noexcept = delete
+
+// A macro to disallow move constructor and operator=
+// This should be used in the private: declarations for a class.
+#define GTEST_DISALLOW_MOVE_AND_ASSIGN_(type) \
+ type(type&&) noexcept = delete; \
+ type& operator=(type&&) noexcept = delete
// Tell the compiler to warn about unused return values for functions declared
// with this macro. The macro should be used on function declarations
@@ -1057,6 +1073,18 @@
# define GTEST_ATTRIBUTE_NO_SANITIZE_ADDRESS_
#endif // __clang__
+// A function level attribute to disable HWAddressSanitizer instrumentation.
+#if defined(__clang__)
+# if __has_feature(hwaddress_sanitizer)
+# define GTEST_ATTRIBUTE_NO_SANITIZE_HWADDRESS_ \
+ __attribute__((no_sanitize("hwaddress")))
+# else
+# define GTEST_ATTRIBUTE_NO_SANITIZE_HWADDRESS_
+# endif // __has_feature(hwaddress_sanitizer)
+#else
+# define GTEST_ATTRIBUTE_NO_SANITIZE_HWADDRESS_
+#endif // __clang__
+
// A function level attribute to disable ThreadSanitizer instrumentation.
#if defined(__clang__)
# if __has_feature(thread_sanitizer)
@@ -1099,42 +1127,6 @@
// expression is false, compiler will issue an error containing this identifier.
#define GTEST_COMPILE_ASSERT_(expr, msg) static_assert(expr, #msg)
-// StaticAssertTypeEqHelper is used by StaticAssertTypeEq defined in gtest.h.
-//
-// This template is declared, but intentionally undefined.
-template <typename T1, typename T2>
-struct StaticAssertTypeEqHelper;
-
-template <typename T>
-struct StaticAssertTypeEqHelper<T, T> {
- enum { value = true };
-};
-
-// Same as std::is_same<>.
-template <typename T, typename U>
-struct IsSame {
- enum { value = false };
-};
-template <typename T>
-struct IsSame<T, T> {
- enum { value = true };
-};
-
-// Evaluates to the number of elements in 'array'.
-#define GTEST_ARRAY_SIZE_(array) (sizeof(array) / sizeof(array[0]))
-
-#if GTEST_HAS_GLOBAL_STRING
-typedef ::string string;
-#else
-typedef ::std::string string;
-#endif // GTEST_HAS_GLOBAL_STRING
-
-#if GTEST_HAS_GLOBAL_WSTRING
-typedef ::wstring wstring;
-#elif GTEST_HAS_STD_WSTRING
-typedef ::std::wstring wstring;
-#endif // GTEST_HAS_GLOBAL_WSTRING
-
// A helper for suppressing warnings on constant condition. It just
// returns 'condition'.
GTEST_API_ bool IsTrue(bool condition);
@@ -1156,21 +1148,15 @@
// Constructs an RE from a string.
RE(const ::std::string& regex) { Init(regex.c_str()); } // NOLINT
-# if GTEST_HAS_GLOBAL_STRING
-
- RE(const ::string& regex) { Init(regex.c_str()); } // NOLINT
-
-# endif // GTEST_HAS_GLOBAL_STRING
-
RE(const char* regex) { Init(regex); } // NOLINT
~RE();
// Returns the string representation of the regex.
const char* pattern() const { return pattern_; }
- // FullMatch(str, re) returns true iff regular expression re matches
- // the entire str.
- // PartialMatch(str, re) returns true iff regular expression re
+ // FullMatch(str, re) returns true if and only if regular expression re
+ // matches the entire str.
+ // PartialMatch(str, re) returns true if and only if regular expression re
// matches a substring of str (including str itself).
static bool FullMatch(const ::std::string& str, const RE& re) {
return FullMatch(str.c_str(), re);
@@ -1179,17 +1165,6 @@
return PartialMatch(str.c_str(), re);
}
-# if GTEST_HAS_GLOBAL_STRING
-
- static bool FullMatch(const ::string& str, const RE& re) {
- return FullMatch(str.c_str(), re);
- }
- static bool PartialMatch(const ::string& str, const RE& re) {
- return PartialMatch(str.c_str(), re);
- }
-
-# endif // GTEST_HAS_GLOBAL_STRING
-
static bool FullMatch(const char* str, const RE& re);
static bool PartialMatch(const char* str, const RE& re);
@@ -1208,8 +1183,6 @@
const char* full_pattern_; // For FullMatch();
# endif
-
- GTEST_DISALLOW_ASSIGN_(RE);
};
#endif // GTEST_USES_PCRE
@@ -1299,19 +1272,6 @@
GTEST_LOG_(FATAL) << #posix_call << "failed with error " \
<< gtest_error
-// Adds reference to a type if it is not a reference type,
-// otherwise leaves it unchanged. This is the same as
-// tr1::add_reference, which is not widely available yet.
-template <typename T>
-struct AddReference { typedef T& type; }; // NOLINT
-template <typename T>
-struct AddReference<T&> { typedef T& type; }; // NOLINT
-
-// A handy wrapper around AddReference that works when the argument T
-// depends on template parameters.
-#define GTEST_ADD_REFERENCE_(T) \
- typename ::testing::internal::AddReference<T>::type
-
// Transforms "T" into "const T&" according to standard reference collapsing
// rules (this is only needed as a backport for C++98 compilers that do not
// support reference collapsing). Specifically, it transforms:
@@ -1445,9 +1405,6 @@
// Deprecated: pass the args vector by value instead.
void SetInjectableArgvs(const std::vector<std::string>* new_argvs);
void SetInjectableArgvs(const std::vector<std::string>& new_argvs);
-#if GTEST_HAS_GLOBAL_STRING
-void SetInjectableArgvs(const std::vector< ::string>& new_argvs);
-#endif // GTEST_HAS_GLOBAL_STRING
void ClearInjectableArgvs();
#endif // GTEST_HAS_DEATH_TEST
@@ -1539,7 +1496,8 @@
void Reset(Handle handle);
private:
- // Returns true iff the handle is a valid handle object that can be closed.
+ // Returns true if and only if the handle is a valid handle object that can be
+ // closed.
bool IsCloseable() const;
Handle handle_;
@@ -1641,7 +1599,8 @@
// When non-NULL, used to block execution until the controller thread
// notifies.
Notification* const thread_can_start_;
- bool finished_; // true iff we know that the thread function has finished.
+ bool finished_; // true if and only if we know that the thread function has
+ // finished.
pthread_t thread_; // The native thread object.
GTEST_DISALLOW_COPY_AND_ASSIGN_(ThreadWithParam);
@@ -1906,7 +1865,7 @@
class DefaultValueHolderFactory : public ValueHolderFactory {
public:
DefaultValueHolderFactory() {}
- virtual ValueHolder* MakeNewHolder() const { return new ValueHolder(); }
+ ValueHolder* MakeNewHolder() const override { return new ValueHolder(); }
private:
GTEST_DISALLOW_COPY_AND_ASSIGN_(DefaultValueHolderFactory);
@@ -1915,7 +1874,7 @@
class InstanceValueHolderFactory : public ValueHolderFactory {
public:
explicit InstanceValueHolderFactory(const T& value) : value_(value) {}
- virtual ValueHolder* MakeNewHolder() const {
+ ValueHolder* MakeNewHolder() const override {
return new ValueHolder(value_);
}
@@ -2115,7 +2074,7 @@
class DefaultValueHolderFactory : public ValueHolderFactory {
public:
DefaultValueHolderFactory() {}
- virtual ValueHolder* MakeNewHolder() const { return new ValueHolder(); }
+ ValueHolder* MakeNewHolder() const override { return new ValueHolder(); }
private:
GTEST_DISALLOW_COPY_AND_ASSIGN_(DefaultValueHolderFactory);
@@ -2124,7 +2083,7 @@
class InstanceValueHolderFactory : public ValueHolderFactory {
public:
explicit InstanceValueHolderFactory(const T& value) : value_(value) {}
- virtual ValueHolder* MakeNewHolder() const {
+ ValueHolder* MakeNewHolder() const override {
return new ValueHolder(value_);
}
@@ -2194,47 +2153,12 @@
// we cannot detect it.
GTEST_API_ size_t GetThreadCount();
-template <bool bool_value>
-struct bool_constant {
- typedef bool_constant<bool_value> type;
- static const bool value = bool_value;
-};
-template <bool bool_value> const bool bool_constant<bool_value>::value;
-
-typedef bool_constant<false> false_type;
-typedef bool_constant<true> true_type;
-
-template <typename T, typename U>
-struct is_same : public false_type {};
-
-template <typename T>
-struct is_same<T, T> : public true_type {};
-
-template <typename Iterator>
-struct IteratorTraits {
- typedef typename Iterator::value_type value_type;
-};
-
-
-template <typename T>
-struct IteratorTraits<T*> {
- typedef T value_type;
-};
-
-template <typename T>
-struct IteratorTraits<const T*> {
- typedef T value_type;
-};
-
#if GTEST_OS_WINDOWS
# define GTEST_PATH_SEP_ "\\"
# define GTEST_HAS_ALT_PATH_SEP_ 1
-// The biggest signed integer type the compiler supports.
-typedef __int64 BiggestInt;
#else
# define GTEST_PATH_SEP_ "/"
# define GTEST_HAS_ALT_PATH_SEP_ 0
-typedef long long BiggestInt; // NOLINT
#endif // GTEST_OS_WINDOWS
// Utilities for char.
@@ -2265,6 +2189,19 @@
inline bool IsXDigit(char ch) {
return isxdigit(static_cast<unsigned char>(ch)) != 0;
}
+#ifdef __cpp_char8_t
+inline bool IsXDigit(char8_t ch) {
+ return isxdigit(static_cast<unsigned char>(ch)) != 0;
+}
+#endif
+inline bool IsXDigit(char16_t ch) {
+ const unsigned char low_byte = static_cast<unsigned char>(ch);
+ return ch == low_byte && isxdigit(low_byte) != 0;
+}
+inline bool IsXDigit(char32_t ch) {
+ const unsigned char low_byte = static_cast<unsigned char>(ch);
+ return ch == low_byte && isxdigit(low_byte) != 0;
+}
inline bool IsXDigit(wchar_t ch) {
const unsigned char low_byte = static_cast<unsigned char>(ch);
return ch == low_byte && isxdigit(low_byte) != 0;
@@ -2299,16 +2236,16 @@
typedef struct _stat StatStruct;
# ifdef __BORLANDC__
-inline int IsATTY(int fd) { return isatty(fd); }
+inline int DoIsATTY(int fd) { return isatty(fd); }
inline int StrCaseCmp(const char* s1, const char* s2) {
return stricmp(s1, s2);
}
inline char* StrDup(const char* src) { return strdup(src); }
# else // !__BORLANDC__
# if GTEST_OS_WINDOWS_MOBILE
-inline int IsATTY(int /* fd */) { return 0; }
+inline int DoIsATTY(int /* fd */) { return 0; }
# else
-inline int IsATTY(int fd) { return _isatty(fd); }
+inline int DoIsATTY(int fd) { return _isatty(fd); }
# endif // GTEST_OS_WINDOWS_MOBILE
inline int StrCaseCmp(const char* s1, const char* s2) {
return _stricmp(s1, s2);
@@ -2329,12 +2266,28 @@
}
# endif // GTEST_OS_WINDOWS_MOBILE
+#elif GTEST_OS_ESP8266
+typedef struct stat StatStruct;
+
+inline int FileNo(FILE* file) { return fileno(file); }
+inline int DoIsATTY(int fd) { return isatty(fd); }
+inline int Stat(const char* path, StatStruct* buf) {
+ // stat function not implemented on ESP8266
+ return 0;
+}
+inline int StrCaseCmp(const char* s1, const char* s2) {
+ return strcasecmp(s1, s2);
+}
+inline char* StrDup(const char* src) { return strdup(src); }
+inline int RmDir(const char* dir) { return rmdir(dir); }
+inline bool IsDir(const StatStruct& st) { return S_ISDIR(st.st_mode); }
+
#else
typedef struct stat StatStruct;
inline int FileNo(FILE* file) { return fileno(file); }
-inline int IsATTY(int fd) { return isatty(fd); }
+inline int DoIsATTY(int fd) { return isatty(fd); }
inline int Stat(const char* path, StatStruct* buf) { return stat(path, buf); }
inline int StrCaseCmp(const char* s1, const char* s2) {
return strcasecmp(s1, s2);
@@ -2345,23 +2298,39 @@
#endif // GTEST_OS_WINDOWS
+inline int IsATTY(int fd) {
+ // DoIsATTY might change errno (for example ENOTTY in case you redirect stdout
+ // to a file on Linux), which is unexpected, so save the previous value, and
+ // restore it after the call.
+ int savedErrno = errno;
+ int isAttyValue = DoIsATTY(fd);
+ errno = savedErrno;
+
+ return isAttyValue;
+}
+
// Functions deprecated by MSVC 8.0.
GTEST_DISABLE_MSC_DEPRECATED_PUSH_()
-inline const char* StrNCpy(char* dest, const char* src, size_t n) {
- return strncpy(dest, src, n);
-}
-
// ChDir(), FReopen(), FDOpen(), Read(), Write(), Close(), and
// StrError() aren't needed on Windows CE at this time and thus not
// defined there.
-#if !GTEST_OS_WINDOWS_MOBILE && !GTEST_OS_WINDOWS_PHONE && !GTEST_OS_WINDOWS_RT
+#if !GTEST_OS_WINDOWS_MOBILE && !GTEST_OS_WINDOWS_PHONE && \
+ !GTEST_OS_WINDOWS_RT && !GTEST_OS_ESP8266 && !GTEST_OS_XTENSA
inline int ChDir(const char* dir) { return chdir(dir); }
#endif
inline FILE* FOpen(const char* path, const char* mode) {
+#if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MINGW
+ struct wchar_codecvt : public std::codecvt<wchar_t, char, std::mbstate_t> {};
+ std::wstring_convert<wchar_codecvt> converter;
+ std::wstring wide_path = converter.from_bytes(path);
+ std::wstring wide_mode = converter.from_bytes(mode);
+ return _wfopen(wide_path.c_str(), wide_mode.c_str());
+#else // GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MINGW
return fopen(path, mode);
+#endif // GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MINGW
}
#if !GTEST_OS_WINDOWS_MOBILE
inline FILE *FReopen(const char* path, const char* mode, FILE* stream) {
@@ -2381,8 +2350,9 @@
inline const char* StrError(int errnum) { return strerror(errnum); }
#endif
inline const char* GetEnv(const char* name) {
-#if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_WINDOWS_PHONE || GTEST_OS_WINDOWS_RT
- // We are on Windows CE, which has no environment variables.
+#if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_WINDOWS_PHONE || \
+ GTEST_OS_WINDOWS_RT || GTEST_OS_ESP8266 || GTEST_OS_XTENSA
+ // We are on an embedded platform, which has no environment variables.
static_cast<void>(name); // To prevent 'unused argument' warning.
return nullptr;
#elif defined(__BORLANDC__) || defined(__SunOS_5_8) || defined(__SunOS_5_9)
@@ -2424,15 +2394,13 @@
# define GTEST_SNPRINTF_ snprintf
#endif
-// The maximum number a BiggestInt can represent. This definition
-// works no matter BiggestInt is represented in one's complement or
-// two's complement.
+// The biggest signed integer type the compiler supports.
//
-// We cannot rely on numeric_limits in STL, as __int64 and long long
-// are not part of standard C++ and numeric_limits doesn't need to be
-// defined for them.
-const BiggestInt kMaxBiggestInt =
- ~(static_cast<BiggestInt>(1) << (8*sizeof(BiggestInt) - 1));
+// long long is guaranteed to be at least 64-bits in C++11.
+using BiggestInt = long long; // NOLINT
+
+// The maximum number a BiggestInt can represent.
+constexpr BiggestInt kMaxBiggestInt = (std::numeric_limits<BiggestInt>::max)();
// This template class serves as a compile-time function from size to
// type. It maps a size in bytes to a primitive type with that
@@ -2457,40 +2425,27 @@
public:
// This prevents the user from using TypeWithSize<N> with incorrect
// values of N.
- typedef void UInt;
+ using UInt = void;
};
// The specialization for size 4.
template <>
class TypeWithSize<4> {
public:
- // unsigned int has size 4 in both gcc and MSVC.
- //
- // As base/basictypes.h doesn't compile on Windows, we cannot use
- // uint32, uint64, and etc here.
- typedef int Int;
- typedef unsigned int UInt;
+ using Int = std::int32_t;
+ using UInt = std::uint32_t;
};
// The specialization for size 8.
template <>
class TypeWithSize<8> {
public:
-#if GTEST_OS_WINDOWS
- typedef __int64 Int;
- typedef unsigned __int64 UInt;
-#else
- typedef long long Int; // NOLINT
- typedef unsigned long long UInt; // NOLINT
-#endif // GTEST_OS_WINDOWS
+ using Int = std::int64_t;
+ using UInt = std::uint64_t;
};
// Integer types of known sizes.
-typedef TypeWithSize<4>::Int Int32;
-typedef TypeWithSize<4>::UInt UInt32;
-typedef TypeWithSize<8>::Int Int64;
-typedef TypeWithSize<8>::UInt UInt64;
-typedef TypeWithSize<8>::Int TimeInMillis; // Represents time in milliseconds.
+using TimeInMillis = int64_t; // Represents time in milliseconds.
// Utilities for command line flags and environment variables.
@@ -2509,7 +2464,7 @@
// Macros for declaring flags.
# define GTEST_DECLARE_bool_(name) GTEST_API_ extern bool GTEST_FLAG(name)
# define GTEST_DECLARE_int32_(name) \
- GTEST_API_ extern ::testing::internal::Int32 GTEST_FLAG(name)
+ GTEST_API_ extern std::int32_t GTEST_FLAG(name)
# define GTEST_DECLARE_string_(name) \
GTEST_API_ extern ::std::string GTEST_FLAG(name)
@@ -2517,7 +2472,7 @@
# define GTEST_DEFINE_bool_(name, default_val, doc) \
GTEST_API_ bool GTEST_FLAG(name) = (default_val)
# define GTEST_DEFINE_int32_(name, default_val, doc) \
- GTEST_API_ ::testing::internal::Int32 GTEST_FLAG(name) = (default_val)
+ GTEST_API_ std::int32_t GTEST_FLAG(name) = (default_val)
# define GTEST_DEFINE_string_(name, default_val, doc) \
GTEST_API_ ::std::string GTEST_FLAG(name) = (default_val)
@@ -2532,12 +2487,13 @@
// Parses 'str' for a 32-bit signed integer. If successful, writes the result
// to *value and returns true; otherwise leaves *value unchanged and returns
// false.
-bool ParseInt32(const Message& src_text, const char* str, Int32* value);
+GTEST_API_ bool ParseInt32(const Message& src_text, const char* str,
+ int32_t* value);
-// Parses a bool/Int32/string from the environment variable
+// Parses a bool/int32_t/string from the environment variable
// corresponding to the given Google Test flag.
bool BoolFromGTestEnv(const char* flag, bool default_val);
-GTEST_API_ Int32 Int32FromGTestEnv(const char* flag, Int32 default_val);
+GTEST_API_ int32_t Int32FromGTestEnv(const char* flag, int32_t default_val);
std::string OutputFlagAlsoCheckEnvVar();
const char* StringFromGTestEnv(const char* flag, const char* default_val);
@@ -2564,7 +2520,122 @@
#endif // !defined(GTEST_INTERNAL_DEPRECATED)
-#endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_
+#if GTEST_HAS_ABSL
+// Always use absl::any for UniversalPrinter<> specializations if googletest
+// is built with absl support.
+#define GTEST_INTERNAL_HAS_ANY 1
+#include "absl/types/any.h"
+namespace testing {
+namespace internal {
+using Any = ::absl::any;
+} // namespace internal
+} // namespace testing
+#else
+#ifdef __has_include
+#if __has_include(<any>) && __cplusplus >= 201703L
+// Otherwise for C++17 and higher use std::any for UniversalPrinter<>
+// specializations.
+#define GTEST_INTERNAL_HAS_ANY 1
+#include <any>
+namespace testing {
+namespace internal {
+using Any = ::std::any;
+} // namespace internal
+} // namespace testing
+// The case where absl is configured NOT to alias std::any is not
+// supported.
+#endif // __has_include(<any>) && __cplusplus >= 201703L
+#endif // __has_include
+#endif // GTEST_HAS_ABSL
+
+#if GTEST_HAS_ABSL
+// Always use absl::optional for UniversalPrinter<> specializations if
+// googletest is built with absl support.
+#define GTEST_INTERNAL_HAS_OPTIONAL 1
+#include "absl/types/optional.h"
+namespace testing {
+namespace internal {
+template <typename T>
+using Optional = ::absl::optional<T>;
+} // namespace internal
+} // namespace testing
+#else
+#ifdef __has_include
+#if __has_include(<optional>) && __cplusplus >= 201703L
+// Otherwise for C++17 and higher use std::optional for UniversalPrinter<>
+// specializations.
+#define GTEST_INTERNAL_HAS_OPTIONAL 1
+#include <optional>
+namespace testing {
+namespace internal {
+template <typename T>
+using Optional = ::std::optional<T>;
+} // namespace internal
+} // namespace testing
+// The case where absl is configured NOT to alias std::optional is not
+// supported.
+#endif // __has_include(<optional>) && __cplusplus >= 201703L
+#endif // __has_include
+#endif // GTEST_HAS_ABSL
+
+#if GTEST_HAS_ABSL
+// Always use absl::string_view for Matcher<> specializations if googletest
+// is built with absl support.
+# define GTEST_INTERNAL_HAS_STRING_VIEW 1
+#include "absl/strings/string_view.h"
+namespace testing {
+namespace internal {
+using StringView = ::absl::string_view;
+} // namespace internal
+} // namespace testing
+#else
+# ifdef __has_include
+# if __has_include(<string_view>) && __cplusplus >= 201703L
+// Otherwise for C++17 and higher use std::string_view for Matcher<>
+// specializations.
+# define GTEST_INTERNAL_HAS_STRING_VIEW 1
+#include <string_view>
+namespace testing {
+namespace internal {
+using StringView = ::std::string_view;
+} // namespace internal
+} // namespace testing
+// The case where absl is configured NOT to alias std::string_view is not
+// supported.
+# endif // __has_include(<string_view>) && __cplusplus >= 201703L
+# endif // __has_include
+#endif // GTEST_HAS_ABSL
+
+#if GTEST_HAS_ABSL
+// Always use absl::variant for UniversalPrinter<> specializations if googletest
+// is built with absl support.
+#define GTEST_INTERNAL_HAS_VARIANT 1
+#include "absl/types/variant.h"
+namespace testing {
+namespace internal {
+template <typename... T>
+using Variant = ::absl::variant<T...>;
+} // namespace internal
+} // namespace testing
+#else
+#ifdef __has_include
+#if __has_include(<variant>) && __cplusplus >= 201703L
+// Otherwise for C++17 and higher use std::variant for UniversalPrinter<>
+// specializations.
+#define GTEST_INTERNAL_HAS_VARIANT 1
+#include <variant>
+namespace testing {
+namespace internal {
+template <typename... T>
+using Variant = ::std::variant<T...>;
+} // namespace internal
+} // namespace testing
+// The case where absl is configured NOT to alias std::variant is not supported.
+#endif // __has_include(<variant>) && __cplusplus >= 201703L
+#endif // __has_include
+#endif // GTEST_HAS_ABSL
+
+#endif // GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_
#if GTEST_OS_LINUX
# include <stdlib.h>
@@ -2580,6 +2651,7 @@
#include <ctype.h>
#include <float.h>
#include <string.h>
+#include <cstdint>
#include <iomanip>
#include <limits>
#include <map>
@@ -2634,11 +2706,12 @@
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_
-#define GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_MESSAGE_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_MESSAGE_H_
#include <limits>
#include <memory>
+#include <sstream>
GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
@@ -2768,12 +2841,6 @@
Message& operator <<(const ::std::wstring& wstr);
#endif // GTEST_HAS_STD_WSTRING
-#if GTEST_HAS_GLOBAL_WSTRING
- // Converts the given wide string to a narrow string using the UTF-8
- // encoding, and streams the result to this Message object.
- Message& operator <<(const ::wstring& wstr);
-#endif // GTEST_HAS_GLOBAL_WSTRING
-
// Gets the text streamed to this object so far as an std::string.
// Each '\0' character in the buffer is replaced with "\\0".
//
@@ -2810,7 +2877,7 @@
GTEST_DISABLE_MSC_WARNINGS_POP_() // 4251
-#endif // GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_MESSAGE_H_
// Copyright 2008, Google Inc.
// All rights reserved.
//
@@ -2850,8 +2917,8 @@
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_
-#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_
+#define GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_
// Copyright 2005, Google Inc.
// All rights reserved.
@@ -2893,8 +2960,8 @@
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_
-#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_
+#define GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_
#ifdef __BORLANDC__
// string.h is not guaranteed to provide strcpy on C++ Builder.
@@ -2902,6 +2969,7 @@
#endif
#include <string.h>
+#include <cstdint>
#include <string>
@@ -2948,7 +3016,8 @@
static const char* Utf16ToAnsi(LPCWSTR utf16_str);
#endif
- // Compares two C strings. Returns true iff they have the same content.
+ // Compares two C strings. Returns true if and only if they have the same
+ // content.
//
// Unlike strcmp(), this function can handle NULL argument(s). A
// NULL C string is considered different to any non-NULL C string,
@@ -2961,16 +3030,16 @@
// returned.
static std::string ShowWideCString(const wchar_t* wide_c_str);
- // Compares two wide C strings. Returns true iff they have the same
- // content.
+ // Compares two wide C strings. Returns true if and only if they have the
+ // same content.
//
// Unlike wcscmp(), this function can handle NULL argument(s). A
// NULL C string is considered different to any non-NULL C string,
// including the empty string.
static bool WideCStringEquals(const wchar_t* lhs, const wchar_t* rhs);
- // Compares two C strings, ignoring case. Returns true iff they
- // have the same content.
+ // Compares two C strings, ignoring case. Returns true if and only if
+ // they have the same content.
//
// Unlike strcasecmp(), this function can handle NULL argument(s).
// A NULL C string is considered different to any non-NULL C string,
@@ -2978,8 +3047,8 @@
static bool CaseInsensitiveCStringEquals(const char* lhs,
const char* rhs);
- // Compares two wide C strings, ignoring case. Returns true iff they
- // have the same content.
+ // Compares two wide C strings, ignoring case. Returns true if and only if
+ // they have the same content.
//
// Unlike wcscasecmp(), this function can handle NULL argument(s).
// A NULL C string is considered different to any non-NULL wide C string,
@@ -2993,17 +3062,23 @@
static bool CaseInsensitiveWideCStringEquals(const wchar_t* lhs,
const wchar_t* rhs);
- // Returns true iff the given string ends with the given suffix, ignoring
- // case. Any string is considered to end with an empty suffix.
+ // Returns true if and only if the given string ends with the given suffix,
+ // ignoring case. Any string is considered to end with an empty suffix.
static bool EndsWithCaseInsensitive(
const std::string& str, const std::string& suffix);
// Formats an int value as "%02d".
static std::string FormatIntWidth2(int value); // "%02d" for width == 2
+ // Formats an int value to given width with leading zeros.
+ static std::string FormatIntWidthN(int value, int width);
+
// Formats an int value as "%X".
static std::string FormatHexInt(int value);
+ // Formats an int value as "%X".
+ static std::string FormatHexUInt32(uint32_t value);
+
// Formats a byte as "%02X".
static std::string FormatByte(unsigned char value);
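
This hunk adds FormatIntWidthN and FormatHexUInt32 next to the existing formatting helpers. As an illustrative sketch only of the documented zero-padded-width and "%X" behaviour (the *Sketch names below are hypothetical, not googletest API), equivalent formatting with the standard library would be:

    #include <cstdint>
    #include <cstdio>
    #include <string>

    // Zero-pads value to the requested width, e.g. (7, 3) -> "007".
    std::string FormatIntWidthNSketch(int value, int width) {
      char buf[32];
      std::snprintf(buf, sizeof(buf), "%0*d", width, value);
      return buf;
    }

    // Upper-case hex without a "0x" prefix, e.g. 48879 -> "BEEF".
    std::string FormatHexUInt32Sketch(std::uint32_t value) {
      char buf[16];
      std::snprintf(buf, sizeof(buf), "%lX", static_cast<unsigned long>(value));
      return buf;
    }
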
@@ -3018,7 +3093,7 @@
} // namespace internal
} // namespace testing
-#endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_
GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
/* class A needs to have dll-interface to be used by clients of class B */)
@@ -3088,7 +3163,7 @@
const FilePath& base_name,
const char* extension);
- // Returns true iff the path is "".
+ // Returns true if and only if the path is "".
bool IsEmpty() const { return pathname_.empty(); }
// If input name has a trailing separator character, removes it and returns
@@ -3173,7 +3248,7 @@
void Normalize();
- // Returns a pointer to the last occurence of a valid path separator in
+ // Returns a pointer to the last occurrence of a valid path separator in
// the FilePath. On Windows, for example, both '/' and '\' are valid path
// separators. Returns NULL if no path separator was found.
const char* FindLastPathSeparator() const;
@@ -3186,11 +3261,7 @@
GTEST_DISABLE_MSC_WARNINGS_POP_() // 4251
-#endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_
-// This file was GENERATED by command:
-// pump.py gtest-type-util.h.pump
-// DO NOT EDIT BY HAND!!!
-
+#endif // GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_
// Copyright 2008 Google Inc.
// All Rights Reserved.
//
@@ -3221,17 +3292,12 @@
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Type utilities needed for implementing typed and type-parameterized
-// tests. This file is generated by a SCRIPT. DO NOT EDIT BY HAND!
-//
-// Currently we support at most 50 types in a list, and at most 50
-// type-parameterized tests in one type-parameterized test suite.
-// Please contact googletestframework@googlegroups.com if you need
-// more.
+// tests.
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
-#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
+#define GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
// #ifdef __GNUC__ is too general here. It is possible to use gcc without using
@@ -3261,1568 +3327,43 @@
return s;
}
-// GetTypeName<T>() returns a human-readable name of type T.
-// NB: This function is also used in Google Mock, so don't move it inside of
-// the typed-test-only section below.
-template <typename T>
-std::string GetTypeName() {
-# if GTEST_HAS_RTTI
-
- const char* const name = typeid(T).name();
-# if GTEST_HAS_CXXABI_H_ || defined(__HP_aCC)
+#if GTEST_HAS_RTTI
+// GetTypeName(const std::type_info&) returns a human-readable name of type T.
+inline std::string GetTypeName(const std::type_info& type) {
+ const char* const name = type.name();
+#if GTEST_HAS_CXXABI_H_ || defined(__HP_aCC)
int status = 0;
// gcc's implementation of typeid(T).name() mangles the type name,
// so we have to demangle it.
-# if GTEST_HAS_CXXABI_H_
+#if GTEST_HAS_CXXABI_H_
using abi::__cxa_demangle;
-# endif // GTEST_HAS_CXXABI_H_
+#endif // GTEST_HAS_CXXABI_H_
char* const readable_name = __cxa_demangle(name, nullptr, nullptr, &status);
const std::string name_str(status == 0 ? readable_name : name);
free(readable_name);
return CanonicalizeForStdLibVersioning(name_str);
-# else
+#else
return name;
-# endif // GTEST_HAS_CXXABI_H_ || __HP_aCC
+#endif // GTEST_HAS_CXXABI_H_ || __HP_aCC
+}
+#endif // GTEST_HAS_RTTI
-# else
-
+// GetTypeName<T>() returns a human-readable name of type T if and only if
+// RTTI is enabled, otherwise it returns a dummy type name.
+// NB: This function is also used in Google Mock, so don't move it inside of
+// the typed-test-only section below.
+template <typename T>
+std::string GetTypeName() {
+#if GTEST_HAS_RTTI
+ return GetTypeName(typeid(T));
+#else
return "<type>";
-
-# endif // GTEST_HAS_RTTI
+#endif // GTEST_HAS_RTTI
}
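
The rewritten GetTypeName above demangles typeid(T).name() via abi::__cxa_demangle when <cxxabi.h> is available. A minimal standalone sketch of the same pattern, assuming a GCC/Clang toolchain with RTTI enabled (DemangledNameSketch is a hypothetical name, not the function added by the patch):

    #include <cxxabi.h>

    #include <cstdlib>
    #include <string>
    #include <typeinfo>

    template <typename T>
    std::string DemangledNameSketch() {
      int status = 0;
      // GCC/Clang mangle typeid(T).name(); __cxa_demangle restores a readable form.
      char* readable =
          abi::__cxa_demangle(typeid(T).name(), nullptr, nullptr, &status);
      std::string result(status == 0 && readable != nullptr ? readable
                                                            : typeid(T).name());
      std::free(readable);  // free(nullptr) is a no-op
      return result;
    }
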
-#if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P
-
-// AssertyTypeEq<T1, T2>::type is defined iff T1 and T2 are the same
-// type. This can be used as a compile-time assertion to ensure that
-// two types are equal.
-
-template <typename T1, typename T2>
-struct AssertTypeEq;
-
-template <typename T>
-struct AssertTypeEq<T, T> {
- typedef bool type;
-};
-
-// A unique type used as the default value for the arguments of class
-// template Types. This allows us to simulate variadic templates
-// (e.g. Types<int>, Type<int, double>, and etc), which C++ doesn't
-// support directly.
+// A unique type indicating an empty node
struct None {};
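
From here the patch deletes the pump.py-generated Types0 ... Types50 structs: the fixed-arity family is superseded by a variadic implementation (not shown in this hunk). As an editorial sketch only (TypeListSketch is a hypothetical name, not the actual replacement code), the same Head/Tail decomposition can be expressed with a variadic template:

    // Empty list: no Head/Tail members, mirroring the old Types0.
    template <typename... Ts>
    struct TypeListSketch {};

    // Non-empty list: peel off the first type, recurse on the rest.
    template <typename H, typename... Ts>
    struct TypeListSketch<H, Ts...> {
      using Head = H;
      using Tail = TypeListSketch<Ts...>;
    };

For instance, TypeListSketch<int, double>::Head is int and its Tail is TypeListSketch<double>, which is what the generated TypesN structs below provided at each fixed arity.
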
-// The following family of struct and struct templates are used to
-// represent type lists. In particular, TypesN<T1, T2, ..., TN>
-// represents a type list with N types (T1, T2, ..., and TN) in it.
-// Except for Types0, every struct in the family has two member types:
-// Head for the first type in the list, and Tail for the rest of the
-// list.
-
-// The empty type list.
-struct Types0 {};
-
-// Type lists of length 1, 2, 3, and so on.
-
-template <typename T1>
-struct Types1 {
- typedef T1 Head;
- typedef Types0 Tail;
-};
-template <typename T1, typename T2>
-struct Types2 {
- typedef T1 Head;
- typedef Types1<T2> Tail;
-};
-
-template <typename T1, typename T2, typename T3>
-struct Types3 {
- typedef T1 Head;
- typedef Types2<T2, T3> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4>
-struct Types4 {
- typedef T1 Head;
- typedef Types3<T2, T3, T4> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5>
-struct Types5 {
- typedef T1 Head;
- typedef Types4<T2, T3, T4, T5> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6>
-struct Types6 {
- typedef T1 Head;
- typedef Types5<T2, T3, T4, T5, T6> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7>
-struct Types7 {
- typedef T1 Head;
- typedef Types6<T2, T3, T4, T5, T6, T7> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8>
-struct Types8 {
- typedef T1 Head;
- typedef Types7<T2, T3, T4, T5, T6, T7, T8> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9>
-struct Types9 {
- typedef T1 Head;
- typedef Types8<T2, T3, T4, T5, T6, T7, T8, T9> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10>
-struct Types10 {
- typedef T1 Head;
- typedef Types9<T2, T3, T4, T5, T6, T7, T8, T9, T10> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11>
-struct Types11 {
- typedef T1 Head;
- typedef Types10<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12>
-struct Types12 {
- typedef T1 Head;
- typedef Types11<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13>
-struct Types13 {
- typedef T1 Head;
- typedef Types12<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14>
-struct Types14 {
- typedef T1 Head;
- typedef Types13<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15>
-struct Types15 {
- typedef T1 Head;
- typedef Types14<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16>
-struct Types16 {
- typedef T1 Head;
- typedef Types15<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17>
-struct Types17 {
- typedef T1 Head;
- typedef Types16<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18>
-struct Types18 {
- typedef T1 Head;
- typedef Types17<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19>
-struct Types19 {
- typedef T1 Head;
- typedef Types18<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20>
-struct Types20 {
- typedef T1 Head;
- typedef Types19<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21>
-struct Types21 {
- typedef T1 Head;
- typedef Types20<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22>
-struct Types22 {
- typedef T1 Head;
- typedef Types21<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23>
-struct Types23 {
- typedef T1 Head;
- typedef Types22<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24>
-struct Types24 {
- typedef T1 Head;
- typedef Types23<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25>
-struct Types25 {
- typedef T1 Head;
- typedef Types24<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26>
-struct Types26 {
- typedef T1 Head;
- typedef Types25<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27>
-struct Types27 {
- typedef T1 Head;
- typedef Types26<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28>
-struct Types28 {
- typedef T1 Head;
- typedef Types27<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29>
-struct Types29 {
- typedef T1 Head;
- typedef Types28<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30>
-struct Types30 {
- typedef T1 Head;
- typedef Types29<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31>
-struct Types31 {
- typedef T1 Head;
- typedef Types30<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32>
-struct Types32 {
- typedef T1 Head;
- typedef Types31<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33>
-struct Types33 {
- typedef T1 Head;
- typedef Types32<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34>
-struct Types34 {
- typedef T1 Head;
- typedef Types33<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35>
-struct Types35 {
- typedef T1 Head;
- typedef Types34<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36>
-struct Types36 {
- typedef T1 Head;
- typedef Types35<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37>
-struct Types37 {
- typedef T1 Head;
- typedef Types36<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38>
-struct Types38 {
- typedef T1 Head;
- typedef Types37<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39>
-struct Types39 {
- typedef T1 Head;
- typedef Types38<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40>
-struct Types40 {
- typedef T1 Head;
- typedef Types39<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41>
-struct Types41 {
- typedef T1 Head;
- typedef Types40<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42>
-struct Types42 {
- typedef T1 Head;
- typedef Types41<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43>
-struct Types43 {
- typedef T1 Head;
- typedef Types42<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
- T43> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44>
-struct Types44 {
- typedef T1 Head;
- typedef Types43<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
- T44> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45>
-struct Types45 {
- typedef T1 Head;
- typedef Types44<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
- T44, T45> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45,
- typename T46>
-struct Types46 {
- typedef T1 Head;
- typedef Types45<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
- T44, T45, T46> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45,
- typename T46, typename T47>
-struct Types47 {
- typedef T1 Head;
- typedef Types46<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
- T44, T45, T46, T47> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45,
- typename T46, typename T47, typename T48>
-struct Types48 {
- typedef T1 Head;
- typedef Types47<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
- T44, T45, T46, T47, T48> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45,
- typename T46, typename T47, typename T48, typename T49>
-struct Types49 {
- typedef T1 Head;
- typedef Types48<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
- T44, T45, T46, T47, T48, T49> Tail;
-};
-
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45,
- typename T46, typename T47, typename T48, typename T49, typename T50>
-struct Types50 {
- typedef T1 Head;
- typedef Types49<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
- T44, T45, T46, T47, T48, T49, T50> Tail;
-};
-
-
-} // namespace internal
-
-// We don't want to require the users to write TypesN<...> directly,
-// as that would require them to count the length. Types<...> is much
-// easier to write, but generates horrible messages when there is a
-// compiler error, as gcc insists on printing out each template
-// argument, even if it has the default value (this means Types<int>
-// will appear as Types<int, None, None, ..., None> in the compiler
-// errors).
-//
-// Our solution is to combine the best part of the two approaches: a
-// user would write Types<T1, ..., TN>, and Google Test will translate
-// that to TypesN<T1, ..., TN> internally to make error messages
-// readable. The translation is done by the 'type' member of the
-// Types template.
-template <typename T1 = internal::None, typename T2 = internal::None,
- typename T3 = internal::None, typename T4 = internal::None,
- typename T5 = internal::None, typename T6 = internal::None,
- typename T7 = internal::None, typename T8 = internal::None,
- typename T9 = internal::None, typename T10 = internal::None,
- typename T11 = internal::None, typename T12 = internal::None,
- typename T13 = internal::None, typename T14 = internal::None,
- typename T15 = internal::None, typename T16 = internal::None,
- typename T17 = internal::None, typename T18 = internal::None,
- typename T19 = internal::None, typename T20 = internal::None,
- typename T21 = internal::None, typename T22 = internal::None,
- typename T23 = internal::None, typename T24 = internal::None,
- typename T25 = internal::None, typename T26 = internal::None,
- typename T27 = internal::None, typename T28 = internal::None,
- typename T29 = internal::None, typename T30 = internal::None,
- typename T31 = internal::None, typename T32 = internal::None,
- typename T33 = internal::None, typename T34 = internal::None,
- typename T35 = internal::None, typename T36 = internal::None,
- typename T37 = internal::None, typename T38 = internal::None,
- typename T39 = internal::None, typename T40 = internal::None,
- typename T41 = internal::None, typename T42 = internal::None,
- typename T43 = internal::None, typename T44 = internal::None,
- typename T45 = internal::None, typename T46 = internal::None,
- typename T47 = internal::None, typename T48 = internal::None,
- typename T49 = internal::None, typename T50 = internal::None>
-struct Types {
- typedef internal::Types50<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41, T42, T43, T44, T45, T46, T47, T48, T49, T50> type;
-};
-
-template <>
-struct Types<internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types0 type;
-};
-template <typename T1>
-struct Types<T1, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types1<T1> type;
-};
-template <typename T1, typename T2>
-struct Types<T1, T2, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types2<T1, T2> type;
-};
-template <typename T1, typename T2, typename T3>
-struct Types<T1, T2, T3, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types3<T1, T2, T3> type;
-};
-template <typename T1, typename T2, typename T3, typename T4>
-struct Types<T1, T2, T3, T4, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types4<T1, T2, T3, T4> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5>
-struct Types<T1, T2, T3, T4, T5, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types5<T1, T2, T3, T4, T5> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6>
-struct Types<T1, T2, T3, T4, T5, T6, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types6<T1, T2, T3, T4, T5, T6> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7>
-struct Types<T1, T2, T3, T4, T5, T6, T7, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types7<T1, T2, T3, T4, T5, T6, T7> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types8<T1, T2, T3, T4, T5, T6, T7, T8> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types9<T1, T2, T3, T4, T5, T6, T7, T8, T9> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types10<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types11<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types12<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11,
- T12> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types13<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types14<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types15<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types16<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types17<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types18<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types19<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types20<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types21<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types22<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types23<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types24<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types25<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types26<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25,
- T26> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types27<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types28<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types29<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types30<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types31<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types32<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types33<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types34<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types35<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types36<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types37<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types38<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types39<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types40<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39,
- T40> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types41<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, internal::None,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types42<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41, T42> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None, internal::None> {
- typedef internal::Types43<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41, T42, T43> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,
- internal::None, internal::None, internal::None, internal::None,
- internal::None, internal::None> {
- typedef internal::Types44<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41, T42, T43, T44> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44, T45,
- internal::None, internal::None, internal::None, internal::None,
- internal::None> {
- typedef internal::Types45<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41, T42, T43, T44, T45> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45,
- typename T46>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44, T45,
- T46, internal::None, internal::None, internal::None, internal::None> {
- typedef internal::Types46<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41, T42, T43, T44, T45, T46> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45,
- typename T46, typename T47>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44, T45,
- T46, T47, internal::None, internal::None, internal::None> {
- typedef internal::Types47<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41, T42, T43, T44, T45, T46, T47> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45,
- typename T46, typename T47, typename T48>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44, T45,
- T46, T47, T48, internal::None, internal::None> {
- typedef internal::Types48<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41, T42, T43, T44, T45, T46, T47, T48> type;
-};
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45,
- typename T46, typename T47, typename T48, typename T49>
-struct Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15,
- T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29, T30,
- T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44, T45,
- T46, T47, T48, T49, internal::None> {
- typedef internal::Types49<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41, T42, T43, T44, T45, T46, T47, T48, T49> type;
-};
-
-namespace internal {
-
# define GTEST_TEMPLATE_ template <typename T> class
// The template "selector" struct TemplateSel<Tmpl> is used to
@@ -4844,1695 +3385,65 @@
# define GTEST_BIND_(TmplSel, T) \
TmplSel::template Bind<T>::type
-// A unique struct template used as the default value for the
-// arguments of class template Templates. This allows us to simulate
-// variadic templates (e.g. Templates<int>, Templates<int, double>,
-// and etc), which C++ doesn't support directly.
-template <typename T>
-struct NoneT {};
-
-// The following family of struct and struct templates are used to
-// represent template lists. In particular, TemplatesN<T1, T2, ...,
-// TN> represents a list of N templates (T1, T2, ..., and TN). Except
-// for Templates0, every struct in the family has two member types:
-// Head for the selector of the first template in the list, and Tail
-// for the rest of the list.
-
-// The empty template list.
-struct Templates0 {};
-
-// Template lists of length 1, 2, 3, and so on.
-
-template <GTEST_TEMPLATE_ T1>
-struct Templates1 {
- typedef TemplateSel<T1> Head;
- typedef Templates0 Tail;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2>
-struct Templates2 {
- typedef TemplateSel<T1> Head;
- typedef Templates1<T2> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3>
-struct Templates3 {
- typedef TemplateSel<T1> Head;
- typedef Templates2<T2, T3> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4>
-struct Templates4 {
- typedef TemplateSel<T1> Head;
- typedef Templates3<T2, T3, T4> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5>
-struct Templates5 {
- typedef TemplateSel<T1> Head;
- typedef Templates4<T2, T3, T4, T5> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6>
-struct Templates6 {
- typedef TemplateSel<T1> Head;
- typedef Templates5<T2, T3, T4, T5, T6> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7>
-struct Templates7 {
- typedef TemplateSel<T1> Head;
- typedef Templates6<T2, T3, T4, T5, T6, T7> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8>
-struct Templates8 {
- typedef TemplateSel<T1> Head;
- typedef Templates7<T2, T3, T4, T5, T6, T7, T8> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9>
-struct Templates9 {
- typedef TemplateSel<T1> Head;
- typedef Templates8<T2, T3, T4, T5, T6, T7, T8, T9> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10>
-struct Templates10 {
- typedef TemplateSel<T1> Head;
- typedef Templates9<T2, T3, T4, T5, T6, T7, T8, T9, T10> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11>
-struct Templates11 {
- typedef TemplateSel<T1> Head;
- typedef Templates10<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12>
-struct Templates12 {
- typedef TemplateSel<T1> Head;
- typedef Templates11<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13>
-struct Templates13 {
- typedef TemplateSel<T1> Head;
- typedef Templates12<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14>
-struct Templates14 {
- typedef TemplateSel<T1> Head;
- typedef Templates13<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15>
-struct Templates15 {
- typedef TemplateSel<T1> Head;
- typedef Templates14<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16>
-struct Templates16 {
- typedef TemplateSel<T1> Head;
- typedef Templates15<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17>
-struct Templates17 {
- typedef TemplateSel<T1> Head;
- typedef Templates16<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18>
-struct Templates18 {
- typedef TemplateSel<T1> Head;
- typedef Templates17<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19>
-struct Templates19 {
- typedef TemplateSel<T1> Head;
- typedef Templates18<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20>
-struct Templates20 {
- typedef TemplateSel<T1> Head;
- typedef Templates19<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21>
-struct Templates21 {
- typedef TemplateSel<T1> Head;
- typedef Templates20<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22>
-struct Templates22 {
- typedef TemplateSel<T1> Head;
- typedef Templates21<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23>
-struct Templates23 {
- typedef TemplateSel<T1> Head;
- typedef Templates22<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24>
-struct Templates24 {
- typedef TemplateSel<T1> Head;
- typedef Templates23<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25>
-struct Templates25 {
- typedef TemplateSel<T1> Head;
- typedef Templates24<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26>
-struct Templates26 {
- typedef TemplateSel<T1> Head;
- typedef Templates25<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27>
-struct Templates27 {
- typedef TemplateSel<T1> Head;
- typedef Templates26<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28>
-struct Templates28 {
- typedef TemplateSel<T1> Head;
- typedef Templates27<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29>
-struct Templates29 {
- typedef TemplateSel<T1> Head;
- typedef Templates28<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30>
-struct Templates30 {
- typedef TemplateSel<T1> Head;
- typedef Templates29<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31>
-struct Templates31 {
- typedef TemplateSel<T1> Head;
- typedef Templates30<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32>
-struct Templates32 {
- typedef TemplateSel<T1> Head;
- typedef Templates31<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33>
-struct Templates33 {
- typedef TemplateSel<T1> Head;
- typedef Templates32<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34>
-struct Templates34 {
- typedef TemplateSel<T1> Head;
- typedef Templates33<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35>
-struct Templates35 {
- typedef TemplateSel<T1> Head;
- typedef Templates34<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36>
-struct Templates36 {
- typedef TemplateSel<T1> Head;
- typedef Templates35<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37>
-struct Templates37 {
- typedef TemplateSel<T1> Head;
- typedef Templates36<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38>
-struct Templates38 {
- typedef TemplateSel<T1> Head;
- typedef Templates37<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39>
-struct Templates39 {
- typedef TemplateSel<T1> Head;
- typedef Templates38<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40>
-struct Templates40 {
- typedef TemplateSel<T1> Head;
- typedef Templates39<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41>
-struct Templates41 {
- typedef TemplateSel<T1> Head;
- typedef Templates40<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42>
-struct Templates42 {
- typedef TemplateSel<T1> Head;
- typedef Templates41<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
- T42> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43>
-struct Templates43 {
- typedef TemplateSel<T1> Head;
- typedef Templates42<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
- T43> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44>
-struct Templates44 {
- typedef TemplateSel<T1> Head;
- typedef Templates43<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
- T43, T44> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45>
-struct Templates45 {
- typedef TemplateSel<T1> Head;
- typedef Templates44<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
- T43, T44, T45> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,
- GTEST_TEMPLATE_ T46>
-struct Templates46 {
- typedef TemplateSel<T1> Head;
- typedef Templates45<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
- T43, T44, T45, T46> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,
- GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47>
-struct Templates47 {
- typedef TemplateSel<T1> Head;
- typedef Templates46<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
- T43, T44, T45, T46, T47> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,
- GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47, GTEST_TEMPLATE_ T48>
-struct Templates48 {
- typedef TemplateSel<T1> Head;
- typedef Templates47<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
- T43, T44, T45, T46, T47, T48> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,
- GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47, GTEST_TEMPLATE_ T48,
- GTEST_TEMPLATE_ T49>
-struct Templates49 {
- typedef TemplateSel<T1> Head;
- typedef Templates48<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
- T43, T44, T45, T46, T47, T48, T49> Tail;
-};
-
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,
- GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47, GTEST_TEMPLATE_ T48,
- GTEST_TEMPLATE_ T49, GTEST_TEMPLATE_ T50>
-struct Templates50 {
- typedef TemplateSel<T1> Head;
- typedef Templates49<T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42,
- T43, T44, T45, T46, T47, T48, T49, T50> Tail;
-};
-
-
-// We don't want to require the users to write TemplatesN<...> directly,
-// as that would require them to count the length. Templates<...> is much
-// easier to write, but generates horrible messages when there is a
-// compiler error, as gcc insists on printing out each template
-// argument, even if it has the default value (this means Templates<list>
-// will appear as Templates<list, NoneT, NoneT, ..., NoneT> in the compiler
-// errors).
-//
-// Our solution is to combine the best part of the two approaches: a
-// user would write Templates<T1, ..., TN>, and Google Test will translate
-// that to TemplatesN<T1, ..., TN> internally to make error messages
-// readable. The translation is done by the 'type' member of the
-// Templates template.
-template <GTEST_TEMPLATE_ T1 = NoneT, GTEST_TEMPLATE_ T2 = NoneT,
- GTEST_TEMPLATE_ T3 = NoneT, GTEST_TEMPLATE_ T4 = NoneT,
- GTEST_TEMPLATE_ T5 = NoneT, GTEST_TEMPLATE_ T6 = NoneT,
- GTEST_TEMPLATE_ T7 = NoneT, GTEST_TEMPLATE_ T8 = NoneT,
- GTEST_TEMPLATE_ T9 = NoneT, GTEST_TEMPLATE_ T10 = NoneT,
- GTEST_TEMPLATE_ T11 = NoneT, GTEST_TEMPLATE_ T12 = NoneT,
- GTEST_TEMPLATE_ T13 = NoneT, GTEST_TEMPLATE_ T14 = NoneT,
- GTEST_TEMPLATE_ T15 = NoneT, GTEST_TEMPLATE_ T16 = NoneT,
- GTEST_TEMPLATE_ T17 = NoneT, GTEST_TEMPLATE_ T18 = NoneT,
- GTEST_TEMPLATE_ T19 = NoneT, GTEST_TEMPLATE_ T20 = NoneT,
- GTEST_TEMPLATE_ T21 = NoneT, GTEST_TEMPLATE_ T22 = NoneT,
- GTEST_TEMPLATE_ T23 = NoneT, GTEST_TEMPLATE_ T24 = NoneT,
- GTEST_TEMPLATE_ T25 = NoneT, GTEST_TEMPLATE_ T26 = NoneT,
- GTEST_TEMPLATE_ T27 = NoneT, GTEST_TEMPLATE_ T28 = NoneT,
- GTEST_TEMPLATE_ T29 = NoneT, GTEST_TEMPLATE_ T30 = NoneT,
- GTEST_TEMPLATE_ T31 = NoneT, GTEST_TEMPLATE_ T32 = NoneT,
- GTEST_TEMPLATE_ T33 = NoneT, GTEST_TEMPLATE_ T34 = NoneT,
- GTEST_TEMPLATE_ T35 = NoneT, GTEST_TEMPLATE_ T36 = NoneT,
- GTEST_TEMPLATE_ T37 = NoneT, GTEST_TEMPLATE_ T38 = NoneT,
- GTEST_TEMPLATE_ T39 = NoneT, GTEST_TEMPLATE_ T40 = NoneT,
- GTEST_TEMPLATE_ T41 = NoneT, GTEST_TEMPLATE_ T42 = NoneT,
- GTEST_TEMPLATE_ T43 = NoneT, GTEST_TEMPLATE_ T44 = NoneT,
- GTEST_TEMPLATE_ T45 = NoneT, GTEST_TEMPLATE_ T46 = NoneT,
- GTEST_TEMPLATE_ T47 = NoneT, GTEST_TEMPLATE_ T48 = NoneT,
- GTEST_TEMPLATE_ T49 = NoneT, GTEST_TEMPLATE_ T50 = NoneT>
+template <GTEST_TEMPLATE_ Head_, GTEST_TEMPLATE_... Tail_>
struct Templates {
- typedef Templates50<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
- T42, T43, T44, T45, T46, T47, T48, T49, T50> type;
+ using Head = TemplateSel<Head_>;
+ using Tail = Templates<Tail_...>;
};
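// ---------------------------------------------------------------------------
// Illustrative sketch (not part of the diff above): the variadic Templates
// replaces the fifty hand-rolled TemplatesN structs with a single Head/Tail
// type list. The snippet below is a self-contained, hypothetical example of
// how such a list can be consumed by recursing on Tail; TemplateSel,
// Templates, Length, A, B and C are local stand-ins defined here only for
// illustration, and only the Head/Tail shape mirrors the code in this diff.
//
// #include <cstddef>
// #include <iostream>
//
// // Stand-in for what GTEST_TEMPLATE_ expands to: a class template that
// // takes a single type parameter.
// template <template <typename> class Tmpl>
// struct TemplateSel {};
//
// // Mirrors the variadic Templates above: a list of class templates with
// // Head/Tail access.
// template <template <typename> class Head_, template <typename> class... Tail_>
// struct Templates {
//   using Head = TemplateSel<Head_>;
//   using Tail = Templates<Tail_...>;
// };
//
// // Recursively computes the length of the list by peeling off Head and
// // recursing on the Tail member.
// template <typename List>
// struct Length;
//
// template <template <typename> class H>
// struct Length<Templates<H>> {
//   static constexpr std::size_t value = 1;
// };
//
// template <template <typename> class H, template <typename> class... Ts>
// struct Length<Templates<H, Ts...>> {
//   static constexpr std::size_t value =
//       1 + Length<typename Templates<H, Ts...>::Tail>::value;
// };
//
// template <typename T> struct A {};
// template <typename T> struct B {};
// template <typename T> struct C {};
//
// int main() {
//   std::cout << Length<Templates<A, B, C>>::value << "\n";  // prints 3
// }
// ---------------------------------------------------------------------------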
-template <>
-struct Templates<NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT> {
- typedef Templates0 type;
-};
-template <GTEST_TEMPLATE_ T1>
-struct Templates<T1, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT> {
- typedef Templates1<T1> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2>
-struct Templates<T1, T2, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT> {
- typedef Templates2<T1, T2> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3>
-struct Templates<T1, T2, T3, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates3<T1, T2, T3> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4>
-struct Templates<T1, T2, T3, T4, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates4<T1, T2, T3, T4> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5>
-struct Templates<T1, T2, T3, T4, T5, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates5<T1, T2, T3, T4, T5> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6>
-struct Templates<T1, T2, T3, T4, T5, T6, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates6<T1, T2, T3, T4, T5, T6> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates7<T1, T2, T3, T4, T5, T6, T7> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates8<T1, T2, T3, T4, T5, T6, T7, T8> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates9<T1, T2, T3, T4, T5, T6, T7, T8, T9> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates10<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates11<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates12<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates13<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates14<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates15<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates16<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates17<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT> {
- typedef Templates18<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT> {
- typedef Templates19<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT> {
- typedef Templates20<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT> {
- typedef Templates21<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT> {
- typedef Templates22<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT> {
- typedef Templates23<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT> {
- typedef Templates24<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT> {
- typedef Templates25<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT> {
- typedef Templates26<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT> {
- typedef Templates27<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT> {
- typedef Templates28<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT> {
- typedef Templates29<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates30<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates31<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates32<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates33<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates34<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates35<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates36<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, NoneT, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates37<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, NoneT, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates38<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates39<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, NoneT, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates40<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, NoneT, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates41<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, NoneT,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates42<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
- T42> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates43<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
- T42, T43> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,
- NoneT, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates44<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
- T42, T43, T44> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,
- T45, NoneT, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates45<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
- T42, T43, T44, T45> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,
- GTEST_TEMPLATE_ T46>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,
- T45, T46, NoneT, NoneT, NoneT, NoneT> {
- typedef Templates46<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
- T42, T43, T44, T45, T46> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,
- GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,
- T45, T46, T47, NoneT, NoneT, NoneT> {
- typedef Templates47<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
- T42, T43, T44, T45, T46, T47> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,
- GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47, GTEST_TEMPLATE_ T48>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,
- T45, T46, T47, T48, NoneT, NoneT> {
- typedef Templates48<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
- T42, T43, T44, T45, T46, T47, T48> type;
-};
-template <GTEST_TEMPLATE_ T1, GTEST_TEMPLATE_ T2, GTEST_TEMPLATE_ T3,
- GTEST_TEMPLATE_ T4, GTEST_TEMPLATE_ T5, GTEST_TEMPLATE_ T6,
- GTEST_TEMPLATE_ T7, GTEST_TEMPLATE_ T8, GTEST_TEMPLATE_ T9,
- GTEST_TEMPLATE_ T10, GTEST_TEMPLATE_ T11, GTEST_TEMPLATE_ T12,
- GTEST_TEMPLATE_ T13, GTEST_TEMPLATE_ T14, GTEST_TEMPLATE_ T15,
- GTEST_TEMPLATE_ T16, GTEST_TEMPLATE_ T17, GTEST_TEMPLATE_ T18,
- GTEST_TEMPLATE_ T19, GTEST_TEMPLATE_ T20, GTEST_TEMPLATE_ T21,
- GTEST_TEMPLATE_ T22, GTEST_TEMPLATE_ T23, GTEST_TEMPLATE_ T24,
- GTEST_TEMPLATE_ T25, GTEST_TEMPLATE_ T26, GTEST_TEMPLATE_ T27,
- GTEST_TEMPLATE_ T28, GTEST_TEMPLATE_ T29, GTEST_TEMPLATE_ T30,
- GTEST_TEMPLATE_ T31, GTEST_TEMPLATE_ T32, GTEST_TEMPLATE_ T33,
- GTEST_TEMPLATE_ T34, GTEST_TEMPLATE_ T35, GTEST_TEMPLATE_ T36,
- GTEST_TEMPLATE_ T37, GTEST_TEMPLATE_ T38, GTEST_TEMPLATE_ T39,
- GTEST_TEMPLATE_ T40, GTEST_TEMPLATE_ T41, GTEST_TEMPLATE_ T42,
- GTEST_TEMPLATE_ T43, GTEST_TEMPLATE_ T44, GTEST_TEMPLATE_ T45,
- GTEST_TEMPLATE_ T46, GTEST_TEMPLATE_ T47, GTEST_TEMPLATE_ T48,
- GTEST_TEMPLATE_ T49>
-struct Templates<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14,
- T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28, T29,
- T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43, T44,
- T45, T46, T47, T48, T49, NoneT> {
- typedef Templates49<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27,
- T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41,
- T42, T43, T44, T45, T46, T47, T48, T49> type;
+template <GTEST_TEMPLATE_ Head_>
+struct Templates<Head_> {
+ using Head = TemplateSel<Head_>;
+ using Tail = None;
};
-// The TypeList template makes it possible to use either a single type
-// or a Types<...> list in TYPED_TEST_SUITE() and
-// INSTANTIATE_TYPED_TEST_SUITE_P().
+// Tuple-like type lists
+template <typename Head_, typename... Tail_>
+struct Types {
+ using Head = Head_;
+ using Tail = Types<Tail_...>;
+};
+template <typename Head_>
+struct Types<Head_> {
+ using Head = Head_;
+ using Tail = None;
+};
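// Illustrative sketch only: the typed and type-parameterized test machinery
// below walks these lists by peeling off Head and recursing on Tail until it
// reaches None.
static_assert(std::is_same<Types<int, double>::Head, int>::value,
              "Head is the first type in the list");
static_assert(std::is_same<Types<int, double>::Tail, Types<double>>::value,
              "Tail is the list of the remaining types");
static_assert(std::is_same<Types<int>::Tail, None>::value,
              "a single-element list terminates the recursion with None");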
+
+// Helper metafunctions to tell apart a single type from types
+// generated by ::testing::Types
+template <typename... Ts>
+struct ProxyTypeList {
+ using type = Types<Ts...>;
+};
+
+template <typename>
+struct is_proxy_type_list : std::false_type {};
+
+template <typename... Ts>
+struct is_proxy_type_list<ProxyTypeList<Ts...>> : std::true_type {};
+
+// Generator which conditionally creates type lists.
+// It recognizes if a requested type list should be created
+// and prevents creating a new type list nested within another one.
template <typename T>
-struct TypeList {
- typedef Types1<T> type;
-};
+struct GenerateTypeList {
+ private:
+ using proxy = typename std::conditional<is_proxy_type_list<T>::value, T,
+ ProxyTypeList<T>>::type;
-template <typename T1, typename T2, typename T3, typename T4, typename T5,
- typename T6, typename T7, typename T8, typename T9, typename T10,
- typename T11, typename T12, typename T13, typename T14, typename T15,
- typename T16, typename T17, typename T18, typename T19, typename T20,
- typename T21, typename T22, typename T23, typename T24, typename T25,
- typename T26, typename T27, typename T28, typename T29, typename T30,
- typename T31, typename T32, typename T33, typename T34, typename T35,
- typename T36, typename T37, typename T38, typename T39, typename T40,
- typename T41, typename T42, typename T43, typename T44, typename T45,
- typename T46, typename T47, typename T48, typename T49, typename T50>
-struct TypeList<Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13,
- T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26, T27, T28,
- T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40, T41, T42, T43,
- T44, T45, T46, T47, T48, T49, T50> > {
- typedef typename Types<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12,
- T13, T14, T15, T16, T17, T18, T19, T20, T21, T22, T23, T24, T25, T26,
- T27, T28, T29, T30, T31, T32, T33, T34, T35, T36, T37, T38, T39, T40,
- T41, T42, T43, T44, T45, T46, T47, T48, T49, T50>::type type;
+ public:
+ using type = typename proxy::type;
};
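// Illustrative sketch only: both spellings below normalize to the same
// internal list, which is what lets TYPED_TEST_SUITE accept either a single
// type or a ::testing::Types<...> list.
static_assert(
    std::is_same<GenerateTypeList<int>::type,
                 GenerateTypeList<ProxyTypeList<int>>::type>::value,
    "a bare type and a proxy list of one type yield the same type list");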
-#endif // GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P
-
} // namespace internal
+
+template <typename... Ts>
+using Types = internal::ProxyTypeList<Ts...>;
+
} // namespace testing
-#endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
// Due to C++ preprocessor weirdness, we need double indirection to
// concatenate two tokens when one of them is __LINE__. Writing
@@ -6546,10 +3457,20 @@
#define GTEST_CONCAT_TOKEN_IMPL_(foo, bar) foo ## bar
// Stringifies its argument.
-#define GTEST_STRINGIFY_(name) #name
+// Work around a bug in visual studio which doesn't accept code like this:
+//
+// #define GTEST_STRINGIFY_(name) #name
+// #define MACRO(a, b, c) ... GTEST_STRINGIFY_(a) ...
+// MACRO(, x, y)
+//
+// Complaining about the argument to GTEST_STRINGIFY_ being empty.
+// This is allowed by the spec.
+#define GTEST_STRINGIFY_HELPER_(name, ...) #name
+#define GTEST_STRINGIFY_(...) GTEST_STRINGIFY_HELPER_(__VA_ARGS__, )
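// Illustrative sketch only ("foo" is an arbitrary token): the variadic
// indirection still stringifies a normal argument, while an empty argument
// becomes "" (sizeof 1), which the GTEST_TEST_ macro later relies on to
// reject empty test names.
static_assert(sizeof(GTEST_STRINGIFY_(foo)) == sizeof("foo"),
              "a non-empty argument stringifies as usual");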
-class ProtocolMessage;
-namespace proto2 { class Message; }
+namespace proto2 {
+class MessageLite;
+}
namespace testing {
@@ -6592,37 +3513,6 @@
IgnoredValue(const T& /* ignored */) {} // NOLINT(runtime/explicit)
};
-// The only type that should be convertible to Secret* is nullptr.
-// The other null pointer constants are not of a type that is convertible to
-// Secret*. Only the literal with the right value is.
-template <typename T>
-using TypeIsValidNullptrConstant = std::integral_constant<
- bool, std::is_same<typename std::decay<T>::type, std::nullptr_t>::value ||
- !std::is_convertible<T, Secret*>::value>;
-
-// Two overloaded helpers for checking at compile time whether an
-// expression is a null pointer literal (i.e. NULL or any 0-valued
-// compile-time integral constant). These helpers have no
-// implementations, as we only need their signatures.
-//
-// Given IsNullLiteralHelper(x), the compiler will pick the first
-// version if x can be implicitly converted to Secret*, and pick the
-// second version otherwise. Since Secret is a secret and incomplete
-// type, the only expression a user can write that has type Secret* is
-// a null pointer literal. Therefore, we know that x is a null
-// pointer literal if and only if the first version is picked by the
-// compiler.
-std::true_type IsNullLiteralHelper(Secret*, std::true_type);
-std::false_type IsNullLiteralHelper(IgnoredValue, std::false_type);
-std::false_type IsNullLiteralHelper(IgnoredValue, std::true_type);
-
-// A compile-time bool constant that is true if and only if x is a null pointer
-// literal (i.e. nullptr, NULL or any 0-valued compile-time integral constant).
-#define GTEST_IS_NULL_LITERAL_(x) \
- decltype(::testing::internal::IsNullLiteralHelper( \
- x, \
- ::testing::internal::TypeIsValidNullptrConstant<decltype(x)>()))::value
-
// Appends the user-supplied message to the Google-Test-generated message.
GTEST_API_ std::string AppendUserMessage(
const std::string& gtest_msg, const Message& user_msg);
@@ -6689,7 +3579,7 @@
// expected_value: "5"
// actual_value: "6"
//
-// The ignoring_case parameter is true iff the assertion is a
+// The ignoring_case parameter is true if and only if the assertion is a
// *_STRCASEEQ*. When it's true, the string " (ignoring case)" will
// be inserted into the message.
GTEST_API_ AssertionResult EqFailure(const char* expected_expression,
@@ -6775,7 +3665,7 @@
//
// See the following article for more details on ULP:
// http://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
- static const size_t kMaxUlps = 4;
+ static const uint32_t kMaxUlps = 4;
// Constructs a FloatingPoint from a raw floating-point number.
//
@@ -6818,15 +3708,15 @@
// Returns the sign bit of this number.
Bits sign_bit() const { return kSignBitMask & u_.bits_; }
- // Returns true iff this is NAN (not a number).
+ // Returns true if and only if this is NAN (not a number).
bool is_nan() const {
// It's a NAN if the exponent bits are all ones and the fraction
// bits are not entirely zeros.
return (exponent_bits() == kExponentBitMask) && (fraction_bits() != 0);
}
- // Returns true iff this number is at most kMaxUlps ULP's away from
- // rhs. In particular, this function:
+ // Returns true if and only if this number is at most kMaxUlps ULP's away
+ // from rhs. In particular, this function:
//
// - returns false if either number is (or both are) NAN.
// - treats really large numbers as almost equal to infinity.
@@ -7006,7 +3896,9 @@
using Test =
typename std::conditional<sizeof(T) != 0, ::testing::Test, void>::type;
- static SetUpTearDownSuiteFuncType GetSetUpCaseOrSuite() {
+ static SetUpTearDownSuiteFuncType GetSetUpCaseOrSuite(const char* filename,
+ int line_num) {
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
SetUpTearDownSuiteFuncType test_case_fp =
GetNotDefaultOrNull(&T::SetUpTestCase, &Test::SetUpTestCase);
SetUpTearDownSuiteFuncType test_suite_fp =
@@ -7014,12 +3906,20 @@
GTEST_CHECK_(!test_case_fp || !test_suite_fp)
<< "Test can not provide both SetUpTestSuite and SetUpTestCase, please "
- "make sure there is only one present ";
+ "make sure there is only one present at "
+ << filename << ":" << line_num;
return test_case_fp != nullptr ? test_case_fp : test_suite_fp;
+#else
+ (void)(filename);
+ (void)(line_num);
+ return &T::SetUpTestSuite;
+#endif
}
- static SetUpTearDownSuiteFuncType GetTearDownCaseOrSuite() {
+ static SetUpTearDownSuiteFuncType GetTearDownCaseOrSuite(const char* filename,
+ int line_num) {
+#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
SetUpTearDownSuiteFuncType test_case_fp =
GetNotDefaultOrNull(&T::TearDownTestCase, &Test::TearDownTestCase);
SetUpTearDownSuiteFuncType test_suite_fp =
@@ -7027,9 +3927,15 @@
GTEST_CHECK_(!test_case_fp || !test_suite_fp)
<< "Test can not provide both TearDownTestSuite and TearDownTestCase,"
- " please make sure there is only one present ";
+ " please make sure there is only one present at"
+ << filename << ":" << line_num;
return test_case_fp != nullptr ? test_case_fp : test_suite_fp;
+#else
+ (void)(filename);
+ (void)(line_num);
+ return &T::TearDownTestSuite;
+#endif
}
};
@@ -7038,11 +3944,11 @@
//
// Arguments:
//
-// test_suite_name: name of the test suite
+// test_suite_name: name of the test suite
// name: name of the test
-// type_param the name of the test's type parameter, or NULL if
+// type_param: the name of the test's type parameter, or NULL if
// this is not a typed or a type-parameterized test.
-// value_param text representation of the test's value parameter,
+// value_param: text representation of the test's value parameter,
// or NULL if this is not a type-parameterized test.
// code_location: code location where the test is defined
// fixture_class_id: ID of the test fixture class
@@ -7062,8 +3968,6 @@
// and returns false. None of pstr, *pstr, and prefix can be NULL.
GTEST_API_ bool SkipPrefix(const char* prefix, const char** pstr);
-#if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P
-
GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
/* class A needs to have dll-interface to be used by clients of class B */)
@@ -7103,8 +4007,9 @@
// Verifies that registered_tests match the test names in
// defined_test_names_; returns registered_tests if successful, or
// aborts the program otherwise.
- const char* VerifyRegisteredTestNames(
- const char* file, int line, const char* registered_tests);
+ const char* VerifyRegisteredTestNames(const char* test_suite_name,
+ const char* file, int line,
+ const char* registered_tests);
private:
typedef ::std::map<std::string, CodeLocation> RegisteredTestsMap;
@@ -7158,7 +4063,7 @@
};
template <typename NameGenerator>
-void GenerateNamesRecursively(Types0, std::vector<std::string>*, int) {}
+void GenerateNamesRecursively(internal::None, std::vector<std::string>*, int) {}
template <typename NameGenerator, typename Types>
void GenerateNamesRecursively(Types, std::vector<std::string>* result, int i) {
@@ -7200,14 +4105,16 @@
// list.
MakeAndRegisterTestInfo(
(std::string(prefix) + (prefix[0] == '\0' ? "" : "/") + case_name +
- "/" + type_names[index])
+ "/" + type_names[static_cast<size_t>(index)])
.c_str(),
StripTrailingSpaces(GetPrefixUntilComma(test_names)).c_str(),
GetTypeName<Type>().c_str(),
nullptr, // No value parameter.
code_location, GetTypeId<FixtureClass>(),
- SuiteApiResolver<TestClass>::GetSetUpCaseOrSuite(),
- SuiteApiResolver<TestClass>::GetTearDownCaseOrSuite(),
+ SuiteApiResolver<TestClass>::GetSetUpCaseOrSuite(
+ code_location.file.c_str(), code_location.line),
+ SuiteApiResolver<TestClass>::GetTearDownCaseOrSuite(
+ code_location.file.c_str(), code_location.line),
new TestFactoryImpl<TestClass>);
// Next, recurses (at compile time) with the tail of the type list.
@@ -7223,7 +4130,7 @@
// The base case for the compile time recursion.
template <GTEST_TEMPLATE_ Fixture, class TestSel>
-class TypeParameterizedTest<Fixture, TestSel, Types0> {
+class TypeParameterizedTest<Fixture, TestSel, internal::None> {
public:
static bool Register(const char* /*prefix*/, const CodeLocation&,
const char* /*case_name*/, const char* /*test_names*/,
@@ -7234,6 +4141,11 @@
}
};
+GTEST_API_ void RegisterTypeParameterizedTestSuite(const char* test_suite_name,
+ CodeLocation code_location);
+GTEST_API_ void RegisterTypeParameterizedTestSuiteInstantiation(
+ const char* case_name);
+
// TypeParameterizedTestSuite<Fixture, Tests, Types>::Register()
// registers *all combinations* of 'Tests' and 'Types' with Google
// Test. The return value is insignificant - we just need to return
@@ -7246,6 +4158,7 @@
const char* test_names,
const std::vector<std::string>& type_names =
GenerateNames<DefaultNameGenerator, Types>()) {
+ RegisterTypeParameterizedTestSuiteInstantiation(case_name);
std::string test_name = StripTrailingSpaces(
GetPrefixUntilComma(test_names));
if (!state->TestExists(test_name)) {
@@ -7275,7 +4188,7 @@
// The base case for the compile time recursion.
template <GTEST_TEMPLATE_ Fixture, typename Types>
-class TypeParameterizedTestSuite<Fixture, Templates0, Types> {
+class TypeParameterizedTestSuite<Fixture, internal::None, Types> {
public:
static bool Register(const char* /*prefix*/, const CodeLocation&,
const TypedTestSuitePState* /*state*/,
@@ -7286,8 +4199,6 @@
}
};
-#endif // GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P
-
// Returns the current OS stack trace as an std::string.
//
// The maximum number of stack frames to be included is specified by
@@ -7319,6 +4230,16 @@
const char* value;
};
+// Helper for declaring std::string within 'if' statement
+// in pre C++17 build environment.
+struct TrueWithString {
+ TrueWithString() = default;
+ explicit TrueWithString(const char* str) : value(str) {}
+ explicit TrueWithString(const std::string& str) : value(str) {}
+ explicit operator bool() const { return true; }
+ std::string value;
+};
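// Illustrative sketch (hypothetical helper, for exposition only): declaring
// the object in the condition keeps the string alive across the whole
// if/else chain that the assertion macros expand to.
inline void TrueWithStringUsageSketch() {
  if (TrueWithString gtest_msg{}) {
    gtest_msg.value = "details accumulated while running the statement";
  }  // gtest_msg (and its string) lives until the end of the if statement
}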
+
// A simple Linear Congruential Generator for generating random
// numbers with a uniform distribution. Unlike rand() and srand(), it
// doesn't use global state (and therefore can't interfere with user
@@ -7326,78 +4247,54 @@
// but it's good enough for our purposes.
class GTEST_API_ Random {
public:
- static const UInt32 kMaxRange = 1u << 31;
+ static const uint32_t kMaxRange = 1u << 31;
- explicit Random(UInt32 seed) : state_(seed) {}
+ explicit Random(uint32_t seed) : state_(seed) {}
- void Reseed(UInt32 seed) { state_ = seed; }
+ void Reseed(uint32_t seed) { state_ = seed; }
// Generates a random number from [0, range). Crashes if 'range' is
// 0 or greater than kMaxRange.
- UInt32 Generate(UInt32 range);
+ uint32_t Generate(uint32_t range);
private:
- UInt32 state_;
+ uint32_t state_;
GTEST_DISALLOW_COPY_AND_ASSIGN_(Random);
};
-// Defining a variable of type CompileAssertTypesEqual<T1, T2> will cause a
-// compiler error iff T1 and T2 are different types.
-template <typename T1, typename T2>
-struct CompileAssertTypesEqual;
-
-template <typename T>
-struct CompileAssertTypesEqual<T, T> {
-};
-
-// Removes the reference from a type if it is a reference type,
-// otherwise leaves it unchanged. This is the same as
-// tr1::remove_reference, which is not widely available yet.
-template <typename T>
-struct RemoveReference { typedef T type; }; // NOLINT
-template <typename T>
-struct RemoveReference<T&> { typedef T type; }; // NOLINT
-
-// A handy wrapper around RemoveReference that works when the argument
-// T depends on template parameters.
-#define GTEST_REMOVE_REFERENCE_(T) \
- typename ::testing::internal::RemoveReference<T>::type
-
-// Removes const from a type if it is a const type, otherwise leaves
-// it unchanged. This is the same as tr1::remove_const, which is not
-// widely available yet.
-template <typename T>
-struct RemoveConst { typedef T type; }; // NOLINT
-template <typename T>
-struct RemoveConst<const T> { typedef T type; }; // NOLINT
-
-// MSVC 8.0, Sun C++, and IBM XL C++ have a bug which causes the above
-// definition to fail to remove the const in 'const int[3]' and 'const
-// char[3][4]'. The following specialization works around the bug.
-template <typename T, size_t N>
-struct RemoveConst<const T[N]> {
- typedef typename RemoveConst<T>::type type[N];
-};
-
-// A handy wrapper around RemoveConst that works when the argument
-// T depends on template parameters.
-#define GTEST_REMOVE_CONST_(T) \
- typename ::testing::internal::RemoveConst<T>::type
-
// Turns const U&, U&, const U, and U all into U.
#define GTEST_REMOVE_REFERENCE_AND_CONST_(T) \
- GTEST_REMOVE_CONST_(GTEST_REMOVE_REFERENCE_(T))
+ typename std::remove_const<typename std::remove_reference<T>::type>::type
-// IsAProtocolMessage<T>::value is a compile-time bool constant that's
-// true iff T is type ProtocolMessage, proto2::Message, or a subclass
-// of those.
+// HasDebugStringAndShortDebugString<T>::value is a compile-time bool constant
+// that's true if and only if T has methods DebugString() and ShortDebugString()
+// that return std::string.
template <typename T>
-struct IsAProtocolMessage
- : public bool_constant<
- std::is_convertible<const T*, const ::ProtocolMessage*>::value ||
- std::is_convertible<const T*, const ::proto2::Message*>::value> {
+class HasDebugStringAndShortDebugString {
+ private:
+ template <typename C>
+ static auto CheckDebugString(C*) -> typename std::is_same<
+ std::string, decltype(std::declval<const C>().DebugString())>::type;
+ template <typename>
+ static std::false_type CheckDebugString(...);
+
+ template <typename C>
+ static auto CheckShortDebugString(C*) -> typename std::is_same<
+ std::string, decltype(std::declval<const C>().ShortDebugString())>::type;
+ template <typename>
+ static std::false_type CheckShortDebugString(...);
+
+ using HasDebugStringType = decltype(CheckDebugString<T>(nullptr));
+ using HasShortDebugStringType = decltype(CheckShortDebugString<T>(nullptr));
+
+ public:
+ static constexpr bool value =
+ HasDebugStringType::value && HasShortDebugStringType::value;
};
+template <typename T>
+constexpr bool HasDebugStringAndShortDebugString<T>::value;
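// Illustrative sketch (the struct below is hypothetical, for exposition
// only): the detector reports true only when both methods exist and return
// std::string, which is how protobuf-like messages are recognized further
// down in the printer selection.
struct HasDebugStringSketchType {
  std::string DebugString() const { return ""; }
  std::string ShortDebugString() const { return ""; }
};
static_assert(
    HasDebugStringAndShortDebugString<HasDebugStringSketchType>::value,
    "both methods present and returning std::string");
static_assert(!HasDebugStringAndShortDebugString<int>::value,
              "fundamental types have neither method");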
+
// When the compiler sees expression IsContainerTest<C>(0), if C is an
// STL-style container class, the first overload of IsContainerTest
// will be viable (since both C::iterator* and C::const_iterator* are
@@ -7463,7 +4360,7 @@
struct IsRecursiveContainerImpl;
template <typename C>
-struct IsRecursiveContainerImpl<C, false> : public false_type {};
+struct IsRecursiveContainerImpl<C, false> : public std::false_type {};
// Since the IsRecursiveContainerImpl depends on the IsContainerTest we need to
// obey the same inconsistencies as the IsContainerTest, namely check if
@@ -7473,9 +4370,9 @@
struct IsRecursiveContainerImpl<C, true> {
using value_type = decltype(*std::declval<typename C::const_iterator>());
using type =
- is_same<typename std::remove_const<
- typename std::remove_reference<value_type>::type>::type,
- C>;
+ std::is_same<typename std::remove_const<
+ typename std::remove_reference<value_type>::type>::type,
+ C>;
};
// IsRecursiveContainer<Type> is a unary compile-time predicate that
@@ -7487,13 +4384,6 @@
template <typename C>
struct IsRecursiveContainer : public IsRecursiveContainerImpl<C>::type {};
-// EnableIf<condition>::type is void when 'Cond' is true, and
-// undefined when 'Cond' is false. To use SFINAE to make a function
-// overload only apply when a particular expression is true, add
-// "typename EnableIf<expression>::type* = 0" as the last parameter.
-template<bool> struct EnableIf;
-template<> struct EnableIf<true> { typedef void type; }; // NOLINT
-
// Utilities for native arrays.
// ArrayEq() compares two k-dimensional native arrays using the
@@ -7616,10 +4506,9 @@
}
private:
- enum {
- kCheckTypeIsNotConstOrAReference = StaticAssertTypeEqHelper<
- Element, GTEST_REMOVE_REFERENCE_AND_CONST_(Element)>::value
- };
+ static_assert(!std::is_const<Element>::value, "Type must not be const");
+ static_assert(!std::is_reference<Element>::value,
+ "Type must not be a reference");
// Initializes this object with a copy of the input.
void InitCopy(const Element* array, size_t a_size) {
@@ -7640,8 +4529,6 @@
const Element* array_;
size_t size_;
void (NativeArray::*clone_)(const Element*, size_t);
-
- GTEST_DISALLOW_ASSIGN_(NativeArray);
};
// Backport of std::index_sequence.
@@ -7665,32 +4552,44 @@
// Backport of std::make_index_sequence.
// It uses O(ln(N)) instantiation depth.
template <size_t N>
-struct MakeIndexSequence
- : DoubleSequence<N % 2 == 1, typename MakeIndexSequence<N / 2>::type,
+struct MakeIndexSequenceImpl
+ : DoubleSequence<N % 2 == 1, typename MakeIndexSequenceImpl<N / 2>::type,
N / 2>::type {};
template <>
-struct MakeIndexSequence<0> : IndexSequence<> {};
+struct MakeIndexSequenceImpl<0> : IndexSequence<> {};
-// FIXME: This implementation of ElemFromList is O(1) in instantiation depth,
-// but it is O(N^2) in total instantiations. Not sure if this is the best
-// tradeoff, as it will make it somewhat slow to compile.
-template <typename T, size_t, size_t>
-struct ElemFromListImpl {};
+template <size_t N>
+using MakeIndexSequence = typename MakeIndexSequenceImpl<N>::type;
-template <typename T, size_t I>
-struct ElemFromListImpl<T, I, I> {
- using type = T;
+template <typename... T>
+using IndexSequenceFor = typename MakeIndexSequence<sizeof...(T)>::type;
+
+template <size_t>
+struct Ignore {
+ Ignore(...); // NOLINT
};
-// Get the Nth element from T...
-// It uses O(1) instantiation depth.
-template <size_t N, typename I, typename... T>
-struct ElemFromList;
+template <typename>
+struct ElemFromListImpl;
+template <size_t... I>
+struct ElemFromListImpl<IndexSequence<I...>> {
+ // We make Ignore a template to solve a problem with MSVC.
+ // A non-template Ignore would work fine with `decltype(Ignore(I))...`, but
+ // MSVC doesn't understand how to deal with that pack expansion.
+ // Use `0 * I` to have a single instantiation of Ignore.
+ template <typename R>
+ static R Apply(Ignore<0 * I>..., R (*)(), ...);
+};
-template <size_t N, size_t... I, typename... T>
-struct ElemFromList<N, IndexSequence<I...>, T...>
- : ElemFromListImpl<T, N, I>... {};
+template <size_t N, typename... T>
+struct ElemFromList {
+ using type =
+ decltype(ElemFromListImpl<typename MakeIndexSequence<N>::type>::Apply(
+ static_cast<T (*)()>(nullptr)...));
+};
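// Illustrative sketch only: overload resolution consumes N Ignore arguments
// and then deduces R from the next function pointer, so the N-th type is
// found with O(1) template instantiation depth.
static_assert(
    std::is_same<ElemFromList<1, int, double, char>::type, double>::value,
    "index 1 of (int, double, char) is double");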
+
+struct FlatTupleConstructTag {};
template <typename... T>
class FlatTuple;
@@ -7700,11 +4599,11 @@
template <typename... T, size_t I>
struct FlatTupleElemBase<FlatTuple<T...>, I> {
- using value_type =
- typename ElemFromList<I, typename MakeIndexSequence<sizeof...(T)>::type,
- T...>::type;
+ using value_type = typename ElemFromList<I, T...>::type;
FlatTupleElemBase() = default;
- explicit FlatTupleElemBase(value_type t) : value(std::move(t)) {}
+ template <typename Arg>
+ explicit FlatTupleElemBase(FlatTupleConstructTag, Arg&& t)
+ : value(std::forward<Arg>(t)) {}
value_type value;
};
@@ -7716,13 +4615,35 @@
: FlatTupleElemBase<FlatTuple<T...>, Idx>... {
using Indices = IndexSequence<Idx...>;
FlatTupleBase() = default;
- explicit FlatTupleBase(T... t)
- : FlatTupleElemBase<FlatTuple<T...>, Idx>(std::move(t))... {}
+ template <typename... Args>
+ explicit FlatTupleBase(FlatTupleConstructTag, Args&&... args)
+ : FlatTupleElemBase<FlatTuple<T...>, Idx>(FlatTupleConstructTag{},
+ std::forward<Args>(args))... {}
+
+ template <size_t I>
+ const typename ElemFromList<I, T...>::type& Get() const {
+ return FlatTupleElemBase<FlatTuple<T...>, I>::value;
+ }
+
+ template <size_t I>
+ typename ElemFromList<I, T...>::type& Get() {
+ return FlatTupleElemBase<FlatTuple<T...>, I>::value;
+ }
+
+ template <typename F>
+ auto Apply(F&& f) -> decltype(std::forward<F>(f)(this->Get<Idx>()...)) {
+ return std::forward<F>(f)(Get<Idx>()...);
+ }
+
+ template <typename F>
+ auto Apply(F&& f) const -> decltype(std::forward<F>(f)(this->Get<Idx>()...)) {
+ return std::forward<F>(f)(Get<Idx>()...);
+ }
};
// Analog to std::tuple but with different tradeoffs.
// This class minimizes the template instantiation depth, thus allowing more
-// elements that std::tuple would. std::tuple has been seen to require an
+// elements than std::tuple would. std::tuple has been seen to require an
// instantiation depth of more than 10x the number of elements in some
// implementations.
// FlatTuple and ElemFromList are not recursive and have a fixed depth
@@ -7733,21 +4654,17 @@
class FlatTuple
: private FlatTupleBase<FlatTuple<T...>,
typename MakeIndexSequence<sizeof...(T)>::type> {
- using Indices = typename FlatTuple::FlatTupleBase::Indices;
+ using Indices = typename FlatTupleBase<
+ FlatTuple<T...>, typename MakeIndexSequence<sizeof...(T)>::type>::Indices;
public:
FlatTuple() = default;
- explicit FlatTuple(T... t) : FlatTuple::FlatTupleBase(std::move(t)...) {}
+ template <typename... Args>
+ explicit FlatTuple(FlatTupleConstructTag tag, Args&&... args)
+ : FlatTuple::FlatTupleBase(tag, std::forward<Args>(args)...) {}
- template <size_t I>
- const typename ElemFromList<I, Indices, T...>::type& Get() const {
- return static_cast<const FlatTupleElemBase<FlatTuple, I>*>(this)->value;
- }
-
- template <size_t I>
- typename ElemFromList<I, Indices, T...>::type& Get() {
- return static_cast<FlatTupleElemBase<FlatTuple, I>*>(this)->value;
- }
+ using FlatTuple::FlatTupleBase::Apply;
+ using FlatTuple::FlatTupleBase::Get;
};
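// Illustrative sketch (hypothetical helper, for exposition only): elements
// are forwarded through the tag constructor and read back with Get<I>().
inline bool FlatTupleUsageSketch() {
  FlatTuple<int, char> t(FlatTupleConstructTag{}, 42, 'x');
  return t.Get<0>() == 42 && t.Get<1>() == 'x';
}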
// Utility functions to be called with static_assert to induce deprecation
@@ -7780,6 +4697,22 @@
} // namespace internal
} // namespace testing
+namespace std {
+// Some standard library implementations use `struct tuple_size` and some use
+// `class tuple_size`. Clang warns about the mismatch.
+// https://reviews.llvm.org/D55466
+#ifdef __clang__
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Wmismatched-tags"
+#endif
+template <typename... Ts>
+struct tuple_size<testing::internal::FlatTuple<Ts...>>
+ : std::integral_constant<size_t, sizeof...(Ts)> {};
+#ifdef __clang__
+#pragma clang diagnostic pop
+#endif
+} // namespace std
+
#define GTEST_MESSAGE_AT_(file, line, message, result_type) \
::testing::internal::AssertHelper(result_type, file, line, message) \
= ::testing::Message()
@@ -7802,48 +4735,122 @@
// Suppress MSVC warning 4072 (unreachable code) for the code following
// statement if it returns or throws (or doesn't return or throw in some
// situations).
+// NOTE: The "else" is important to keep this expansion to prevent a top-level
+// "else" from attaching to our "if".
#define GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement) \
- if (::testing::internal::AlwaysTrue()) { statement; }
+ if (::testing::internal::AlwaysTrue()) { \
+ statement; \
+ } else /* NOLINT */ \
+ static_assert(true, "") // User must have a semicolon after expansion.
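// For example, if a hypothetical macro M(statement) expanded to nothing more
// than GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement), then user
// code such as
//
//   if (condition)
//     M(DoSomething());
//   else
//     HandleOtherCase();
//
// keeps "else HandleOtherCase();" paired with "if (condition)"; without the
// trailing "else" above it would silently bind to the macro's internal "if".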
-#define GTEST_TEST_THROW_(statement, expected_exception, fail) \
- GTEST_AMBIGUOUS_ELSE_BLOCKER_ \
- if (::testing::internal::ConstCharPtr gtest_msg = "") { \
- bool gtest_caught_expected = false; \
- try { \
- GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \
- } \
- catch (expected_exception const&) { \
- gtest_caught_expected = true; \
- } \
- catch (...) { \
- gtest_msg.value = \
- "Expected: " #statement " throws an exception of type " \
- #expected_exception ".\n Actual: it throws a different type."; \
- goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \
- } \
- if (!gtest_caught_expected) { \
- gtest_msg.value = \
- "Expected: " #statement " throws an exception of type " \
- #expected_exception ".\n Actual: it throws nothing."; \
- goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \
- } \
- } else \
- GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__): \
- fail(gtest_msg.value)
+#if GTEST_HAS_EXCEPTIONS
+
+namespace testing {
+namespace internal {
+
+class NeverThrown {
+ public:
+ const char* what() const noexcept {
+ return "this exception should never be thrown";
+ }
+};
+
+} // namespace internal
+} // namespace testing
+
+#if GTEST_HAS_RTTI
+
+#define GTEST_EXCEPTION_TYPE_(e) ::testing::internal::GetTypeName(typeid(e))
+
+#else // GTEST_HAS_RTTI
+
+#define GTEST_EXCEPTION_TYPE_(e) \
+ std::string { "an std::exception-derived error" }
+
+#endif // GTEST_HAS_RTTI
+
+#define GTEST_TEST_THROW_CATCH_STD_EXCEPTION_(statement, expected_exception) \
+ catch (typename std::conditional< \
+ std::is_same<typename std::remove_cv<typename std::remove_reference< \
+ expected_exception>::type>::type, \
+ std::exception>::value, \
+ const ::testing::internal::NeverThrown&, const std::exception&>::type \
+ e) { \
+ gtest_msg.value = "Expected: " #statement \
+ " throws an exception of type " #expected_exception \
+ ".\n Actual: it throws "; \
+ gtest_msg.value += GTEST_EXCEPTION_TYPE_(e); \
+ gtest_msg.value += " with description \""; \
+ gtest_msg.value += e.what(); \
+ gtest_msg.value += "\"."; \
+ goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \
+ }
+
+#else // GTEST_HAS_EXCEPTIONS
+
+#define GTEST_TEST_THROW_CATCH_STD_EXCEPTION_(statement, expected_exception)
+
+#endif // GTEST_HAS_EXCEPTIONS
+
+#define GTEST_TEST_THROW_(statement, expected_exception, fail) \
+ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \
+ if (::testing::internal::TrueWithString gtest_msg{}) { \
+ bool gtest_caught_expected = false; \
+ try { \
+ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \
+ } catch (expected_exception const&) { \
+ gtest_caught_expected = true; \
+ } \
+ GTEST_TEST_THROW_CATCH_STD_EXCEPTION_(statement, expected_exception) \
+ catch (...) { \
+ gtest_msg.value = "Expected: " #statement \
+ " throws an exception of type " #expected_exception \
+ ".\n Actual: it throws a different type."; \
+ goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \
+ } \
+ if (!gtest_caught_expected) { \
+ gtest_msg.value = "Expected: " #statement \
+ " throws an exception of type " #expected_exception \
+ ".\n Actual: it throws nothing."; \
+ goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \
+ } \
+ } else /*NOLINT*/ \
+ GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__) \
+ : fail(gtest_msg.value.c_str())
+
+#if GTEST_HAS_EXCEPTIONS
+
+#define GTEST_TEST_NO_THROW_CATCH_STD_EXCEPTION_() \
+ catch (std::exception const& e) { \
+ gtest_msg.value = "it throws "; \
+ gtest_msg.value += GTEST_EXCEPTION_TYPE_(e); \
+ gtest_msg.value += " with description \""; \
+ gtest_msg.value += e.what(); \
+ gtest_msg.value += "\"."; \
+ goto GTEST_CONCAT_TOKEN_(gtest_label_testnothrow_, __LINE__); \
+ }
+
+#else // GTEST_HAS_EXCEPTIONS
+
+#define GTEST_TEST_NO_THROW_CATCH_STD_EXCEPTION_()
+
+#endif // GTEST_HAS_EXCEPTIONS
#define GTEST_TEST_NO_THROW_(statement, fail) \
GTEST_AMBIGUOUS_ELSE_BLOCKER_ \
- if (::testing::internal::AlwaysTrue()) { \
+ if (::testing::internal::TrueWithString gtest_msg{}) { \
try { \
GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \
} \
+ GTEST_TEST_NO_THROW_CATCH_STD_EXCEPTION_() \
catch (...) { \
+ gtest_msg.value = "it throws."; \
goto GTEST_CONCAT_TOKEN_(gtest_label_testnothrow_, __LINE__); \
} \
} else \
GTEST_CONCAT_TOKEN_(gtest_label_testnothrow_, __LINE__): \
- fail("Expected: " #statement " doesn't throw an exception.\n" \
- " Actual: it throws.")
+ fail(("Expected: " #statement " doesn't throw an exception.\n" \
+ " Actual: " + gtest_msg.value).c_str())
#define GTEST_TEST_ANY_THROW_(statement, fail) \
GTEST_AMBIGUOUS_ELSE_BLOCKER_ \
@@ -7866,7 +4873,7 @@
// Implements Boolean test assertions such as EXPECT_TRUE. expression can be
// either a boolean expression or an AssertionResult. text is a textual
-// represenation of expression as it was passed into the EXPECT_TRUE.
+// representation of expression as it was passed into the EXPECT_TRUE.
#define GTEST_TEST_BOOLEAN_(expression, text, actual, expected, fail) \
GTEST_AMBIGUOUS_ELSE_BLOCKER_ \
if (const ::testing::AssertionResult gtest_ar_ = \
@@ -7896,16 +4903,23 @@
// Helper macro for defining tests.
#define GTEST_TEST_(test_suite_name, test_name, parent_class, parent_id) \
+ static_assert(sizeof(GTEST_STRINGIFY_(test_suite_name)) > 1, \
+ "test_suite_name must not be empty"); \
+ static_assert(sizeof(GTEST_STRINGIFY_(test_name)) > 1, \
+ "test_name must not be empty"); \
class GTEST_TEST_CLASS_NAME_(test_suite_name, test_name) \
: public parent_class { \
public: \
- GTEST_TEST_CLASS_NAME_(test_suite_name, test_name)() {} \
- \
- private: \
- virtual void TestBody(); \
- static ::testing::TestInfo* const test_info_ GTEST_ATTRIBUTE_UNUSED_; \
+ GTEST_TEST_CLASS_NAME_(test_suite_name, test_name)() = default; \
+ ~GTEST_TEST_CLASS_NAME_(test_suite_name, test_name)() override = default; \
GTEST_DISALLOW_COPY_AND_ASSIGN_(GTEST_TEST_CLASS_NAME_(test_suite_name, \
test_name)); \
+ GTEST_DISALLOW_MOVE_AND_ASSIGN_(GTEST_TEST_CLASS_NAME_(test_suite_name, \
+ test_name)); \
+ \
+ private: \
+ void TestBody() override; \
+ static ::testing::TestInfo* const test_info_ GTEST_ATTRIBUTE_UNUSED_; \
}; \
\
::testing::TestInfo* const GTEST_TEST_CLASS_NAME_(test_suite_name, \
@@ -7914,14 +4928,14 @@
#test_suite_name, #test_name, nullptr, nullptr, \
::testing::internal::CodeLocation(__FILE__, __LINE__), (parent_id), \
::testing::internal::SuiteApiResolver< \
- parent_class>::GetSetUpCaseOrSuite(), \
+ parent_class>::GetSetUpCaseOrSuite(__FILE__, __LINE__), \
::testing::internal::SuiteApiResolver< \
- parent_class>::GetTearDownCaseOrSuite(), \
+ parent_class>::GetTearDownCaseOrSuite(__FILE__, __LINE__), \
new ::testing::internal::TestFactoryImpl<GTEST_TEST_CLASS_NAME_( \
test_suite_name, test_name)>); \
void GTEST_TEST_CLASS_NAME_(test_suite_name, test_name)::TestBody()
-#endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_
// Copyright 2005, Google Inc.
// All rights reserved.
//
@@ -7959,8 +4973,8 @@
// directly.
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_
-#define GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_
// Copyright 2005, Google Inc.
// All rights reserved.
@@ -7997,8 +5011,8 @@
// death tests. They are subject to change without notice.
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_
-#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_
+#define GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_
// Copyright 2007, Google Inc.
// All rights reserved.
@@ -8034,16 +5048,14 @@
// This file implements just enough of the matcher interface to allow
// EXPECT_DEATH and friends to accept a matcher argument.
-// IWYU pragma: private, include "testing/base/public/gunit.h"
-// IWYU pragma: friend third_party/googletest/googlemock/.*
-// IWYU pragma: friend third_party/googletest/googletest/.*
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_MATCHERS_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_MATCHERS_H_
-#ifndef GTEST_INCLUDE_GTEST_GTEST_MATCHERS_H_
-#define GTEST_INCLUDE_GTEST_GTEST_MATCHERS_H_
-
+#include <atomic>
#include <memory>
#include <ostream>
#include <string>
+#include <type_traits>
// Copyright 2007, Google Inc.
// All rights reserved.
@@ -8144,10 +5156,11 @@
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_
-#define GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_PRINTERS_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_PRINTERS_H_
#include <functional>
+#include <memory>
#include <ostream> // NOLINT
#include <sstream>
#include <string>
@@ -8156,60 +5169,121 @@
#include <utility>
#include <vector>
-#if GTEST_HAS_ABSL
-#include "absl/strings/string_view.h"
-#include "absl/types/optional.h"
-#include "absl/types/variant.h"
-#endif // GTEST_HAS_ABSL
namespace testing {
-// Definitions in the 'internal' and 'internal2' name spaces are
-// subject to change without notice. DO NOT USE THEM IN USER CODE!
-namespace internal2 {
+// Definitions in the internal* namespaces are subject to change without notice.
+// DO NOT USE THEM IN USER CODE!
+namespace internal {
-// Prints the given number of bytes in the given object to the given
-// ostream.
-GTEST_API_ void PrintBytesInObjectTo(const unsigned char* obj_bytes,
- size_t count,
- ::std::ostream* os);
+template <typename T>
+void UniversalPrint(const T& value, ::std::ostream* os);
-// For selecting which printer to use when a given type has neither <<
-// nor PrintTo().
-enum TypeKind {
- kProtobuf, // a protobuf type
- kConvertibleToInteger, // a type implicitly convertible to BiggestInt
- // (e.g. a named or unnamed enum type)
-#if GTEST_HAS_ABSL
- kConvertibleToStringView, // a type implicitly convertible to
- // absl::string_view
-#endif
- kOtherType // anything else
-};
+// Used to print an STL-style container when the user doesn't define
+// a PrintTo() for it.
+struct ContainerPrinter {
+ template <typename T,
+ typename = typename std::enable_if<
+ (sizeof(IsContainerTest<T>(0)) == sizeof(IsContainer)) &&
+ !IsRecursiveContainer<T>::value>::type>
+ static void PrintValue(const T& container, std::ostream* os) {
+ const size_t kMaxCount = 32; // The maximum number of elements to print.
+ *os << '{';
+ size_t count = 0;
+ for (auto&& elem : container) {
+ if (count > 0) {
+ *os << ',';
+ if (count == kMaxCount) { // Enough has been printed.
+ *os << " ...";
+ break;
+ }
+ }
+ *os << ' ';
+ // We cannot call PrintTo(elem, os) here as PrintTo() doesn't
+ // handle `elem` being a native array.
+ internal::UniversalPrint(elem, os);
+ ++count;
+ }
-// TypeWithoutFormatter<T, kTypeKind>::PrintValue(value, os) is called
-// by the universal printer to print a value of type T when neither
-// operator<< nor PrintTo() is defined for T, where kTypeKind is the
-// "kind" of T as defined by enum TypeKind.
-template <typename T, TypeKind kTypeKind>
-class TypeWithoutFormatter {
- public:
- // This default version is called when kTypeKind is kOtherType.
- static void PrintValue(const T& value, ::std::ostream* os) {
- PrintBytesInObjectTo(static_cast<const unsigned char*>(
- reinterpret_cast<const void*>(&value)),
- sizeof(value), os);
+ if (count > 0) {
+ *os << ' ';
+ }
+ *os << '}';
}
};
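// Illustrative sketch (hypothetical helper, for exposition only): a
// three-element vector prints as "{ 1, 2, 3 }"; anything past 32 elements is
// elided with "...". E.g. ContainerPrinterUsageSketch(std::vector<int>{1, 2, 3})
// returns "{ 1, 2, 3 }".
template <typename T>
std::string ContainerPrinterUsageSketch(const T& values) {
  ::std::stringstream ss;
  ContainerPrinter::PrintValue(values, &ss);
  return ss.str();
}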
-// We print a protobuf using its ShortDebugString() when the string
-// doesn't exceed this many characters; otherwise we print it using
-// DebugString() for better readability.
-const size_t kProtobufOneLinerMaxLength = 50;
+// Used to print a pointer that is neither a char pointer nor a member
+// pointer, when the user doesn't define PrintTo() for it. (A member
+// variable pointer or member function pointer doesn't really point to
+// a location in the address space. Their representation is
+// implementation-defined. Therefore they will be printed as raw
+// bytes.)
+struct FunctionPointerPrinter {
+ template <typename T, typename = typename std::enable_if<
+ std::is_function<T>::value>::type>
+ static void PrintValue(T* p, ::std::ostream* os) {
+ if (p == nullptr) {
+ *os << "NULL";
+ } else {
+ // T is a function type, so '*os << p' doesn't do what we want
+ // (it just prints p as bool). We want to print p as a const
+ // void*.
+ *os << reinterpret_cast<const void*>(p);
+ }
+ }
+};
-template <typename T>
-class TypeWithoutFormatter<T, kProtobuf> {
- public:
+struct PointerPrinter {
+ template <typename T>
+ static void PrintValue(T* p, ::std::ostream* os) {
+ if (p == nullptr) {
+ *os << "NULL";
+ } else {
+ // T is not a function type. We just call << to print p,
+ // relying on ADL to pick up user-defined << for their pointer
+ // types, if any.
+ *os << p;
+ }
+ }
+};
+
+namespace internal_stream_operator_without_lexical_name_lookup {
+
+// The presence of an operator<< here will terminate lexical scope lookup
+// straight away (even though it cannot be a match because of its argument
+// types). Thus, the two operator<< calls in StreamPrinter will find only ADL
+// candidates.
+struct LookupBlocker {};
+void operator<<(LookupBlocker, LookupBlocker);
+
+struct StreamPrinter {
+ template <typename T,
+ // Don't accept member pointers here. We'd print them via implicit
+ // conversion to bool, which isn't useful.
+ typename = typename std::enable_if<
+ !std::is_member_pointer<T>::value>::type,
+ // Only accept types for which we can find a streaming operator via
+ // ADL (possibly involving implicit conversions).
+ typename = decltype(std::declval<std::ostream&>()
+ << std::declval<const T&>())>
+ static void PrintValue(const T& value, ::std::ostream* os) {
+ // Call streaming operator found by ADL, possibly with implicit conversions
+ // of the arguments.
+ *os << value;
+ }
+};
+
+} // namespace internal_stream_operator_without_lexical_name_lookup
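// For example, a user type
//
//   namespace foo {
//   struct Point { int x, y; };
//   std::ostream& operator<<(std::ostream& os, const Point& p);
//   }  // namespace foo
//
// remains printable here because argument-dependent lookup still searches
// namespace foo, while stream operators that are merely visible lexically at
// this point in the header can no longer be matched by accident.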
+
+struct ProtobufPrinter {
+ // We print a protobuf using its ShortDebugString() when the string
+ // doesn't exceed this many characters; otherwise we print it using
+ // DebugString() for better readability.
+ static const size_t kProtobufOneLinerMaxLength = 50;
+
+ template <typename T,
+ typename = typename std::enable_if<
+ internal::HasDebugStringAndShortDebugString<T>::value>::type>
static void PrintValue(const T& value, ::std::ostream* os) {
std::string pretty_str = value.ShortDebugString();
if (pretty_str.length() > kProtobufOneLinerMaxLength) {
@@ -8219,9 +5293,7 @@
}
};
-template <typename T>
-class TypeWithoutFormatter<T, kConvertibleToInteger> {
- public:
+struct ConvertibleToIntegerPrinter {
// Since T has no << operator or PrintTo() but can be implicitly
// converted to BiggestInt, we print it as a BiggestInt.
//
@@ -8229,113 +5301,74 @@
// case printing it as an integer is the desired behavior. In case
// T is not an enum, printing it as an integer is the best we can do
// given that it has no user-defined printer.
- static void PrintValue(const T& value, ::std::ostream* os) {
- const internal::BiggestInt kBigInt = value;
- *os << kBigInt;
+ static void PrintValue(internal::BiggestInt value, ::std::ostream* os) {
+ *os << value;
}
};
-#if GTEST_HAS_ABSL
-template <typename T>
-class TypeWithoutFormatter<T, kConvertibleToStringView> {
- public:
- // Since T has neither operator<< nor PrintTo() but can be implicitly
- // converted to absl::string_view, we print it as a absl::string_view.
- //
- // Note: the implementation is further below, as it depends on
- // internal::PrintTo symbol which is defined later in the file.
- static void PrintValue(const T& value, ::std::ostream* os);
+struct ConvertibleToStringViewPrinter {
+#if GTEST_INTERNAL_HAS_STRING_VIEW
+ static void PrintValue(internal::StringView value, ::std::ostream* os) {
+ internal::UniversalPrint(value, os);
+ }
+#endif
};
-#endif
-// Prints the given value to the given ostream. If the value is a
-// protocol message, its debug string is printed; if it's an enum or
-// of a type implicitly convertible to BiggestInt, it's printed as an
-// integer; otherwise the bytes in the value are printed. This is
-// what UniversalPrinter<T>::Print() does when it knows nothing about
-// type T and T has neither << operator nor PrintTo().
-//
-// A user can override this behavior for a class type Foo by defining
-// a << operator in the namespace where Foo is defined.
-//
-// We put this operator in namespace 'internal2' instead of 'internal'
-// to simplify the implementation, as much code in 'internal' needs to
-// use << in STL, which would conflict with our own << were it defined
-// in 'internal'.
-//
-// Note that this operator<< takes a generic std::basic_ostream<Char,
-// CharTraits> type instead of the more restricted std::ostream. If
-// we define it to take an std::ostream instead, we'll get an
-// "ambiguous overloads" compiler error when trying to print a type
-// Foo that supports streaming to std::basic_ostream<Char,
-// CharTraits>, as the compiler cannot tell whether
-// operator<<(std::ostream&, const T&) or
-// operator<<(std::basic_stream<Char, CharTraits>, const Foo&) is more
-// specific.
-template <typename Char, typename CharTraits, typename T>
-::std::basic_ostream<Char, CharTraits>& operator<<(
- ::std::basic_ostream<Char, CharTraits>& os, const T& x) {
- TypeWithoutFormatter<T, (internal::IsAProtocolMessage<T>::value
- ? kProtobuf
- : std::is_convertible<
- const T&, internal::BiggestInt>::value
- ? kConvertibleToInteger
- :
-#if GTEST_HAS_ABSL
- std::is_convertible<
- const T&, absl::string_view>::value
- ? kConvertibleToStringView
- :
-#endif
- kOtherType)>::PrintValue(x, &os);
- return os;
-}
-} // namespace internal2
-} // namespace testing
+// Prints the given number of bytes in the given object to the given
+// ostream.
+GTEST_API_ void PrintBytesInObjectTo(const unsigned char* obj_bytes,
+ size_t count,
+ ::std::ostream* os);
+struct RawBytesPrinter {
+ // SFINAE on `sizeof` to make sure we have a complete type.
+ template <typename T, size_t = sizeof(T)>
+ static void PrintValue(const T& value, ::std::ostream* os) {
+ PrintBytesInObjectTo(
+ static_cast<const unsigned char*>(
+ // Load bearing cast to void* to support iOS
+ reinterpret_cast<const void*>(std::addressof(value))),
+ sizeof(value), os);
+ }
+};
-// This namespace MUST NOT BE NESTED IN ::testing, or the name look-up
-// magic needed for implementing UniversalPrinter won't work.
-namespace testing_internal {
+struct FallbackPrinter {
+ template <typename T>
+ static void PrintValue(const T&, ::std::ostream* os) {
+ *os << "(incomplete type)";
+ }
+};
-// Used to print a value that is not an STL-style container when the
-// user doesn't define PrintTo() for it.
+// Try every printer in order and return the first one that works.
+template <typename T, typename E, typename Printer, typename... Printers>
+struct FindFirstPrinter : FindFirstPrinter<T, E, Printers...> {};
+
+template <typename T, typename Printer, typename... Printers>
+struct FindFirstPrinter<
+ T, decltype(Printer::PrintValue(std::declval<const T&>(), nullptr)),
+ Printer, Printers...> {
+ using type = Printer;
+};
+
+// Select the best printer in the following order:
+// - Print containers (they have begin/end/etc).
+// - Print function pointers.
+// - Print object pointers.
+// - Use the stream operator, if available.
+// - Print protocol buffers.
+// - Print types convertible to BiggestInt.
+// - Print types convertible to StringView, if available.
+// - Fallback to printing the raw bytes of the object.
template <typename T>
-void DefaultPrintNonContainerTo(const T& value, ::std::ostream* os) {
- // With the following statement, during unqualified name lookup,
- // testing::internal2::operator<< appears as if it was declared in
- // the nearest enclosing namespace that contains both
- // ::testing_internal and ::testing::internal2, i.e. the global
- // namespace. For more details, refer to the C++ Standard section
- // 7.3.4-1 [namespace.udir]. This allows us to fall back onto
- // testing::internal2::operator<< in case T doesn't come with a <<
- // operator.
- //
- // We cannot write 'using ::testing::internal2::operator<<;', which
- // gcc 3.3 fails to compile due to a compiler bug.
- using namespace ::testing::internal2; // NOLINT
-
- // Assuming T is defined in namespace foo, in the next statement,
- // the compiler will consider all of:
- //
- // 1. foo::operator<< (thanks to Koenig look-up),
- // 2. ::operator<< (as the current namespace is enclosed in ::),
- // 3. testing::internal2::operator<< (thanks to the using statement above).
- //
- // The operator<< whose type matches T best will be picked.
- //
- // We deliberately allow #2 to be a candidate, as sometimes it's
- // impossible to define #1 (e.g. when foo is ::std, defining
- // anything in it is undefined behavior unless you are a compiler
- // vendor.).
- *os << value;
+void PrintWithFallback(const T& value, ::std::ostream* os) {
+ using Printer = typename FindFirstPrinter<
+ T, void, ContainerPrinter, FunctionPointerPrinter, PointerPrinter,
+ internal_stream_operator_without_lexical_name_lookup::StreamPrinter,
+ ProtobufPrinter, ConvertibleToIntegerPrinter,
+ ConvertibleToStringViewPrinter, RawBytesPrinter, FallbackPrinter>::type;
+ Printer::PrintValue(value, os);
}
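A rough illustration of the selection order above (the printed formats are approximate and `Opaque` is a hypothetical type, not part of this header):

#include <string>
#include <vector>
#include "gtest/gtest.h"

struct Opaque { int a; int b; };  // hypothetical: no operator<< defined

void PrintExamples() {
  // Containers are caught by ContainerPrinter and printed element by element.
  std::string s1 = ::testing::PrintToString(std::vector<int>{1, 2, 3});  // "{ 1, 2, 3 }"
  // Opaque has no stream operator, so RawBytesPrinter is the first match.
  std::string s2 = ::testing::PrintToString(Opaque{1, 2});  // e.g. "8-byte object <...>"
}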
-} // namespace testing_internal
-
-namespace testing {
-namespace internal {
-
// FormatForComparison<ToPrint, OtherOperand>::Format(value) formats a
// value of type ToPrint that is an operand of a comparison assertion
// (e.g. ASSERT_EQ). OtherOperand is the type of the other operand in
@@ -8384,6 +5417,14 @@
GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(const char);
GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(wchar_t);
GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(const wchar_t);
+#ifdef __cpp_char8_t
+GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(char8_t);
+GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(const char8_t);
+#endif
+GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(char16_t);
+GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(const char16_t);
+GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(char32_t);
+GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(const char32_t);
#undef GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_
@@ -8401,16 +5442,14 @@
GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(char, ::std::string);
GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const char, ::std::string);
-
-#if GTEST_HAS_GLOBAL_STRING
-GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(char, ::string);
-GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const char, ::string);
+#ifdef __cpp_char8_t
+GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(char8_t, ::std::u8string);
+GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const char8_t, ::std::u8string);
#endif
-
-#if GTEST_HAS_GLOBAL_WSTRING
-GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(wchar_t, ::wstring);
-GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const wchar_t, ::wstring);
-#endif
+GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(char16_t, ::std::u16string);
+GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const char16_t, ::std::u16string);
+GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(char32_t, ::std::u32string);
+GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const char32_t, ::std::u32string);
#if GTEST_HAS_STD_WSTRING
GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(wchar_t, ::std::wstring);
@@ -8443,85 +5482,6 @@
template <typename T>
class UniversalPrinter;
-template <typename T>
-void UniversalPrint(const T& value, ::std::ostream* os);
-
-enum DefaultPrinterType {
- kPrintContainer,
- kPrintPointer,
- kPrintFunctionPointer,
- kPrintOther,
-};
-template <DefaultPrinterType type> struct WrapPrinterType {};
-
-// Used to print an STL-style container when the user doesn't define
-// a PrintTo() for it.
-template <typename C>
-void DefaultPrintTo(WrapPrinterType<kPrintContainer> /* dummy */,
- const C& container, ::std::ostream* os) {
- const size_t kMaxCount = 32; // The maximum number of elements to print.
- *os << '{';
- size_t count = 0;
- for (typename C::const_iterator it = container.begin();
- it != container.end(); ++it, ++count) {
- if (count > 0) {
- *os << ',';
- if (count == kMaxCount) { // Enough has been printed.
- *os << " ...";
- break;
- }
- }
- *os << ' ';
- // We cannot call PrintTo(*it, os) here as PrintTo() doesn't
- // handle *it being a native array.
- internal::UniversalPrint(*it, os);
- }
-
- if (count > 0) {
- *os << ' ';
- }
- *os << '}';
-}
-
-// Used to print a pointer that is neither a char pointer nor a member
-// pointer, when the user doesn't define PrintTo() for it. (A member
-// variable pointer or member function pointer doesn't really point to
-// a location in the address space. Their representation is
-// implementation-defined. Therefore they will be printed as raw
-// bytes.)
-template <typename T>
-void DefaultPrintTo(WrapPrinterType<kPrintPointer> /* dummy */,
- T* p, ::std::ostream* os) {
- if (p == nullptr) {
- *os << "NULL";
- } else {
- // T is not a function type. We just call << to print p,
- // relying on ADL to pick up user-defined << for their pointer
- // types, if any.
- *os << p;
- }
-}
-template <typename T>
-void DefaultPrintTo(WrapPrinterType<kPrintFunctionPointer> /* dummy */,
- T* p, ::std::ostream* os) {
- if (p == nullptr) {
- *os << "NULL";
- } else {
- // T is a function type, so '*os << p' doesn't do what we want
- // (it just prints p as bool). We want to print p as a const
- // void*.
- *os << reinterpret_cast<const void*>(p);
- }
-}
-
-// Used to print a non-container, non-pointer value when the user
-// doesn't define PrintTo() for it.
-template <typename T>
-void DefaultPrintTo(WrapPrinterType<kPrintOther> /* dummy */,
- const T& value, ::std::ostream* os) {
- ::testing_internal::DefaultPrintNonContainerTo(value, os);
-}
-
// Prints the given value using the << operator if it has one;
// otherwise prints the bytes in it. This is what
// UniversalPrinter<T>::Print() does when PrintTo() is not specialized
@@ -8535,36 +5495,7 @@
// wants).
template <typename T>
void PrintTo(const T& value, ::std::ostream* os) {
- // DefaultPrintTo() is overloaded. The type of its first argument
- // determines which version will be picked.
- //
- // Note that we check for container types here, prior to we check
- // for protocol message types in our operator<<. The rationale is:
- //
- // For protocol messages, we want to give people a chance to
- // override Google Mock's format by defining a PrintTo() or
- // operator<<. For STL containers, other formats can be
- // incompatible with Google Mock's format for the container
- // elements; therefore we check for container types here to ensure
- // that our format is used.
- //
- // Note that MSVC and clang-cl do allow an implicit conversion from
- // pointer-to-function to pointer-to-object, but clang-cl warns on it.
- // So don't use ImplicitlyConvertible if it can be helped since it will
- // cause this warning, and use a separate overload of DefaultPrintTo for
- // function pointers so that the `*os << p` in the object pointer overload
- // doesn't cause that warning either.
- DefaultPrintTo(
- WrapPrinterType <
- (sizeof(IsContainerTest<T>(0)) == sizeof(IsContainer)) &&
- !IsRecursiveContainer<T>::value
- ? kPrintContainer
- : !std::is_pointer<T>::value
- ? kPrintOther
- : std::is_function<typename std::remove_pointer<T>::type>::value
- ? kPrintFunctionPointer
- : kPrintPointer > (),
- value, os);
+ internal::PrintWithFallback(value, os);
}
// The following list of PrintTo() overloads tells
@@ -8595,6 +5526,16 @@
// is implemented as an unsigned type.
GTEST_API_ void PrintTo(wchar_t wc, ::std::ostream* os);
+GTEST_API_ void PrintTo(char32_t c, ::std::ostream* os);
+inline void PrintTo(char16_t c, ::std::ostream* os) {
+ PrintTo(ImplicitCast_<char32_t>(c), os);
+}
+#ifdef __cpp_char8_t
+inline void PrintTo(char8_t c, ::std::ostream* os) {
+ PrintTo(ImplicitCast_<char32_t>(c), os);
+}
+#endif
+
// Overloads for C strings.
GTEST_API_ void PrintTo(const char* s, ::std::ostream* os);
inline void PrintTo(char* s, ::std::ostream* os) {
@@ -8615,6 +5556,23 @@
inline void PrintTo(unsigned char* s, ::std::ostream* os) {
PrintTo(ImplicitCast_<const void*>(s), os);
}
+#ifdef __cpp_char8_t
+// Overloads for u8 strings.
+GTEST_API_ void PrintTo(const char8_t* s, ::std::ostream* os);
+inline void PrintTo(char8_t* s, ::std::ostream* os) {
+ PrintTo(ImplicitCast_<const char8_t*>(s), os);
+}
+#endif
+// Overloads for u16 strings.
+GTEST_API_ void PrintTo(const char16_t* s, ::std::ostream* os);
+inline void PrintTo(char16_t* s, ::std::ostream* os) {
+ PrintTo(ImplicitCast_<const char16_t*>(s), os);
+}
+// Overloads for u32 strings.
+GTEST_API_ void PrintTo(const char32_t* s, ::std::ostream* os);
+inline void PrintTo(char32_t* s, ::std::ostream* os) {
+ PrintTo(ImplicitCast_<const char32_t*>(s), os);
+}
// MSVC can be configured to define wchar_t as a typedef of unsigned
// short. It defines _NATIVE_WCHAR_T_DEFINED when wchar_t is a native
@@ -8643,27 +5601,33 @@
}
}
-// Overloads for ::string and ::std::string.
-#if GTEST_HAS_GLOBAL_STRING
-GTEST_API_ void PrintStringTo(const ::string&s, ::std::ostream* os);
-inline void PrintTo(const ::string& s, ::std::ostream* os) {
- PrintStringTo(s, os);
-}
-#endif // GTEST_HAS_GLOBAL_STRING
-
+// Overloads for ::std::string.
GTEST_API_ void PrintStringTo(const ::std::string&s, ::std::ostream* os);
inline void PrintTo(const ::std::string& s, ::std::ostream* os) {
PrintStringTo(s, os);
}
-// Overloads for ::wstring and ::std::wstring.
-#if GTEST_HAS_GLOBAL_WSTRING
-GTEST_API_ void PrintWideStringTo(const ::wstring&s, ::std::ostream* os);
-inline void PrintTo(const ::wstring& s, ::std::ostream* os) {
- PrintWideStringTo(s, os);
+// Overloads for ::std::u8string
+#ifdef __cpp_char8_t
+GTEST_API_ void PrintU8StringTo(const ::std::u8string& s, ::std::ostream* os);
+inline void PrintTo(const ::std::u8string& s, ::std::ostream* os) {
+ PrintU8StringTo(s, os);
}
-#endif // GTEST_HAS_GLOBAL_WSTRING
+#endif
+// Overloads for ::std::u16string
+GTEST_API_ void PrintU16StringTo(const ::std::u16string& s, ::std::ostream* os);
+inline void PrintTo(const ::std::u16string& s, ::std::ostream* os) {
+ PrintU16StringTo(s, os);
+}
+
+// Overloads for ::std::u32string
+GTEST_API_ void PrintU32StringTo(const ::std::u32string& s, ::std::ostream* os);
+inline void PrintTo(const ::std::u32string& s, ::std::ostream* os) {
+ PrintU32StringTo(s, os);
+}
+
+// Overloads for ::std::wstring.
#if GTEST_HAS_STD_WSTRING
GTEST_API_ void PrintWideStringTo(const ::std::wstring&s, ::std::ostream* os);
inline void PrintTo(const ::std::wstring& s, ::std::ostream* os) {
@@ -8671,12 +5635,12 @@
}
#endif // GTEST_HAS_STD_WSTRING
-#if GTEST_HAS_ABSL
-// Overload for absl::string_view.
-inline void PrintTo(absl::string_view sp, ::std::ostream* os) {
+#if GTEST_INTERNAL_HAS_STRING_VIEW
+// Overload for internal::StringView.
+inline void PrintTo(internal::StringView sp, ::std::ostream* os) {
PrintTo(::std::string(sp), os);
}
-#endif // GTEST_HAS_ABSL
+#endif // GTEST_INTERNAL_HAS_STRING_VIEW
inline void PrintTo(std::nullptr_t, ::std::ostream* os) { *os << "(nullptr)"; }
@@ -8685,6 +5649,43 @@
UniversalPrinter<T&>::Print(ref.get(), os);
}
+inline const void* VoidifyPointer(const void* p) { return p; }
+inline const void* VoidifyPointer(volatile const void* p) {
+ return const_cast<const void*>(p);
+}
+
+template <typename T, typename Ptr>
+void PrintSmartPointer(const Ptr& ptr, std::ostream* os, char) {
+ if (ptr == nullptr) {
+ *os << "(nullptr)";
+ } else {
+    // We can't print the value. Just print the pointer.
+ *os << "(" << (VoidifyPointer)(ptr.get()) << ")";
+ }
+}
+template <typename T, typename Ptr,
+ typename = typename std::enable_if<!std::is_void<T>::value &&
+ !std::is_array<T>::value>::type>
+void PrintSmartPointer(const Ptr& ptr, std::ostream* os, int) {
+ if (ptr == nullptr) {
+ *os << "(nullptr)";
+ } else {
+ *os << "(ptr = " << (VoidifyPointer)(ptr.get()) << ", value = ";
+ UniversalPrinter<T>::Print(*ptr, os);
+ *os << ")";
+ }
+}
+
+template <typename T, typename D>
+void PrintTo(const std::unique_ptr<T, D>& ptr, std::ostream* os) {
+ (PrintSmartPointer<T>)(ptr, os, 0);
+}
+
+template <typename T>
+void PrintTo(const std::shared_ptr<T>& ptr, std::ostream* os) {
+ (PrintSmartPointer<T>)(ptr, os, 0);
+}
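A small sketch of what these smart-pointer overloads produce (the pointer value will of course differ from run to run):

#include <memory>
#include <string>
#include "gtest/gtest.h"

void SmartPointerPrintExamples() {
  auto owned = std::make_unique<int>(42);
  std::shared_ptr<int> empty;
  // Non-null pointers print the address and the pointee.
  std::string a = ::testing::PrintToString(owned);  // "(ptr = 0x..., value = 42)"
  // Null pointers print just "(nullptr)".
  std::string b = ::testing::PrintToString(empty);  // "(nullptr)"
}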
+
// Helper function for printing a tuple. T must be instantiated with
// a tuple type.
template <typename T>
@@ -8750,14 +5751,46 @@
GTEST_DISABLE_MSC_WARNINGS_POP_()
};
-#if GTEST_HAS_ABSL
+// Remove any const-qualifiers before passing a type to UniversalPrinter.
+template <typename T>
+class UniversalPrinter<const T> : public UniversalPrinter<T> {};
-// Printer for absl::optional
+#if GTEST_INTERNAL_HAS_ANY
+
+// Printer for std::any / absl::any
+
+template <>
+class UniversalPrinter<Any> {
+ public:
+ static void Print(const Any& value, ::std::ostream* os) {
+ if (value.has_value()) {
+ *os << "value of type " << GetTypeName(value);
+ } else {
+ *os << "no value";
+ }
+ }
+
+ private:
+ static std::string GetTypeName(const Any& value) {
+#if GTEST_HAS_RTTI
+ return internal::GetTypeName(value.type());
+#else
+ static_cast<void>(value); // possibly unused
+ return "<unknown_type>";
+#endif // GTEST_HAS_RTTI
+ }
+};
+
+#endif // GTEST_INTERNAL_HAS_ANY
+
+#if GTEST_INTERNAL_HAS_OPTIONAL
+
+// Printer for std::optional / absl::optional
template <typename T>
-class UniversalPrinter<::absl::optional<T>> {
+class UniversalPrinter<Optional<T>> {
public:
- static void Print(const ::absl::optional<T>& value, ::std::ostream* os) {
+ static void Print(const Optional<T>& value, ::std::ostream* os) {
*os << '(';
if (!value) {
*os << "nullopt";
@@ -8768,14 +5801,22 @@
}
};
-// Printer for absl::variant
+#endif // GTEST_INTERNAL_HAS_OPTIONAL
+
+#if GTEST_INTERNAL_HAS_VARIANT
+
+// Printer for std::variant / absl::variant
template <typename... T>
-class UniversalPrinter<::absl::variant<T...>> {
+class UniversalPrinter<Variant<T...>> {
public:
- static void Print(const ::absl::variant<T...>& value, ::std::ostream* os) {
+ static void Print(const Variant<T...>& value, ::std::ostream* os) {
*os << '(';
- absl::visit(Visitor{os}, value);
+#if GTEST_HAS_ABSL
+ absl::visit(Visitor{os, value.index()}, value);
+#else
+ std::visit(Visitor{os, value.index()}, value);
+#endif // GTEST_HAS_ABSL
*os << ')';
}
@@ -8783,14 +5824,16 @@
struct Visitor {
template <typename U>
void operator()(const U& u) const {
- *os << "'" << GetTypeName<U>() << "' with value ";
+ *os << "'" << GetTypeName<U>() << "(index = " << index
+ << ")' with value ";
UniversalPrint(u, os);
}
::std::ostream* os;
+ std::size_t index;
};
};
-#endif // GTEST_HAS_ABSL
+#endif // GTEST_INTERNAL_HAS_VARIANT
// UniversalPrintArray(begin, len, os) prints an array of 'len'
// elements, starting at address 'begin'.
@@ -8819,6 +5862,20 @@
GTEST_API_ void UniversalPrintArray(
const char* begin, size_t len, ::std::ostream* os);
+#ifdef __cpp_char8_t
+// This overload prints a (const) char8_t array compactly.
+GTEST_API_ void UniversalPrintArray(const char8_t* begin, size_t len,
+ ::std::ostream* os);
+#endif
+
+// This overload prints a (const) char16_t array compactly.
+GTEST_API_ void UniversalPrintArray(const char16_t* begin, size_t len,
+ ::std::ostream* os);
+
+// This overload prints a (const) char32_t array compactly.
+GTEST_API_ void UniversalPrintArray(const char32_t* begin, size_t len,
+ ::std::ostream* os);
+
// This overload prints a (const) wchar_t array compactly.
GTEST_API_ void UniversalPrintArray(
const wchar_t* begin, size_t len, ::std::ostream* os);
@@ -8891,12 +5948,55 @@
}
};
template <>
-class UniversalTersePrinter<char*> {
+class UniversalTersePrinter<char*> : public UniversalTersePrinter<const char*> {
+};
+
+#ifdef __cpp_char8_t
+template <>
+class UniversalTersePrinter<const char8_t*> {
public:
- static void Print(char* str, ::std::ostream* os) {
- UniversalTersePrinter<const char*>::Print(str, os);
+ static void Print(const char8_t* str, ::std::ostream* os) {
+ if (str == nullptr) {
+ *os << "NULL";
+ } else {
+ UniversalPrint(::std::u8string(str), os);
+ }
}
};
+template <>
+class UniversalTersePrinter<char8_t*>
+ : public UniversalTersePrinter<const char8_t*> {};
+#endif
+
+template <>
+class UniversalTersePrinter<const char16_t*> {
+ public:
+ static void Print(const char16_t* str, ::std::ostream* os) {
+ if (str == nullptr) {
+ *os << "NULL";
+ } else {
+ UniversalPrint(::std::u16string(str), os);
+ }
+ }
+};
+template <>
+class UniversalTersePrinter<char16_t*>
+ : public UniversalTersePrinter<const char16_t*> {};
+
+template <>
+class UniversalTersePrinter<const char32_t*> {
+ public:
+ static void Print(const char32_t* str, ::std::ostream* os) {
+ if (str == nullptr) {
+ *os << "NULL";
+ } else {
+ UniversalPrint(::std::u32string(str), os);
+ }
+ }
+};
+template <>
+class UniversalTersePrinter<char32_t*>
+ : public UniversalTersePrinter<const char32_t*> {};
#if GTEST_HAS_STD_WSTRING
template <>
@@ -8969,16 +6069,6 @@
} // namespace internal
-#if GTEST_HAS_ABSL
-namespace internal2 {
-template <typename T>
-void TypeWithoutFormatter<T, kConvertibleToStringView>::PrintValue(
- const T& value, ::std::ostream* os) {
- internal::PrintTo(absl::string_view(value), os);
-}
-} // namespace internal2
-#endif
-
template <typename T>
::std::string PrintToString(const T& value) {
::std::stringstream ss;
@@ -9029,35 +6119,38 @@
//
// ** Custom implementation starts here **
-#ifndef GTEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PRINTERS_H_
-#define GTEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PRINTERS_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PRINTERS_H_
+#define GOOGLETEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PRINTERS_H_
-#endif // GTEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PRINTERS_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_INTERNAL_CUSTOM_GTEST_PRINTERS_H_
-#endif // GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_PRINTERS_H_
+
+// MSVC warning C5046 is new as of VS2017 version 15.8.
+#if defined(_MSC_VER) && _MSC_VER >= 1915
+#define GTEST_MAYBE_5046_ 5046
+#else
+#define GTEST_MAYBE_5046_
+#endif
GTEST_DISABLE_MSC_WARNINGS_PUSH_(
- 4251 5046 /* class A needs to have dll-interface to be used by clients of
- class B */
+ 4251 GTEST_MAYBE_5046_ /* class A needs to have dll-interface to be used by
+ clients of class B */
/* Symbol involving type with internal linkage not defined */)
namespace testing {
// To implement a matcher Foo for type T, define:
-// 1. a class FooMatcherImpl that implements the
-// MatcherInterface<T> interface, and
+// 1. a class FooMatcherMatcher that implements the matcher interface:
+// using is_gtest_matcher = void;
+// bool MatchAndExplain(const T&, std::ostream*);
+// (MatchResultListener* can also be used instead of std::ostream*)
+// void DescribeTo(std::ostream*);
+// void DescribeNegationTo(std::ostream*);
+//
// 2. a factory function that creates a Matcher<T> object from a
-// FooMatcherImpl*.
-//
-// The two-level delegation design makes it possible to allow a user
-// to write "v" instead of "Eq(v)" where a Matcher is expected, which
-// is impossible if we pass matchers by pointers. It also eases
-// ownership management as Matcher objects can now be copied like
-// plain values.
+// FooMatcherMatcher.
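A minimal sketch of this protocol (IsEvenMatcher and IsEven are hypothetical names used only for illustration):

#include <ostream>
#include "gtest/gtest.h"

struct IsEvenMatcher {
  using is_gtest_matcher = void;
  bool MatchAndExplain(int n, std::ostream*) const { return n % 2 == 0; }
  void DescribeTo(std::ostream* os) const { *os << "is even"; }
  void DescribeNegationTo(std::ostream* os) const { *os << "is odd"; }
};

// The Matcher<T> constructor defined further down accepts any type that
// exposes is_gtest_matcher, so no MatcherInterface subclass is needed.
inline ::testing::Matcher<int> IsEven() { return IsEvenMatcher(); }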
-// MatchResultListener is an abstract class. Its << operator can be
-// used by a matcher to explain why a value matches or doesn't match.
-//
class MatchResultListener {
public:
// Creates a listener object with the given underlying ostream. The
@@ -9077,8 +6170,8 @@
// Returns the underlying ostream.
::std::ostream* stream() { return stream_; }
- // Returns true iff the listener is interested in an explanation of
- // the match result. A matcher's MatchAndExplain() method can use
+ // Returns true if and only if the listener is interested in an explanation
+ // of the match result. A matcher's MatchAndExplain() method can use
// this information to avoid generating the explanation when no one
// intends to hear it.
bool IsInterested() const { return stream_ != nullptr; }
@@ -9094,7 +6187,7 @@
// An instance of a subclass of this knows how to describe itself as a
// matcher.
-class MatcherDescriberInterface {
+class GTEST_API_ MatcherDescriberInterface {
public:
virtual ~MatcherDescriberInterface() {}
@@ -9122,8 +6215,8 @@
template <typename T>
class MatcherInterface : public MatcherDescriberInterface {
public:
- // Returns true iff the matcher matches x; also explains the match
- // result to 'listener' if necessary (see the next paragraph), in
+ // Returns true if and only if the matcher matches x; also explains the
+ // match result to 'listener' if necessary (see the next paragraph), in
// the form of a non-restrictive relative clause ("which ...",
// "whose ...", etc) that describes x. For example, the
// MatchAndExplain() method of the Pointee(...) matcher should
@@ -9162,31 +6255,6 @@
namespace internal {
-// Converts a MatcherInterface<T> to a MatcherInterface<const T&>.
-template <typename T>
-class MatcherInterfaceAdapter : public MatcherInterface<const T&> {
- public:
- explicit MatcherInterfaceAdapter(const MatcherInterface<T>* impl)
- : impl_(impl) {}
- ~MatcherInterfaceAdapter() override { delete impl_; }
-
- void DescribeTo(::std::ostream* os) const override { impl_->DescribeTo(os); }
-
- void DescribeNegationTo(::std::ostream* os) const override {
- impl_->DescribeNegationTo(os);
- }
-
- bool MatchAndExplain(const T& x,
- MatchResultListener* listener) const override {
- return impl_->MatchAndExplain(x, listener);
- }
-
- private:
- const MatcherInterface<T>* const impl_;
-
- GTEST_DISALLOW_COPY_AND_ASSIGN_(MatcherInterfaceAdapter);
-};
-
struct AnyEq {
template <typename A, typename B>
bool operator()(const A& a, const B& b) const { return a == b; }
@@ -9233,30 +6301,53 @@
GTEST_DISALLOW_COPY_AND_ASSIGN_(StreamMatchResultListener);
};
+struct SharedPayloadBase {
+ std::atomic<int> ref{1};
+ void Ref() { ref.fetch_add(1, std::memory_order_relaxed); }
+ bool Unref() { return ref.fetch_sub(1, std::memory_order_acq_rel) == 1; }
+};
+
+template <typename T>
+struct SharedPayload : SharedPayloadBase {
+ explicit SharedPayload(const T& v) : value(v) {}
+ explicit SharedPayload(T&& v) : value(std::move(v)) {}
+
+ static void Destroy(SharedPayloadBase* shared) {
+ delete static_cast<SharedPayload*>(shared);
+ }
+
+ T value;
+};
+
// An internal class for implementing Matcher<T>, which will derive
// from it. We put functionalities common to all Matcher<T>
// specializations here to avoid code duplication.
template <typename T>
-class MatcherBase {
+class MatcherBase : private MatcherDescriberInterface {
public:
- // Returns true iff the matcher matches x; also explains the match
- // result to 'listener'.
+ // Returns true if and only if the matcher matches x; also explains the
+ // match result to 'listener'.
bool MatchAndExplain(const T& x, MatchResultListener* listener) const {
- return impl_->MatchAndExplain(x, listener);
+ GTEST_CHECK_(vtable_ != nullptr);
+ return vtable_->match_and_explain(*this, x, listener);
}
- // Returns true iff this matcher matches x.
+ // Returns true if and only if this matcher matches x.
bool Matches(const T& x) const {
DummyMatchResultListener dummy;
return MatchAndExplain(x, &dummy);
}
// Describes this matcher to an ostream.
- void DescribeTo(::std::ostream* os) const { impl_->DescribeTo(os); }
+ void DescribeTo(::std::ostream* os) const final {
+ GTEST_CHECK_(vtable_ != nullptr);
+ vtable_->describe(*this, os, false);
+ }
// Describes the negation of this matcher to an ostream.
- void DescribeNegationTo(::std::ostream* os) const {
- impl_->DescribeNegationTo(os);
+ void DescribeNegationTo(::std::ostream* os) const final {
+ GTEST_CHECK_(vtable_ != nullptr);
+ vtable_->describe(*this, os, true);
}
// Explains why x matches, or doesn't match, the matcher.
@@ -9269,31 +6360,194 @@
// of the describer, which is only guaranteed to be alive when
// this matcher object is alive.
const MatcherDescriberInterface* GetDescriber() const {
- return impl_.get();
+ if (vtable_ == nullptr) return nullptr;
+ return vtable_->get_describer(*this);
}
protected:
- MatcherBase() {}
+ MatcherBase() : vtable_(nullptr) {}
// Constructs a matcher from its implementation.
- explicit MatcherBase(const MatcherInterface<const T&>* impl) : impl_(impl) {}
-
template <typename U>
- explicit MatcherBase(
- const MatcherInterface<U>* impl,
- typename internal::EnableIf<
- !internal::IsSame<U, const U&>::value>::type* = nullptr)
- : impl_(new internal::MatcherInterfaceAdapter<U>(impl)) {}
+ explicit MatcherBase(const MatcherInterface<U>* impl) {
+ Init(impl);
+ }
- MatcherBase(const MatcherBase&) = default;
- MatcherBase& operator=(const MatcherBase&) = default;
- MatcherBase(MatcherBase&&) = default;
- MatcherBase& operator=(MatcherBase&&) = default;
+ template <typename M, typename = typename std::remove_reference<
+ M>::type::is_gtest_matcher>
+ MatcherBase(M&& m) { // NOLINT
+ Init(std::forward<M>(m));
+ }
- virtual ~MatcherBase() {}
+ MatcherBase(const MatcherBase& other)
+ : vtable_(other.vtable_), buffer_(other.buffer_) {
+ if (IsShared()) buffer_.shared->Ref();
+ }
+
+ MatcherBase& operator=(const MatcherBase& other) {
+ if (this == &other) return *this;
+ Destroy();
+ vtable_ = other.vtable_;
+ buffer_ = other.buffer_;
+ if (IsShared()) buffer_.shared->Ref();
+ return *this;
+ }
+
+ MatcherBase(MatcherBase&& other)
+ : vtable_(other.vtable_), buffer_(other.buffer_) {
+ other.vtable_ = nullptr;
+ }
+
+ MatcherBase& operator=(MatcherBase&& other) {
+ if (this == &other) return *this;
+ Destroy();
+ vtable_ = other.vtable_;
+ buffer_ = other.buffer_;
+ other.vtable_ = nullptr;
+ return *this;
+ }
+
+ ~MatcherBase() override { Destroy(); }
private:
- std::shared_ptr<const MatcherInterface<const T&>> impl_;
+ struct VTable {
+ bool (*match_and_explain)(const MatcherBase&, const T&,
+ MatchResultListener*);
+ void (*describe)(const MatcherBase&, std::ostream*, bool negation);
+ // Returns the captured object if it implements the interface, otherwise
+ // returns the MatcherBase itself.
+ const MatcherDescriberInterface* (*get_describer)(const MatcherBase&);
+ // Called on shared instances when the reference count reaches 0.
+ void (*shared_destroy)(SharedPayloadBase*);
+ };
+
+ bool IsShared() const {
+ return vtable_ != nullptr && vtable_->shared_destroy != nullptr;
+ }
+
+ // If the implementation uses a listener, call that.
+ template <typename P>
+ static auto MatchAndExplainImpl(const MatcherBase& m, const T& value,
+ MatchResultListener* listener)
+ -> decltype(P::Get(m).MatchAndExplain(value, listener->stream())) {
+ return P::Get(m).MatchAndExplain(value, listener->stream());
+ }
+
+ template <typename P>
+ static auto MatchAndExplainImpl(const MatcherBase& m, const T& value,
+ MatchResultListener* listener)
+ -> decltype(P::Get(m).MatchAndExplain(value, listener)) {
+ return P::Get(m).MatchAndExplain(value, listener);
+ }
+
+ template <typename P>
+ static void DescribeImpl(const MatcherBase& m, std::ostream* os,
+ bool negation) {
+ if (negation) {
+ P::Get(m).DescribeNegationTo(os);
+ } else {
+ P::Get(m).DescribeTo(os);
+ }
+ }
+
+ template <typename P>
+ static const MatcherDescriberInterface* GetDescriberImpl(
+ const MatcherBase& m) {
+ // If the impl is a MatcherDescriberInterface, then return it.
+ // Otherwise use MatcherBase itself.
+ // This allows us to implement the GetDescriber() function without support
+ // from the impl, but some users really want to get their impl back when
+ // they call GetDescriber().
+ // We use std::get on a tuple as a workaround of not having `if constexpr`.
+ return std::get<(
+ std::is_convertible<decltype(&P::Get(m)),
+ const MatcherDescriberInterface*>::value
+ ? 1
+ : 0)>(std::make_tuple(&m, &P::Get(m)));
+ }
+
+ template <typename P>
+ const VTable* GetVTable() {
+ static constexpr VTable kVTable = {&MatchAndExplainImpl<P>,
+ &DescribeImpl<P>, &GetDescriberImpl<P>,
+ P::shared_destroy};
+ return &kVTable;
+ }
+
+ union Buffer {
+ // Add some types to give Buffer some common alignment/size use cases.
+ void* ptr;
+ double d;
+ int64_t i;
+ // And add one for the out-of-line cases.
+ SharedPayloadBase* shared;
+ };
+
+ void Destroy() {
+ if (IsShared() && buffer_.shared->Unref()) {
+ vtable_->shared_destroy(buffer_.shared);
+ }
+ }
+
+ template <typename M>
+ static constexpr bool IsInlined() {
+ return sizeof(M) <= sizeof(Buffer) && alignof(M) <= alignof(Buffer) &&
+ std::is_trivially_copy_constructible<M>::value &&
+ std::is_trivially_destructible<M>::value;
+ }
+
+ template <typename M, bool = MatcherBase::IsInlined<M>()>
+ struct ValuePolicy {
+ static const M& Get(const MatcherBase& m) {
+ // When inlined along with Init, need to be explicit to avoid violating
+ // strict aliasing rules.
+ const M *ptr = static_cast<const M*>(
+ static_cast<const void*>(&m.buffer_));
+ return *ptr;
+ }
+ static void Init(MatcherBase& m, M impl) {
+ ::new (static_cast<void*>(&m.buffer_)) M(impl);
+ }
+ static constexpr auto shared_destroy = nullptr;
+ };
+
+ template <typename M>
+ struct ValuePolicy<M, false> {
+ using Shared = SharedPayload<M>;
+ static const M& Get(const MatcherBase& m) {
+ return static_cast<Shared*>(m.buffer_.shared)->value;
+ }
+ template <typename Arg>
+ static void Init(MatcherBase& m, Arg&& arg) {
+ m.buffer_.shared = new Shared(std::forward<Arg>(arg));
+ }
+ static constexpr auto shared_destroy = &Shared::Destroy;
+ };
+
+ template <typename U, bool B>
+ struct ValuePolicy<const MatcherInterface<U>*, B> {
+ using M = const MatcherInterface<U>;
+ using Shared = SharedPayload<std::unique_ptr<M>>;
+ static const M& Get(const MatcherBase& m) {
+ return *static_cast<Shared*>(m.buffer_.shared)->value;
+ }
+ static void Init(MatcherBase& m, M* impl) {
+ m.buffer_.shared = new Shared(std::unique_ptr<M>(impl));
+ }
+
+ static constexpr auto shared_destroy = &Shared::Destroy;
+ };
+
+ template <typename M>
+ void Init(M&& m) {
+ using MM = typename std::decay<M>::type;
+ using Policy = ValuePolicy<MM>;
+ vtable_ = GetVTable<Policy>();
+ Policy::Init(*this, std::forward<M>(m));
+ }
+
+ const VTable* vtable_;
+ Buffer buffer_;
};
} // namespace internal
@@ -9315,11 +6569,16 @@
: internal::MatcherBase<T>(impl) {}
template <typename U>
- explicit Matcher(const MatcherInterface<U>* impl,
- typename internal::EnableIf<
- !internal::IsSame<U, const U&>::value>::type* = nullptr)
+ explicit Matcher(
+ const MatcherInterface<U>* impl,
+ typename std::enable_if<!std::is_same<U, const U&>::value>::type* =
+ nullptr)
: internal::MatcherBase<T>(impl) {}
+ template <typename M, typename = typename std::remove_reference<
+ M>::type::is_gtest_matcher>
+ Matcher(M&& m) : internal::MatcherBase<T>(std::forward<M>(m)) {} // NOLINT
+
// Implicit constructor here allows people to write
// EXPECT_CALL(foo, Bar(5)) instead of EXPECT_CALL(foo, Bar(Eq(5))) sometimes
Matcher(T value); // NOLINT
@@ -9337,16 +6596,15 @@
explicit Matcher(const MatcherInterface<const std::string&>* impl)
: internal::MatcherBase<const std::string&>(impl) {}
+ template <typename M, typename = typename std::remove_reference<
+ M>::type::is_gtest_matcher>
+ Matcher(M&& m) // NOLINT
+ : internal::MatcherBase<const std::string&>(std::forward<M>(m)) {}
+
// Allows the user to write str instead of Eq(str) sometimes, where
// str is a std::string object.
Matcher(const std::string& s); // NOLINT
-#if GTEST_HAS_GLOBAL_STRING
- // Allows the user to write str instead of Eq(str) sometimes, where
- // str is a ::string object.
- Matcher(const ::string& s); // NOLINT
-#endif // GTEST_HAS_GLOBAL_STRING
-
// Allows the user to write "foo" instead of Eq("foo") sometimes.
Matcher(const char* s); // NOLINT
};
@@ -9362,127 +6620,76 @@
explicit Matcher(const MatcherInterface<std::string>* impl)
: internal::MatcherBase<std::string>(impl) {}
+ template <typename M, typename = typename std::remove_reference<
+ M>::type::is_gtest_matcher>
+ Matcher(M&& m) // NOLINT
+ : internal::MatcherBase<std::string>(std::forward<M>(m)) {}
+
// Allows the user to write str instead of Eq(str) sometimes, where
// str is a string object.
Matcher(const std::string& s); // NOLINT
-#if GTEST_HAS_GLOBAL_STRING
- // Allows the user to write str instead of Eq(str) sometimes, where
- // str is a ::string object.
- Matcher(const ::string& s); // NOLINT
-#endif // GTEST_HAS_GLOBAL_STRING
-
// Allows the user to write "foo" instead of Eq("foo") sometimes.
Matcher(const char* s); // NOLINT
};
-#if GTEST_HAS_GLOBAL_STRING
-// The following two specializations allow the user to write str
-// instead of Eq(str) and "foo" instead of Eq("foo") when a ::string
-// matcher is expected.
-template <>
-class GTEST_API_ Matcher<const ::string&>
- : public internal::MatcherBase<const ::string&> {
- public:
- Matcher() {}
-
- explicit Matcher(const MatcherInterface<const ::string&>* impl)
- : internal::MatcherBase<const ::string&>(impl) {}
-
- // Allows the user to write str instead of Eq(str) sometimes, where
- // str is a std::string object.
- Matcher(const std::string& s); // NOLINT
-
- // Allows the user to write str instead of Eq(str) sometimes, where
- // str is a ::string object.
- Matcher(const ::string& s); // NOLINT
-
- // Allows the user to write "foo" instead of Eq("foo") sometimes.
- Matcher(const char* s); // NOLINT
-};
-
-template <>
-class GTEST_API_ Matcher< ::string>
- : public internal::MatcherBase< ::string> {
- public:
- Matcher() {}
-
- explicit Matcher(const MatcherInterface<const ::string&>* impl)
- : internal::MatcherBase< ::string>(impl) {}
- explicit Matcher(const MatcherInterface< ::string>* impl)
- : internal::MatcherBase< ::string>(impl) {}
-
- // Allows the user to write str instead of Eq(str) sometimes, where
- // str is a std::string object.
- Matcher(const std::string& s); // NOLINT
-
- // Allows the user to write str instead of Eq(str) sometimes, where
- // str is a ::string object.
- Matcher(const ::string& s); // NOLINT
-
- // Allows the user to write "foo" instead of Eq("foo") sometimes.
- Matcher(const char* s); // NOLINT
-};
-#endif // GTEST_HAS_GLOBAL_STRING
-
-#if GTEST_HAS_ABSL
+#if GTEST_INTERNAL_HAS_STRING_VIEW
// The following two specializations allow the user to write str
// instead of Eq(str) and "foo" instead of Eq("foo") when a absl::string_view
// matcher is expected.
template <>
-class GTEST_API_ Matcher<const absl::string_view&>
- : public internal::MatcherBase<const absl::string_view&> {
+class GTEST_API_ Matcher<const internal::StringView&>
+ : public internal::MatcherBase<const internal::StringView&> {
public:
Matcher() {}
- explicit Matcher(const MatcherInterface<const absl::string_view&>* impl)
- : internal::MatcherBase<const absl::string_view&>(impl) {}
+ explicit Matcher(const MatcherInterface<const internal::StringView&>* impl)
+ : internal::MatcherBase<const internal::StringView&>(impl) {}
+
+ template <typename M, typename = typename std::remove_reference<
+ M>::type::is_gtest_matcher>
+ Matcher(M&& m) // NOLINT
+ : internal::MatcherBase<const internal::StringView&>(std::forward<M>(m)) {
+ }
// Allows the user to write str instead of Eq(str) sometimes, where
// str is a std::string object.
Matcher(const std::string& s); // NOLINT
-#if GTEST_HAS_GLOBAL_STRING
- // Allows the user to write str instead of Eq(str) sometimes, where
- // str is a ::string object.
- Matcher(const ::string& s); // NOLINT
-#endif // GTEST_HAS_GLOBAL_STRING
-
// Allows the user to write "foo" instead of Eq("foo") sometimes.
Matcher(const char* s); // NOLINT
- // Allows the user to pass absl::string_views directly.
- Matcher(absl::string_view s); // NOLINT
+ // Allows the user to pass absl::string_views or std::string_views directly.
+ Matcher(internal::StringView s); // NOLINT
};
template <>
-class GTEST_API_ Matcher<absl::string_view>
- : public internal::MatcherBase<absl::string_view> {
+class GTEST_API_ Matcher<internal::StringView>
+ : public internal::MatcherBase<internal::StringView> {
public:
Matcher() {}
- explicit Matcher(const MatcherInterface<const absl::string_view&>* impl)
- : internal::MatcherBase<absl::string_view>(impl) {}
- explicit Matcher(const MatcherInterface<absl::string_view>* impl)
- : internal::MatcherBase<absl::string_view>(impl) {}
+ explicit Matcher(const MatcherInterface<const internal::StringView&>* impl)
+ : internal::MatcherBase<internal::StringView>(impl) {}
+ explicit Matcher(const MatcherInterface<internal::StringView>* impl)
+ : internal::MatcherBase<internal::StringView>(impl) {}
+
+ template <typename M, typename = typename std::remove_reference<
+ M>::type::is_gtest_matcher>
+ Matcher(M&& m) // NOLINT
+ : internal::MatcherBase<internal::StringView>(std::forward<M>(m)) {}
// Allows the user to write str instead of Eq(str) sometimes, where
// str is a std::string object.
Matcher(const std::string& s); // NOLINT
-#if GTEST_HAS_GLOBAL_STRING
- // Allows the user to write str instead of Eq(str) sometimes, where
- // str is a ::string object.
- Matcher(const ::string& s); // NOLINT
-#endif // GTEST_HAS_GLOBAL_STRING
-
// Allows the user to write "foo" instead of Eq("foo") sometimes.
Matcher(const char* s); // NOLINT
- // Allows the user to pass absl::string_views directly.
- Matcher(absl::string_view s); // NOLINT
+ // Allows the user to pass absl::string_views or std::string_views directly.
+ Matcher(internal::StringView s); // NOLINT
};
-#endif // GTEST_HAS_ABSL
+#endif // GTEST_INTERNAL_HAS_STRING_VIEW
// Prints a matcher in a human-readable format.
template <typename T>
@@ -9527,13 +6734,13 @@
public:
explicit MonomorphicImpl(const Impl& impl) : impl_(impl) {}
- virtual void DescribeTo(::std::ostream* os) const { impl_.DescribeTo(os); }
+ void DescribeTo(::std::ostream* os) const override { impl_.DescribeTo(os); }
- virtual void DescribeNegationTo(::std::ostream* os) const {
+ void DescribeNegationTo(::std::ostream* os) const override {
impl_.DescribeNegationTo(os);
}
- virtual bool MatchAndExplain(T x, MatchResultListener* listener) const {
+ bool MatchAndExplain(T x, MatchResultListener* listener) const override {
return impl_.MatchAndExplain(x, listener);
}
@@ -9582,37 +6789,32 @@
class ComparisonBase {
public:
explicit ComparisonBase(const Rhs& rhs) : rhs_(rhs) {}
+
+ using is_gtest_matcher = void;
+
template <typename Lhs>
- operator Matcher<Lhs>() const {
- return Matcher<Lhs>(new Impl<const Lhs&>(rhs_));
+ bool MatchAndExplain(const Lhs& lhs, std::ostream*) const {
+ return Op()(lhs, Unwrap(rhs_));
+ }
+ void DescribeTo(std::ostream* os) const {
+ *os << D::Desc() << " ";
+ UniversalPrint(Unwrap(rhs_), os);
+ }
+ void DescribeNegationTo(std::ostream* os) const {
+ *os << D::NegatedDesc() << " ";
+ UniversalPrint(Unwrap(rhs_), os);
}
private:
template <typename T>
- static const T& Unwrap(const T& v) { return v; }
+ static const T& Unwrap(const T& v) {
+ return v;
+ }
template <typename T>
- static const T& Unwrap(std::reference_wrapper<T> v) { return v; }
+ static const T& Unwrap(std::reference_wrapper<T> v) {
+ return v;
+ }
- template <typename Lhs, typename = Rhs>
- class Impl : public MatcherInterface<Lhs> {
- public:
- explicit Impl(const Rhs& rhs) : rhs_(rhs) {}
- bool MatchAndExplain(Lhs lhs,
- MatchResultListener* /* listener */) const override {
- return Op()(lhs, Unwrap(rhs_));
- }
- void DescribeTo(::std::ostream* os) const override {
- *os << D::Desc() << " ";
- UniversalPrint(Unwrap(rhs_), os);
- }
- void DescribeNegationTo(::std::ostream* os) const override {
- *os << D::NegatedDesc() << " ";
- UniversalPrint(Unwrap(rhs_), os);
- }
-
- private:
- Rhs rhs_;
- };
Rhs rhs_;
};
@@ -9665,6 +6867,10 @@
static const char* NegatedDesc() { return "isn't >="; }
};
+template <typename T, typename = typename std::enable_if<
+ std::is_constructible<std::string, T>::value>::type>
+using StringLike = T;
+
// Implements polymorphic matchers MatchesRegex(regex) and
// ContainsRegex(regex), which can be used as a Matcher<T> as long as
// T can be converted to a string.
@@ -9673,12 +6879,12 @@
MatchesRegexMatcher(const RE* regex, bool full_match)
: regex_(regex), full_match_(full_match) {}
-#if GTEST_HAS_ABSL
- bool MatchAndExplain(const absl::string_view& s,
+#if GTEST_INTERNAL_HAS_STRING_VIEW
+ bool MatchAndExplain(const internal::StringView& s,
MatchResultListener* listener) const {
- return MatchAndExplain(string(s), listener);
+ return MatchAndExplain(std::string(s), listener);
}
-#endif // GTEST_HAS_ABSL
+#endif // GTEST_INTERNAL_HAS_STRING_VIEW
// Accepts pointer types, particularly:
// const char*
@@ -9725,9 +6931,10 @@
const internal::RE* regex) {
return MakePolymorphicMatcher(internal::MatchesRegexMatcher(regex, true));
}
-inline PolymorphicMatcher<internal::MatchesRegexMatcher> MatchesRegex(
- const std::string& regex) {
- return MatchesRegex(new internal::RE(regex));
+template <typename T = std::string>
+PolymorphicMatcher<internal::MatchesRegexMatcher> MatchesRegex(
+ const internal::StringLike<T>& regex) {
+ return MatchesRegex(new internal::RE(std::string(regex)));
}
// Matches a string that contains regular expression 'regex'.
@@ -9736,9 +6943,10 @@
const internal::RE* regex) {
return MakePolymorphicMatcher(internal::MatchesRegexMatcher(regex, false));
}
-inline PolymorphicMatcher<internal::MatchesRegexMatcher> ContainsRegex(
- const std::string& regex) {
- return ContainsRegex(new internal::RE(regex));
+template <typename T = std::string>
+PolymorphicMatcher<internal::MatchesRegexMatcher> ContainsRegex(
+ const internal::StringLike<T>& regex) {
+ return ContainsRegex(new internal::RE(std::string(regex)));
}
// Creates a polymorphic matcher that matches anything equal to x.
@@ -9800,7 +7008,7 @@
GTEST_DISABLE_MSC_WARNINGS_POP_() // 4251 5046
-#endif // GTEST_INCLUDE_GTEST_GTEST_MATCHERS_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_MATCHERS_H_
#include <stdio.h>
#include <memory>
@@ -9939,12 +7147,6 @@
const ::std::string& regex) {
return ContainsRegex(regex);
}
-#if GTEST_HAS_GLOBAL_STRING
-inline Matcher<const ::std::string&> MakeDeathTestMatcher(
- const ::string& regex) {
- return ContainsRegex(regex);
-}
-#endif
// If a Matcher<const ::std::string&> is passed to EXPECT_DEATH (etc.), it's
// used directly.
@@ -10070,7 +7272,7 @@
} // namespace internal
} // namespace testing
-#endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_
namespace testing {
@@ -10129,6 +7331,10 @@
//
// ASSERT_EXIT(client.HangUpServer(), KilledBySIGHUP, "Hanging up!");
//
+// The final parameter to each of these macros is a matcher applied to any data
+// the sub-process wrote to stderr. For compatibility with existing tests, a
+// bare string is interpreted as a regular expression matcher.
+//
// On the regular expressions used in death tests:
//
// GOOGLETEST_CM0005 DO NOT DELETE
@@ -10194,27 +7400,27 @@
// directory in PATH.
//
-// Asserts that a given statement causes the program to exit, with an
-// integer exit status that satisfies predicate, and emitting error output
-// that matches regex.
-# define ASSERT_EXIT(statement, predicate, regex) \
- GTEST_DEATH_TEST_(statement, predicate, regex, GTEST_FATAL_FAILURE_)
+// Asserts that a given `statement` causes the program to exit, with an
+// integer exit status that satisfies `predicate`, and emitting error output
+// that matches `matcher`.
+# define ASSERT_EXIT(statement, predicate, matcher) \
+ GTEST_DEATH_TEST_(statement, predicate, matcher, GTEST_FATAL_FAILURE_)
-// Like ASSERT_EXIT, but continues on to successive tests in the
+// Like `ASSERT_EXIT`, but continues on to successive tests in the
// test suite, if any:
-# define EXPECT_EXIT(statement, predicate, regex) \
- GTEST_DEATH_TEST_(statement, predicate, regex, GTEST_NONFATAL_FAILURE_)
+# define EXPECT_EXIT(statement, predicate, matcher) \
+ GTEST_DEATH_TEST_(statement, predicate, matcher, GTEST_NONFATAL_FAILURE_)
-// Asserts that a given statement causes the program to exit, either by
+// Asserts that a given `statement` causes the program to exit, either by
// explicitly exiting with a nonzero exit code or being killed by a
-// signal, and emitting error output that matches regex.
-# define ASSERT_DEATH(statement, regex) \
- ASSERT_EXIT(statement, ::testing::internal::ExitedUnsuccessfully, regex)
+// signal, and emitting error output that matches `matcher`.
+# define ASSERT_DEATH(statement, matcher) \
+ ASSERT_EXIT(statement, ::testing::internal::ExitedUnsuccessfully, matcher)
-// Like ASSERT_DEATH, but continues on to successive tests in the
+// Like `ASSERT_DEATH`, but continues on to successive tests in the
// test suite, if any:
-# define EXPECT_DEATH(statement, regex) \
- EXPECT_EXIT(statement, ::testing::internal::ExitedUnsuccessfully, regex)
+# define EXPECT_DEATH(statement, matcher) \
+ EXPECT_EXIT(statement, ::testing::internal::ExitedUnsuccessfully, matcher)
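For illustration, with a hypothetical CrashIfNull() that writes to stderr and aborts, both forms of the `matcher` argument look like this:

TEST(DeathTestExample, ReportsNullPointer) {
  // Bare string: interpreted as a regular expression over the child's stderr.
  EXPECT_DEATH(CrashIfNull(nullptr), "null pointer");
  // The same check with an explicit matcher.
  EXPECT_DEATH(CrashIfNull(nullptr), ::testing::ContainsRegex("null.*pointer"));
}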
// Two predicate classes that can be used in {ASSERT,EXPECT}_EXIT*:
@@ -10222,11 +7428,10 @@
class GTEST_API_ ExitedWithCode {
public:
explicit ExitedWithCode(int exit_code);
+ ExitedWithCode(const ExitedWithCode&) = default;
+ void operator=(const ExitedWithCode& other) = delete;
bool operator()(int exit_status) const;
private:
- // No implementation - assignment is unsupported.
- void operator=(const ExitedWithCode& other);
-
const int exit_code_;
};
@@ -10308,20 +7513,20 @@
// This macro is used for implementing macros such as
// EXPECT_DEATH_IF_SUPPORTED and ASSERT_DEATH_IF_SUPPORTED on systems where
// death tests are not supported. Those macros must compile on such systems
-// iff EXPECT_DEATH and ASSERT_DEATH compile with the same parameters on
-// systems that support death tests. This allows one to write such a macro
-// on a system that does not support death tests and be sure that it will
-// compile on a death-test supporting system. It is exposed publicly so that
-// systems that have death-tests with stricter requirements than
-// GTEST_HAS_DEATH_TEST can write their own equivalent of
-// EXPECT_DEATH_IF_SUPPORTED and ASSERT_DEATH_IF_SUPPORTED.
+// if and only if EXPECT_DEATH and ASSERT_DEATH compile with the same parameters
+// on systems that support death tests. This allows one to write such a macro on
+// a system that does not support death tests and be sure that it will compile
+// on a death-test supporting system. It is exposed publicly so that systems
+// that have death-tests with stricter requirements than GTEST_HAS_DEATH_TEST
+// can write their own equivalent of EXPECT_DEATH_IF_SUPPORTED and
+// ASSERT_DEATH_IF_SUPPORTED.
//
// Parameters:
// statement - A statement that a macro such as EXPECT_DEATH would test
// for program termination. This macro has to make sure this
// statement is compiled but not executed, to ensure that
// EXPECT_DEATH_IF_SUPPORTED compiles with a certain
-// parameter iff EXPECT_DEATH compiles with it.
+// parameter if and only if EXPECT_DEATH compiles with it.
// regex - A regex that a macro such as EXPECT_DEATH would use to test
// the output of statement. This parameter has to be
// compiled but not evaluated by this macro, to ensure that
@@ -10372,7 +7577,7 @@
} // namespace testing
-#endif // GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_
// Copyright 2008, Google Inc.
// All rights reserved.
//
@@ -10405,12 +7610,9 @@
// Macros and functions for implementing parameterized tests
// in Google C++ Testing and Mocking Framework (Google Test)
//
-// This file is generated by a SCRIPT. DO NOT EDIT BY HAND!
-//
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
-#define GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
-
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
// Value-parameterized tests allow you to test your code with different
// parameters without writing multiple copies of the same test.
@@ -10475,7 +7677,7 @@
Values("meeny", "miny", "moe"));
// To distinguish different instances of the pattern, (yes, you
-// can instantiate it more then once) the first argument to the
+// can instantiate it more than once) the first argument to the
// INSTANTIATE_TEST_SUITE_P macro is a prefix that will be added to the
// actual test suite name. Remember to pick unique prefixes for different
// instantiations. The tests from the instantiation above will have
@@ -10549,6 +7751,7 @@
#endif // 0
+#include <iterator>
#include <utility>
// Copyright 2008 Google Inc.
@@ -10585,8 +7788,8 @@
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
-#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
+#define GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
#include <ctype.h>
@@ -10595,9 +7798,192 @@
#include <memory>
#include <set>
#include <tuple>
+#include <type_traits>
#include <utility>
#include <vector>
+// Copyright 2008, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// GOOGLETEST_CM0001 DO NOT DELETE
+
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_TEST_PART_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_TEST_PART_H_
+
+#include <iosfwd>
+#include <vector>
+
+GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
+/* class A needs to have dll-interface to be used by clients of class B */)
+
+namespace testing {
+
+// A copyable object representing the result of a test part (i.e. an
+// assertion or an explicit FAIL(), ADD_FAILURE(), or SUCCESS()).
+//
+// Don't inherit from TestPartResult as its destructor is not virtual.
+class GTEST_API_ TestPartResult {
+ public:
+ // The possible outcomes of a test part (i.e. an assertion or an
+ // explicit SUCCEED(), FAIL(), or ADD_FAILURE()).
+ enum Type {
+ kSuccess, // Succeeded.
+ kNonFatalFailure, // Failed but the test can continue.
+ kFatalFailure, // Failed and the test should be terminated.
+ kSkip // Skipped.
+ };
+
+ // C'tor. TestPartResult does NOT have a default constructor.
+ // Always use this constructor (with parameters) to create a
+ // TestPartResult object.
+ TestPartResult(Type a_type, const char* a_file_name, int a_line_number,
+ const char* a_message)
+ : type_(a_type),
+ file_name_(a_file_name == nullptr ? "" : a_file_name),
+ line_number_(a_line_number),
+ summary_(ExtractSummary(a_message)),
+ message_(a_message) {}
+
+ // Gets the outcome of the test part.
+ Type type() const { return type_; }
+
+ // Gets the name of the source file where the test part took place, or
+ // NULL if it's unknown.
+ const char* file_name() const {
+ return file_name_.empty() ? nullptr : file_name_.c_str();
+ }
+
+ // Gets the line in the source file where the test part took place,
+ // or -1 if it's unknown.
+ int line_number() const { return line_number_; }
+
+ // Gets the summary of the failure message.
+ const char* summary() const { return summary_.c_str(); }
+
+ // Gets the message associated with the test part.
+ const char* message() const { return message_.c_str(); }
+
+ // Returns true if and only if the test part was skipped.
+ bool skipped() const { return type_ == kSkip; }
+
+ // Returns true if and only if the test part passed.
+ bool passed() const { return type_ == kSuccess; }
+
+ // Returns true if and only if the test part non-fatally failed.
+ bool nonfatally_failed() const { return type_ == kNonFatalFailure; }
+
+ // Returns true if and only if the test part fatally failed.
+ bool fatally_failed() const { return type_ == kFatalFailure; }
+
+ // Returns true if and only if the test part failed.
+ bool failed() const { return fatally_failed() || nonfatally_failed(); }
+
+ private:
+ Type type_;
+
+ // Gets the summary of the failure message by omitting the stack
+ // trace in it.
+ static std::string ExtractSummary(const char* message);
+
+ // The name of the source file where the test part took place, or
+ // "" if the source file is unknown.
+ std::string file_name_;
+ // The line in the source file where the test part took place, or -1
+ // if the line number is unknown.
+ int line_number_;
+ std::string summary_; // The test failure summary.
+ std::string message_; // The test failure message.
+};
+
+// Prints a TestPartResult object.
+std::ostream& operator<<(std::ostream& os, const TestPartResult& result);
+
+// An array of TestPartResult objects.
+//
+// Don't inherit from TestPartResultArray as its destructor is not
+// virtual.
+class GTEST_API_ TestPartResultArray {
+ public:
+ TestPartResultArray() {}
+
+ // Appends the given TestPartResult to the array.
+ void Append(const TestPartResult& result);
+
+ // Returns the TestPartResult at the given index (0-based).
+ const TestPartResult& GetTestPartResult(int index) const;
+
+ // Returns the number of TestPartResult objects in the array.
+ int size() const;
+
+ private:
+ std::vector<TestPartResult> array_;
+
+ GTEST_DISALLOW_COPY_AND_ASSIGN_(TestPartResultArray);
+};
+
+// This interface knows how to report a test part result.
+class GTEST_API_ TestPartResultReporterInterface {
+ public:
+ virtual ~TestPartResultReporterInterface() {}
+
+ virtual void ReportTestPartResult(const TestPartResult& result) = 0;
+};
+
+namespace internal {
+
+// This helper class is used by {ASSERT|EXPECT}_NO_FATAL_FAILURE to check if a
+// statement generates new fatal failures. To do so it registers itself as the
+// current test part result reporter. Besides checking if fatal failures were
+// reported, it only delegates the reporting to the former result reporter.
+// The original result reporter is restored in the destructor.
+// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.
+class GTEST_API_ HasNewFatalFailureHelper
+ : public TestPartResultReporterInterface {
+ public:
+ HasNewFatalFailureHelper();
+ ~HasNewFatalFailureHelper() override;
+ void ReportTestPartResult(const TestPartResult& result) override;
+ bool has_new_fatal_failure() const { return has_new_fatal_failure_; }
+ private:
+ bool has_new_fatal_failure_;
+ TestPartResultReporterInterface* original_reporter_;
+
+ GTEST_DISALLOW_COPY_AND_ASSIGN_(HasNewFatalFailureHelper);
+};
+
+} // namespace internal
+
+} // namespace testing
+
+GTEST_DISABLE_MSC_WARNINGS_POP_() // 4251
+
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_TEST_PART_H_
namespace testing {
// Input to a parameterized test name generator, describing a test parameter.
@@ -11007,11 +8393,11 @@
// Base part of test suite name for display purposes.
virtual const std::string& GetTestSuiteName() const = 0;
- // Test case id to verify identity.
+ // Test suite id to verify identity.
virtual TypeId GetTestSuiteTypeId() const = 0;
// UnitTest class invokes this method to register tests in this
// test suite right before running them in RUN_ALL_TESTS macro.
- // This method should not be called more then once on any single
+ // This method should not be called more than once on any single
// instance of a ParameterizedTestSuiteInfoBase derived class.
virtual void RegisterTests() = 0;
@@ -11024,6 +8410,17 @@
// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.
//
+// Reports the name of a test suite as safe to ignore
+// as a side effect of constructing this type.
+struct GTEST_API_ MarkAsIgnored {
+ explicit MarkAsIgnored(const char* test_suite);
+};
+
+GTEST_API_ void InsertSyntheticTestCase(const std::string& name,
+ CodeLocation location, bool has_test_p);
+
+// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.
+//
// ParameterizedTestSuiteInfo accumulates tests obtained from TEST_P
// macro invocations for a particular test suite and generators
// obtained from INSTANTIATE_TEST_SUITE_P macro invocations for that
@@ -11044,11 +8441,11 @@
CodeLocation code_location)
: test_suite_name_(name), code_location_(code_location) {}
- // Test case base name for display purposes.
+ // Test suite base name for display purposes.
const std::string& GetTestSuiteName() const override {
return test_suite_name_;
}
- // Test case id to verify identity.
+ // Test suite id to verify identity.
TypeId GetTestSuiteTypeId() const override { return GetTypeId<TestSuite>(); }
// TEST_P macro uses AddTestPattern() to record information
// about a single test in a LocalTestInfo structure.
@@ -11057,9 +8454,10 @@
// parameter index. For the test SequenceA/FooTest.DoBar/1 FooTest is
// test suite base name and DoBar is test base name.
void AddTestPattern(const char* test_suite_name, const char* test_base_name,
- TestMetaFactoryBase<ParamType>* meta_factory) {
- tests_.push_back(std::shared_ptr<TestInfo>(
- new TestInfo(test_suite_name, test_base_name, meta_factory)));
+ TestMetaFactoryBase<ParamType>* meta_factory,
+ CodeLocation code_location) {
+ tests_.push_back(std::shared_ptr<TestInfo>(new TestInfo(
+ test_suite_name, test_base_name, meta_factory, code_location)));
}
// INSTANTIATE_TEST_SUITE_P macro uses AddGenerator() to record information
// about a generator.
@@ -11072,11 +8470,13 @@
return 0; // Return value used only to run this method in namespace scope.
}
// UnitTest class invokes this method to register tests in this test suite
- // test suites right before running tests in RUN_ALL_TESTS macro.
- // This method should not be called more then once on any single
+ // right before running tests in RUN_ALL_TESTS macro.
+ // This method should not be called more than once on any single
// instance of a ParameterizedTestSuiteInfoBase derived class.
- // UnitTest has a guard to prevent from calling this method more then once.
+ // UnitTest has a guard to prevent from calling this method more than once.
void RegisterTests() override {
+ bool generated_instantiations = false;
+
for (typename TestInfoContainer::iterator test_it = tests_.begin();
test_it != tests_.end(); ++test_it) {
std::shared_ptr<TestInfo> test_info = *test_it;
@@ -11099,6 +8499,8 @@
for (typename ParamGenerator<ParamType>::iterator param_it =
generator.begin();
param_it != generator.end(); ++param_it, ++i) {
+ generated_instantiations = true;
+
Message test_name_stream;
std::string param_name = name_func(
@@ -11115,18 +8517,27 @@
test_param_names.insert(param_name);
- test_name_stream << test_info->test_base_name << "/" << param_name;
+ if (!test_info->test_base_name.empty()) {
+ test_name_stream << test_info->test_base_name << "/";
+ }
+ test_name_stream << param_name;
MakeAndRegisterTestInfo(
test_suite_name.c_str(), test_name_stream.GetString().c_str(),
nullptr, // No type parameter.
- PrintToString(*param_it).c_str(), code_location_,
+ PrintToString(*param_it).c_str(), test_info->code_location,
GetTestSuiteTypeId(),
- SuiteApiResolver<TestSuite>::GetSetUpCaseOrSuite(),
- SuiteApiResolver<TestSuite>::GetTearDownCaseOrSuite(),
+ SuiteApiResolver<TestSuite>::GetSetUpCaseOrSuite(file, line),
+ SuiteApiResolver<TestSuite>::GetTearDownCaseOrSuite(file, line),
test_info->test_meta_factory->CreateTestFactory(*param_it));
} // for param_it
} // for gen_it
} // for test_it
+
+ if (!generated_instantiations) {
+ // There are no generators, or they all generate nothing ...
+ InsertSyntheticTestCase(GetTestSuiteName(), code_location_,
+ !tests_.empty());
+ }
} // RegisterTests
private:
@@ -11134,14 +8545,17 @@
// with TEST_P macro.
struct TestInfo {
TestInfo(const char* a_test_suite_base_name, const char* a_test_base_name,
- TestMetaFactoryBase<ParamType>* a_test_meta_factory)
+ TestMetaFactoryBase<ParamType>* a_test_meta_factory,
+ CodeLocation a_code_location)
: test_suite_base_name(a_test_suite_base_name),
test_base_name(a_test_base_name),
- test_meta_factory(a_test_meta_factory) {}
+ test_meta_factory(a_test_meta_factory),
+ code_location(a_code_location) {}
const std::string test_suite_base_name;
const std::string test_base_name;
const std::unique_ptr<TestMetaFactoryBase<ParamType> > test_meta_factory;
+ const CodeLocation code_location;
};
using TestInfoContainer = ::std::vector<std::shared_ptr<TestInfo> >;
// Records data received from INSTANTIATE_TEST_SUITE_P macros:
@@ -11174,7 +8588,7 @@
// Check for invalid characters
for (std::string::size_type index = 0; index < name.size(); ++index) {
- if (!isalnum(name[index]) && name[index] != '_')
+ if (!IsAlNum(name[index]) && name[index] != '_')
return false;
}
@@ -11264,6 +8678,34 @@
GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestSuiteRegistry);
};
+// Keep track of which type-parameterized test suites are defined and where,
+// as well as which are instantiated. This allows subsequently identifying
+// suites that are defined but never used.
+class TypeParameterizedTestSuiteRegistry {
+ public:
+ // Add a suite definition
+ void RegisterTestSuite(const char* test_suite_name,
+ CodeLocation code_location);
+
+ // Add an instantiation of a suite.
+ void RegisterInstantiation(const char* test_suite_name);
+
+ // For each suite reported as defined but not reported as instantiated,
+ // emit a test that reports that fact (configurably, as an error).
+ void CheckForInstantiations();
+
+ private:
+ struct TypeParameterizedTestSuiteInfo {
+ explicit TypeParameterizedTestSuiteInfo(CodeLocation c)
+ : code_location(c), instantiated(false) {}
+
+ CodeLocation code_location;
+ bool instantiated;
+ };
+
+ std::map<std::string, TypeParameterizedTestSuiteInfo> suites_;
+};
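+
+// Illustrative call sequence (sketch; the suite name is hypothetical):
+//
+//   TypeParameterizedTestSuiteRegistry registry;
+//   registry.RegisterTestSuite("MySuite", CodeLocation(__FILE__, __LINE__));
+//   registry.RegisterInstantiation("MySuite");  // omit this to flag the suite
+//   registry.CheckForInstantiations();  // reports suites never instantiated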
+
} // namespace internal
// Forward declarations of ValuesIn(), which is implemented in
@@ -11275,10 +8717,15 @@
namespace internal {
// Used in the Values() function to provide polymorphic capabilities.
+#ifdef _MSC_VER
+#pragma warning(push)
+#pragma warning(disable : 4100)
+#endif
+
template <typename... Ts>
class ValueArray {
public:
- ValueArray(Ts... v) : v_{std::move(v)...} {}
+ explicit ValueArray(Ts... v) : v_(FlatTupleConstructTag{}, std::move(v)...) {}
template <typename T>
operator ParamGenerator<T>() const { // NOLINT
@@ -11294,6 +8741,10 @@
FlatTuple<Ts...> v_;
};
+#ifdef _MSC_VER
+#pragma warning(pop)
+#endif
+
template <typename... T>
class CartesianProductGenerator
: public ParamGeneratorInterface<::std::tuple<T...>> {
@@ -11427,7 +8878,7 @@
} // namespace internal
} // namespace testing
-#endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
namespace testing {
@@ -11541,10 +8992,9 @@
//
template <typename ForwardIterator>
internal::ParamGenerator<
- typename ::testing::internal::IteratorTraits<ForwardIterator>::value_type>
+ typename std::iterator_traits<ForwardIterator>::value_type>
ValuesIn(ForwardIterator begin, ForwardIterator end) {
- typedef typename ::testing::internal::IteratorTraits<ForwardIterator>
- ::value_type ParamType;
+ typedef typename std::iterator_traits<ForwardIterator>::value_type ParamType;
return internal::ParamGenerator<ParamType>(
new internal::ValuesInIteratorRangeGenerator<ParamType>(begin, end));
}
@@ -11620,8 +9070,6 @@
// std::tuple<T1, T2, ..., TN> where T1, T2, ..., TN are the types
// of elements from sequences produces by gen1, gen2, ..., genN.
//
-// Combine can have up to 10 arguments.
-//
// Example:
//
// This will instantiate tests in test suite AnimalTest each one with
@@ -11665,19 +9113,20 @@
: public test_suite_name { \
public: \
GTEST_TEST_CLASS_NAME_(test_suite_name, test_name)() {} \
- virtual void TestBody(); \
+ void TestBody() override; \
\
private: \
static int AddToRegistry() { \
::testing::UnitTest::GetInstance() \
->parameterized_test_registry() \
.GetTestSuitePatternHolder<test_suite_name>( \
- #test_suite_name, \
+ GTEST_STRINGIFY_(test_suite_name), \
::testing::internal::CodeLocation(__FILE__, __LINE__)) \
->AddTestPattern( \
GTEST_STRINGIFY_(test_suite_name), GTEST_STRINGIFY_(test_name), \
new ::testing::internal::TestMetaFactory<GTEST_TEST_CLASS_NAME_( \
- test_suite_name, test_name)>()); \
+ test_suite_name, test_name)>(), \
+ ::testing::internal::CodeLocation(__FILE__, __LINE__)); \
return 0; \
} \
static int gtest_registering_dummy_ GTEST_ATTRIBUTE_UNUSED_; \
@@ -11732,13 +9181,21 @@
::testing::UnitTest::GetInstance() \
->parameterized_test_registry() \
.GetTestSuitePatternHolder<test_suite_name>( \
- #test_suite_name, \
+ GTEST_STRINGIFY_(test_suite_name), \
::testing::internal::CodeLocation(__FILE__, __LINE__)) \
->AddTestSuiteInstantiation( \
- #prefix, &gtest_##prefix##test_suite_name##_EvalGenerator_, \
+ GTEST_STRINGIFY_(prefix), \
+ &gtest_##prefix##test_suite_name##_EvalGenerator_, \
&gtest_##prefix##test_suite_name##_EvalGenerateName_, \
__FILE__, __LINE__)
+
+// Allow marking a parameterized test class as not needing to be instantiated.
+#define GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(T) \
+ namespace gtest_do_not_use_outside_namespace_scope {} \
+ static const ::testing::internal::MarkAsIgnored gtest_allow_ignore_##T( \
+ GTEST_STRINGIFY_(T))
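+
+// Usage sketch (the fixture name is hypothetical): a value-parameterized
+// fixture that is deliberately left without any INSTANTIATE_TEST_SUITE_P can
+// opt out of the "uninstantiated test suite" report:
+//
+//   class OptionalFeatureTest : public ::testing::TestWithParam<int> {};
+//   GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(OptionalFeatureTest);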
+
// Legacy API is deprecated but still available
#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
#define INSTANTIATE_TEST_CASE_P \
@@ -11749,7 +9206,7 @@
} // namespace testing
-#endif // GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
// Copyright 2006, Google Inc.
// All rights reserved.
//
@@ -11783,8 +9240,8 @@
// Google C++ Testing and Mocking Framework definitions useful in production code.
// GOOGLETEST_CM0003 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_GTEST_PROD_H_
-#define GTEST_INCLUDE_GTEST_GTEST_PROD_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_PROD_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_PROD_H_
// When you need to test the private or protected members of a class,
// use the FRIEND_TEST macro to declare your tests as friends of the
@@ -11810,189 +9267,7 @@
#define FRIEND_TEST(test_case_name, test_name)\
friend class test_case_name##_##test_name##_Test
-#endif // GTEST_INCLUDE_GTEST_GTEST_PROD_H_
-// Copyright 2008, Google Inc.
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are
-// met:
-//
-// * Redistributions of source code must retain the above copyright
-// notice, this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following disclaimer
-// in the documentation and/or other materials provided with the
-// distribution.
-// * Neither the name of Google Inc. nor the names of its
-// contributors may be used to endorse or promote products derived from
-// this software without specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// GOOGLETEST_CM0001 DO NOT DELETE
-
-#ifndef GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_
-#define GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_
-
-#include <iosfwd>
-#include <vector>
-
-GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
-/* class A needs to have dll-interface to be used by clients of class B */)
-
-namespace testing {
-
-// A copyable object representing the result of a test part (i.e. an
-// assertion or an explicit FAIL(), ADD_FAILURE(), or SUCCESS()).
-//
-// Don't inherit from TestPartResult as its destructor is not virtual.
-class GTEST_API_ TestPartResult {
- public:
- // The possible outcomes of a test part (i.e. an assertion or an
- // explicit SUCCEED(), FAIL(), or ADD_FAILURE()).
- enum Type {
- kSuccess, // Succeeded.
- kNonFatalFailure, // Failed but the test can continue.
- kFatalFailure, // Failed and the test should be terminated.
- kSkip // Skipped.
- };
-
- // C'tor. TestPartResult does NOT have a default constructor.
- // Always use this constructor (with parameters) to create a
- // TestPartResult object.
- TestPartResult(Type a_type, const char* a_file_name, int a_line_number,
- const char* a_message)
- : type_(a_type),
- file_name_(a_file_name == nullptr ? "" : a_file_name),
- line_number_(a_line_number),
- summary_(ExtractSummary(a_message)),
- message_(a_message) {}
-
- // Gets the outcome of the test part.
- Type type() const { return type_; }
-
- // Gets the name of the source file where the test part took place, or
- // NULL if it's unknown.
- const char* file_name() const {
- return file_name_.empty() ? nullptr : file_name_.c_str();
- }
-
- // Gets the line in the source file where the test part took place,
- // or -1 if it's unknown.
- int line_number() const { return line_number_; }
-
- // Gets the summary of the failure message.
- const char* summary() const { return summary_.c_str(); }
-
- // Gets the message associated with the test part.
- const char* message() const { return message_.c_str(); }
-
- // Returns true iff the test part was skipped.
- bool skipped() const { return type_ == kSkip; }
-
- // Returns true iff the test part passed.
- bool passed() const { return type_ == kSuccess; }
-
- // Returns true iff the test part non-fatally failed.
- bool nonfatally_failed() const { return type_ == kNonFatalFailure; }
-
- // Returns true iff the test part fatally failed.
- bool fatally_failed() const { return type_ == kFatalFailure; }
-
- // Returns true iff the test part failed.
- bool failed() const { return fatally_failed() || nonfatally_failed(); }
-
- private:
- Type type_;
-
- // Gets the summary of the failure message by omitting the stack
- // trace in it.
- static std::string ExtractSummary(const char* message);
-
- // The name of the source file where the test part took place, or
- // "" if the source file is unknown.
- std::string file_name_;
- // The line in the source file where the test part took place, or -1
- // if the line number is unknown.
- int line_number_;
- std::string summary_; // The test failure summary.
- std::string message_; // The test failure message.
-};
-
-// Prints a TestPartResult object.
-std::ostream& operator<<(std::ostream& os, const TestPartResult& result);
-
-// An array of TestPartResult objects.
-//
-// Don't inherit from TestPartResultArray as its destructor is not
-// virtual.
-class GTEST_API_ TestPartResultArray {
- public:
- TestPartResultArray() {}
-
- // Appends the given TestPartResult to the array.
- void Append(const TestPartResult& result);
-
- // Returns the TestPartResult at the given index (0-based).
- const TestPartResult& GetTestPartResult(int index) const;
-
- // Returns the number of TestPartResult objects in the array.
- int size() const;
-
- private:
- std::vector<TestPartResult> array_;
-
- GTEST_DISALLOW_COPY_AND_ASSIGN_(TestPartResultArray);
-};
-
-// This interface knows how to report a test part result.
-class GTEST_API_ TestPartResultReporterInterface {
- public:
- virtual ~TestPartResultReporterInterface() {}
-
- virtual void ReportTestPartResult(const TestPartResult& result) = 0;
-};
-
-namespace internal {
-
-// This helper class is used by {ASSERT|EXPECT}_NO_FATAL_FAILURE to check if a
-// statement generates new fatal failures. To do so it registers itself as the
-// current test part result reporter. Besides checking if fatal failures were
-// reported, it only delegates the reporting to the former result reporter.
-// The original result reporter is restored in the destructor.
-// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.
-class GTEST_API_ HasNewFatalFailureHelper
- : public TestPartResultReporterInterface {
- public:
- HasNewFatalFailureHelper();
- ~HasNewFatalFailureHelper() override;
- void ReportTestPartResult(const TestPartResult& result) override;
- bool has_new_fatal_failure() const { return has_new_fatal_failure_; }
- private:
- bool has_new_fatal_failure_;
- TestPartResultReporterInterface* original_reporter_;
-
- GTEST_DISALLOW_COPY_AND_ASSIGN_(HasNewFatalFailureHelper);
-};
-
-} // namespace internal
-
-} // namespace testing
-
-GTEST_DISABLE_MSC_WARNINGS_POP_() // 4251
-
-#endif // GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_PROD_H_
// Copyright 2008 Google Inc.
// All Rights Reserved.
//
@@ -12022,11 +9297,10 @@
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_
-#define GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_
// This header implements typed tests and type-parameterized tests.
@@ -12060,9 +9334,9 @@
// Then, use TYPED_TEST() instead of TEST_F() to define as many typed
// tests for this test suite as you want.
TYPED_TEST(FooTest, DoesBlah) {
- // Inside a test, refer to TypeParam to get the type parameter.
- // Since we are inside a derived class template, C++ requires use to
- // visit the members of FooTest via 'this'.
+ // Inside a test, refer to the special name TypeParam to get the type
+ // parameter. Since we are inside a derived class template, C++ requires
+ // us to visit the members of FooTest via 'this'.
TypeParam n = this->value_;
// To visit static members of the fixture, add the TestFixture::
@@ -12168,8 +9442,6 @@
// Implements typed tests.
-#if GTEST_HAS_TYPED_TEST
-
// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.
//
// Expands to the name of the typedef for the type parameters of the
@@ -12181,27 +9453,25 @@
#define GTEST_NAME_GENERATOR_(TestSuiteName) \
gtest_type_params_##TestSuiteName##_NameGenerator
-// The 'Types' template argument below must have spaces around it
-// since some compilers may choke on '>>' when passing a template
-// instance (e.g. Types<int>)
-#define TYPED_TEST_SUITE(CaseName, Types, ...) \
- typedef ::testing::internal::TypeList<Types>::type GTEST_TYPE_PARAMS_( \
- CaseName); \
- typedef ::testing::internal::NameGeneratorSelector<__VA_ARGS__>::type \
+#define TYPED_TEST_SUITE(CaseName, Types, ...) \
+ typedef ::testing::internal::GenerateTypeList<Types>::type \
+ GTEST_TYPE_PARAMS_(CaseName); \
+ typedef ::testing::internal::NameGeneratorSelector<__VA_ARGS__>::type \
GTEST_NAME_GENERATOR_(CaseName)
-# define TYPED_TEST(CaseName, TestName) \
+#define TYPED_TEST(CaseName, TestName) \
+ static_assert(sizeof(GTEST_STRINGIFY_(TestName)) > 1, \
+ "test-name must not be empty"); \
template <typename gtest_TypeParam_> \
class GTEST_TEST_CLASS_NAME_(CaseName, TestName) \
: public CaseName<gtest_TypeParam_> { \
private: \
typedef CaseName<gtest_TypeParam_> TestFixture; \
typedef gtest_TypeParam_ TypeParam; \
- virtual void TestBody(); \
+ void TestBody() override; \
}; \
static bool gtest_##CaseName##_##TestName##_registered_ \
- GTEST_ATTRIBUTE_UNUSED_ = \
- ::testing::internal::TypeParameterizedTest< \
+ GTEST_ATTRIBUTE_UNUSED_ = ::testing::internal::TypeParameterizedTest< \
CaseName, \
::testing::internal::TemplateSel<GTEST_TEST_CLASS_NAME_(CaseName, \
TestName)>, \
@@ -12209,7 +9479,8 @@
CaseName)>::Register("", \
::testing::internal::CodeLocation( \
__FILE__, __LINE__), \
- #CaseName, #TestName, 0, \
+ GTEST_STRINGIFY_(CaseName), \
+ GTEST_STRINGIFY_(TestName), 0, \
::testing::internal::GenerateNames< \
GTEST_NAME_GENERATOR_(CaseName), \
GTEST_TYPE_PARAMS_(CaseName)>()); \
@@ -12224,12 +9495,8 @@
TYPED_TEST_SUITE
#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
-#endif // GTEST_HAS_TYPED_TEST
-
// Implements type-parameterized tests.
-#if GTEST_HAS_TYPED_TEST_P
-
// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.
//
// Expands to the namespace name that the type-parameterized tests for
@@ -12272,24 +9539,26 @@
private: \
typedef SuiteName<gtest_TypeParam_> TestFixture; \
typedef gtest_TypeParam_ TypeParam; \
- virtual void TestBody(); \
+ void TestBody() override; \
}; \
static bool gtest_##TestName##_defined_ GTEST_ATTRIBUTE_UNUSED_ = \
GTEST_TYPED_TEST_SUITE_P_STATE_(SuiteName).AddTestName( \
- __FILE__, __LINE__, #SuiteName, #TestName); \
+ __FILE__, __LINE__, GTEST_STRINGIFY_(SuiteName), \
+ GTEST_STRINGIFY_(TestName)); \
} \
template <typename gtest_TypeParam_> \
void GTEST_SUITE_NAMESPACE_( \
SuiteName)::TestName<gtest_TypeParam_>::TestBody()
-#define REGISTER_TYPED_TEST_SUITE_P(SuiteName, ...) \
- namespace GTEST_SUITE_NAMESPACE_(SuiteName) { \
- typedef ::testing::internal::Templates<__VA_ARGS__>::type gtest_AllTests_; \
- } \
- static const char* const GTEST_REGISTERED_TEST_NAMES_( \
- SuiteName) GTEST_ATTRIBUTE_UNUSED_ = \
- GTEST_TYPED_TEST_SUITE_P_STATE_(SuiteName).VerifyRegisteredTestNames( \
- __FILE__, __LINE__, #__VA_ARGS__)
+// Note: this won't work correctly if the trailing arguments are macros.
+#define REGISTER_TYPED_TEST_SUITE_P(SuiteName, ...) \
+ namespace GTEST_SUITE_NAMESPACE_(SuiteName) { \
+ typedef ::testing::internal::Templates<__VA_ARGS__> gtest_AllTests_; \
+ } \
+ static const char* const GTEST_REGISTERED_TEST_NAMES_( \
+ SuiteName) GTEST_ATTRIBUTE_UNUSED_ = \
+ GTEST_TYPED_TEST_SUITE_P_STATE_(SuiteName).VerifyRegisteredTestNames( \
+ GTEST_STRINGIFY_(SuiteName), __FILE__, __LINE__, #__VA_ARGS__)
// Legacy API is deprecated but still available
#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
@@ -12299,22 +9568,22 @@
REGISTER_TYPED_TEST_SUITE_P
#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
-// The 'Types' template argument below must have spaces around it
-// since some compilers may choke on '>>' when passing a template
-// instance (e.g. Types<int>)
#define INSTANTIATE_TYPED_TEST_SUITE_P(Prefix, SuiteName, Types, ...) \
+ static_assert(sizeof(GTEST_STRINGIFY_(Prefix)) > 1, \
+ "test-suit-prefix must not be empty"); \
static bool gtest_##Prefix##_##SuiteName GTEST_ATTRIBUTE_UNUSED_ = \
::testing::internal::TypeParameterizedTestSuite< \
SuiteName, GTEST_SUITE_NAMESPACE_(SuiteName)::gtest_AllTests_, \
- ::testing::internal::TypeList<Types>::type>:: \
- Register(#Prefix, \
+ ::testing::internal::GenerateTypeList<Types>::type>:: \
+ Register(GTEST_STRINGIFY_(Prefix), \
::testing::internal::CodeLocation(__FILE__, __LINE__), \
- &GTEST_TYPED_TEST_SUITE_P_STATE_(SuiteName), #SuiteName, \
+ &GTEST_TYPED_TEST_SUITE_P_STATE_(SuiteName), \
+ GTEST_STRINGIFY_(SuiteName), \
GTEST_REGISTERED_TEST_NAMES_(SuiteName), \
::testing::internal::GenerateNames< \
::testing::internal::NameGeneratorSelector< \
__VA_ARGS__>::type, \
- ::testing::internal::TypeList<Types>::type>())
+ ::testing::internal::GenerateTypeList<Types>::type>())
// Legacy API is deprecated but still available
#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
@@ -12324,28 +9593,11 @@
INSTANTIATE_TYPED_TEST_SUITE_P
#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
-#endif // GTEST_HAS_TYPED_TEST_P
-
-#endif // GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_
GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
/* class A needs to have dll-interface to be used by clients of class B */)
-// Depending on the platform, different string classes are available.
-// On Linux, in addition to ::std::string, Google also makes use of
-// class ::string, which has the same interface as ::std::string, but
-// has a different implementation.
-//
-// You can define GTEST_HAS_GLOBAL_STRING to 1 to indicate that
-// ::string is available AND is a distinct type to ::std::string, or
-// define it to 0 to indicate otherwise.
-//
-// If ::std::string and ::string are the same class on your platform
-// due to aliasing, you should define GTEST_HAS_GLOBAL_STRING to 0.
-//
-// If you do not define GTEST_HAS_GLOBAL_STRING, it is defined
-// heuristically.
-
namespace testing {
// Silence C4100 (unreferenced formal parameter) and 4805
@@ -12374,6 +9626,10 @@
// to let Google Test decide.
GTEST_DECLARE_string_(color);
+// This flag controls whether the test runner should continue execution past
+// first failure.
+GTEST_DECLARE_bool_(fail_fast);
+
// This flag sets up the filter to select by name using a glob pattern
// the tests to run. If the filter is not given all tests are executed.
GTEST_DECLARE_string_(filter);
@@ -12390,6 +9646,9 @@
// in addition to its normal textual output.
GTEST_DECLARE_string_(output);
+// This flag controls whether Google Test prints only test failures.
+GTEST_DECLARE_bool_(brief);
+
// This flag controls whether Google Test prints the elapsed time for each
// test.
GTEST_DECLARE_bool_(print_time);
@@ -12450,6 +9709,7 @@
class UnitTestImpl* GetUnitTestImpl();
void ReportFailureInUnknownLocation(TestPartResult::Type result_type,
const std::string& message);
+std::set<std::string>* GetIgnoredParameterizedTestSuites();
} // namespace internal
@@ -12551,7 +9811,11 @@
// Used in EXPECT_TRUE/FALSE(assertion_result).
AssertionResult(const AssertionResult& other);
-#if defined(_MSC_VER) && _MSC_VER < 1910
+// C4800 is a level 3 warning in Visual Studio 2015 and earlier.
+// This warning is not emitted in Visual Studio 2017.
+// This warning is off by default starting in Visual Studio 2019 but can be
+// enabled with command-line options.
+#if defined(_MSC_VER) && (_MSC_VER < 1910 || _MSC_VER >= 1920)
GTEST_DISABLE_MSC_WARNINGS_PUSH_(4800 /* forcing value to bool */)
#endif
@@ -12565,13 +9829,13 @@
template <typename T>
explicit AssertionResult(
const T& success,
- typename internal::EnableIf<
+ typename std::enable_if<
!std::is_convertible<T, AssertionResult>::value>::type*
/*enabler*/
= nullptr)
: success_(success) {}
-#if defined(_MSC_VER) && _MSC_VER < 1910
+#if defined(_MSC_VER) && (_MSC_VER < 1910 || _MSC_VER >= 1920)
GTEST_DISABLE_MSC_WARNINGS_POP_()
#endif
@@ -12581,7 +9845,7 @@
return *this;
}
- // Returns true iff the assertion succeeded.
+ // Returns true if and only if the assertion succeeded.
operator bool() const { return success_; } // NOLINT
// Returns the assertion's negation. Used with EXPECT/ASSERT_FALSE.
@@ -12680,8 +9944,8 @@
// Implements a family of generic predicate assertion macros.
// GOOGLETEST_CM0001 DO NOT DELETE
-#ifndef GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
-#define GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
+#ifndef GOOGLETEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
+#define GOOGLETEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
namespace testing {
@@ -13002,7 +10266,7 @@
} // namespace testing
-#endif // GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
namespace testing {
@@ -13036,38 +10300,39 @@
// The d'tor is virtual as we intend to inherit from Test.
virtual ~Test();
- // Sets up the stuff shared by all tests in this test case.
+ // Sets up the stuff shared by all tests in this test suite.
//
// Google Test will call Foo::SetUpTestSuite() before running the first
- // test in test case Foo. Hence a sub-class can define its own
+ // test in test suite Foo. Hence a sub-class can define its own
// SetUpTestSuite() method to shadow the one defined in the super
// class.
static void SetUpTestSuite() {}
- // Tears down the stuff shared by all tests in this test case.
+ // Tears down the stuff shared by all tests in this test suite.
//
// Google Test will call Foo::TearDownTestSuite() after running the last
- // test in test case Foo. Hence a sub-class can define its own
+ // test in test suite Foo. Hence a sub-class can define its own
// TearDownTestSuite() method to shadow the one defined in the super
// class.
static void TearDownTestSuite() {}
- // Legacy API is deprecated but still available
+ // Legacy API is deprecated but still available. Use SetUpTestSuite and
+ // TearDownTestSuite instead.
#ifndef GTEST_REMOVE_LEGACY_TEST_CASEAPI_
static void TearDownTestCase() {}
static void SetUpTestCase() {}
#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
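
// A minimal fixture sketch along these lines (the resource type and member
// names are hypothetical):
//
//   class FooTest : public testing::Test {
//    protected:
//     static void SetUpTestSuite() { shared_resource_ = new Resource; }
//     static void TearDownTestSuite() { delete shared_resource_; }
//     static Resource* shared_resource_;
//   };
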
- // Returns true iff the current test has a fatal failure.
+ // Returns true if and only if the current test has a fatal failure.
static bool HasFatalFailure();
- // Returns true iff the current test has a non-fatal failure.
+ // Returns true if and only if the current test has a non-fatal failure.
static bool HasNonfatalFailure();
- // Returns true iff the current test was skipped.
+ // Returns true if and only if the current test was skipped.
static bool IsSkipped();
- // Returns true iff the current test has a (either fatal or
+ // Returns true if and only if the current test has a (either fatal or
// non-fatal) failure.
static bool HasFailure() { return HasFatalFailure() || HasNonfatalFailure(); }
@@ -13098,8 +10363,8 @@
virtual void TearDown();
private:
- // Returns true iff the current test has the same fixture class as
- // the first test in the current test suite.
+ // Returns true if and only if the current test has the same fixture class
+ // as the first test in the current test suite.
static bool HasSameFixtureClass();
// Runs the test after the test fixture has been set up.
@@ -13200,24 +10465,28 @@
// Returns the number of the test properties.
int test_property_count() const;
- // Returns true iff the test passed (i.e. no test part failed).
+ // Returns true if and only if the test passed (i.e. no test part failed).
bool Passed() const { return !Skipped() && !Failed(); }
- // Returns true iff the test was skipped.
+ // Returns true if and only if the test was skipped.
bool Skipped() const;
- // Returns true iff the test failed.
+ // Returns true if and only if the test failed.
bool Failed() const;
- // Returns true iff the test fatally failed.
+ // Returns true if and only if the test fatally failed.
bool HasFatalFailure() const;
- // Returns true iff the test has a non-fatal failure.
+ // Returns true if and only if the test has a non-fatal failure.
bool HasNonfatalFailure() const;
// Returns the elapsed time, in milliseconds.
TimeInMillis elapsed_time() const { return elapsed_time_; }
+ // Gets the time of the test case start, in ms from the start of the
+ // UNIX epoch.
+ TimeInMillis start_timestamp() const { return start_timestamp_; }
+
// Returns the i-th test part result among all the results. i can range from 0
// to total_part_count() - 1. If i is not in that range, aborts the program.
const TestPartResult& GetTestPartResult(int i) const;
@@ -13248,6 +10517,9 @@
return test_properties_;
}
+ // Sets the start time.
+ void set_start_timestamp(TimeInMillis start) { start_timestamp_ = start; }
+
// Sets the elapsed time.
void set_elapsed_time(TimeInMillis elapsed) { elapsed_time_ = elapsed; }
@@ -13283,7 +10555,7 @@
// Protects mutable state of the property vector and of owned
// properties, whose values may be updated.
- internal::Mutex test_properites_mutex_;
+ internal::Mutex test_properties_mutex_;
// The vector of TestPartResults
std::vector<TestPartResult> test_part_results_;
@@ -13291,6 +10563,8 @@
std::vector<TestProperty> test_properties_;
// Running count of death tests.
int death_test_count_;
+ // The start time, in milliseconds since UNIX Epoch.
+ TimeInMillis start_timestamp_;
// The elapsed time, in milliseconds.
TimeInMillis elapsed_time_;
@@ -13367,7 +10641,7 @@
// contains the character 'A' or starts with "Foo.".
bool should_run() const { return should_run_; }
- // Returns true iff this test will appear in the XML report.
+ // Returns true if and only if this test will appear in the XML report.
bool is_reportable() const {
// The XML report includes tests matching the filter, excluding those
// run in other shards.
@@ -13411,6 +10685,9 @@
// deletes it.
void Run();
+ // Skips and records the test result for this object.
+ void Skip();
+
static void ClearTestResult(TestInfo* test_info) {
test_info->result_.Clear();
}
@@ -13425,12 +10702,12 @@
// value-parameterized test.
const std::unique_ptr<const ::std::string> value_param_;
internal::CodeLocation location_;
- const internal::TypeId fixture_class_id_; // ID of the test fixture class
- bool should_run_; // True iff this test should run
- bool is_disabled_; // True iff this test is disabled
- bool matches_filter_; // True if this test matches the
- // user-specified filter.
- bool is_in_another_shard_; // Will be run in another shard.
+ const internal::TypeId fixture_class_id_; // ID of the test fixture class
+ bool should_run_; // True if and only if this test should run
+ bool is_disabled_; // True if and only if this test is disabled
+ bool matches_filter_; // True if this test matches the
+ // user-specified filter.
+ bool is_in_another_shard_; // Will be run in another shard.
internal::TestFactoryBase* const factory_; // The factory that creates
// the test object
@@ -13502,15 +10779,21 @@
// Gets the number of all tests in this test suite.
int total_test_count() const;
- // Returns true iff the test suite passed.
+ // Returns true if and only if the test suite passed.
bool Passed() const { return !Failed(); }
- // Returns true iff the test suite failed.
- bool Failed() const { return failed_test_count() > 0; }
+ // Returns true if and only if the test suite failed.
+ bool Failed() const {
+ return failed_test_count() > 0 || ad_hoc_test_result().Failed();
+ }
// Returns the elapsed time, in milliseconds.
TimeInMillis elapsed_time() const { return elapsed_time_; }
+ // Gets the time of the test suite start, in ms from the start of the
+ // UNIX epoch.
+ TimeInMillis start_timestamp() const { return start_timestamp_; }
+
// Returns the i-th test among all the tests. i can range from 0 to
// total_test_count() - 1. If i is not in that range, returns NULL.
const TestInfo* GetTestInfo(int i) const;
@@ -13553,6 +10836,9 @@
// Runs every test in this TestSuite.
void Run();
+ // Skips the execution of tests under this TestSuite
+ void Skip();
+
// Runs SetUpTestSuite() for this TestSuite. This wrapper is needed
// for catching exceptions thrown from SetUpTestSuite().
void RunSetUpTestSuite() {
@@ -13569,33 +10855,33 @@
}
}
- // Returns true iff test passed.
+ // Returns true if and only if test passed.
static bool TestPassed(const TestInfo* test_info) {
return test_info->should_run() && test_info->result()->Passed();
}
- // Returns true iff test skipped.
+ // Returns true if and only if test skipped.
static bool TestSkipped(const TestInfo* test_info) {
return test_info->should_run() && test_info->result()->Skipped();
}
- // Returns true iff test failed.
+ // Returns true if and only if test failed.
static bool TestFailed(const TestInfo* test_info) {
return test_info->should_run() && test_info->result()->Failed();
}
- // Returns true iff the test is disabled and will be reported in the XML
- // report.
+ // Returns true if and only if the test is disabled and will be reported in
+ // the XML report.
static bool TestReportableDisabled(const TestInfo* test_info) {
return test_info->is_reportable() && test_info->is_disabled_;
}
- // Returns true iff test is disabled.
+ // Returns true if and only if test is disabled.
static bool TestDisabled(const TestInfo* test_info) {
return test_info->is_disabled_;
}
- // Returns true iff this test will appear in the XML report.
+ // Returns true if and only if this test will appear in the XML report.
static bool TestReportable(const TestInfo* test_info) {
return test_info->is_reportable();
}
@@ -13627,8 +10913,10 @@
internal::SetUpTestSuiteFunc set_up_tc_;
// Pointer to the function that tears down the test suite.
internal::TearDownTestSuiteFunc tear_down_tc_;
- // True iff any test in this test suite should run.
+ // True if and only if any test in this test suite should run.
bool should_run_;
+ // The start time, in milliseconds since UNIX Epoch.
+ TimeInMillis start_timestamp_;
// Elapsed time, in milliseconds.
TimeInMillis elapsed_time_;
// Holds test properties recorded during execution of SetUpTestSuite and
@@ -13927,7 +11215,7 @@
int failed_test_case_count() const;
int total_test_case_count() const;
int test_case_to_run_count() const;
-#endif // EMOVE_LEGACY_TEST_CASEAPI
+#endif // GTEST_REMOVE_LEGACY_TEST_CASEAPI_
// Gets the number of successful tests.
int successful_test_count() const;
@@ -13960,11 +11248,12 @@
// Gets the elapsed time, in milliseconds.
TimeInMillis elapsed_time() const;
- // Returns true iff the unit test passed (i.e. all test suites passed).
+ // Returns true if and only if the unit test passed (i.e. all test suites
+ // passed).
bool Passed() const;
- // Returns true iff the unit test failed (i.e. some test suite failed
- // or something outside of all tests failed).
+ // Returns true if and only if the unit test failed (i.e. some test suite
+ // failed or something outside of all tests failed).
bool Failed() const;
// Gets the i-th test suite among all the test suites. i can range from 0 to
@@ -14030,6 +11319,7 @@
friend class internal::StreamingListenerTest;
friend class internal::UnitTestRecordPropertyTestHelper;
friend Environment* AddGlobalTestEnvironment(Environment* env);
+ friend std::set<std::string>* internal::GetIgnoredParameterizedTestSuites();
friend internal::UnitTestImpl* internal::GetUnitTestImpl();
friend void internal::ReportFailureInUnknownLocation(
TestPartResult::Type result_type,
@@ -14141,26 +11431,17 @@
return CmpHelperEQFailure(lhs_expression, rhs_expression, lhs, rhs);
}
-// With this overloaded version, we allow anonymous enums to be used
-// in {ASSERT|EXPECT}_EQ when compiled with gcc 4, as anonymous enums
-// can be implicitly cast to BiggestInt.
-GTEST_API_ AssertionResult CmpHelperEQ(const char* lhs_expression,
- const char* rhs_expression,
- BiggestInt lhs,
- BiggestInt rhs);
-
-// The helper class for {ASSERT|EXPECT}_EQ. The template argument
-// lhs_is_null_literal is true iff the first argument to ASSERT_EQ()
-// is a null pointer literal. The following default implementation is
-// for lhs_is_null_literal being false.
-template <bool lhs_is_null_literal>
class EqHelper {
public:
// This templatized version is for the general case.
- template <typename T1, typename T2>
+ template <
+ typename T1, typename T2,
+ // Disable this overload for cases where one argument is a pointer
+ // and the other is the null pointer constant.
+ typename std::enable_if<!std::is_integral<T1>::value ||
+ !std::is_pointer<T2>::value>::type* = nullptr>
static AssertionResult Compare(const char* lhs_expression,
- const char* rhs_expression,
- const T1& lhs,
+ const char* rhs_expression, const T1& lhs,
const T2& rhs) {
return CmpHelperEQ(lhs_expression, rhs_expression, lhs, rhs);
}
@@ -14177,44 +11458,12 @@
BiggestInt rhs) {
return CmpHelperEQ(lhs_expression, rhs_expression, lhs, rhs);
}
-};
-// This specialization is used when the first argument to ASSERT_EQ()
-// is a null pointer literal, like NULL, false, or 0.
-template <>
-class EqHelper<true> {
- public:
- // We define two overloaded versions of Compare(). The first
- // version will be picked when the second argument to ASSERT_EQ() is
- // NOT a pointer, e.g. ASSERT_EQ(0, AnIntFunction()) or
- // EXPECT_EQ(false, a_bool).
- template <typename T1, typename T2>
- static AssertionResult Compare(
- const char* lhs_expression, const char* rhs_expression, const T1& lhs,
- const T2& rhs,
- // The following line prevents this overload from being considered if T2
- // is not a pointer type. We need this because ASSERT_EQ(NULL, my_ptr)
- // expands to Compare("", "", NULL, my_ptr), which requires a conversion
- // to match the Secret* in the other overload, which would otherwise make
- // this template match better.
- typename EnableIf<!std::is_pointer<T2>::value>::type* = nullptr) {
- return CmpHelperEQ(lhs_expression, rhs_expression, lhs, rhs);
- }
-
- // This version will be picked when the second argument to ASSERT_EQ() is a
- // pointer, e.g. ASSERT_EQ(NULL, a_pointer).
template <typename T>
static AssertionResult Compare(
- const char* lhs_expression,
- const char* rhs_expression,
- // We used to have a second template parameter instead of Secret*. That
- // template parameter would deduce to 'long', making this a better match
- // than the first overload even without the first overload's EnableIf.
- // Unfortunately, gcc with -Wconversion-null warns when "passing NULL to
- // non-pointer argument" (even a deduced integral argument), so the old
- // implementation caused warnings in user code.
- Secret* /* lhs (NULL) */,
- T* rhs) {
+ const char* lhs_expression, const char* rhs_expression,
+ // Handle cases where '0' is used as a null pointer literal.
+ std::nullptr_t /* lhs */, T* rhs) {
// We already know that 'lhs' is a null pointer.
return CmpHelperEQ(lhs_expression, rhs_expression, static_cast<T*>(nullptr),
rhs);
@@ -14238,11 +11487,6 @@
// ASSERT_?? and EXPECT_??. It is here just to avoid copy-and-paste
// of similar code.
//
-// For each templatized helper function, we also define an overloaded
-// version for BiggestInt in order to reduce code bloat and allow
-// anonymous enums to be used with {ASSERT|EXPECT}_?? when compiled
-// with gcc 4.
-//
// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.
#define GTEST_IMPL_CMP_HELPER_(op_name, op)\
@@ -14254,22 +11498,20 @@
} else {\
return CmpHelperOpFailure(expr1, expr2, val1, val2, #op);\
}\
-}\
-GTEST_API_ AssertionResult CmpHelper##op_name(\
- const char* expr1, const char* expr2, BiggestInt val1, BiggestInt val2)
+}
// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.
// Implements the helper function for {ASSERT|EXPECT}_NE
-GTEST_IMPL_CMP_HELPER_(NE, !=);
+GTEST_IMPL_CMP_HELPER_(NE, !=)
// Implements the helper function for {ASSERT|EXPECT}_LE
-GTEST_IMPL_CMP_HELPER_(LE, <=);
+GTEST_IMPL_CMP_HELPER_(LE, <=)
// Implements the helper function for {ASSERT|EXPECT}_LT
-GTEST_IMPL_CMP_HELPER_(LT, <);
+GTEST_IMPL_CMP_HELPER_(LT, <)
// Implements the helper function for {ASSERT|EXPECT}_GE
-GTEST_IMPL_CMP_HELPER_(GE, >=);
+GTEST_IMPL_CMP_HELPER_(GE, >=)
// Implements the helper function for {ASSERT|EXPECT}_GT
-GTEST_IMPL_CMP_HELPER_(GT, >);
+GTEST_IMPL_CMP_HELPER_(GT, >)
#undef GTEST_IMPL_CMP_HELPER_
@@ -14446,12 +11688,6 @@
GTEST_DISALLOW_COPY_AND_ASSIGN_(AssertHelper);
};
-enum GTestColor { COLOR_DEFAULT, COLOR_RED, COLOR_GREEN, COLOR_YELLOW };
-
-GTEST_API_ GTEST_ATTRIBUTE_PRINTF_(2, 3) void ColoredPrintf(GTestColor color,
- const char* fmt,
- ...);
-
} // namespace internal
// The pure interface class that all value-parameterized tests inherit from.
@@ -14532,7 +11768,7 @@
// Skips the test at runtime.
// Skipping a test aborts the current function.
// Skipped tests are neither successful nor failed.
-#define GTEST_SKIP() GTEST_SKIP_("Skipped")
+#define GTEST_SKIP() GTEST_SKIP_("")
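+
+// For example (sketch; the predicate is hypothetical):
+//
+//   TEST(FooTest, NeedsNetwork) {
+//     if (!HaveNetwork()) GTEST_SKIP() << "no network available";
+//     // ... assertions that require the network ...
+//   }
+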
// ADD_FAILURE unconditionally adds a failure to the current test.
// SUCCEED generates a success - it doesn't automatically make the
@@ -14563,6 +11799,11 @@
// Generates a fatal failure with a generic message.
#define GTEST_FAIL() GTEST_FATAL_FAILURE_("Failed")
+// Like GTEST_FAIL(), but at the given source file location.
+#define GTEST_FAIL_AT(file, line) \
+ GTEST_MESSAGE_AT_(file, line, "Failed", \
+ ::testing::TestPartResult::kFatalFailure)
+
// Define this macro to 1 to omit the definition of FAIL(), which is a
// generic name and clashes with some other libraries.
#if !GTEST_DONT_DEFINE_FAIL
@@ -14603,19 +11844,38 @@
// Boolean assertions. Condition can be either a Boolean expression or an
// AssertionResult. For more information on how to use AssertionResult with
// these macros see comments on that class.
-#define EXPECT_TRUE(condition) \
+#define GTEST_EXPECT_TRUE(condition) \
GTEST_TEST_BOOLEAN_(condition, #condition, false, true, \
GTEST_NONFATAL_FAILURE_)
-#define EXPECT_FALSE(condition) \
+#define GTEST_EXPECT_FALSE(condition) \
GTEST_TEST_BOOLEAN_(!(condition), #condition, true, false, \
GTEST_NONFATAL_FAILURE_)
-#define ASSERT_TRUE(condition) \
+#define GTEST_ASSERT_TRUE(condition) \
GTEST_TEST_BOOLEAN_(condition, #condition, false, true, \
GTEST_FATAL_FAILURE_)
-#define ASSERT_FALSE(condition) \
+#define GTEST_ASSERT_FALSE(condition) \
GTEST_TEST_BOOLEAN_(!(condition), #condition, true, false, \
GTEST_FATAL_FAILURE_)
+// Define these macros to 1 to omit the definition of the corresponding
+// EXPECT or ASSERT, which clashes with some users' own code.
+
+#if !GTEST_DONT_DEFINE_EXPECT_TRUE
+#define EXPECT_TRUE(condition) GTEST_EXPECT_TRUE(condition)
+#endif
+
+#if !GTEST_DONT_DEFINE_EXPECT_FALSE
+#define EXPECT_FALSE(condition) GTEST_EXPECT_FALSE(condition)
+#endif
+
+#if !GTEST_DONT_DEFINE_ASSERT_TRUE
+#define ASSERT_TRUE(condition) GTEST_ASSERT_TRUE(condition)
+#endif
+
+#if !GTEST_DONT_DEFINE_ASSERT_FALSE
+#define ASSERT_FALSE(condition) GTEST_ASSERT_FALSE(condition)
+#endif
+
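+// For instance (sketch), a project whose own headers already define
+// ASSERT_TRUE can build with GTEST_DONT_DEFINE_ASSERT_TRUE defined to 1 and
+// write
+//
+//   GTEST_ASSERT_TRUE(condition);
+//
+// instead of ASSERT_TRUE(condition).
+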
// Macros for testing equalities and inequalities.
//
// * {ASSERT|EXPECT}_EQ(v1, v2): Tests that v1 == v2
@@ -14663,9 +11923,7 @@
// ASSERT_GT(records.size(), 0) << "There is no record left.";
#define EXPECT_EQ(val1, val2) \
- EXPECT_PRED_FORMAT2(::testing::internal:: \
- EqHelper<GTEST_IS_NULL_LITERAL_(val1)>::Compare, \
- val1, val2)
+ EXPECT_PRED_FORMAT2(::testing::internal::EqHelper::Compare, val1, val2)
#define EXPECT_NE(val1, val2) \
EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperNE, val1, val2)
#define EXPECT_LE(val1, val2) \
@@ -14678,9 +11936,7 @@
EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperGT, val1, val2)
#define GTEST_ASSERT_EQ(val1, val2) \
- ASSERT_PRED_FORMAT2(::testing::internal:: \
- EqHelper<GTEST_IS_NULL_LITERAL_(val1)>::Compare, \
- val1, val2)
+ ASSERT_PRED_FORMAT2(::testing::internal::EqHelper::Compare, val1, val2)
#define GTEST_ASSERT_NE(val1, val2) \
ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperNE, val1, val2)
#define GTEST_ASSERT_LE(val1, val2) \
@@ -14871,12 +12127,6 @@
PushTrace(file, line, message ? message : "(null)");
}
-#if GTEST_HAS_GLOBAL_STRING
- ScopedTrace(const char* file, int line, const ::string& message) {
- PushTrace(file, line, message);
- }
-#endif
-
ScopedTrace(const char* file, int line, const std::string& message) {
PushTrace(file, line, message);
}
@@ -14914,10 +12164,9 @@
::testing::ScopedTrace GTEST_CONCAT_TOKEN_(gtest_trace_, __LINE__)(\
__FILE__, __LINE__, (message))
-
// Compile-time assertion for type equality.
-// StaticAssertTypeEq<type1, type2>() compiles iff type1 and type2 are
-// the same type. The value it returns is not interesting.
+// StaticAssertTypeEq<type1, type2>() compiles if and only if type1 and type2
+// are the same type. The value it returns is not interesting.
//
// Instead of making StaticAssertTypeEq a class template, we make it a
// function template that invokes a helper class template. This
@@ -14946,8 +12195,8 @@
//
// to cause a compiler error.
template <typename T1, typename T2>
-bool StaticAssertTypeEq() {
- (void)internal::StaticAssertTypeEqHelper<T1, T2>();
+constexpr bool StaticAssertTypeEq() noexcept {
+ static_assert(std::is_same<T1, T2>::value, "T1 and T2 are not the same type");
return true;
}
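
// For example (sketch):
//
//   template <typename T>
//   void CheckIsInt() { ::testing::StaticAssertTypeEq<int, T>(); }
//
// compiles only when T is exactly int.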
@@ -15013,9 +12262,11 @@
// }
//
// GOOGLETEST_CM0011 DO NOT DELETE
+#if !GTEST_DONT_DEFINE_TEST
#define TEST_F(test_fixture, test_name)\
GTEST_TEST_(test_fixture, test_name, test_fixture, \
::testing::internal::GetTypeId<test_fixture>())
+#endif // !GTEST_DONT_DEFINE_TEST
// Returns a path to temporary directory.
// Tries to determine an appropriate directory for the platform.
@@ -15100,8 +12351,8 @@
return internal::MakeAndRegisterTestInfo(
test_suite_name, test_name, type_param, value_param,
internal::CodeLocation(file, line), internal::GetTypeId<TestT>(),
- internal::SuiteApiResolver<TestT>::GetSetUpCaseOrSuite(),
- internal::SuiteApiResolver<TestT>::GetTearDownCaseOrSuite(),
+ internal::SuiteApiResolver<TestT>::GetSetUpCaseOrSuite(file, line),
+ internal::SuiteApiResolver<TestT>::GetTearDownCaseOrSuite(file, line),
new FactoryImpl{std::move(factory)});
}
@@ -15123,4 +12374,4 @@
GTEST_DISABLE_MSC_WARNINGS_POP_() // 4251
-#endif // GTEST_INCLUDE_GTEST_GTEST_H_
+#endif // GOOGLETEST_INCLUDE_GTEST_GTEST_H_
diff --git a/internal/ceres/householder_vector_test.cc b/internal/ceres/householder_vector_test.cc
index 6f3b172..7d69789 100644
--- a/internal/ceres/householder_vector_test.cc
+++ b/internal/ceres/householder_vector_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://code.google.com/p/ceres-solver/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,8 +34,7 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
static void HouseholderTestHelper(const Vector& x) {
const double kTolerance = 1e-14;
@@ -116,5 +115,4 @@
HouseholderTestHelper(x);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/implicit_schur_complement.cc b/internal/ceres/implicit_schur_complement.cc
index f2196d4..a633529 100644
--- a/internal/ceres/implicit_schur_complement.cc
+++ b/internal/ceres/implicit_schur_complement.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,36 +35,40 @@
#include "ceres/block_structure.h"
#include "ceres/internal/eigen.h"
#include "ceres/linear_solver.h"
+#include "ceres/parallel_for.h"
+#include "ceres/parallel_vector_ops.h"
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
ImplicitSchurComplement::ImplicitSchurComplement(
const LinearSolver::Options& options)
- : options_(options), D_(NULL), b_(NULL) {}
-
-ImplicitSchurComplement::~ImplicitSchurComplement() {}
+ : options_(options) {}
void ImplicitSchurComplement::Init(const BlockSparseMatrix& A,
const double* D,
const double* b) {
// Since initialization is reasonably heavy, perhaps we can save on
// constructing a new object every time.
- if (A_ == NULL) {
- A_.reset(PartitionedMatrixViewBase::Create(options_, A));
+ if (A_ == nullptr) {
+ A_ = PartitionedMatrixViewBase::Create(options_, A);
}
D_ = D;
b_ = b;
+ compute_ftf_inverse_ =
+ options_.use_spse_initialization ||
+ options_.preconditioner_type == JACOBI ||
+ options_.preconditioner_type == SCHUR_POWER_SERIES_EXPANSION;
+
// Initialize temporary storage and compute the block diagonals of
// E'E and F'E.
- if (block_diagonal_EtE_inverse_ == NULL) {
- block_diagonal_EtE_inverse_.reset(A_->CreateBlockDiagonalEtE());
- if (options_.preconditioner_type == JACOBI) {
- block_diagonal_FtF_inverse_.reset(A_->CreateBlockDiagonalFtF());
+ if (block_diagonal_EtE_inverse_ == nullptr) {
+ block_diagonal_EtE_inverse_ = A_->CreateBlockDiagonalEtE();
+ if (compute_ftf_inverse_) {
+ block_diagonal_FtF_inverse_ = A_->CreateBlockDiagonalFtF();
}
rhs_.resize(A_->num_cols_f());
rhs_.setZero();
@@ -74,7 +78,7 @@
tmp_f_cols_.resize(A_->num_cols_f());
} else {
A_->UpdateBlockDiagonalEtE(block_diagonal_EtE_inverse_.get());
- if (options_.preconditioner_type == JACOBI) {
+ if (compute_ftf_inverse_) {
A_->UpdateBlockDiagonalFtF(block_diagonal_FtF_inverse_.get());
}
}
@@ -83,8 +87,8 @@
// contributions from the diagonal D if it is non-null. Add that to
// the block diagonals and invert them.
AddDiagonalAndInvert(D_, block_diagonal_EtE_inverse_.get());
- if (options_.preconditioner_type == JACOBI) {
- AddDiagonalAndInvert((D_ == NULL) ? NULL : D_ + A_->num_cols_e(),
+ if (compute_ftf_inverse_) {
+ AddDiagonalAndInvert((D_ == nullptr) ? nullptr : D_ + A_->num_cols_e(),
block_diagonal_FtF_inverse_.get());
}
@@ -99,36 +103,74 @@
// By breaking it down into individual matrix vector products
// involving the matrices E and F. This is implemented using a
// PartitionedMatrixView of the input matrix A.
-void ImplicitSchurComplement::RightMultiply(const double* x, double* y) const {
+void ImplicitSchurComplement::RightMultiplyAndAccumulate(const double* x,
+ double* y) const {
// y1 = F x
- tmp_rows_.setZero();
- A_->RightMultiplyF(x, tmp_rows_.data());
+ ParallelSetZero(options_.context, options_.num_threads, tmp_rows_);
+ A_->RightMultiplyAndAccumulateF(x, tmp_rows_.data());
// y2 = E' y1
- tmp_e_cols_.setZero();
- A_->LeftMultiplyE(tmp_rows_.data(), tmp_e_cols_.data());
+ ParallelSetZero(options_.context, options_.num_threads, tmp_e_cols_);
+ A_->LeftMultiplyAndAccumulateE(tmp_rows_.data(), tmp_e_cols_.data());
// y3 = -(E'E)^-1 y2
- tmp_e_cols_2_.setZero();
- block_diagonal_EtE_inverse_->RightMultiply(tmp_e_cols_.data(),
- tmp_e_cols_2_.data());
- tmp_e_cols_2_ *= -1.0;
+ ParallelSetZero(options_.context, options_.num_threads, tmp_e_cols_2_);
+ block_diagonal_EtE_inverse_->RightMultiplyAndAccumulate(tmp_e_cols_.data(),
+ tmp_e_cols_2_.data(),
+ options_.context,
+ options_.num_threads);
+
+ ParallelAssign(
+ options_.context, options_.num_threads, tmp_e_cols_2_, -tmp_e_cols_2_);
// y1 = y1 + E y3
- A_->RightMultiplyE(tmp_e_cols_2_.data(), tmp_rows_.data());
+ A_->RightMultiplyAndAccumulateE(tmp_e_cols_2_.data(), tmp_rows_.data());
// y5 = D * x
- if (D_ != NULL) {
+ if (D_ != nullptr) {
ConstVectorRef Dref(D_ + A_->num_cols_e(), num_cols());
- VectorRef(y, num_cols()) =
- (Dref.array().square() * ConstVectorRef(x, num_cols()).array())
- .matrix();
+ VectorRef y_cols(y, num_cols());
+ ParallelAssign(
+ options_.context,
+ options_.num_threads,
+ y_cols,
+ (Dref.array().square() * ConstVectorRef(x, num_cols()).array()));
} else {
- VectorRef(y, num_cols()).setZero();
+ ParallelSetZero(options_.context, options_.num_threads, y, num_cols());
}
// y = y5 + F' y1
- A_->LeftMultiplyF(tmp_rows_.data(), y);
+ A_->LeftMultiplyAndAccumulateF(tmp_rows_.data(), y);
+}
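+
+// Composed, the steps above compute (a sketch, with D'D the squared diagonal):
+//
+//   y = (F'F - F'E (E'E)^-1 E'F + D'D) x
+//
+// i.e. the Schur complement is applied to x without ever being formed.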
+
+void ImplicitSchurComplement::InversePowerSeriesOperatorRightMultiplyAccumulate(
+ const double* x, double* y) const {
+ CHECK(compute_ftf_inverse_);
+ // y1 = F x
+ ParallelSetZero(options_.context, options_.num_threads, tmp_rows_);
+ A_->RightMultiplyAndAccumulateF(x, tmp_rows_.data());
+
+ // y2 = E' y1
+ ParallelSetZero(options_.context, options_.num_threads, tmp_e_cols_);
+ A_->LeftMultiplyAndAccumulateE(tmp_rows_.data(), tmp_e_cols_.data());
+
+ // y3 = (E'E)^-1 y2
+ ParallelSetZero(options_.context, options_.num_threads, tmp_e_cols_2_);
+ block_diagonal_EtE_inverse_->RightMultiplyAndAccumulate(tmp_e_cols_.data(),
+ tmp_e_cols_2_.data(),
+ options_.context,
+ options_.num_threads);
+ // y1 = E y3
+ ParallelSetZero(options_.context, options_.num_threads, tmp_rows_);
+ A_->RightMultiplyAndAccumulateE(tmp_e_cols_2_.data(), tmp_rows_.data());
+
+ // y4 = F' y1
+ ParallelSetZero(options_.context, options_.num_threads, tmp_f_cols_);
+ A_->LeftMultiplyAndAccumulateF(tmp_rows_.data(), tmp_f_cols_.data());
+
+ // y += (F'F)^-1 y4
+ block_diagonal_FtF_inverse_->RightMultiplyAndAccumulate(
+ tmp_f_cols_.data(), y, options_.context, options_.num_threads);
}
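
// In aggregate, the steps above apply (sketch):
//
//   y += (F'F)^-1 F' E (E'E)^-1 E' F x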
// Given a block diagonal matrix and an optional array of diagonal
@@ -138,26 +180,31 @@
const double* D, BlockSparseMatrix* block_diagonal) {
const CompressedRowBlockStructure* block_diagonal_structure =
block_diagonal->block_structure();
- for (int r = 0; r < block_diagonal_structure->rows.size(); ++r) {
- const int row_block_pos = block_diagonal_structure->rows[r].block.position;
- const int row_block_size = block_diagonal_structure->rows[r].block.size;
- const Cell& cell = block_diagonal_structure->rows[r].cells[0];
- MatrixRef m(block_diagonal->mutable_values() + cell.position,
- row_block_size,
- row_block_size);
+ ParallelFor(options_.context,
+ 0,
+ block_diagonal_structure->rows.size(),
+ options_.num_threads,
+ [block_diagonal_structure, D, block_diagonal](int row_block_id) {
+ auto& row = block_diagonal_structure->rows[row_block_id];
+ const int row_block_pos = row.block.position;
+ const int row_block_size = row.block.size;
+ const Cell& cell = row.cells[0];
+ MatrixRef m(block_diagonal->mutable_values() + cell.position,
+ row_block_size,
+ row_block_size);
- if (D != NULL) {
- ConstVectorRef d(D + row_block_pos, row_block_size);
- m += d.array().square().matrix().asDiagonal();
- }
+ if (D != nullptr) {
+ ConstVectorRef d(D + row_block_pos, row_block_size);
+ m += d.array().square().matrix().asDiagonal();
+ }
- m = m.selfadjointView<Eigen::Upper>().llt().solve(
- Matrix::Identity(row_block_size, row_block_size));
- }
+ m = m.selfadjointView<Eigen::Upper>().llt().solve(
+ Matrix::Identity(row_block_size, row_block_size));
+ });
}
-// Similar to RightMultiply, use the block structure of the matrix A
-// to compute y = (E'E)^-1 (E'b - E'F x).
+// Similar to RightMultiplyAndAccumulate, use the block structure of the matrix
+// A to compute y = (E'E)^-1 (E'b - E'F x).
void ImplicitSchurComplement::BackSubstitute(const double* x, double* y) {
const int num_cols_e = A_->num_cols_e();
const int num_cols_f = A_->num_cols_f();
@@ -165,26 +212,34 @@
const int num_rows = A_->num_rows();
// y1 = F x
- tmp_rows_.setZero();
- A_->RightMultiplyF(x, tmp_rows_.data());
+ ParallelSetZero(options_.context, options_.num_threads, tmp_rows_);
+ A_->RightMultiplyAndAccumulateF(x, tmp_rows_.data());
// y2 = b - y1
- tmp_rows_ = ConstVectorRef(b_, num_rows) - tmp_rows_;
+ ParallelAssign(options_.context,
+ options_.num_threads,
+ tmp_rows_,
+ ConstVectorRef(b_, num_rows) - tmp_rows_);
// y3 = E' y2
- tmp_e_cols_.setZero();
- A_->LeftMultiplyE(tmp_rows_.data(), tmp_e_cols_.data());
+ ParallelSetZero(options_.context, options_.num_threads, tmp_e_cols_);
+ A_->LeftMultiplyAndAccumulateE(tmp_rows_.data(), tmp_e_cols_.data());
// y = (E'E)^-1 y3
- VectorRef(y, num_cols).setZero();
- block_diagonal_EtE_inverse_->RightMultiply(tmp_e_cols_.data(), y);
+ ParallelSetZero(options_.context, options_.num_threads, y, num_cols);
+ block_diagonal_EtE_inverse_->RightMultiplyAndAccumulate(
+ tmp_e_cols_.data(), y, options_.context, options_.num_threads);
// The full solution vector y has two blocks. The first block of
// variables corresponds to the eliminated variables, which we just
// computed via back substitution. The second block of variables
// corresponds to the Schur complement system, so we just copy those
// values from the solution to the Schur complement.
- VectorRef(y + num_cols_e, num_cols_f) = ConstVectorRef(x, num_cols_f);
+ VectorRef y_cols_f(y + num_cols_e, num_cols_f);
+ ParallelAssign(options_.context,
+ options_.num_threads,
+ y_cols_f,
+ ConstVectorRef(x, num_cols_f));
}
// Compute the RHS of the Schur complement system.
@@ -195,24 +250,29 @@
// this using a series of matrix vector products.
void ImplicitSchurComplement::UpdateRhs() {
// y1 = E'b
- tmp_e_cols_.setZero();
- A_->LeftMultiplyE(b_, tmp_e_cols_.data());
+ ParallelSetZero(options_.context, options_.num_threads, tmp_e_cols_);
+ A_->LeftMultiplyAndAccumulateE(b_, tmp_e_cols_.data());
// y2 = (E'E)^-1 y1
- Vector y2 = Vector::Zero(A_->num_cols_e());
- block_diagonal_EtE_inverse_->RightMultiply(tmp_e_cols_.data(), y2.data());
+ ParallelSetZero(options_.context, options_.num_threads, tmp_e_cols_2_);
+ block_diagonal_EtE_inverse_->RightMultiplyAndAccumulate(tmp_e_cols_.data(),
+ tmp_e_cols_2_.data(),
+ options_.context,
+ options_.num_threads);
// y3 = E y2
- tmp_rows_.setZero();
- A_->RightMultiplyE(y2.data(), tmp_rows_.data());
+ ParallelSetZero(options_.context, options_.num_threads, tmp_rows_);
+ A_->RightMultiplyAndAccumulateE(tmp_e_cols_2_.data(), tmp_rows_.data());
// y3 = b - y3
- tmp_rows_ = ConstVectorRef(b_, A_->num_rows()) - tmp_rows_;
+ ParallelAssign(options_.context,
+ options_.num_threads,
+ tmp_rows_,
+ ConstVectorRef(b_, A_->num_rows()) - tmp_rows_);
// rhs = F' y3
- rhs_.setZero();
- A_->LeftMultiplyF(tmp_rows_.data(), rhs_.data());
+ ParallelSetZero(options_.context, options_.num_threads, rhs_);
+ A_->LeftMultiplyAndAccumulateF(tmp_rows_.data(), rhs_.data());
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
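BackSubstitute and UpdateRhs above are likewise built purely from matrix-vector products. A hedged dense sketch of the two formulas they implement, y = (E'E)^-1 (E'b - E'F x) for the eliminated block and rhs = F'(b - E (E'E)^-1 E'b) for the reduced system; function names are illustrative, not the Ceres API.

#include "Eigen/Dense"

// Recover the eliminated e-block variables given a Schur complement solution x.
Eigen::VectorXd BackSubstituteSketch(const Eigen::MatrixXd& E,
                                     const Eigen::MatrixXd& F,
                                     const Eigen::VectorXd& b,
                                     const Eigen::VectorXd& x) {
  const Eigen::VectorXd y_e =
      (E.transpose() * E).llt().solve(E.transpose() * (b - F * x));
  Eigen::VectorXd y(E.cols() + F.cols());
  y.head(E.cols()) = y_e;  // eliminated variables
  y.tail(F.cols()) = x;    // Schur complement variables copied through
  return y;
}

// Right hand side of the reduced (Schur complement) system.
Eigen::VectorXd ReducedRhsSketch(const Eigen::MatrixXd& E,
                                 const Eigen::MatrixXd& F,
                                 const Eigen::VectorXd& b) {
  const Eigen::VectorXd projected =
      E * (E.transpose() * E).llt().solve(E.transpose() * b);
  return F.transpose() * (b - projected);
}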
diff --git a/internal/ceres/implicit_schur_complement.h b/internal/ceres/implicit_schur_complement.h
index e83892a..b4eb0b0 100644
--- a/internal/ceres/implicit_schur_complement.h
+++ b/internal/ceres/implicit_schur_complement.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,15 +36,15 @@
#include <memory>
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_operator.h"
#include "ceres/linear_solver.h"
#include "ceres/partitioned_matrix_view.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class BlockSparseMatrix;
@@ -81,14 +81,14 @@
// (which for our purposes is an easily inverted block diagonal
// matrix), it can be done in terms of matrix vector products with E,
// F and (E'E)^-1. This class implements this functionality and other
-// auxilliary bits needed to implement a CG solver on the Schur
+// auxiliary bits needed to implement a CG solver on the Schur
// complement using the PartitionedMatrixView object.
//
-// THREAD SAFETY: This class is nqot thread safe. In particular, the
-// RightMultiply (and the LeftMultiply) methods are not thread safe as
-// they depend on mutable arrays used for the temporaries needed to
-// compute the product y += Sx;
-class CERES_EXPORT_INTERNAL ImplicitSchurComplement : public LinearOperator {
+// THREAD SAFETY: This class is not thread safe. In particular, the
+// RightMultiplyAndAccumulate (and the LeftMultiplyAndAccumulate) methods are
+// not thread safe as they depend on mutable arrays used for the temporaries
+// needed to compute the product y += Sx;
+class CERES_NO_EXPORT ImplicitSchurComplement final : public LinearOperator {
public:
// num_eliminate_blocks is the number of E blocks in the matrix
// A.
@@ -100,7 +100,6 @@
// TODO(sameeragarwal): Get rid of the two bools below and replace
// them with enums.
explicit ImplicitSchurComplement(const LinearSolver::Options& options);
- virtual ~ImplicitSchurComplement();
// Initialize the Schur complement for a linear least squares
// problem of the form
@@ -115,14 +114,20 @@
void Init(const BlockSparseMatrix& A, const double* D, const double* b);
// y += Sx, where S is the Schur complement.
- void RightMultiply(const double* x, double* y) const final;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final;
// The Schur complement is a symmetric positive definite matrix,
// thus the left and right multiply operators are the same.
- void LeftMultiply(const double* x, double* y) const final {
- RightMultiply(x, y);
+ void LeftMultiplyAndAccumulate(const double* x, double* y) const final {
+ RightMultiplyAndAccumulate(x, y);
}
+ // Following is useful for approximation of S^-1 via power series expansion.
+ // Z = (F'F)^-1 F'E (E'E)^-1 E'F
+ // y += Zx
+ void InversePowerSeriesOperatorRightMultiplyAccumulate(const double* x,
+ double* y) const;
+
// y = (E'E)^-1 (E'b - E'F x). Given an estimate of the solution to
// the Schur complement system, this method computes the value of
// the e_block variables that were eliminated to form the Schur
@@ -138,6 +143,7 @@
}
const BlockSparseMatrix* block_diagonal_FtF_inverse() const {
+ CHECK(compute_ftf_inverse_);
return block_diagonal_FtF_inverse_.get();
}
@@ -146,24 +152,25 @@
void UpdateRhs();
const LinearSolver::Options& options_;
-
+ bool compute_ftf_inverse_ = false;
std::unique_ptr<PartitionedMatrixViewBase> A_;
- const double* D_;
- const double* b_;
+ const double* D_ = nullptr;
+ const double* b_ = nullptr;
std::unique_ptr<BlockSparseMatrix> block_diagonal_EtE_inverse_;
std::unique_ptr<BlockSparseMatrix> block_diagonal_FtF_inverse_;
Vector rhs_;
- // Temporary storage vectors used to implement RightMultiply.
+ // Temporary storage vectors used to implement RightMultiplyAndAccumulate.
mutable Vector tmp_rows_;
mutable Vector tmp_e_cols_;
mutable Vector tmp_e_cols_2_;
mutable Vector tmp_f_cols_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_IMPLICIT_SCHUR_COMPLEMENT_H_
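The class comment above describes evaluating y += S x, with S the Schur complement of the e-block variables (S = F'F - F'E (E'E)^-1 E'F, damping omitted here), using only matrix-vector products with E, F and (E'E)^-1. A hedged dense Eigen illustration of that product chain; a standalone sketch, not the Ceres implementation.

#include "Eigen/Dense"

Eigen::VectorXd SchurProductSketch(const Eigen::MatrixXd& E,
                                   const Eigen::MatrixXd& F,
                                   const Eigen::VectorXd& x) {
  const Eigen::VectorXd Fx = F * x;  // F x
  // w = (E'E)^-1 E'F x
  const Eigen::VectorXd w =
      (E.transpose() * E).llt().solve(E.transpose() * Fx);
  // S x = F'F x - F'E (E'E)^-1 E'F x
  return F.transpose() * (Fx - E * w);
}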
diff --git a/internal/ceres/implicit_schur_complement_test.cc b/internal/ceres/implicit_schur_complement_test.cc
index b6d886f..35519fc 100644
--- a/internal/ceres/implicit_schur_complement_test.cc
+++ b/internal/ceres/implicit_schur_complement_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -47,8 +47,7 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
using testing::AssertionResult;
@@ -57,13 +56,12 @@
class ImplicitSchurComplementTest : public ::testing::Test {
protected:
void SetUp() final {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(2));
+ auto problem = CreateLinearLeastSquaresProblemFromId(2);
CHECK(problem != nullptr);
A_.reset(down_cast<BlockSparseMatrix*>(problem->A.release()));
- b_.reset(problem->b.release());
- D_.reset(problem->D.release());
+ b_ = std::move(problem->b);
+ D_ = std::move(problem->D);
num_cols_ = A_->num_cols();
num_rows_ = A_->num_rows();
@@ -76,12 +74,8 @@
Vector* solution) {
const CompressedRowBlockStructure* bs = A_->block_structure();
const int num_col_blocks = bs->cols.size();
- std::vector<int> blocks(num_col_blocks - num_eliminate_blocks_, 0);
- for (int i = num_eliminate_blocks_; i < num_col_blocks; ++i) {
- blocks[i - num_eliminate_blocks_] = bs->cols[i].size;
- }
-
- BlockRandomAccessDenseMatrix blhs(blocks);
+ auto blocks = Tail(bs->cols, num_col_blocks - num_eliminate_blocks_);
+ BlockRandomAccessDenseMatrix blhs(blocks, &context_, 1);
const int num_schur_rows = blhs.num_rows();
LinearSolver::Options options;
@@ -90,8 +84,8 @@
ContextImpl context;
options.context = &context;
- std::unique_ptr<SchurEliminatorBase> eliminator(
- SchurEliminatorBase::Create(options));
+ std::unique_ptr<SchurEliminatorBase> eliminator =
+ SchurEliminatorBase::Create(options);
CHECK(eliminator != nullptr);
const bool kFullRankETE = true;
eliminator->Init(num_eliminate_blocks_, kFullRankETE, bs);
@@ -137,18 +131,38 @@
ImplicitSchurComplement isc(options);
isc.Init(*A_, D, b_.get());
- int num_sc_cols = lhs.cols();
+ const int num_f_cols = lhs.cols();
+ const int num_e_cols = num_cols_ - num_f_cols;
- for (int i = 0; i < num_sc_cols; ++i) {
- Vector x(num_sc_cols);
+ Matrix A_dense, E, F, DE, DF;
+ A_->ToDenseMatrix(&A_dense);
+ E = A_dense.leftCols(A_->num_cols() - num_f_cols);
+ F = A_dense.rightCols(num_f_cols);
+ if (D) {
+ DE = VectorRef(D, num_e_cols).asDiagonal();
+ DF = VectorRef(D + num_e_cols, num_f_cols).asDiagonal();
+ } else {
+ DE = Matrix::Zero(num_e_cols, num_e_cols);
+ DF = Matrix::Zero(num_f_cols, num_f_cols);
+ }
+
+ // Z = (block_diagonal(F'F))^-1 F'E (E'E)^-1 E'F
+ // Here, assuming that block_diagonal(F'F) == diagonal(F'F)
+ Matrix Z_reference =
+ (F.transpose() * F + DF).diagonal().asDiagonal().inverse() *
+ F.transpose() * E * (E.transpose() * E + DE).inverse() * E.transpose() *
+ F;
+
+ for (int i = 0; i < num_f_cols; ++i) {
+ Vector x(num_f_cols);
x.setZero();
x(i) = 1.0;
- Vector y(num_sc_cols);
+ Vector y(num_f_cols);
y = lhs * x;
- Vector z(num_sc_cols);
- isc.RightMultiply(x.data(), z.data());
+ Vector z(num_f_cols);
+ isc.RightMultiplyAndAccumulate(x.data(), z.data());
// The i^th column of the implicit schur complement is the same as
// the explicit schur complement.
@@ -158,6 +172,22 @@
<< "column " << i << ". explicit: " << y.transpose()
<< " implicit: " << z.transpose();
}
+
+ y.setZero();
+ y = Z_reference * x;
+ z.setZero();
+ isc.InversePowerSeriesOperatorRightMultiplyAccumulate(x.data(), z.data());
+
+ // The i^th column of operator Z stored implicitly is the same as its
+ // explicit version.
+ if ((y - z).norm() > kEpsilon) {
+ return testing::AssertionFailure()
+ << "Explicit and Implicit operators used to approximate the "
+ "inversion of schur complement via power series expansion "
+ "differ in column "
+ << i << ". explicit: " << y.transpose()
+ << " implicit: " << z.transpose();
+ }
}
// Compare the rhs of the reduced linear system
@@ -186,6 +216,7 @@
return testing::AssertionSuccess();
}
+ ContextImpl context_;
int num_rows_;
int num_cols_;
int num_eliminate_blocks_;
@@ -202,9 +233,8 @@
// We do this with and without regularization to check that the
// support for the LM diagonal is correct.
TEST_F(ImplicitSchurComplementTest, SchurMatrixValuesTest) {
- EXPECT_TRUE(TestImplicitSchurComplement(NULL));
+ EXPECT_TRUE(TestImplicitSchurComplement(nullptr));
EXPECT_TRUE(TestImplicitSchurComplement(D_.get()));
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
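The test above recovers the columns of the implicit operators by multiplying them with unit vectors and comparing against the explicit matrices. A hedged sketch of that probing pattern for any matrix-free operator with a RightMultiplyAndAccumulate-style signature; the helper name is illustrative.

#include <functional>

#include "Eigen/Dense"

// Build a dense n x n matrix from an operator that can only be applied to vectors.
Eigen::MatrixXd DensifyOperatorSketch(
    int n, const std::function<void(const double*, double*)>& apply) {
  Eigen::MatrixXd dense = Eigen::MatrixXd::Zero(n, n);
  for (int i = 0; i < n; ++i) {
    Eigen::VectorXd e_i = Eigen::VectorXd::Zero(n);
    e_i(i) = 1.0;
    Eigen::VectorXd column = Eigen::VectorXd::Zero(n);
    apply(e_i.data(), column.data());  // accumulate column i of the operator
    dense.col(i) = column;
  }
  return dense;
}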
diff --git a/internal/ceres/inner_product_computer.cc b/internal/ceres/inner_product_computer.cc
index ef38b7b..59b5d94 100644
--- a/internal/ceres/inner_product_computer.cc
+++ b/internal/ceres/inner_product_computer.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,11 +31,11 @@
#include "ceres/inner_product_computer.h"
#include <algorithm>
+#include <memory>
#include "ceres/small_blas.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Create the CompressedRowSparseMatrix matrix that will contain the
// inner product.
@@ -44,22 +44,16 @@
// or the lower triangular part of the product.
//
// num_nonzeros is the number of non-zeros in the result matrix.
-CompressedRowSparseMatrix* InnerProductComputer::CreateResultMatrix(
+std::unique_ptr<CompressedRowSparseMatrix>
+InnerProductComputer::CreateResultMatrix(
const CompressedRowSparseMatrix::StorageType storage_type,
const int num_nonzeros) {
- CompressedRowSparseMatrix* matrix =
- new CompressedRowSparseMatrix(m_.num_cols(), m_.num_cols(), num_nonzeros);
+ auto matrix = std::make_unique<CompressedRowSparseMatrix>(
+ m_.num_cols(), m_.num_cols(), num_nonzeros);
matrix->set_storage_type(storage_type);
-
const CompressedRowBlockStructure* bs = m_.block_structure();
- const std::vector<Block>& blocks = bs->cols;
- matrix->mutable_row_blocks()->resize(blocks.size());
- matrix->mutable_col_blocks()->resize(blocks.size());
- for (int i = 0; i < blocks.size(); ++i) {
- (*(matrix->mutable_row_blocks()))[i] = blocks[i].size;
- (*(matrix->mutable_col_blocks()))[i] = blocks[i].size;
- }
-
+ *matrix->mutable_row_blocks() = bs->cols;
+ *matrix->mutable_col_blocks() = bs->cols;
return matrix;
}
@@ -76,6 +70,10 @@
row_nnz->resize(blocks.size());
std::fill(row_nnz->begin(), row_nnz->end(), 0);
+ if (product_terms.empty()) {
+ return 0;
+ }
+
// First product term.
(*row_nnz)[product_terms[0].row] = blocks[product_terms[0].col].size;
int num_nonzeros =
@@ -116,24 +114,26 @@
//
// product_storage_type controls the form of the output matrix. It
// can be LOWER_TRIANGULAR or UPPER_TRIANGULAR.
-InnerProductComputer* InnerProductComputer::Create(
+std::unique_ptr<InnerProductComputer> InnerProductComputer::Create(
const BlockSparseMatrix& m,
CompressedRowSparseMatrix::StorageType product_storage_type) {
return InnerProductComputer::Create(
m, 0, m.block_structure()->rows.size(), product_storage_type);
}
-InnerProductComputer* InnerProductComputer::Create(
+std::unique_ptr<InnerProductComputer> InnerProductComputer::Create(
const BlockSparseMatrix& m,
const int start_row_block,
const int end_row_block,
CompressedRowSparseMatrix::StorageType product_storage_type) {
- CHECK(product_storage_type == CompressedRowSparseMatrix::LOWER_TRIANGULAR ||
- product_storage_type == CompressedRowSparseMatrix::UPPER_TRIANGULAR);
+ CHECK(product_storage_type ==
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR ||
+ product_storage_type ==
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR);
CHECK_GT(m.num_nonzeros(), 0)
<< "Congratulations, you found a bug in Ceres. Please report it.";
- InnerProductComputer* inner_product_computer =
- new InnerProductComputer(m, start_row_block, end_row_block);
+ std::unique_ptr<InnerProductComputer> inner_product_computer(
+ new InnerProductComputer(m, start_row_block, end_row_block));
inner_product_computer->Init(product_storage_type);
return inner_product_computer;
}
@@ -155,7 +155,8 @@
for (int c1 = 0; c1 < row.cells.size(); ++c1) {
const Cell& cell1 = row.cells[c1];
int c2_begin, c2_end;
- if (product_storage_type == CompressedRowSparseMatrix::LOWER_TRIANGULAR) {
+ if (product_storage_type ==
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR) {
c2_begin = 0;
c2_end = c1 + 1;
} else {
@@ -165,8 +166,8 @@
for (int c2 = c2_begin; c2 < c2_end; ++c2) {
const Cell& cell2 = row.cells[c2];
- product_terms.push_back(InnerProductComputer::ProductTerm(
- cell1.block_id, cell2.block_id, product_terms.size()));
+ product_terms.emplace_back(
+ cell1.block_id, cell2.block_id, product_terms.size());
}
}
}
@@ -183,7 +184,7 @@
std::vector<int> row_block_nnz;
const int num_nonzeros = ComputeNonzeros(product_terms, &row_block_nnz);
- result_.reset(CreateResultMatrix(product_storage_type, num_nonzeros));
+ result_ = CreateResultMatrix(product_storage_type, num_nonzeros);
// Populate the row non-zero counts in the result matrix.
int* crsm_rows = result_->mutable_rows();
@@ -193,6 +194,10 @@
*(crsm_rows + 1) = *crsm_rows + row_block_nnz[i];
}
}
+ result_offsets_.resize(product_terms.size());
+ if (num_nonzeros == 0) {
+ return;
+ }
// The following macro FILL_CRSM_COL_BLOCK is key to understanding
// how this class works.
@@ -239,12 +244,11 @@
} \
}
- result_offsets_.resize(product_terms.size());
int col_nnz = 0;
int nnz = 0;
// Process the first term.
- const InnerProductComputer::ProductTerm* current = &product_terms[0];
+ const InnerProductComputer::ProductTerm* current = product_terms.data();
FILL_CRSM_COL_BLOCK;
// Process the rest of the terms.
@@ -262,7 +266,7 @@
if (previous->row == current->row) {
// if the current and previous terms are in the same row block,
// then they differ in the column block, in which case advance
- // col_nnz by the column size of the prevous term.
+ // col_nnz by the column size of the previous term.
col_nnz += col_blocks[previous->col].size;
} else {
      // If we have moved to a new row-block, then col_nnz is zero,
@@ -300,7 +304,8 @@
rows[bs->cols[cell1.block_id].position];
int c2_begin, c2_end;
- if (storage_type == CompressedRowSparseMatrix::LOWER_TRIANGULAR) {
+ if (storage_type ==
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR) {
c2_begin = 0;
c2_end = c1 + 1;
} else {
@@ -328,5 +333,4 @@
CHECK_EQ(cursor, result_offsets_.size());
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
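InnerProductComputer forms m' * m and, because the result is symmetric, stores only one triangular half. A hedged dense Eigen sketch of that convention; the real class works on the block-sparse structure and a fixed sparsity pattern.

#include "Eigen/Dense"

Eigen::MatrixXd UpperInnerProductSketch(const Eigen::MatrixXd& m) {
  Eigen::MatrixXd result = Eigen::MatrixXd::Zero(m.cols(), m.cols());
  // Only the upper triangular half is populated; consumers can recover the
  // full symmetric matrix through a selfadjointView.
  result.triangularView<Eigen::Upper>() = m.transpose() * m;
  return result;
}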
diff --git a/internal/ceres/inner_product_computer.h b/internal/ceres/inner_product_computer.h
index 04ec1d1..c1c0a34 100644
--- a/internal/ceres/inner_product_computer.h
+++ b/internal/ceres/inner_product_computer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,10 +36,10 @@
#include "ceres/block_sparse_matrix.h"
#include "ceres/compressed_row_sparse_matrix.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// This class is used to repeatedly compute the inner product
//
@@ -61,7 +61,7 @@
// This is not a problem as sparse linear algebra libraries can ignore
// these entries with ease and the space used is minimal/linear in the
// size of the matrices.
-class CERES_EXPORT_INTERNAL InnerProductComputer {
+class CERES_NO_EXPORT InnerProductComputer {
public:
// Factory
//
@@ -74,7 +74,7 @@
//
// The user must ensure that the matrix m is valid for the life time
// of this object.
- static InnerProductComputer* Create(
+ static std::unique_ptr<InnerProductComputer> Create(
const BlockSparseMatrix& m,
CompressedRowSparseMatrix::StorageType storage_type);
@@ -83,7 +83,7 @@
//
// a = m(start_row_block : end_row_block, :);
// result = a' * a;
- static InnerProductComputer* Create(
+ static std::unique_ptr<InnerProductComputer> Create(
const BlockSparseMatrix& m,
int start_row_block,
int end_row_block,
@@ -127,7 +127,7 @@
void Init(CompressedRowSparseMatrix::StorageType storage_type);
- CompressedRowSparseMatrix* CreateResultMatrix(
+ std::unique_ptr<CompressedRowSparseMatrix> CreateResultMatrix(
const CompressedRowSparseMatrix::StorageType storage_type,
int num_nonzeros);
@@ -152,7 +152,8 @@
std::vector<int> result_offsets_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_INNER_PRODUCT_COMPUTER_H_
diff --git a/internal/ceres/inner_product_computer_test.cc b/internal/ceres/inner_product_computer_test.cc
index ac564f4..89fe518 100644
--- a/internal/ceres/inner_product_computer_test.cc
+++ b/internal/ceres/inner_product_computer_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,11 +32,11 @@
#include <memory>
#include <numeric>
+#include <random>
#include "Eigen/SparseCore"
#include "ceres/block_sparse_matrix.h"
#include "ceres/internal/eigen.h"
-#include "ceres/random.h"
#include "ceres/triplet_sparse_matrix.h"
#include "glog/logging.h"
#include "gtest/gtest.h"
@@ -44,44 +44,44 @@
namespace ceres {
namespace internal {
-#define COMPUTE_AND_COMPARE \
- { \
- inner_product_computer->Compute(); \
- CompressedRowSparseMatrix* actual_product_crsm = \
- inner_product_computer->mutable_result(); \
- Matrix actual_inner_product = \
- Eigen::MappedSparseMatrix<double, Eigen::ColMajor>( \
- actual_product_crsm->num_rows(), \
- actual_product_crsm->num_rows(), \
- actual_product_crsm->num_nonzeros(), \
- actual_product_crsm->mutable_rows(), \
- actual_product_crsm->mutable_cols(), \
- actual_product_crsm->mutable_values()); \
- EXPECT_EQ(actual_inner_product.rows(), actual_inner_product.cols()); \
- EXPECT_EQ(expected_inner_product.rows(), expected_inner_product.cols()); \
- EXPECT_EQ(actual_inner_product.rows(), expected_inner_product.rows()); \
- Matrix expected_t, actual_t; \
- if (actual_product_crsm->storage_type() == \
- CompressedRowSparseMatrix::LOWER_TRIANGULAR) { \
- expected_t = expected_inner_product.triangularView<Eigen::Upper>(); \
- actual_t = actual_inner_product.triangularView<Eigen::Upper>(); \
- } else { \
- expected_t = expected_inner_product.triangularView<Eigen::Lower>(); \
- actual_t = actual_inner_product.triangularView<Eigen::Lower>(); \
- } \
- EXPECT_LE((expected_t - actual_t).norm() / actual_t.norm(), \
- 100 * std::numeric_limits<double>::epsilon()) \
- << "expected: \n" \
- << expected_t << "\nactual: \n" \
- << actual_t; \
+#define COMPUTE_AND_COMPARE \
+ { \
+ inner_product_computer->Compute(); \
+ CompressedRowSparseMatrix* actual_product_crsm = \
+ inner_product_computer->mutable_result(); \
+ Matrix actual_inner_product = \
+ Eigen::Map<Eigen::SparseMatrix<double, Eigen::ColMajor>>( \
+ actual_product_crsm->num_rows(), \
+ actual_product_crsm->num_rows(), \
+ actual_product_crsm->num_nonzeros(), \
+ actual_product_crsm->mutable_rows(), \
+ actual_product_crsm->mutable_cols(), \
+ actual_product_crsm->mutable_values()); \
+ EXPECT_EQ(actual_inner_product.rows(), actual_inner_product.cols()); \
+ EXPECT_EQ(expected_inner_product.rows(), expected_inner_product.cols()); \
+ EXPECT_EQ(actual_inner_product.rows(), expected_inner_product.rows()); \
+ Matrix expected_t, actual_t; \
+ if (actual_product_crsm->storage_type() == \
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR) { \
+ expected_t = expected_inner_product.triangularView<Eigen::Upper>(); \
+ actual_t = actual_inner_product.triangularView<Eigen::Upper>(); \
+ } else { \
+ expected_t = expected_inner_product.triangularView<Eigen::Lower>(); \
+ actual_t = actual_inner_product.triangularView<Eigen::Lower>(); \
+ } \
+ EXPECT_LE((expected_t - actual_t).norm(), \
+ 100 * std::numeric_limits<double>::epsilon() * actual_t.norm()) \
+ << "expected: \n" \
+ << expected_t << "\nactual: \n" \
+ << actual_t; \
}
TEST(InnerProductComputer, NormalOperation) {
- // "Randomly generated seed."
- SetRandomState(29823);
const int kMaxNumRowBlocks = 10;
const int kMaxNumColBlocks = 10;
const int kNumTrials = 10;
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> distribution(0.01, 1.0);
// Create a random matrix, compute its outer product using Eigen and
// ComputeOuterProduct. Convert both matrices to dense matrices and
@@ -98,7 +98,7 @@
options.max_row_block_size = 5;
options.min_col_block_size = 1;
options.max_col_block_size = 10;
- options.block_density = std::max(0.1, RandDouble());
+ options.block_density = distribution(prng);
VLOG(2) << "num row blocks: " << options.num_row_blocks;
VLOG(2) << "num col blocks: " << options.num_col_blocks;
@@ -109,7 +109,7 @@
VLOG(2) << "block density: " << options.block_density;
std::unique_ptr<BlockSparseMatrix> random_matrix(
- BlockSparseMatrix::CreateRandomMatrix(options));
+ BlockSparseMatrix::CreateRandomMatrix(options, prng));
TripletSparseMatrix tsm(random_matrix->num_rows(),
random_matrix->num_cols(),
@@ -117,8 +117,7 @@
random_matrix->ToTripletSparseMatrix(&tsm);
std::vector<Eigen::Triplet<double>> triplets;
for (int i = 0; i < tsm.num_nonzeros(); ++i) {
- triplets.push_back(Eigen::Triplet<double>(
- tsm.rows()[i], tsm.cols()[i], tsm.values()[i]));
+ triplets.emplace_back(tsm.rows()[i], tsm.cols()[i], tsm.values()[i]);
}
Eigen::SparseMatrix<double> eigen_random_matrix(
random_matrix->num_rows(), random_matrix->num_cols());
@@ -128,11 +127,13 @@
std::unique_ptr<InnerProductComputer> inner_product_computer;
- inner_product_computer.reset(InnerProductComputer::Create(
- *random_matrix, CompressedRowSparseMatrix::LOWER_TRIANGULAR));
+ inner_product_computer = InnerProductComputer::Create(
+ *random_matrix,
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR);
COMPUTE_AND_COMPARE;
- inner_product_computer.reset(InnerProductComputer::Create(
- *random_matrix, CompressedRowSparseMatrix::UPPER_TRIANGULAR));
+ inner_product_computer = InnerProductComputer::Create(
+ *random_matrix,
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR);
COMPUTE_AND_COMPARE;
}
}
@@ -140,11 +141,11 @@
}
TEST(InnerProductComputer, SubMatrix) {
- // "Randomly generated seed."
- SetRandomState(29823);
const int kNumRowBlocks = 10;
const int kNumColBlocks = 20;
const int kNumTrials = 5;
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> distribution(0.01, 1.0);
// Create a random matrix, compute its outer product using Eigen and
// ComputeInnerProductComputer. Convert both matrices to dense matrices and
@@ -157,7 +158,7 @@
options.max_row_block_size = 5;
options.min_col_block_size = 1;
options.max_col_block_size = 10;
- options.block_density = std::max(0.1, RandDouble());
+ options.block_density = distribution(prng);
VLOG(2) << "num row blocks: " << options.num_row_blocks;
VLOG(2) << "num col blocks: " << options.num_col_blocks;
@@ -168,7 +169,7 @@
VLOG(2) << "block density: " << options.block_density;
std::unique_ptr<BlockSparseMatrix> random_matrix(
- BlockSparseMatrix::CreateRandomMatrix(options));
+ BlockSparseMatrix::CreateRandomMatrix(options, prng));
const std::vector<CompressedRow>& row_blocks =
random_matrix->block_structure()->rows;
@@ -189,8 +190,8 @@
std::vector<Eigen::Triplet<double>> triplets;
for (int i = 0; i < tsm.num_nonzeros(); ++i) {
if (tsm.rows()[i] >= start_row && tsm.rows()[i] < end_row) {
- triplets.push_back(Eigen::Triplet<double>(
- tsm.rows()[i], tsm.cols()[i], tsm.values()[i]));
+ triplets.emplace_back(
+ tsm.rows()[i], tsm.cols()[i], tsm.values()[i]);
}
}
@@ -202,17 +203,17 @@
eigen_random_matrix.transpose() * eigen_random_matrix;
std::unique_ptr<InnerProductComputer> inner_product_computer;
- inner_product_computer.reset(InnerProductComputer::Create(
+ inner_product_computer = InnerProductComputer::Create(
*random_matrix,
start_row_block,
end_row_block,
- CompressedRowSparseMatrix::LOWER_TRIANGULAR));
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR);
COMPUTE_AND_COMPARE;
- inner_product_computer.reset(InnerProductComputer::Create(
+ inner_product_computer = InnerProductComputer::Create(
*random_matrix,
start_row_block,
end_row_block,
- CompressedRowSparseMatrix::UPPER_TRIANGULAR));
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR);
COMPUTE_AND_COMPARE;
}
}
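The reference result in both tests comes from converting the random block-sparse matrix to triplets and letting Eigen's sparse algebra form the product. A hedged sketch of that pattern; the function name is illustrative.

#include <vector>

#include "Eigen/SparseCore"

Eigen::SparseMatrix<double> ReferenceInnerProductSketch(
    int num_rows,
    int num_cols,
    const std::vector<Eigen::Triplet<double>>& triplets) {
  Eigen::SparseMatrix<double> a(num_rows, num_cols);
  a.setFromTriplets(triplets.begin(), triplets.end());
  // expected_inner_product = a' * a, compared per triangular half against the
  // InnerProductComputer output in COMPUTE_AND_COMPARE.
  return Eigen::SparseMatrix<double>(a.transpose() * a);
}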
diff --git a/internal/ceres/integer_sequence_algorithm_test.cc b/internal/ceres/integer_sequence_algorithm_test.cc
index af42a91..9666375 100644
--- a/internal/ceres/integer_sequence_algorithm_test.cc
+++ b/internal/ceres/integer_sequence_algorithm_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -27,28 +27,16 @@
// POSSIBILITY OF SUCH DAMAGE.
//
// Author: jodebo_beck@gmx.de (Johannes Beck)
+// sergiu.deitsch@gmail.com (Sergiu Deitsch)
#include "ceres/internal/integer_sequence_algorithm.h"
#include <type_traits>
#include <utility>
-namespace ceres {
-namespace internal {
+#include "ceres/internal/jet_traits.h"
-// Unit tests for summation of integer sequence.
-static_assert(Sum<std::integer_sequence<int>>::Value == 0,
- "Unit test of summing up an integer sequence failed.");
-static_assert(Sum<std::integer_sequence<int, 2>>::Value == 2,
- "Unit test of summing up an integer sequence failed.");
-static_assert(Sum<std::integer_sequence<int, 2, 3>>::Value == 5,
- "Unit test of summing up an integer sequence failed.");
-static_assert(Sum<std::integer_sequence<int, 2, 3, 10>>::Value == 15,
- "Unit test of summing up an integer sequence failed.");
-static_assert(Sum<std::integer_sequence<int, 2, 3, 10, 4>>::Value == 19,
- "Unit test of summing up an integer sequence failed.");
-static_assert(Sum<std::integer_sequence<int, 2, 3, 10, 4, 1>>::Value == 20,
- "Unit test of summing up an integer sequence failed.");
+namespace ceres::internal {
// Unit tests for exclusive scan of integer sequence.
static_assert(std::is_same<ExclusiveScan<std::integer_sequence<int>>,
@@ -68,5 +56,83 @@
"Unit test of calculating the exclusive scan of an integer "
"sequence failed.");
-} // namespace internal
-} // namespace ceres
+using Ranks001 = Ranks_t<Jet<double, 0>, double, Jet<double, 1>>;
+using Ranks1 = Ranks_t<Jet<double, 1>>;
+using Ranks110 = Ranks_t<Jet<double, 1>, Jet<double, 1>, double>;
+using Ranks023 = Ranks_t<double, Jet<double, 2>, Jet<double, 3>>;
+using EmptyRanks = Ranks_t<>;
+
+// Remove zero from the ranks integer sequence
+using NonZeroRanks001 = RemoveValue_t<Ranks001, 0>;
+using NonZeroRanks1 = RemoveValue_t<Ranks1, 0>;
+using NonZeroRanks110 = RemoveValue_t<Ranks110, 0>;
+using NonZeroRanks023 = RemoveValue_t<Ranks023, 0>;
+
+static_assert(std::is_same<RemoveValue_t<EmptyRanks, 0>,
+ std::integer_sequence<int>>::value,
+ "filtered sequence does not match an empty one");
+static_assert(std::is_same<RemoveValue_t<std::integer_sequence<int, 2, 2>, 2>,
+ std::integer_sequence<int>>::value,
+ "filtered sequence does not match an empty one");
+static_assert(
+ std::is_same<RemoveValue_t<std::integer_sequence<int, 0, 0, 2>, 2>,
+ std::integer_sequence<int, 0, 0>>::value,
+ "filtered sequence does not match the expected one");
+static_assert(
+ std::is_same<RemoveValue_t<std::make_integer_sequence<int, 6>, 7>,
+ std::make_integer_sequence<int, 6>>::value,
+ "sequence not containing the element to remove must not be transformed");
+static_assert(
+ std::is_same<NonZeroRanks001, std::integer_sequence<int, 1>>::value,
+ "sequences do not match");
+static_assert(std::is_same<NonZeroRanks1, std::integer_sequence<int, 1>>::value,
+ "sequences do not match");
+static_assert(
+ std::is_same<NonZeroRanks110, std::integer_sequence<int, 1, 1>>::value,
+ "sequences do not match");
+static_assert(
+ std::is_same<NonZeroRanks023, std::integer_sequence<int, 2, 3>>::value,
+ "sequences do not match");
+static_assert(std::is_same<RemoveValue_t<std::integer_sequence<long>, -1>,
+ std::integer_sequence<long>>::value,
+ "sequences do not match");
+static_assert(
+ std::is_same<RemoveValue_t<std::integer_sequence<short, -2, -3, -1>, -1>,
+ std::integer_sequence<short, -2, -3>>::value,
+ "sequences do not match");
+
+using J = Jet<double, 2>;
+template <typename T>
+using J0 = Jet<T, 0>;
+using J0d = J0<double>;
+
+// Ensure all types match
+static_assert(AreAllSame_v<int, int>, "types must be the same");
+static_assert(AreAllSame_v<long, long, long>, "types must be the same");
+static_assert(AreAllSame_v<J0d, J0d, J0d>, "types must be the same");
+static_assert(!AreAllSame_v<double, int>, "types must not be the same");
+static_assert(!AreAllSame_v<int, short, char>, "types must not be the same");
+
+// Ensure all values in the integer sequence match
+static_assert(AreAllEqual_v<int, 1, 1>,
+ "integer sequence must contain same values");
+static_assert(AreAllEqual_v<long, 2>,
+ "integer sequence must contain one value");
+static_assert(!AreAllEqual_v<short, 3, 4>,
+ "integer sequence must not contain the same values");
+static_assert(!AreAllEqual_v<unsigned, 3, 4, 3>,
+ "integer sequence must not contain the same values");
+static_assert(!AreAllEqual_v<int, 4, 4, 3>,
+ "integer sequence must not contain the same values");
+
+static_assert(IsEmptyOrAreAllEqual_v<std::integer_sequence<short>>,
+ "expected empty sequence is not");
+static_assert(IsEmptyOrAreAllEqual_v<std::integer_sequence<unsigned, 7, 7, 7>>,
+ "expected all equal sequence is not");
+static_assert(IsEmptyOrAreAllEqual_v<std::integer_sequence<int, 1>>,
+ "expected all equal sequence is not");
+static_assert(
+ IsEmptyOrAreAllEqual_v<std::integer_sequence<long, 111, 111, 111, 111>>,
+ "expected all equal sequence is not");
+
+} // namespace ceres::internal
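The RemoveValue_t assertions above exercise a compile-time filter over std::integer_sequence. One possible way to build such a filter is recursive concatenation; the sketch below assumes int sequences only and is not the Ceres implementation.

#include <type_traits>
#include <utility>

template <typename A, typename B>
struct ConcatSketch;
template <int... As, int... Bs>
struct ConcatSketch<std::integer_sequence<int, As...>,
                    std::integer_sequence<int, Bs...>> {
  using type = std::integer_sequence<int, As..., Bs...>;
};

// Drop every occurrence of Value from the sequence.
template <typename Seq, int Value>
struct RemoveValueSketch;
template <int Value>
struct RemoveValueSketch<std::integer_sequence<int>, Value> {
  using type = std::integer_sequence<int>;
};
template <int Head, int... Tail, int Value>
struct RemoveValueSketch<std::integer_sequence<int, Head, Tail...>, Value> {
  using type = typename ConcatSketch<
      std::conditional_t<Head == Value,
                         std::integer_sequence<int>,
                         std::integer_sequence<int, Head>>,
      typename RemoveValueSketch<std::integer_sequence<int, Tail...>,
                                 Value>::type>::type;
};

static_assert(
    std::is_same<
        typename RemoveValueSketch<std::integer_sequence<int, 0, 1, 0, 2>,
                                   0>::type,
        std::integer_sequence<int, 1, 2>>::value,
    "the sketch should drop every zero");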
diff --git a/internal/ceres/invert_psd_matrix.h b/internal/ceres/invert_psd_matrix.h
index ac8808b..bc74900 100644
--- a/internal/ceres/invert_psd_matrix.h
+++ b/internal/ceres/invert_psd_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,8 +35,7 @@
#include "ceres/internal/eigen.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Helper routine to compute the inverse or pseudo-inverse of a
// symmetric positive semi-definite matrix.
@@ -73,7 +72,6 @@
return svd.solve(MType::Identity(size, size));
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_INVERT_PSD_MATRIX_H_
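The helper in this header chooses between a Cholesky solve (full rank) and an SVD-based pseudo-inverse (rank deficient), as the retained context with svd.solve(Identity) shows. A hedged dynamic-size sketch of the same idea; the name is illustrative and this is not the Ceres fixed-size template.

#include "Eigen/Dense"

Eigen::MatrixXd InvertPSDMatrixSketch(bool assume_full_rank,
                                      const Eigen::MatrixXd& m) {
  const int size = static_cast<int>(m.rows());
  if (assume_full_rank) {
    // Cholesky solve against the identity yields the inverse directly.
    return m.selfadjointView<Eigen::Upper>().llt().solve(
        Eigen::MatrixXd::Identity(size, size));
  }
  // Rank-deficient case: a least-squares solve via the SVD gives the
  // pseudo-inverse.
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(
      m, Eigen::ComputeThinU | Eigen::ComputeThinV);
  return svd.solve(Eigen::MatrixXd::Identity(size, size));
}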
diff --git a/internal/ceres/invert_psd_matrix_benchmark.cc b/internal/ceres/invert_psd_matrix_benchmark.cc
index 02a19f3..16c3671 100644
--- a/internal/ceres/invert_psd_matrix_benchmark.cc
+++ b/internal/ceres/invert_psd_matrix_benchmark.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -9,7 +9,7 @@
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
-// and/or other materils provided with the distribution.
+// and/or other materials provided with the distribution.
// * Neither the name of Google Inc. nor the names of its contributors may be
// used to endorse or promote products derived from this software without
// specific prior written permission.
@@ -32,8 +32,7 @@
#include "benchmark/benchmark.h"
#include "ceres/invert_psd_matrix.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template <int kSize>
void BenchmarkFixedSizedInvertPSDMatrix(benchmark::State& state) {
@@ -62,10 +61,10 @@
BENCHMARK_TEMPLATE(BenchmarkFixedSizedInvertPSDMatrix, 11);
BENCHMARK_TEMPLATE(BenchmarkFixedSizedInvertPSDMatrix, 12);
-void BenchmarkDynamicallyInvertPSDMatrix(benchmark::State& state) {
+static void BenchmarkDynamicallyInvertPSDMatrix(benchmark::State& state) {
using MatrixType =
typename EigenTypes<Eigen::Dynamic, Eigen::Dynamic>::Matrix;
- const int size = state.range(0);
+ const int size = static_cast<int>(state.range(0));
MatrixType input = MatrixType::Random(size, size);
input += input.transpose() + MatrixType::Identity(size, size);
@@ -84,7 +83,6 @@
}
});
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
BENCHMARK_MAIN();
diff --git a/internal/ceres/invert_psd_matrix_test.cc b/internal/ceres/invert_psd_matrix_test.cc
index 279eeab..22ec439 100644
--- a/internal/ceres/invert_psd_matrix_test.cc
+++ b/internal/ceres/invert_psd_matrix_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,8 +33,7 @@
#include "ceres/internal/eigen.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
static constexpr bool kFullRank = true;
static constexpr bool kRankDeficient = false;
@@ -109,5 +108,4 @@
10 * std::numeric_limits<double>::epsilon());
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/is_close.cc b/internal/ceres/is_close.cc
index 0becf55..575918b 100644
--- a/internal/ceres/is_close.cc
+++ b/internal/ceres/is_close.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,8 +33,7 @@
#include <algorithm>
#include <cmath>
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
bool IsClose(double x,
double y,
double relative_precision,
@@ -57,5 +56,4 @@
}
return *relative_error < std::fabs(relative_precision);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/is_close.h b/internal/ceres/is_close.h
index b781a44..1f6c82f 100644
--- a/internal/ceres/is_close.h
+++ b/internal/ceres/is_close.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,21 +33,22 @@
#ifndef CERES_INTERNAL_IS_CLOSE_H_
#define CERES_INTERNAL_IS_CLOSE_H_
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Returns true if x and y have a relative (unsigned) difference less than
// relative_precision and false otherwise. Stores the relative and absolute
-// difference in relative/absolute_error if non-NULL. If one of the two values
-// is exactly zero, the absolute difference will be compared, and relative_error
-// will be set to the absolute difference.
-CERES_EXPORT_INTERNAL bool IsClose(double x,
- double y,
- double relative_precision,
- double* relative_error,
- double* absolute_error);
-} // namespace internal
-} // namespace ceres
+// difference in relative/absolute_error if non-nullptr. If one of the two
+// values is exactly zero, the absolute difference will be compared, and
+// relative_error will be set to the absolute difference.
+CERES_NO_EXPORT bool IsClose(double x,
+ double y,
+ double relative_precision,
+ double* relative_error,
+ double* absolute_error);
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_IS_CLOSE_H_
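The contract documented above is: compare the relative difference unless one of the values is exactly zero, in which case the absolute difference is used. A hedged standalone sketch of that rule, without the output parameters of the exported function.

#include <algorithm>
#include <cmath>

bool IsCloseSketch(double x, double y, double relative_precision) {
  const double absolute_error = std::fabs(x - y);
  if (x == 0.0 || y == 0.0) {
    // With a zero operand the relative error degenerates, so the absolute
    // difference is compared against the precision instead.
    return absolute_error < std::fabs(relative_precision);
  }
  const double relative_error =
      absolute_error / std::max(std::fabs(x), std::fabs(y));
  return relative_error < std::fabs(relative_precision);
}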
diff --git a/internal/ceres/is_close_test.cc b/internal/ceres/is_close_test.cc
index 12d6236..7b071af 100644
--- a/internal/ceres/is_close_test.cc
+++ b/internal/ceres/is_close_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,8 +34,7 @@
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
const double kTolerance = 1e-9;
@@ -174,5 +173,4 @@
EXPECT_NEAR(relative_error, 0.0, kTolerance);
EXPECT_NEAR(absolute_error, 0.0, kTolerance);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/float_cxsparse.cc b/internal/ceres/iteration_callback.cc
similarity index 80%
copy from internal/ceres/float_cxsparse.cc
copy to internal/ceres/iteration_callback.cc
index 6c68830..0cec071 100644
--- a/internal/ceres/float_cxsparse.cc
+++ b/internal/ceres/iteration_callback.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,20 +28,10 @@
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
-#include "ceres/float_cxsparse.h"
-
-#if !defined(CERES_NO_CXSPARSE)
+#include "ceres/iteration_callback.h"
namespace ceres {
-namespace internal {
-std::unique_ptr<SparseCholesky> FloatCXSparseCholesky::Create(
- OrderingType ordering_type) {
- LOG(FATAL) << "FloatCXSparseCholesky is not available.";
- return std::unique_ptr<SparseCholesky>();
-}
+IterationCallback::~IterationCallback() = default;
-} // namespace internal
} // namespace ceres
-
-#endif // !defined(CERES_NO_CXSPARSE)
diff --git a/internal/ceres/iterative_refiner.cc b/internal/ceres/iterative_refiner.cc
index 5f0bfdd..54d48f3 100644
--- a/internal/ceres/iterative_refiner.cc
+++ b/internal/ceres/iterative_refiner.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,43 +33,69 @@
#include <string>
#include "Eigen/Core"
+#include "ceres/dense_cholesky.h"
#include "ceres/sparse_cholesky.h"
#include "ceres/sparse_matrix.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-IterativeRefiner::IterativeRefiner(const int max_num_iterations)
+SparseIterativeRefiner::SparseIterativeRefiner(const int max_num_iterations)
: max_num_iterations_(max_num_iterations) {}
-IterativeRefiner::~IterativeRefiner() {}
+SparseIterativeRefiner::~SparseIterativeRefiner() = default;
-void IterativeRefiner::Allocate(int num_cols) {
+void SparseIterativeRefiner::Allocate(int num_cols) {
residual_.resize(num_cols);
correction_.resize(num_cols);
lhs_x_solution_.resize(num_cols);
}
-void IterativeRefiner::Refine(const SparseMatrix& lhs,
- const double* rhs_ptr,
- SparseCholesky* sparse_cholesky,
- double* solution_ptr) {
+void SparseIterativeRefiner::Refine(const SparseMatrix& lhs,
+ const double* rhs_ptr,
+ SparseCholesky* cholesky,
+ double* solution_ptr) {
const int num_cols = lhs.num_cols();
Allocate(num_cols);
ConstVectorRef rhs(rhs_ptr, num_cols);
VectorRef solution(solution_ptr, num_cols);
+ std::string ignored_message;
for (int i = 0; i < max_num_iterations_; ++i) {
// residual = rhs - lhs * solution
lhs_x_solution_.setZero();
- lhs.RightMultiply(solution_ptr, lhs_x_solution_.data());
+ lhs.RightMultiplyAndAccumulate(solution_ptr, lhs_x_solution_.data());
residual_ = rhs - lhs_x_solution_;
// solution += lhs^-1 residual
- std::string ignored_message;
- sparse_cholesky->Solve(
- residual_.data(), correction_.data(), &ignored_message);
+ cholesky->Solve(residual_.data(), correction_.data(), &ignored_message);
solution += correction_;
}
};
-} // namespace internal
-} // namespace ceres
+DenseIterativeRefiner::DenseIterativeRefiner(const int max_num_iterations)
+ : max_num_iterations_(max_num_iterations) {}
+
+DenseIterativeRefiner::~DenseIterativeRefiner() = default;
+
+void DenseIterativeRefiner::Allocate(int num_cols) {
+ residual_.resize(num_cols);
+ correction_.resize(num_cols);
+}
+
+void DenseIterativeRefiner::Refine(const int num_cols,
+ const double* lhs_ptr,
+ const double* rhs_ptr,
+ DenseCholesky* cholesky,
+ double* solution_ptr) {
+ Allocate(num_cols);
+ ConstMatrixRef lhs(lhs_ptr, num_cols, num_cols);
+ ConstVectorRef rhs(rhs_ptr, num_cols);
+ VectorRef solution(solution_ptr, num_cols);
+ std::string ignored_message;
+ for (int i = 0; i < max_num_iterations_; ++i) {
+ residual_ = rhs - lhs * solution;
+ // solution += lhs^-1 residual
+ cholesky->Solve(residual_.data(), correction_.data(), &ignored_message);
+ solution += correction_;
+ }
+};
+
+} // namespace ceres::internal
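Both refiners above run the classic loop "residual = rhs - lhs * solution; solution += lhs^-1 residual" against an already computed, possibly approximate, factorization. A self-contained hedged sketch of the dense variant, deliberately using a single-precision factorization of a double-precision system to mimic an approximate factorization; the function name is illustrative.

#include "Eigen/Dense"

Eigen::VectorXd DenseRefineSketch(const Eigen::MatrixXd& lhs,
                                  const Eigen::VectorXd& rhs,
                                  int max_num_iterations) {
  // Approximate factorization: Cholesky of the matrix cast to float.
  const Eigen::LLT<Eigen::MatrixXf> llt = lhs.cast<float>().llt();
  Eigen::VectorXd solution = Eigen::VectorXd::Zero(rhs.size());
  for (int i = 0; i < max_num_iterations; ++i) {
    const Eigen::VectorXd residual = rhs - lhs * solution;
    // correction = lhs^-1 residual, solved with the approximate factorization.
    const Eigen::VectorXf correction =
        llt.solve(residual.cast<float>().eval());
    solution += correction.cast<double>();
  }
  return solution;
}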
diff --git a/internal/ceres/iterative_refiner.h b/internal/ceres/iterative_refiner.h
index 08f8d67..6607268 100644
--- a/internal/ceres/iterative_refiner.h
+++ b/internal/ceres/iterative_refiner.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,14 +33,15 @@
// This include must come before any #ifndef check on Ceres compile options.
// clang-format off
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
// clang-format on
#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
+class DenseCholesky;
class SparseCholesky;
class SparseMatrix;
@@ -57,20 +58,20 @@
// Definite linear systems.
//
// The above iterative loop is run until max_num_iterations is reached.
-class CERES_EXPORT_INTERNAL IterativeRefiner {
+class CERES_NO_EXPORT SparseIterativeRefiner {
public:
// max_num_iterations is the number of refinement iterations to
// perform.
- IterativeRefiner(int max_num_iterations);
+ explicit SparseIterativeRefiner(int max_num_iterations);
// Needed for mocking.
- virtual ~IterativeRefiner();
+ virtual ~SparseIterativeRefiner();
// Given an initial estimate of the solution of lhs * x = rhs, use
// max_num_iterations rounds of iterative refinement to improve it.
//
- // sparse_cholesky is assumed to contain an already computed
- // factorization (or approximation thereof) of lhs.
+ // cholesky is assumed to contain an already computed factorization (or
+ // an approximation thereof) of lhs.
//
  // solution is expected to contain an approximation to the solution
// to lhs * x = rhs. It can be zero.
@@ -78,7 +79,7 @@
// This method is virtual to facilitate mocking.
virtual void Refine(const SparseMatrix& lhs,
const double* rhs,
- SparseCholesky* sparse_cholesky,
+ SparseCholesky* cholesky,
double* solution);
private:
@@ -90,7 +91,39 @@
Vector lhs_x_solution_;
};
-} // namespace internal
-} // namespace ceres
+class CERES_NO_EXPORT DenseIterativeRefiner {
+ public:
+ // max_num_iterations is the number of refinement iterations to
+ // perform.
+ explicit DenseIterativeRefiner(int max_num_iterations);
+
+ // Needed for mocking.
+ virtual ~DenseIterativeRefiner();
+
+ // Given an initial estimate of the solution of lhs * x = rhs, use
+ // max_num_iterations rounds of iterative refinement to improve it.
+ //
+ // cholesky is assumed to contain an already computed factorization (or
+ // an approximation thereof) of lhs.
+ //
+  // solution is expected to contain an approximation to the solution
+ // to lhs * x = rhs. It can be zero.
+ //
+ // This method is virtual to facilitate mocking.
+ virtual void Refine(int num_cols,
+ const double* lhs,
+ const double* rhs,
+ DenseCholesky* cholesky,
+ double* solution);
+
+ private:
+ void Allocate(int num_cols);
+
+ int max_num_iterations_;
+ Vector residual_;
+ Vector correction_;
+};
+
+} // namespace ceres::internal
#endif // CERES_INTERNAL_ITERATIVE_REFINER_H_
diff --git a/internal/ceres/iterative_refiner_test.cc b/internal/ceres/iterative_refiner_test.cc
index 49887c6..3b7bfd2 100644
--- a/internal/ceres/iterative_refiner_test.cc
+++ b/internal/ceres/iterative_refiner_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,15 +30,17 @@
#include "ceres/iterative_refiner.h"
+#include <utility>
+
#include "Eigen/Dense"
+#include "ceres/dense_cholesky.h"
#include "ceres/internal/eigen.h"
#include "ceres/sparse_cholesky.h"
#include "ceres/sparse_matrix.h"
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Macros to help us define virtual methods which we do not expect to
// use/call in this test.
@@ -53,17 +55,16 @@
// A fake SparseMatrix, which uses an Eigen matrix to do the real work.
class FakeSparseMatrix : public SparseMatrix {
public:
- FakeSparseMatrix(const Matrix& m) : m_(m) {}
- virtual ~FakeSparseMatrix() {}
+ explicit FakeSparseMatrix(Matrix m) : m_(std::move(m)) {}
// y += Ax
- void RightMultiply(const double* x, double* y) const final {
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final {
VectorRef(y, m_.cols()) += m_ * ConstVectorRef(x, m_.cols());
}
// y += A'x
- void LeftMultiply(const double* x, double* y) const final {
+ void LeftMultiplyAndAccumulate(const double* x, double* y) const final {
// We will assume that this is a symmetric matrix.
- RightMultiply(x, y);
+ RightMultiplyAndAccumulate(x, y);
}
double* mutable_values() final { return m_.data(); }
@@ -89,8 +90,39 @@
template <typename Scalar>
class FakeSparseCholesky : public SparseCholesky {
public:
- FakeSparseCholesky(const Matrix& lhs) { lhs_ = lhs.cast<Scalar>(); }
- virtual ~FakeSparseCholesky() {}
+ explicit FakeSparseCholesky(const Matrix& lhs) { lhs_ = lhs.cast<Scalar>(); }
+
+ LinearSolverTerminationType Solve(const double* rhs_ptr,
+ double* solution_ptr,
+ std::string* message) final {
+ const int num_cols = lhs_.cols();
+ VectorRef solution(solution_ptr, num_cols);
+ ConstVectorRef rhs(rhs_ptr, num_cols);
+ auto llt = lhs_.llt();
+ CHECK_EQ(llt.info(), Eigen::Success);
+ solution = llt.solve(rhs.cast<Scalar>()).template cast<double>();
+ return LinearSolverTerminationType::SUCCESS;
+ }
+
+ // The following methods are not needed for tests in this file.
+ CompressedRowSparseMatrix::StorageType StorageType() const final
+ DO_NOT_CALL_WITH_RETURN(
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR);
+ LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
+ std::string* message) final
+ DO_NOT_CALL_WITH_RETURN(LinearSolverTerminationType::FAILURE);
+
+ private:
+ Eigen::Matrix<Scalar, Eigen::Dynamic, Eigen::Dynamic> lhs_;
+};
+
+// A fake DenseCholesky which uses Eigen's Cholesky factorization to
+// do the real work. The template parameter allows us to work in
+// doubles or floats, even though the source matrix is double.
+template <typename Scalar>
+class FakeDenseCholesky : public DenseCholesky {
+ public:
+ explicit FakeDenseCholesky(const Matrix& lhs) { lhs_ = lhs.cast<Scalar>(); }
LinearSolverTerminationType Solve(const double* rhs_ptr,
double* solution_ptr,
@@ -99,21 +131,13 @@
VectorRef solution(solution_ptr, num_cols);
ConstVectorRef rhs(rhs_ptr, num_cols);
solution = lhs_.llt().solve(rhs.cast<Scalar>()).template cast<double>();
- return LINEAR_SOLVER_SUCCESS;
+ return LinearSolverTerminationType::SUCCESS;
}
- // The following methods are not needed for tests in this file.
- CompressedRowSparseMatrix::StorageType StorageType() const final
- DO_NOT_CALL_WITH_RETURN(CompressedRowSparseMatrix::UPPER_TRIANGULAR);
- LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
+ LinearSolverTerminationType Factorize(int num_cols,
+ double* lhs,
std::string* message) final
- DO_NOT_CALL_WITH_RETURN(LINEAR_SOLVER_FAILURE);
-
- LinearSolverTerminationType FactorAndSolve(CompressedRowSparseMatrix* lhs,
- const double* rhs,
- double* solution,
- std::string* message) final
- DO_NOT_CALL_WITH_RETURN(LINEAR_SOLVER_FAILURE);
+ DO_NOT_CALL_WITH_RETURN(LinearSolverTerminationType::FAILURE);
private:
Eigen::Matrix<Scalar, Eigen::Dynamic, Eigen::Dynamic> lhs_;
@@ -122,9 +146,9 @@
#undef DO_NOT_CALL
#undef DO_NOT_CALL_WITH_RETURN
-class IterativeRefinerTest : public ::testing::Test {
+class SparseIterativeRefinerTest : public ::testing::Test {
public:
- void SetUp() {
+ void SetUp() override {
num_cols_ = 5;
max_num_iterations_ = 30;
Matrix m(num_cols_, num_cols_);
@@ -142,10 +166,11 @@
Vector rhs_, solution_;
};
-TEST_F(IterativeRefinerTest, RandomSolutionWithExactFactorizationConverges) {
+TEST_F(SparseIterativeRefinerTest,
+ RandomSolutionWithExactFactorizationConverges) {
FakeSparseMatrix lhs(lhs_);
FakeSparseCholesky<double> sparse_cholesky(lhs_);
- IterativeRefiner refiner(max_num_iterations_);
+ SparseIterativeRefiner refiner(max_num_iterations_);
Vector refined_solution(num_cols_);
refined_solution.setRandom();
refiner.Refine(lhs, rhs_.data(), &sparse_cholesky, refined_solution.data());
@@ -154,13 +179,13 @@
std::numeric_limits<double>::epsilon() * 10);
}
-TEST_F(IterativeRefinerTest,
+TEST_F(SparseIterativeRefinerTest,
RandomSolutionWithApproximationFactorizationConverges) {
FakeSparseMatrix lhs(lhs_);
// Use a single precision Cholesky factorization of the double
// precision matrix. This will give us an approximate factorization.
FakeSparseCholesky<float> sparse_cholesky(lhs_);
- IterativeRefiner refiner(max_num_iterations_);
+ SparseIterativeRefiner refiner(max_num_iterations_);
Vector refined_solution(num_cols_);
refined_solution.setRandom();
refiner.Refine(lhs, rhs_.data(), &sparse_cholesky, refined_solution.data());
@@ -169,5 +194,60 @@
std::numeric_limits<double>::epsilon() * 10);
}
-} // namespace internal
-} // namespace ceres
+class DenseIterativeRefinerTest : public ::testing::Test {
+ public:
+ void SetUp() override {
+ num_cols_ = 5;
+ max_num_iterations_ = 30;
+ Matrix m(num_cols_, num_cols_);
+ m.setRandom();
+ lhs_ = m * m.transpose();
+ solution_.resize(num_cols_);
+ solution_.setRandom();
+ rhs_ = lhs_ * solution_;
+ };
+
+ protected:
+ int num_cols_;
+ int max_num_iterations_;
+ Matrix lhs_;
+ Vector rhs_, solution_;
+};
+
+TEST_F(DenseIterativeRefinerTest,
+ RandomSolutionWithExactFactorizationConverges) {
+ Matrix lhs = lhs_;
+ FakeDenseCholesky<double> dense_cholesky(lhs);
+ DenseIterativeRefiner refiner(max_num_iterations_);
+ Vector refined_solution(num_cols_);
+ refined_solution.setRandom();
+ refiner.Refine(lhs.cols(),
+ lhs.data(),
+ rhs_.data(),
+ &dense_cholesky,
+ refined_solution.data());
+ EXPECT_NEAR((lhs_ * refined_solution - rhs_).norm(),
+ 0.0,
+ std::numeric_limits<double>::epsilon() * 10);
+}
+
+TEST_F(DenseIterativeRefinerTest,
+ RandomSolutionWithApproximationFactorizationConverges) {
+ Matrix lhs = lhs_;
+ // Use a single precision Cholesky factorization of the double
+ // precision matrix. This will give us an approximate factorization.
+ FakeDenseCholesky<float> dense_cholesky(lhs_);
+ DenseIterativeRefiner refiner(max_num_iterations_);
+ Vector refined_solution(num_cols_);
+ refined_solution.setRandom();
+ refiner.Refine(lhs.cols(),
+ lhs.data(),
+ rhs_.data(),
+ &dense_cholesky,
+ refined_solution.data());
+ EXPECT_NEAR((lhs_ * refined_solution - rhs_).norm(),
+ 0.0,
+ std::numeric_limits<double>::epsilon() * 10);
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/iterative_schur_complement_solver.cc b/internal/ceres/iterative_schur_complement_solver.cc
index 143df5e..bcfb6e4 100644
--- a/internal/ceres/iterative_schur_complement_solver.cc
+++ b/internal/ceres/iterative_schur_complement_solver.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,6 +32,7 @@
#include <algorithm>
#include <cstring>
+#include <utility>
#include <vector>
#include "Eigen/Dense"
@@ -42,6 +43,7 @@
#include "ceres/implicit_schur_complement.h"
#include "ceres/internal/eigen.h"
#include "ceres/linear_solver.h"
+#include "ceres/power_series_expansion_preconditioner.h"
#include "ceres/preconditioner.h"
#include "ceres/schur_jacobi_preconditioner.h"
#include "ceres/triplet_sparse_matrix.h"
@@ -50,14 +52,13 @@
#include "ceres/wall_time.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
IterativeSchurComplementSolver::IterativeSchurComplementSolver(
- const LinearSolver::Options& options)
- : options_(options) {}
+ LinearSolver::Options options)
+ : options_(std::move(options)) {}
-IterativeSchurComplementSolver::~IterativeSchurComplementSolver() {}
+IterativeSchurComplementSolver::~IterativeSchurComplementSolver() = default;
LinearSolver::Summary IterativeSchurComplementSolver::SolveImpl(
BlockSparseMatrix* A,
@@ -67,15 +68,17 @@
EventLogger event_logger("IterativeSchurComplementSolver::Solve");
CHECK(A->block_structure() != nullptr);
+ CHECK(A->transpose_block_structure() != nullptr);
+
const int num_eliminate_blocks = options_.elimination_groups[0];
// Initialize a ImplicitSchurComplement object.
- if (schur_complement_ == NULL) {
+ if (schur_complement_ == nullptr) {
DetectStructure(*(A->block_structure()),
num_eliminate_blocks,
&options_.row_block_size,
&options_.e_block_size,
&options_.f_block_size);
- schur_complement_.reset(new ImplicitSchurComplement(options_));
+ schur_complement_ = std::make_unique<ImplicitSchurComplement>(options_);
}
schur_complement_->Init(*A, per_solve_options.D, b);
@@ -85,45 +88,66 @@
VLOG(2) << "No parameter blocks left in the schur complement.";
LinearSolver::Summary summary;
summary.num_iterations = 0;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
- schur_complement_->BackSubstitute(NULL, x);
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
+ schur_complement_->BackSubstitute(nullptr, x);
return summary;
}
- // Initialize the solution to the Schur complement system to zero.
+ // Initialize the solution to the Schur complement system.
reduced_linear_system_solution_.resize(schur_complement_->num_rows());
reduced_linear_system_solution_.setZero();
-
- LinearSolver::Options cg_options;
- cg_options.min_num_iterations = options_.min_num_iterations;
- cg_options.max_num_iterations = options_.max_num_iterations;
- ConjugateGradientsSolver cg_solver(cg_options);
-
- LinearSolver::PerSolveOptions cg_per_solve_options;
- cg_per_solve_options.r_tolerance = per_solve_options.r_tolerance;
- cg_per_solve_options.q_tolerance = per_solve_options.q_tolerance;
+ if (options_.use_spse_initialization) {
+ Preconditioner::Options preconditioner_options(options_);
+ preconditioner_options.type = SCHUR_POWER_SERIES_EXPANSION;
+ PowerSeriesExpansionPreconditioner pse_solver(
+ schur_complement_.get(),
+ options_.max_num_spse_iterations,
+ options_.spse_tolerance,
+ preconditioner_options);
+ pse_solver.RightMultiplyAndAccumulate(
+ schur_complement_->rhs().data(),
+ reduced_linear_system_solution_.data());
+ }
CreatePreconditioner(A);
- if (preconditioner_.get() != NULL) {
+ if (preconditioner_ != nullptr) {
if (!preconditioner_->Update(*A, per_solve_options.D)) {
LinearSolver::Summary summary;
summary.num_iterations = 0;
- summary.termination_type = LINEAR_SOLVER_FAILURE;
+ summary.termination_type = LinearSolverTerminationType::FAILURE;
summary.message = "Preconditioner update failed.";
return summary;
}
-
- cg_per_solve_options.preconditioner = preconditioner_.get();
}
+ ConjugateGradientsSolverOptions cg_options;
+ cg_options.min_num_iterations = options_.min_num_iterations;
+ cg_options.max_num_iterations = options_.max_num_iterations;
+ cg_options.residual_reset_period = options_.residual_reset_period;
+ cg_options.q_tolerance = per_solve_options.q_tolerance;
+ cg_options.r_tolerance = per_solve_options.r_tolerance;
+
+ LinearOperatorAdapter lhs(*schur_complement_);
+ LinearOperatorAdapter preconditioner(*preconditioner_);
+
+ Vector scratch[4];
+ for (int i = 0; i < 4; ++i) {
+ scratch[i].resize(schur_complement_->num_cols());
+ }
+ Vector* scratch_ptr[4] = {&scratch[0], &scratch[1], &scratch[2], &scratch[3]};
+
event_logger.AddEvent("Setup");
+
LinearSolver::Summary summary =
- cg_solver.Solve(schur_complement_.get(),
- schur_complement_->rhs().data(),
- cg_per_solve_options,
- reduced_linear_system_solution_.data());
- if (summary.termination_type != LINEAR_SOLVER_FAILURE &&
- summary.termination_type != LINEAR_SOLVER_FATAL_ERROR) {
+ ConjugateGradientsSolver(cg_options,
+ lhs,
+ schur_complement_->rhs(),
+ preconditioner,
+ scratch_ptr,
+ reduced_linear_system_solution_);
+
+ if (summary.termination_type != LinearSolverTerminationType::FAILURE &&
+ summary.termination_type != LinearSolverTerminationType::FATAL_ERROR) {
schur_complement_->BackSubstitute(reduced_linear_system_solution_.data(),
x);
}
@@ -133,43 +157,44 @@
void IterativeSchurComplementSolver::CreatePreconditioner(
BlockSparseMatrix* A) {
- if (options_.preconditioner_type == IDENTITY ||
- preconditioner_.get() != NULL) {
+ if (preconditioner_ != nullptr) {
return;
}
- Preconditioner::Options preconditioner_options;
- preconditioner_options.type = options_.preconditioner_type;
- preconditioner_options.visibility_clustering_type =
- options_.visibility_clustering_type;
- preconditioner_options.sparse_linear_algebra_library_type =
- options_.sparse_linear_algebra_library_type;
- preconditioner_options.num_threads = options_.num_threads;
- preconditioner_options.row_block_size = options_.row_block_size;
- preconditioner_options.e_block_size = options_.e_block_size;
- preconditioner_options.f_block_size = options_.f_block_size;
- preconditioner_options.elimination_groups = options_.elimination_groups;
- CHECK(options_.context != NULL);
- preconditioner_options.context = options_.context;
+ Preconditioner::Options preconditioner_options(options_);
+ CHECK(options_.context != nullptr);
switch (options_.preconditioner_type) {
+ case IDENTITY:
+ preconditioner_ = std::make_unique<IdentityPreconditioner>(
+ schur_complement_->num_cols());
+ break;
case JACOBI:
- preconditioner_.reset(new SparseMatrixPreconditionerWrapper(
- schur_complement_->block_diagonal_FtF_inverse()));
+ preconditioner_ = std::make_unique<SparseMatrixPreconditionerWrapper>(
+ schur_complement_->block_diagonal_FtF_inverse(),
+ preconditioner_options);
+ break;
+ case SCHUR_POWER_SERIES_EXPANSION:
+      // Ignore the value of spse_tolerance to ensure that the preconditioner
+      // stays fixed during the CG iterations.
+ preconditioner_ = std::make_unique<PowerSeriesExpansionPreconditioner>(
+ schur_complement_.get(),
+ options_.max_num_spse_iterations,
+ 0,
+ preconditioner_options);
break;
case SCHUR_JACOBI:
- preconditioner_.reset(new SchurJacobiPreconditioner(
- *A->block_structure(), preconditioner_options));
+ preconditioner_ = std::make_unique<SchurJacobiPreconditioner>(
+ *A->block_structure(), preconditioner_options);
break;
case CLUSTER_JACOBI:
case CLUSTER_TRIDIAGONAL:
- preconditioner_.reset(new VisibilityBasedPreconditioner(
- *A->block_structure(), preconditioner_options));
+ preconditioner_ = std::make_unique<VisibilityBasedPreconditioner>(
+ *A->block_structure(), preconditioner_options);
break;
default:
LOG(FATAL) << "Unknown Preconditioner Type";
}
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/iterative_schur_complement_solver.h b/internal/ceres/iterative_schur_complement_solver.h
index 37606b3..a4b6b53 100644
--- a/internal/ceres/iterative_schur_complement_solver.h
+++ b/internal/ceres/iterative_schur_complement_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,13 +33,13 @@
#include <memory>
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class BlockSparseMatrix;
class ImplicitSchurComplement;
@@ -52,7 +52,7 @@
// The algorithm used by this solver was developed in a series of
// papers - "Agarwal et al, Bundle Adjustment in the Large, ECCV 2010"
// and "Wu et al, Multicore Bundle Adjustment, submitted to CVPR
-// 2011" at the Univeristy of Washington.
+// 2011" at the University of Washington.
//
// The key idea is that one can run Conjugate Gradients on the Schur
// Complement system without explicitly forming the Schur Complement
@@ -69,15 +69,15 @@
// a proof of this fact and others related to this solver please see
// the section on Domain Decomposition Methods in Saad's book
// "Iterative Methods for Sparse Linear Systems".
-class CERES_EXPORT_INTERNAL IterativeSchurComplementSolver
+class CERES_NO_EXPORT IterativeSchurComplementSolver final
: public BlockSparseMatrixSolver {
public:
- explicit IterativeSchurComplementSolver(const LinearSolver::Options& options);
+ explicit IterativeSchurComplementSolver(LinearSolver::Options options);
IterativeSchurComplementSolver(const IterativeSchurComplementSolver&) =
delete;
void operator=(const IterativeSchurComplementSolver&) = delete;
- virtual ~IterativeSchurComplementSolver();
+ ~IterativeSchurComplementSolver() override;
private:
LinearSolver::Summary SolveImpl(BlockSparseMatrix* A,
@@ -93,7 +93,8 @@
Vector reduced_linear_system_solution_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_ITERATIVE_SCHUR_COMPLEMENT_SOLVER_H_
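
For reference, the reduced system that the header comment alludes to, in standard bundle adjustment notation with the Jacobian partitioned as J = [E F] (E holds the eliminated point-like blocks, F the remaining camera-like blocks; the damping term D is omitted for brevity):

  S = F^T F - F^T E (E^T E)^{-1} E^T F      (Schur complement)
  g = F^T b - F^T E (E^T E)^{-1} E^T b      (reduced right hand side)
  S z = g                                   (solved with preconditioned CG)
  y = (E^T E)^{-1} (E^T b - E^T F z)        (back substitution)

S is never materialized: ImplicitSchurComplement evaluates products S * v using only products with E and F, which is what makes running conjugate gradients on the reduced system feasible for large problems.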
diff --git a/internal/ceres/iterative_schur_complement_solver_test.cc b/internal/ceres/iterative_schur_complement_solver_test.cc
index fdd65c7..c5d67de 100644
--- a/internal/ceres/iterative_schur_complement_solver_test.cc
+++ b/internal/ceres/iterative_schur_complement_solver_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -61,20 +61,22 @@
class IterativeSchurComplementSolverTest : public ::testing::Test {
protected:
void SetUpProblem(int problem_id) {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(problem_id));
+ std::unique_ptr<LinearLeastSquaresProblem> problem =
+ CreateLinearLeastSquaresProblemFromId(problem_id);
CHECK(problem != nullptr);
A_.reset(down_cast<BlockSparseMatrix*>(problem->A.release()));
- b_.reset(problem->b.release());
- D_.reset(problem->D.release());
+ b_ = std::move(problem->b);
+ D_ = std::move(problem->D);
num_cols_ = A_->num_cols();
num_rows_ = A_->num_rows();
num_eliminate_blocks_ = problem->num_eliminate_blocks;
}
- AssertionResult TestSolver(double* D) {
+ AssertionResult TestSolver(double* D,
+ PreconditionerType preconditioner_type,
+ bool use_spse_initialization) {
TripletSparseMatrix triplet_A(
A_->num_rows(), A_->num_cols(), A_->num_nonzeros());
A_->ToTripletSparseMatrix(&triplet_A);
@@ -95,7 +97,9 @@
options.elimination_groups.push_back(num_eliminate_blocks_);
options.elimination_groups.push_back(0);
options.max_num_iterations = num_cols_;
- options.preconditioner_type = SCHUR_JACOBI;
+ options.max_num_spse_iterations = 1;
+ options.use_spse_initialization = use_spse_initialization;
+ options.preconditioner_type = preconditioner_type;
IterativeSchurComplementSolver isc(options);
Vector isc_sol(num_cols_);
@@ -119,16 +123,30 @@
std::unique_ptr<double[]> D_;
};
-TEST_F(IterativeSchurComplementSolverTest, NormalProblem) {
+TEST_F(IterativeSchurComplementSolverTest, NormalProblemSchurJacobi) {
SetUpProblem(2);
- EXPECT_TRUE(TestSolver(NULL));
- EXPECT_TRUE(TestSolver(D_.get()));
+ EXPECT_TRUE(TestSolver(nullptr, SCHUR_JACOBI, false));
+ EXPECT_TRUE(TestSolver(D_.get(), SCHUR_JACOBI, false));
+}
+
+TEST_F(IterativeSchurComplementSolverTest,
+ NormalProblemSchurJacobiWithPowerSeriesExpansionInitialization) {
+ SetUpProblem(2);
+ EXPECT_TRUE(TestSolver(nullptr, SCHUR_JACOBI, true));
+ EXPECT_TRUE(TestSolver(D_.get(), SCHUR_JACOBI, true));
+}
+
+TEST_F(IterativeSchurComplementSolverTest,
+ NormalProblemPowerSeriesExpansionPreconditioner) {
+ SetUpProblem(5);
+ EXPECT_TRUE(TestSolver(nullptr, SCHUR_POWER_SERIES_EXPANSION, false));
+ EXPECT_TRUE(TestSolver(D_.get(), SCHUR_POWER_SERIES_EXPANSION, false));
}
TEST_F(IterativeSchurComplementSolverTest, ProblemWithNoFBlocks) {
SetUpProblem(3);
- EXPECT_TRUE(TestSolver(NULL));
- EXPECT_TRUE(TestSolver(D_.get()));
+ EXPECT_TRUE(TestSolver(nullptr, SCHUR_JACOBI, false));
+ EXPECT_TRUE(TestSolver(D_.get(), SCHUR_JACOBI, false));
}
} // namespace internal
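
The new test cases mirror the user-facing configuration. A hedged sketch of how the same combinations would be requested through the public API, assuming the Solver::Options fields documented for 2.2 (use_spse_initialization, max_num_spse_iterations, spse_tolerance) mirror the internal LinearSolver::Options set in TestSolver above:

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::ITERATIVE_SCHUR;
  // Use the power series expansion only to initialize the CG iterations ...
  options.preconditioner_type = ceres::SCHUR_JACOBI;
  options.use_spse_initialization = true;
  options.max_num_spse_iterations = 5;
  options.spse_tolerance = 0.1;
  // ... or use it as the preconditioner itself.
  // options.preconditioner_type = ceres::SCHUR_POWER_SERIES_EXPANSION;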
diff --git a/internal/ceres/jet_operator_benchmark.cc b/internal/ceres/jet_operator_benchmark.cc
new file mode 100644
index 0000000..94b0308
--- /dev/null
+++ b/internal/ceres/jet_operator_benchmark.cc
@@ -0,0 +1,289 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: alex@karatarakis.com (Alexander Karatarakis)
+
+#include <array>
+
+#include "benchmark/benchmark.h"
+#include "ceres/jet.h"
+
+namespace ceres {
+
+// Cycle the Jets to avoid caching effects in the benchmark.
+template <class JetType>
+class JetInputData {
+ using T = typename JetType::Scalar;
+ static constexpr std::size_t SIZE = 20;
+
+ public:
+ JetInputData() {
+ for (int i = 0; i < static_cast<int>(SIZE); i++) {
+ const T ti = static_cast<T>(i + 1);
+
+ a_[i].a = T(1.1) * ti;
+ a_[i].v.setRandom();
+
+ b_[i].a = T(2.2) * ti;
+ b_[i].v.setRandom();
+
+ c_[i].a = T(3.3) * ti;
+ c_[i].v.setRandom();
+
+ d_[i].a = T(4.4) * ti;
+ d_[i].v.setRandom();
+
+ e_[i].a = T(5.5) * ti;
+ e_[i].v.setRandom();
+
+ scalar_a_[i] = T(1.1) * ti;
+ scalar_b_[i] = T(2.2) * ti;
+ scalar_c_[i] = T(3.3) * ti;
+ scalar_d_[i] = T(4.4) * ti;
+ scalar_e_[i] = T(5.5) * ti;
+ }
+ }
+
+ void advance() { index_ = (index_ + 1) % SIZE; }
+
+ const JetType& a() const { return a_[index_]; }
+ const JetType& b() const { return b_[index_]; }
+ const JetType& c() const { return c_[index_]; }
+ const JetType& d() const { return d_[index_]; }
+ const JetType& e() const { return e_[index_]; }
+ T scalar_a() const { return scalar_a_[index_]; }
+ T scalar_b() const { return scalar_b_[index_]; }
+ T scalar_c() const { return scalar_c_[index_]; }
+ T scalar_d() const { return scalar_d_[index_]; }
+ T scalar_e() const { return scalar_e_[index_]; }
+
+ private:
+ std::size_t index_{0};
+ std::array<JetType, SIZE> a_{};
+ std::array<JetType, SIZE> b_{};
+ std::array<JetType, SIZE> c_{};
+ std::array<JetType, SIZE> d_{};
+ std::array<JetType, SIZE> e_{};
+ std::array<T, SIZE> scalar_a_;
+ std::array<T, SIZE> scalar_b_;
+ std::array<T, SIZE> scalar_c_;
+ std::array<T, SIZE> scalar_d_;
+ std::array<T, SIZE> scalar_e_;
+};
+
+template <std::size_t JET_SIZE, class Function>
+static void JetBenchmarkHelper(benchmark::State& state, const Function& func) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetInputData<JetType> data{};
+ JetType out{};
+ const int iterations = static_cast<int>(state.range(0));
+ for (auto _ : state) {
+ for (int i = 0; i < iterations; i++) {
+ func(data, out);
+ data.advance();
+ }
+ }
+ benchmark::DoNotOptimize(out);
+}
+
+template <std::size_t JET_SIZE>
+static void Addition(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out += +d.a() + d.b() + d.c() + d.d() + d.e();
+ });
+}
+BENCHMARK_TEMPLATE(Addition, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(Addition, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(Addition, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(Addition, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(Addition, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(Addition, 200)->Arg(160);
+
+template <std::size_t JET_SIZE>
+static void AdditionScalar(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out +=
+ d.scalar_a() + d.scalar_b() + d.c() + d.scalar_d() + d.scalar_e();
+ });
+}
+BENCHMARK_TEMPLATE(AdditionScalar, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(AdditionScalar, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(AdditionScalar, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(AdditionScalar, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(AdditionScalar, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(AdditionScalar, 200)->Arg(160);
+
+template <std::size_t JET_SIZE>
+static void Subtraction(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out -= -d.a() - d.b() - d.c() - d.d() - d.e();
+ });
+}
+BENCHMARK_TEMPLATE(Subtraction, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(Subtraction, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(Subtraction, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(Subtraction, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(Subtraction, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(Subtraction, 200)->Arg(160);
+
+template <std::size_t JET_SIZE>
+static void SubtractionScalar(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out -=
+ -d.scalar_a() - d.scalar_b() - d.c() - d.scalar_d() - d.scalar_e();
+ });
+}
+BENCHMARK_TEMPLATE(SubtractionScalar, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(SubtractionScalar, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(SubtractionScalar, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(SubtractionScalar, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(SubtractionScalar, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(SubtractionScalar, 200)->Arg(160);
+
+template <std::size_t JET_SIZE>
+static void Multiplication(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out *= d.a() * d.b() * d.c() * d.d() * d.e();
+ });
+}
+BENCHMARK_TEMPLATE(Multiplication, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(Multiplication, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(Multiplication, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(Multiplication, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(Multiplication, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(Multiplication, 200)->Arg(160);
+
+template <std::size_t JET_SIZE>
+static void MultiplicationLeftScalar(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out += d.scalar_a() *
+ (d.scalar_b() * (d.scalar_c() * (d.scalar_d() * d.e())));
+ });
+}
+BENCHMARK_TEMPLATE(MultiplicationLeftScalar, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplicationLeftScalar, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplicationLeftScalar, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplicationLeftScalar, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplicationLeftScalar, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplicationLeftScalar, 200)->Arg(160);
+
+template <std::size_t JET_SIZE>
+static void MultiplicationRightScalar(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out += (((d.a() * d.scalar_b()) * d.scalar_c()) * d.scalar_d()) *
+ d.scalar_e();
+ });
+}
+BENCHMARK_TEMPLATE(MultiplicationRightScalar, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplicationRightScalar, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplicationRightScalar, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplicationRightScalar, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplicationRightScalar, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplicationRightScalar, 200)->Arg(160);
+
+template <std::size_t JET_SIZE>
+static void Division(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out /= d.a() / d.b() / d.c() / d.d() / d.e();
+ });
+}
+BENCHMARK_TEMPLATE(Division, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(Division, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(Division, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(Division, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(Division, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(Division, 200)->Arg(160);
+
+template <std::size_t JET_SIZE>
+static void DivisionLeftScalar(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out += d.scalar_a() /
+ (d.scalar_b() / (d.scalar_c() / (d.scalar_d() / d.e())));
+ });
+}
+BENCHMARK_TEMPLATE(DivisionLeftScalar, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(DivisionLeftScalar, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(DivisionLeftScalar, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(DivisionLeftScalar, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(DivisionLeftScalar, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(DivisionLeftScalar, 200)->Arg(160);
+
+template <std::size_t JET_SIZE>
+static void DivisionRightScalar(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out += (((d.a() / d.scalar_b()) / d.scalar_c()) / d.scalar_d()) /
+ d.scalar_e();
+ });
+}
+BENCHMARK_TEMPLATE(DivisionRightScalar, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(DivisionRightScalar, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(DivisionRightScalar, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(DivisionRightScalar, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(DivisionRightScalar, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(DivisionRightScalar, 200)->Arg(160);
+
+template <std::size_t JET_SIZE>
+static void MultiplyAndAdd(benchmark::State& state) {
+ using JetType = Jet<double, JET_SIZE>;
+ JetBenchmarkHelper<JET_SIZE>(
+ state, [](const JetInputData<JetType>& d, JetType& out) {
+ out += d.scalar_a() * d.a() + d.scalar_b() * d.b() +
+ d.scalar_c() * d.c() + d.scalar_d() * d.d() +
+ d.scalar_e() * d.e();
+ });
+}
+BENCHMARK_TEMPLATE(MultiplyAndAdd, 3)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplyAndAdd, 10)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplyAndAdd, 15)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplyAndAdd, 25)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplyAndAdd, 32)->Arg(1000);
+BENCHMARK_TEMPLATE(MultiplyAndAdd, 200)->Arg(160);
+
+} // namespace ceres
+
+BENCHMARK_MAIN();
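
For readers skimming the benchmark above: Jet<double, N> is the forward-mode autodiff type whose operators the benchmark stresses; it stores a scalar value a and an N-vector v of partial derivatives, and every arithmetic operator propagates both. A minimal, self-contained sketch of that behaviour (illustrative only, using the same ceres/jet.h header):

  #include <iostream>
  #include "ceres/jet.h"

  int main() {
    using Jet2 = ceres::Jet<double, 2>;
    Jet2 x(2.0, 0);  // value 2.0, derivative slot 0, i.e. dx/dx = 1
    Jet2 y(3.0, 1);  // value 3.0, derivative slot 1, i.e. dy/dy = 1
    // Overloaded operators propagate the value and the derivatives together.
    Jet2 f = x * y + sin(x);
    // f.a    == 2 * 3 + sin(2)
    // f.v[0] == y + cos(x) == 3 + cos(2)   (df/dx)
    // f.v[1] == x          == 2            (df/dy)
    std::cout << f.a << " " << f.v[0] << " " << f.v[1] << "\n";
    return 0;
  }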
diff --git a/internal/ceres/jet_test.cc b/internal/ceres/jet_test.cc
index 36f279d..7f67bd6 100644
--- a/internal/ceres/jet_test.cc
+++ b/internal/ceres/jet_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,23 +32,39 @@
#include <Eigen/Dense>
#include <algorithm>
+#include <cfenv>
#include <cmath>
#include "ceres/stringprintf.h"
#include "ceres/test_util.h"
#include "glog/logging.h"
+#include "gmock/gmock.h"
#include "gtest/gtest.h"
-#define VL VLOG(1)
+// The floating-point environment access and modification is only meaningful
+// with the following pragma.
+#ifdef _MSC_VER
+#pragma float_control(precise, on, push)
+#pragma fenv_access(on)
+#elif !(defined(__ARM_ARCH) && __ARM_ARCH >= 8) && !defined(__MINGW32__)
+// NOTE: FENV_ACCESS cannot be set to ON when targeting arm(v8) and MinGW
+#pragma STDC FENV_ACCESS ON
+#else
+#define CERES_NO_FENV_ACCESS
+#endif
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
namespace {
-const double kE = 2.71828182845904523536;
+constexpr double kE = 2.71828182845904523536;
-typedef Jet<double, 2> J;
+using J = Jet<double, 2>;
+// Don't care about the dual part for scalar part categorization and comparison
+// tests
+template <typename T>
+using J0 = Jet<T, 0>;
+using J0d = J0<double>;
// Convenient shorthand for making a jet.
J MakeJet(double a, double v0, double v1) {
@@ -59,13 +75,63 @@
return z;
}
-// On a 32-bit optimized build, the mismatch is about 1.4e-14.
-double const kTolerance = 1e-13;
+constexpr double kTolerance = 1e-13;
-void ExpectJetsClose(const J& x, const J& y) {
- ExpectClose(x.a, y.a, kTolerance);
- ExpectClose(x.v[0], y.v[0], kTolerance);
- ExpectClose(x.v[1], y.v[1], kTolerance);
+// Stores the floating-point environment containing active floating-point
+// exceptions, rounding mode, etc., and restores it upon destruction.
+//
+// Useful for avoiding side-effects.
+class Fenv {
+ public:
+ Fenv() { std::fegetenv(&e); }
+ ~Fenv() { std::fesetenv(&e); }
+
+ Fenv(const Fenv&) = delete;
+ Fenv& operator=(const Fenv&) = delete;
+
+ private:
+ std::fenv_t e;
+};
+
+bool AreAlmostEqual(double x, double y, double max_abs_relative_difference) {
+ if (std::isnan(x) && std::isnan(y)) {
+ return true;
+ }
+
+ if (std::isinf(x) && std::isinf(y)) {
+ return (std::signbit(x) == std::signbit(y));
+ }
+
+ Fenv env; // Do not leak floating-point exceptions to the caller
+ double absolute_difference = std::abs(x - y);
+ double relative_difference =
+ absolute_difference / std::max(std::abs(x), std::abs(y));
+
+ if (std::fpclassify(x) == FP_ZERO || std::fpclassify(y) == FP_ZERO) {
+ // If x or y is exactly zero, then relative difference doesn't have any
+ // meaning. Take the absolute difference instead.
+ relative_difference = absolute_difference;
+ }
+ return std::islessequal(relative_difference, max_abs_relative_difference);
+}
+
+MATCHER_P2(IsAlmostEqualToWithTolerance,
+ y,
+ tolerance,
+ "is almost equal to " + testing::PrintToString(y) +
+ " with tolerance " + testing::PrintToString(tolerance)) {
+ const bool result = (AreAlmostEqual(arg.a, y.a, tolerance) &&
+ AreAlmostEqual(arg.v[0], y.v[0], tolerance) &&
+ AreAlmostEqual(arg.v[1], y.v[1], tolerance));
+ if (!result) {
+ *result_listener << "\nexpected - actual : " << y - arg;
+ }
+ return result;
+}
+
+MATCHER_P(IsAlmostEqualTo, y, "") {
+ return ExplainMatchResult(
+ IsAlmostEqualToWithTolerance(y, kTolerance), arg, result_listener);
}
const double kStep = 1e-8;
@@ -77,8 +143,8 @@
const double exact_dx = f(MakeJet(x, 1.0, 0.0)).v[0];
const double estimated_dx =
(f(J(x + kStep)).a - f(J(x - kStep)).a) / (2.0 * kStep);
- VL << name << "(" << x << "), exact dx: " << exact_dx
- << ", estimated dx: " << estimated_dx;
+ VLOG(1) << name << "(" << x << "), exact dx: " << exact_dx
+ << ", estimated dx: " << estimated_dx;
ExpectClose(exact_dx, estimated_dx, kNumericalTolerance);
}
@@ -102,478 +168,211 @@
(f(J(x + kStep), J(y)).a - f(J(x - kStep), J(y)).a) / (2.0 * kStep);
const double estimated_dy =
(f(J(x), J(y + kStep)).a - f(J(x), J(y - kStep)).a) / (2.0 * kStep);
- VL << name << "(" << x << ", " << y << "), exact dx: " << exact_dx
- << ", estimated dx: " << estimated_dx;
+ VLOG(1) << name << "(" << x << ", " << y << "), exact dx: " << exact_dx
+ << ", estimated dx: " << estimated_dx;
ExpectClose(exact_dx, estimated_dx, kNumericalTolerance);
- VL << name << "(" << x << ", " << y << "), exact dy: " << exact_dy
- << ", estimated dy: " << estimated_dy;
+ VLOG(1) << name << "(" << x << ", " << y << "), exact dy: " << exact_dy
+ << ", estimated dy: " << estimated_dy;
ExpectClose(exact_dy, estimated_dy, kNumericalTolerance);
}
} // namespace
-TEST(Jet, Jet) {
- // Pick arbitrary values for x and y.
- J x = MakeJet(2.3, -2.7, 1e-3);
- J y = MakeJet(1.7, 0.5, 1e+2);
+// Pick arbitrary values for x and y.
+const J x = MakeJet(2.3, -2.7, 1e-3);
+const J y = MakeJet(1.7, 0.5, 1e+2);
+const J z = MakeJet(1e-6, 1e-4, 1e-2);
- VL << "x = " << x;
- VL << "y = " << y;
-
- { // Check that log(exp(x)) == x.
- J z = exp(x);
- J w = log(z);
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(w, x);
- }
-
- { // Check that (x * y) / x == y.
- J z = x * y;
- J w = z / x;
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(w, y);
- }
-
- { // Check that sqrt(x * x) == x.
- J z = x * x;
- J w = sqrt(z);
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(w, x);
- }
-
- { // Check that sqrt(y) * sqrt(y) == y.
- J z = sqrt(y);
- J w = z * z;
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(w, y);
- }
+TEST(Jet, Elementary) {
+ EXPECT_THAT((x * y) / x, IsAlmostEqualTo(y));
+ EXPECT_THAT(sqrt(x * x), IsAlmostEqualTo(x));
+ EXPECT_THAT(sqrt(y) * sqrt(y), IsAlmostEqualTo(y));
NumericalTest("sqrt", sqrt<double, 2>, 0.00001);
NumericalTest("sqrt", sqrt<double, 2>, 1.0);
- { // Check that cos(2*x) = cos(x)^2 - sin(x)^2
- J z = cos(J(2.0) * x);
- J w = cos(x) * cos(x) - sin(x) * sin(x);
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(w, z);
- }
-
- { // Check that sin(2*x) = 2*cos(x)*sin(x)
- J z = sin(J(2.0) * x);
- J w = J(2.0) * cos(x) * sin(x);
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(w, z);
- }
-
- { // Check that cos(x)*cos(x) + sin(x)*sin(x) = 1
- J z = cos(x) * cos(x);
- J w = sin(x) * sin(x);
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(z + w, J(1.0));
- }
-
- { // Check that atan2(r*sin(t), r*cos(t)) = t.
- J t = MakeJet(0.7, -0.3, +1.5);
- J r = MakeJet(2.3, 0.13, -2.4);
- VL << "t = " << t;
- VL << "r = " << r;
-
- J u = atan2(r * sin(t), r * cos(t));
- VL << "u = " << u;
-
- ExpectJetsClose(u, t);
- }
-
- { // Check that tan(x) = sin(x) / cos(x).
- J z = tan(x);
- J w = sin(x) / cos(x);
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(z, w);
- }
-
- { // Check that tan(atan(x)) = x.
- J z = tan(atan(x));
- J w = x;
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(z, w);
- }
-
- { // Check that cosh(x)*cosh(x) - sinh(x)*sinh(x) = 1
- J z = cosh(x) * cosh(x);
- J w = sinh(x) * sinh(x);
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(z - w, J(1.0));
- }
-
- { // Check that tanh(x + y) = (tanh(x) + tanh(y)) / (1 + tanh(x) tanh(y))
- J z = tanh(x + y);
- J w = (tanh(x) + tanh(y)) / (J(1.0) + tanh(x) * tanh(y));
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(z, w);
- }
-
- { // Check that pow(x, 1) == x.
- VL << "x = " << x;
-
- J u = pow(x, 1.);
- VL << "u = " << u;
-
- ExpectJetsClose(x, u);
- }
-
- { // Check that pow(x, 1) == x.
- J y = MakeJet(1, 0.0, 0.0);
- VL << "x = " << x;
- VL << "y = " << y;
-
- J u = pow(x, y);
- VL << "u = " << u;
-
- ExpectJetsClose(x, u);
- }
-
- { // Check that pow(e, log(x)) == x.
- J logx = log(x);
-
- VL << "x = " << x;
- VL << "y = " << y;
-
- J u = pow(kE, logx);
- VL << "u = " << u;
-
- ExpectJetsClose(x, u);
- }
-
- { // Check that pow(e, log(x)) == x.
- J logx = log(x);
- J e = MakeJet(kE, 0., 0.);
- VL << "x = " << x;
- VL << "log(x) = " << logx;
-
- J u = pow(e, logx);
- VL << "u = " << u;
-
- ExpectJetsClose(x, u);
- }
-
- { // Check that pow(e, log(x)) == x.
- J logx = log(x);
- J e = MakeJet(kE, 0., 0.);
- VL << "x = " << x;
- VL << "logx = " << logx;
-
- J u = pow(e, logx);
- VL << "u = " << u;
-
- ExpectJetsClose(x, u);
- }
-
- { // Check that pow(x,y) = exp(y*log(x)).
- J logx = log(x);
- J e = MakeJet(kE, 0., 0.);
- VL << "x = " << x;
- VL << "logx = " << logx;
-
- J u = pow(e, y * logx);
- J v = pow(x, y);
- VL << "u = " << u;
- VL << "v = " << v;
-
- ExpectJetsClose(v, u);
- }
-
- { // Check that pow(0, y) == 0 for y > 1, with both arguments Jets.
- // This tests special case handling inside pow().
- J a = MakeJet(0, 1, 2);
- J b = MakeJet(2, 3, 4);
- VL << "a = " << a;
- VL << "b = " << b;
-
- J c = pow(a, b);
- VL << "a^b = " << c;
- ExpectJetsClose(c, MakeJet(0, 0, 0));
- }
-
- { // Check that pow(0, y) == 0 for y == 1, with both arguments Jets.
- // This tests special case handling inside pow().
- J a = MakeJet(0, 1, 2);
- J b = MakeJet(1, 3, 4);
- VL << "a = " << a;
- VL << "b = " << b;
-
- J c = pow(a, b);
- VL << "a^b = " << c;
- ExpectJetsClose(c, MakeJet(0, 1, 2));
- }
-
- { // Check that pow(0, <1) is not finite, with both arguments Jets.
- for (int i = 1; i < 10; i++) {
- J a = MakeJet(0, 1, 2);
- J b = MakeJet(i * 0.1, 3, 4); // b = 0.1 ... 0.9
- VL << "a = " << a;
- VL << "b = " << b;
-
- J c = pow(a, b);
- VL << "a^b = " << c;
- EXPECT_EQ(c.a, 0.0);
- EXPECT_FALSE(IsFinite(c.v[0]));
- EXPECT_FALSE(IsFinite(c.v[1]));
- }
- for (int i = -10; i < 0; i++) {
- J a = MakeJet(0, 1, 2);
- J b = MakeJet(i * 0.1, 3, 4); // b = -1,-0.9 ... -0.1
- VL << "a = " << a;
- VL << "b = " << b;
-
- J c = pow(a, b);
- VL << "a^b = " << c;
- EXPECT_FALSE(IsFinite(c.a));
- EXPECT_FALSE(IsFinite(c.v[0]));
- EXPECT_FALSE(IsFinite(c.v[1]));
- }
-
- {
- // The special case of 0^0 = 1 defined by the C standard.
- J a = MakeJet(0, 1, 2);
- J b = MakeJet(0, 3, 4);
- VL << "a = " << a;
- VL << "b = " << b;
-
- J c = pow(a, b);
- VL << "a^b = " << c;
- EXPECT_EQ(c.a, 1.0);
- EXPECT_FALSE(IsFinite(c.v[0]));
- EXPECT_FALSE(IsFinite(c.v[1]));
- }
- }
-
- { // Check that pow(<0, b) is correct for integer b.
- // This tests special case handling inside pow().
- J a = MakeJet(-1.5, 3, 4);
-
- // b integer:
- for (int i = -10; i <= 10; i++) {
- J b = MakeJet(i, 0, 5);
- VL << "a = " << a;
- VL << "b = " << b;
-
- J c = pow(a, b);
- VL << "a^b = " << c;
- ExpectClose(c.a, pow(-1.5, i), kTolerance);
- EXPECT_TRUE(IsFinite(c.v[0]));
- EXPECT_FALSE(IsFinite(c.v[1]));
- ExpectClose(c.v[0], i * pow(-1.5, i - 1) * 3.0, kTolerance);
- }
- }
-
- { // Check that pow(<0, b) is correct for noninteger b.
- // This tests special case handling inside pow().
- J a = MakeJet(-1.5, 3, 4);
- J b = MakeJet(-2.5, 0, 5);
- VL << "a = " << a;
- VL << "b = " << b;
-
- J c = pow(a, b);
- VL << "a^b = " << c;
- EXPECT_FALSE(IsFinite(c.a));
- EXPECT_FALSE(IsFinite(c.v[0]));
- EXPECT_FALSE(IsFinite(c.v[1]));
- }
-
+ EXPECT_THAT(x + 1.0, IsAlmostEqualTo(1.0 + x));
{
- // Check that pow(0,y) == 0 for y == 2, with the second argument a
- // Jet. This tests special case handling inside pow().
- double a = 0;
- J b = MakeJet(2, 3, 4);
- VL << "a = " << a;
- VL << "b = " << b;
-
- J c = pow(a, b);
- VL << "a^b = " << c;
- ExpectJetsClose(c, MakeJet(0, 0, 0));
- }
-
- {
- // Check that pow(<0,y) is correct for integer y. This tests special case
- // handling inside pow().
- double a = -1.5;
- for (int i = -10; i <= 10; i++) {
- J b = MakeJet(i, 3, 0);
- VL << "a = " << a;
- VL << "b = " << b;
-
- J c = pow(a, b);
- VL << "a^b = " << c;
- ExpectClose(c.a, pow(-1.5, i), kTolerance);
- EXPECT_FALSE(IsFinite(c.v[0]));
- EXPECT_TRUE(IsFinite(c.v[1]));
- ExpectClose(c.v[1], 0, kTolerance);
- }
- }
-
- {
- // Check that pow(<0,y) is correct for noninteger y. This tests special
- // case handling inside pow().
- double a = -1.5;
- J b = MakeJet(-3.14, 3, 0);
- VL << "a = " << a;
- VL << "b = " << b;
-
- J c = pow(a, b);
- VL << "a^b = " << c;
- EXPECT_FALSE(IsFinite(c.a));
- EXPECT_FALSE(IsFinite(c.v[0]));
- EXPECT_FALSE(IsFinite(c.v[1]));
- }
-
- { // Check that 1 + x == x + 1.
- J a = x + 1.0;
- J b = 1.0 + x;
J c = x;
c += 1.0;
-
- ExpectJetsClose(a, b);
- ExpectJetsClose(a, c);
+ EXPECT_THAT(c, IsAlmostEqualTo(1.0 + x));
}
- { // Check that 1 - x == -(x - 1).
- J a = 1.0 - x;
- J b = -(x - 1.0);
+ EXPECT_THAT(-(x - 1.0), IsAlmostEqualTo(1.0 - x));
+ {
J c = x;
c -= 1.0;
-
- ExpectJetsClose(a, b);
- ExpectJetsClose(a, -c);
+ EXPECT_THAT(c, IsAlmostEqualTo(x - 1.0));
}
- { // Check that (x/s)*s == (x*s)/s.
- J a = x / 5.0;
- J b = x * 5.0;
+ EXPECT_THAT((x * 5.0) / 5.0, IsAlmostEqualTo((x / 5.0) * 5.0));
+ EXPECT_THAT((x * 5.0) / 5.0, IsAlmostEqualTo(x));
+ EXPECT_THAT((x / 5.0) * 5.0, IsAlmostEqualTo(x));
+
+ {
J c = x;
c /= 5.0;
J d = x;
d *= 5.0;
-
- ExpectJetsClose(5.0 * a, b / 5.0);
- ExpectJetsClose(a, c);
- ExpectJetsClose(b, d);
+ EXPECT_THAT(c, IsAlmostEqualTo(x / 5.0));
+ EXPECT_THAT(d, IsAlmostEqualTo(5.0 * x));
}
- { // Check that x / y == 1 / (y / x).
- J a = x / y;
- J b = 1.0 / (y / x);
- VL << "a = " << a;
- VL << "b = " << b;
+ EXPECT_THAT(1.0 / (y / x), IsAlmostEqualTo(x / y));
+}
- ExpectJetsClose(a, b);
+TEST(Jet, Trigonometric) {
+ EXPECT_THAT(cos(2.0 * x), IsAlmostEqualTo(cos(x) * cos(x) - sin(x) * sin(x)));
+ EXPECT_THAT(sin(2.0 * x), IsAlmostEqualTo(2.0 * sin(x) * cos(x)));
+ EXPECT_THAT(sin(x) * sin(x) + cos(x) * cos(x), IsAlmostEqualTo(J(1.0)));
+
+ {
+ J t = MakeJet(0.7, -0.3, +1.5);
+ J r = MakeJet(2.3, 0.13, -2.4);
+ EXPECT_THAT(atan2(r * sin(t), r * cos(t)), IsAlmostEqualTo(t));
}
- { // Check that abs(-x * x) == sqrt(x * x).
- ExpectJetsClose(abs(-x), sqrt(x * x));
- }
+ EXPECT_THAT(sin(x) / cos(x), IsAlmostEqualTo(tan(x)));
+ EXPECT_THAT(tan(atan(x)), IsAlmostEqualTo(x));
- { // Check that cos(acos(x)) == x.
+ {
J a = MakeJet(0.1, -2.7, 1e-3);
- ExpectJetsClose(cos(acos(a)), a);
- ExpectJetsClose(acos(cos(a)), a);
+ EXPECT_THAT(cos(acos(a)), IsAlmostEqualTo(a));
+ EXPECT_THAT(acos(cos(a)), IsAlmostEqualTo(a));
J b = MakeJet(0.6, 0.5, 1e+2);
- ExpectJetsClose(cos(acos(b)), b);
- ExpectJetsClose(acos(cos(b)), b);
- }
-
- { // Check that sin(asin(x)) == x.
- J a = MakeJet(0.1, -2.7, 1e-3);
- ExpectJetsClose(sin(asin(a)), a);
- ExpectJetsClose(asin(sin(a)), a);
-
- J b = MakeJet(0.4, 0.5, 1e+2);
- ExpectJetsClose(sin(asin(b)), b);
- ExpectJetsClose(asin(sin(b)), b);
+ EXPECT_THAT(cos(acos(b)), IsAlmostEqualTo(b));
+ EXPECT_THAT(acos(cos(b)), IsAlmostEqualTo(b));
}
{
- J zero = J(0.0);
+ J a = MakeJet(0.1, -2.7, 1e-3);
+ EXPECT_THAT(sin(asin(a)), IsAlmostEqualTo(a));
+ EXPECT_THAT(asin(sin(a)), IsAlmostEqualTo(a));
- // Check that J0(0) == 1.
- ExpectJetsClose(BesselJ0(zero), J(1.0));
-
- // Check that J1(0) == 0.
- ExpectJetsClose(BesselJ1(zero), zero);
-
- // Check that J2(0) == 0.
- ExpectJetsClose(BesselJn(2, zero), zero);
-
- // Check that J3(0) == 0.
- ExpectJetsClose(BesselJn(3, zero), zero);
-
- J z = MakeJet(0.1, -2.7, 1e-3);
-
- // Check that J0(z) == Jn(0,z).
- ExpectJetsClose(BesselJ0(z), BesselJn(0, z));
-
- // Check that J1(z) == Jn(1,z).
- ExpectJetsClose(BesselJ1(z), BesselJn(1, z));
-
- // Check that J0(z)+J2(z) == (2/z)*J1(z).
- // See formula http://dlmf.nist.gov/10.6.E1
- ExpectJetsClose(BesselJ0(z) + BesselJn(2, z), (2.0 / z) * BesselJ1(z));
+ J b = MakeJet(0.4, 0.5, 1e+2);
+ EXPECT_THAT(sin(asin(b)), IsAlmostEqualTo(b));
+ EXPECT_THAT(asin(sin(b)), IsAlmostEqualTo(b));
}
+}
- { // Check that floor of a positive number works.
+TEST(Jet, Hyperbolic) {
+ // cosh(x)*cosh(x) - sinh(x)*sinh(x) = 1
+ EXPECT_THAT(cosh(x) * cosh(x) - sinh(x) * sinh(x), IsAlmostEqualTo(J(1.0)));
+
+ // tanh(x + y) = (tanh(x) + tanh(y)) / (1 + tanh(x) tanh(y))
+ EXPECT_THAT(
+ tanh(x + y),
+ IsAlmostEqualTo((tanh(x) + tanh(y)) / (J(1.0) + tanh(x) * tanh(y))));
+}
+
+TEST(Jet, Abs) {
+ EXPECT_THAT(abs(-x * x), IsAlmostEqualTo(x * x));
+ EXPECT_THAT(abs(-x), IsAlmostEqualTo(sqrt(x * x)));
+
+ {
+ J a = MakeJet(-std::numeric_limits<double>::quiet_NaN(), 2.0, 4.0);
+ J b = abs(a);
+ EXPECT_TRUE(std::signbit(b.v[0]));
+ EXPECT_TRUE(std::signbit(b.v[1]));
+ }
+}
+
+#if defined(CERES_HAS_POSIX_BESSEL_FUNCTIONS) || \
+ defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS)
+TEST(Jet, Bessel) {
+ J zero = J(0.0);
+ J z = MakeJet(0.1, -2.7, 1e-3);
+
+#ifdef CERES_HAS_POSIX_BESSEL_FUNCTIONS
+ EXPECT_THAT(BesselJ0(zero), IsAlmostEqualTo(J(1.0)));
+ EXPECT_THAT(BesselJ1(zero), IsAlmostEqualTo(zero));
+ EXPECT_THAT(BesselJn(2, zero), IsAlmostEqualTo(zero));
+ EXPECT_THAT(BesselJn(3, zero), IsAlmostEqualTo(zero));
+
+ EXPECT_THAT(BesselJ0(z), IsAlmostEqualTo(BesselJn(0, z)));
+ EXPECT_THAT(BesselJ1(z), IsAlmostEqualTo(BesselJn(1, z)));
+
+ // See formula http://dlmf.nist.gov/10.6.E1
+ EXPECT_THAT(BesselJ0(z) + BesselJn(2, z),
+ IsAlmostEqualTo((2.0 / z) * BesselJ1(z)));
+#endif // CERES_HAS_POSIX_BESSEL_FUNCTIONS
+
+#ifdef CERES_HAS_CPP17_BESSEL_FUNCTIONS
+ EXPECT_THAT(cyl_bessel_j(0, zero), IsAlmostEqualTo(J(1.0)));
+ EXPECT_THAT(cyl_bessel_j(1, zero), IsAlmostEqualTo(zero));
+ EXPECT_THAT(cyl_bessel_j(2, zero), IsAlmostEqualTo(zero));
+ EXPECT_THAT(cyl_bessel_j(3, zero), IsAlmostEqualTo(zero));
+
+ EXPECT_THAT(cyl_bessel_j(0, z), IsAlmostEqualTo(BesselJn(0, z)));
+ EXPECT_THAT(cyl_bessel_j(1, z), IsAlmostEqualTo(BesselJn(1, z)));
+
+ // MSVC Bessel functions and their derivatives produce errors slightly above
+ // kTolerance. Provide an alternative variant with a relaxed threshold.
+ constexpr double kRelaxedTolerance = 10 * kTolerance;
+
+ // See formula http://dlmf.nist.gov/10.6.E1
+ EXPECT_THAT(cyl_bessel_j(0, z) + cyl_bessel_j(2, z),
+ IsAlmostEqualToWithTolerance((2.0 / z) * cyl_bessel_j(1, z),
+ kRelaxedTolerance));
+
+ // MSVC does not throw an exception on invalid first argument
+#ifndef _MSC_VER
+ EXPECT_THROW(cyl_bessel_j(-1, zero), std::domain_error);
+#endif // defined(_MSC_VER)
+#endif // defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS)
+}
+#endif // defined(CERES_HAS_POSIX_BESSEL_FUNCTIONS) ||
+ // defined(CERES_HAS_CPP17_BESSEL_FUNCTIONS)
+
+TEST(Jet, Floor) {
+ { // floor of a positive number works.
J a = MakeJet(0.1, -2.7, 1e-3);
J b = floor(a);
J expected = MakeJet(floor(a.a), 0.0, 0.0);
EXPECT_EQ(expected, b);
}
- { // Check that floor of a negative number works.
+ { // floor of a negative number works.
J a = MakeJet(-1.1, -2.7, 1e-3);
J b = floor(a);
J expected = MakeJet(floor(a.a), 0.0, 0.0);
EXPECT_EQ(expected, b);
}
- { // Check that floor of a positive number works.
+ { // floor of a positive number works.
J a = MakeJet(10.123, -2.7, 1e-3);
J b = floor(a);
J expected = MakeJet(floor(a.a), 0.0, 0.0);
EXPECT_EQ(expected, b);
}
+}
- { // Check that ceil of a positive number works.
+TEST(Jet, Ceil) {
+ { // ceil of a positive number works.
J a = MakeJet(0.1, -2.7, 1e-3);
J b = ceil(a);
J expected = MakeJet(ceil(a.a), 0.0, 0.0);
EXPECT_EQ(expected, b);
}
- { // Check that ceil of a negative number works.
+ { // ceil of a negative number works.
J a = MakeJet(-1.1, -2.7, 1e-3);
J b = ceil(a);
J expected = MakeJet(ceil(a.a), 0.0, 0.0);
EXPECT_EQ(expected, b);
}
- { // Check that ceil of a positive number works.
+ { // ceil of a positive number works.
J a = MakeJet(10.123, -2.7, 1e-3);
J b = ceil(a);
J expected = MakeJet(ceil(a.a), 0.0, 0.0);
EXPECT_EQ(expected, b);
}
+}
- { // Check that erf works.
+TEST(Jet, Erf) {
+ { // erf works.
J a = MakeJet(10.123, -2.7, 1e-3);
J b = erf(a);
J expected = MakeJet(erf(a.a), 0.0, 0.0);
@@ -583,8 +382,10 @@
NumericalTest("erf", erf<double, 2>, 1e-5);
NumericalTest("erf", erf<double, 2>, 0.5);
NumericalTest("erf", erf<double, 2>, 100.0);
+}
- { // Check that erfc works.
+TEST(Jet, Erfc) {
+ { // erfc works.
J a = MakeJet(10.123, -2.7, 1e-3);
J b = erfc(a);
J expected = MakeJet(erfc(a.a), 0.0, 0.0);
@@ -594,42 +395,48 @@
NumericalTest("erfc", erfc<double, 2>, 1e-5);
NumericalTest("erfc", erfc<double, 2>, 0.5);
NumericalTest("erfc", erfc<double, 2>, 100.0);
+}
- { // Check that cbrt(x * x * x) == x.
- J z = x * x * x;
- J w = cbrt(z);
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(w, x);
- }
+TEST(Jet, Cbrt) {
+ EXPECT_THAT(cbrt(x * x * x), IsAlmostEqualTo(x));
+ EXPECT_THAT(cbrt(y) * cbrt(y) * cbrt(y), IsAlmostEqualTo(y));
+ EXPECT_THAT(cbrt(x), IsAlmostEqualTo(pow(x, 1.0 / 3.0)));
- { // Check that cbrt(y) * cbrt(y) * cbrt(y) == y.
- J z = cbrt(y);
- J w = z * z * z;
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(w, y);
- }
-
- { // Check that cbrt(x) == pow(x, 1/3).
- J z = cbrt(x);
- J w = pow(x, 1.0 / 3.0);
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(z, w);
- }
NumericalTest("cbrt", cbrt<double, 2>, -1.0);
NumericalTest("cbrt", cbrt<double, 2>, -1e-5);
NumericalTest("cbrt", cbrt<double, 2>, 1e-5);
NumericalTest("cbrt", cbrt<double, 2>, 1.0);
+}
- { // Check that exp2(x) == exp(x * log(2))
- J z = exp2(x);
- J w = exp(x * log(2.0));
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(z, w);
+TEST(Jet, Log1p) {
+ EXPECT_THAT(log1p(expm1(x)), IsAlmostEqualTo(x));
+ EXPECT_THAT(log1p(x), IsAlmostEqualTo(log(J{1} + x)));
+
+  { // log1p(x) does not lose precision for small x
+ J x = MakeJet(1e-16, 1e-8, 1e-4);
+ EXPECT_THAT(log1p(x),
+ IsAlmostEqualTo(MakeJet(9.9999999999999998e-17, 1e-8, 1e-4)));
+ // log(1 + x) collapses to 0
+ J v = log(J{1} + x);
+ EXPECT_TRUE(v.a == 0);
}
+}
+
+TEST(Jet, Expm1) {
+ EXPECT_THAT(expm1(log1p(x)), IsAlmostEqualTo(x));
+ EXPECT_THAT(expm1(x), IsAlmostEqualTo(exp(x) - 1.0));
+
+  { // expm1(x) does not lose precision for small x
+ J x = MakeJet(9.9999999999999998e-17, 1e-8, 1e-4);
+ EXPECT_THAT(expm1(x), IsAlmostEqualTo(MakeJet(1e-16, 1e-8, 1e-4)));
+ // exp(x) - 1 collapses to 0
+ J v = exp(x) - J{1};
+ EXPECT_TRUE(v.a == 0);
+ }
+}
+
+TEST(Jet, Exp2) {
+ EXPECT_THAT(exp2(x), IsAlmostEqualTo(exp(x * log(2.0))));
NumericalTest("exp2", exp2<double, 2>, -1.0);
NumericalTest("exp2", exp2<double, 2>, -1e-5);
NumericalTest("exp2", exp2<double, 2>, -1e-200);
@@ -637,92 +444,514 @@
NumericalTest("exp2", exp2<double, 2>, 1e-200);
NumericalTest("exp2", exp2<double, 2>, 1e-5);
NumericalTest("exp2", exp2<double, 2>, 1.0);
+}
- { // Check that log2(x) == log(x) / log(2)
- J z = log2(x);
- J w = log(x) / log(2.0);
- VL << "z = " << z;
- VL << "w = " << w;
- ExpectJetsClose(z, w);
- }
+TEST(Jet, Log) { EXPECT_THAT(log(exp(x)), IsAlmostEqualTo(x)); }
+
+TEST(Jet, Log10) {
+ EXPECT_THAT(log10(x), IsAlmostEqualTo(log(x) / log(10)));
+ NumericalTest("log10", log10<double, 2>, 1e-5);
+ NumericalTest("log10", log10<double, 2>, 1.0);
+ NumericalTest("log10", log10<double, 2>, 98.76);
+}
+
+TEST(Jet, Log2) {
+ EXPECT_THAT(log2(x), IsAlmostEqualTo(log(x) / log(2)));
NumericalTest("log2", log2<double, 2>, 1e-5);
NumericalTest("log2", log2<double, 2>, 1.0);
NumericalTest("log2", log2<double, 2>, 100.0);
+}
- { // Check that hypot(x, y) == sqrt(x^2 + y^2)
- J h = hypot(x, y);
- J s = sqrt(x * x + y * y);
- VL << "h = " << h;
- VL << "s = " << s;
- ExpectJetsClose(h, s);
+TEST(Jet, Norm) {
+ EXPECT_THAT(norm(x), IsAlmostEqualTo(x * x));
+ EXPECT_THAT(norm(-x), IsAlmostEqualTo(x * x));
+}
+
+TEST(Jet, Pow) {
+ EXPECT_THAT(pow(x, 1.0), IsAlmostEqualTo(x));
+ EXPECT_THAT(pow(x, MakeJet(1.0, 0.0, 0.0)), IsAlmostEqualTo(x));
+ EXPECT_THAT(pow(kE, log(x)), IsAlmostEqualTo(x));
+ EXPECT_THAT(pow(MakeJet(kE, 0., 0.), log(x)), IsAlmostEqualTo(x));
+ EXPECT_THAT(pow(x, y),
+ IsAlmostEqualTo(pow(MakeJet(kE, 0.0, 0.0), y * log(x))));
+
+  // Special cases
+
+ // pow(0, y) == 0 for y > 1, with both arguments Jets.
+ EXPECT_THAT(pow(MakeJet(0, 1, 2), MakeJet(2, 3, 4)),
+ IsAlmostEqualTo(MakeJet(0, 0, 0)));
+
+ // pow(0, y) == 0 for y == 1, with both arguments Jets.
+ EXPECT_THAT(pow(MakeJet(0, 1, 2), MakeJet(1, 3, 4)),
+ IsAlmostEqualTo(MakeJet(0, 1, 2)));
+
+ // pow(0, <1) is not finite, with both arguments Jets.
+ {
+ for (int i = 1; i < 10; i++) {
+ J a = MakeJet(0, 1, 2);
+ J b = MakeJet(i * 0.1, 3, 4); // b = 0.1 ... 0.9
+ J c = pow(a, b);
+ EXPECT_EQ(c.a, 0.0) << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[0]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[1]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ }
+
+ for (int i = -10; i < 0; i++) {
+ J a = MakeJet(0, 1, 2);
+ J b = MakeJet(i * 0.1, 3, 4); // b = -1,-0.9 ... -0.1
+ J c = pow(a, b);
+ EXPECT_FALSE(isfinite(c.a))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[0]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[1]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ }
+
+ // The special case of 0^0 = 1 defined by the C standard.
+ {
+ J a = MakeJet(0, 1, 2);
+ J b = MakeJet(0, 3, 4);
+ J c = pow(a, b);
+ EXPECT_EQ(c.a, 1.0) << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[0]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[1]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ }
}
- { // Check that hypot(x, x) == sqrt(2) * abs(x)
- J h = hypot(x, x);
- J s = sqrt(2.0) * abs(x);
- VL << "h = " << h;
- VL << "s = " << s;
- ExpectJetsClose(h, s);
+ // pow(<0, b) is correct for integer b.
+ {
+ J a = MakeJet(-1.5, 3, 4);
+
+ // b integer:
+ for (int i = -10; i <= 10; i++) {
+ J b = MakeJet(i, 0, 5);
+ J c = pow(a, b);
+
+ EXPECT_TRUE(AreAlmostEqual(c.a, pow(-1.5, i), kTolerance))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_TRUE(isfinite(c.v[0]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[1]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_TRUE(
+ AreAlmostEqual(c.v[0], i * pow(-1.5, i - 1) * 3.0, kTolerance))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ }
}
- { // Check that the derivative is zero tangentially to the circle:
- J h = hypot(MakeJet(2.0, 1.0, 1.0), MakeJet(2.0, 1.0, -1.0));
- VL << "h = " << h;
- ExpectJetsClose(h, MakeJet(sqrt(8.0), std::sqrt(2.0), 0.0));
+ // pow(<0, b) is correct for noninteger b.
+ {
+ J a = MakeJet(-1.5, 3, 4);
+ J b = MakeJet(-2.5, 0, 5);
+ J c = pow(a, b);
+ EXPECT_FALSE(isfinite(c.a))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[0]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[1]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
}
- { // Check that hypot(x, 0) == x
- J zero = MakeJet(0.0, 2.0, 3.14);
- J h = hypot(x, zero);
- VL << "h = " << h;
- ExpectJetsClose(x, h);
+ // pow(0,y) == 0 for y == 2, with the second argument a Jet.
+ EXPECT_THAT(pow(0.0, MakeJet(2, 3, 4)), IsAlmostEqualTo(MakeJet(0, 0, 0)));
+
+ // pow(<0,y) is correct for integer y.
+ {
+ double a = -1.5;
+ for (int i = -10; i <= 10; i++) {
+ J b = MakeJet(i, 3, 0);
+ J c = pow(a, b);
+ ExpectClose(c.a, pow(-1.5, i), kTolerance);
+ EXPECT_FALSE(isfinite(c.v[0]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_TRUE(isfinite(c.v[1]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ ExpectClose(c.v[1], 0, kTolerance);
+ }
}
- { // Check that hypot(0, y) == y
- J zero = MakeJet(0.0, 2.0, 3.14);
- J h = hypot(zero, y);
- VL << "h = " << h;
- ExpectJetsClose(y, h);
+ // pow(<0,y) is correct for noninteger y.
+ {
+ double a = -1.5;
+ J b = MakeJet(-3.14, 3, 0);
+ J c = pow(a, b);
+ EXPECT_FALSE(isfinite(c.a))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[0]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
+ EXPECT_FALSE(isfinite(c.v[1]))
+ << "\na: " << a << "\nb: " << b << "\na^b: " << c;
}
+}
- { // Check that hypot(x, 0) == sqrt(x * x) == x, even when x * x underflows:
- EXPECT_EQ(DBL_MIN * DBL_MIN, 0.0); // Make sure it underflows
- J huge = MakeJet(DBL_MIN, 2.0, 3.14);
- J h = hypot(huge, J(0.0));
- VL << "h = " << h;
- ExpectJetsClose(h, huge);
- }
-
- { // Check that hypot(x, 0) == sqrt(x * x) == x, even when x * x overflows:
- EXPECT_EQ(DBL_MAX * DBL_MAX, std::numeric_limits<double>::infinity());
- J huge = MakeJet(DBL_MAX, 2.0, 3.14);
- J h = hypot(huge, J(0.0));
- VL << "h = " << h;
- ExpectJetsClose(h, huge);
- }
+TEST(Jet, Hypot2) {
+ // Resolve the ambiguity between two and three argument hypot overloads
+ using Hypot2 = J(const J&, const J&);
+ auto* const hypot2 = static_cast<Hypot2*>(&hypot<double, 2>);
// clang-format off
- NumericalTest2("hypot", hypot<double, 2>, 0.0, 1e-5);
- NumericalTest2("hypot", hypot<double, 2>, -1e-5, 0.0);
- NumericalTest2("hypot", hypot<double, 2>, 1e-5, 1e-5);
- NumericalTest2("hypot", hypot<double, 2>, 0.0, 1.0);
- NumericalTest2("hypot", hypot<double, 2>, 1e-3, 1.0);
- NumericalTest2("hypot", hypot<double, 2>, 1e-3, -1.0);
- NumericalTest2("hypot", hypot<double, 2>, -1e-3, 1.0);
- NumericalTest2("hypot", hypot<double, 2>, -1e-3, -1.0);
- NumericalTest2("hypot", hypot<double, 2>, 1.0, 2.0);
+ NumericalTest2("hypot2", hypot2, 0.0, 1e-5);
+ NumericalTest2("hypot2", hypot2, -1e-5, 0.0);
+ NumericalTest2("hypot2", hypot2, 1e-5, 1e-5);
+ NumericalTest2("hypot2", hypot2, 0.0, 1.0);
+ NumericalTest2("hypot2", hypot2, 1e-3, 1.0);
+ NumericalTest2("hypot2", hypot2, 1e-3, -1.0);
+ NumericalTest2("hypot2", hypot2, -1e-3, 1.0);
+ NumericalTest2("hypot2", hypot2, -1e-3, -1.0);
+ NumericalTest2("hypot2", hypot2, 1.0, 2.0);
// clang-format on
+ J zero = MakeJet(0.0, 2.0, 3.14);
+ EXPECT_THAT(hypot(x, y), IsAlmostEqualTo(sqrt(x * x + y * y)));
+ EXPECT_THAT(hypot(x, x), IsAlmostEqualTo(sqrt(2.0) * abs(x)));
+
+ // The derivative is zero tangentially to the circle:
+ EXPECT_THAT(hypot(MakeJet(2.0, 1.0, 1.0), MakeJet(2.0, 1.0, -1.0)),
+ IsAlmostEqualTo(MakeJet(sqrt(8.0), std::sqrt(2.0), 0.0)));
+
+ EXPECT_THAT(hypot(zero, x), IsAlmostEqualTo(x));
+ EXPECT_THAT(hypot(y, zero), IsAlmostEqualTo(y));
+
+ // hypot(x, 0, 0) == x, even when x * x underflows:
+ EXPECT_EQ(
+ std::numeric_limits<double>::min() * std::numeric_limits<double>::min(),
+ 0.0); // Make sure it underflows
+ J tiny = MakeJet(std::numeric_limits<double>::min(), 2.0, 3.14);
+ EXPECT_THAT(hypot(tiny, J{0}), IsAlmostEqualTo(tiny));
+
+ // hypot(x, 0, 0) == x, even when x * x overflows:
+ EXPECT_EQ(
+ std::numeric_limits<double>::max() * std::numeric_limits<double>::max(),
+ std::numeric_limits<double>::infinity());
+ J huge = MakeJet(std::numeric_limits<double>::max(), 2.0, 3.14);
+ EXPECT_THAT(hypot(huge, J{0}), IsAlmostEqualTo(huge));
+}
+
+TEST(Jet, Hypot3) {
+ J zero = MakeJet(0.0, 2.0, 3.14);
+
+ // hypot(x, y, z) == sqrt(x^2 + y^2 + z^2)
+ EXPECT_THAT(hypot(x, y, z), IsAlmostEqualTo(sqrt(x * x + y * y + z * z)));
+
+ // hypot(x, x) == sqrt(3) * abs(x)
+ EXPECT_THAT(hypot(x, x, x), IsAlmostEqualTo(sqrt(3.0) * abs(x)));
+
+ // The derivative is zero tangentially to the circle:
+ EXPECT_THAT(hypot(MakeJet(2.0, 1.0, 1.0),
+ MakeJet(2.0, 1.0, -1.0),
+ MakeJet(2.0, -1.0, 0.0)),
+ IsAlmostEqualTo(MakeJet(sqrt(12.0), 1.0 / std::sqrt(3.0), 0.0)));
+
+ EXPECT_THAT(hypot(x, zero, zero), IsAlmostEqualTo(x));
+ EXPECT_THAT(hypot(zero, y, zero), IsAlmostEqualTo(y));
+ EXPECT_THAT(hypot(zero, zero, z), IsAlmostEqualTo(z));
+ EXPECT_THAT(hypot(x, y, z), IsAlmostEqualTo(hypot(hypot(x, y), z)));
+ EXPECT_THAT(hypot(x, y, z), IsAlmostEqualTo(hypot(x, hypot(y, z))));
+
+ // The following two tests are disabled because the three argument hypot is
+ // broken in the libc++ shipped with CLANG as of January 2022.
+
+#if !defined(_LIBCPP_VERSION)
+ // hypot(x, 0, 0) == x, even when x * x underflows:
+ EXPECT_EQ(
+ std::numeric_limits<double>::min() * std::numeric_limits<double>::min(),
+ 0.0); // Make sure it underflows
+ J tiny = MakeJet(std::numeric_limits<double>::min(), 2.0, 3.14);
+ EXPECT_THAT(hypot(tiny, J{0}, J{0}), IsAlmostEqualTo(tiny));
+
+ // hypot(x, 0, 0) == x, even when x * x overflows:
+ EXPECT_EQ(
+ std::numeric_limits<double>::max() * std::numeric_limits<double>::max(),
+ std::numeric_limits<double>::infinity());
+ J huge = MakeJet(std::numeric_limits<double>::max(), 2.0, 3.14);
+ EXPECT_THAT(hypot(huge, J{0}, J{0}), IsAlmostEqualTo(huge));
+#endif
+}
+
+#ifdef CERES_HAS_CPP20
+
+TEST(Jet, Lerp) {
+ EXPECT_THAT(lerp(x, y, J{0}), IsAlmostEqualTo(x));
+ EXPECT_THAT(lerp(x, y, J{1}), IsAlmostEqualTo(y));
+ EXPECT_THAT(lerp(x, x, J{1}), IsAlmostEqualTo(x));
+ EXPECT_THAT(lerp(y, y, J{0}), IsAlmostEqualTo(y));
+ EXPECT_THAT(lerp(x, y, J{0.5}), IsAlmostEqualTo((x + y) / J{2.0}));
+ EXPECT_THAT(lerp(x, y, J{2}), IsAlmostEqualTo(J{2.0} * y - x));
+ EXPECT_THAT(lerp(x, y, J{-2}), IsAlmostEqualTo(J{3.0} * x - J{2} * y));
+}
+
+TEST(Jet, Midpoint) {
+ EXPECT_THAT(midpoint(x, y), IsAlmostEqualTo((x + y) / J{2}));
+ EXPECT_THAT(midpoint(x, x), IsAlmostEqualTo(x));
+
{
- J z = fmax(x, y);
- VL << "z = " << z;
- ExpectJetsClose(x, z);
+ // midpoint(x, y) = (x + y) / 2 while avoiding overflow
+ J x = MakeJet(std::numeric_limits<double>::min(), 1, 2);
+ J y = MakeJet(std::numeric_limits<double>::max(), 3, 4);
+ EXPECT_THAT(midpoint(x, y), IsAlmostEqualTo(x + (y - x) / J{2}));
}
{
- J z = fmin(x, y);
- VL << "z = " << z;
- ExpectJetsClose(y, z);
+ // midpoint(x, x) = x while avoiding overflow
+ J x = MakeJet(std::numeric_limits<double>::max(),
+ std::numeric_limits<double>::max(),
+ std::numeric_limits<double>::max());
+ EXPECT_THAT(midpoint(x, x), IsAlmostEqualTo(x));
+ }
+
+ { // midpoint does not overflow for very large values
+ constexpr double a = 0.75 * std::numeric_limits<double>::max();
+ J x = MakeJet(a, a, -a);
+ J y = MakeJet(a, a, a);
+ EXPECT_THAT(midpoint(x, y), IsAlmostEqualTo(MakeJet(a, a, 0)));
+ }
+}
+
+#endif // defined(CERES_HAS_CPP20)
+
+TEST(Jet, Fma) {
+ J v = fma(x, y, z);
+ J w = x * y + z;
+ EXPECT_THAT(v, IsAlmostEqualTo(w));
+}
+
+TEST(Jet, FmaxJetWithJet) {
+ Fenv env;
+ // Clear all exceptions to ensure none are set by the following function
+ // calls.
+ std::feclearexcept(FE_ALL_EXCEPT);
+
+ EXPECT_THAT(fmax(x, y), IsAlmostEqualTo(x));
+ EXPECT_THAT(fmax(y, x), IsAlmostEqualTo(x));
+
+ // Average the Jets on equality (of scalar parts).
+ const J scalar_part_only_equal_to_x = J(x.a, 2 * x.v);
+ const J average = (x + scalar_part_only_equal_to_x) * 0.5;
+ EXPECT_THAT(fmax(x, scalar_part_only_equal_to_x), IsAlmostEqualTo(average));
+ EXPECT_THAT(fmax(scalar_part_only_equal_to_x, x), IsAlmostEqualTo(average));
+
+ // Follow convention of fmax(): treat NANs as missing values.
+ const J nan_scalar_part(std::numeric_limits<double>::quiet_NaN(), 2 * x.v);
+ EXPECT_THAT(fmax(x, nan_scalar_part), IsAlmostEqualTo(x));
+ EXPECT_THAT(fmax(nan_scalar_part, x), IsAlmostEqualTo(x));
+
+#ifndef CERES_NO_FENV_ACCESS
+ EXPECT_EQ(std::fetestexcept(FE_ALL_EXCEPT & ~FE_INEXACT), 0);
+#endif
+}
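The behaviour exercised above (pass-through for a strictly larger scalar part, averaging of the two Jets on a tie, NaN treated as a missing value as std::fmax does) could be realized by a helper along these lines; fmax_sketch is a hypothetical name and only a sketch, not the actual ceres::fmax:

    template <typename T, int N>
    Jet<T, N> fmax_sketch(const Jet<T, N>& f, const Jet<T, N>& g) {
      if (isnan(g) || f.a > g.a) return f;  // g is missing, or f is strictly larger
      if (isnan(f) || g.a > f.a) return g;  // f is missing, or g is strictly larger
      return (f + g) * T(0.5);              // equal scalar parts: average the Jets
    }

Averaging on ties gives a symmetric choice of (sub)derivative at the point where max is not differentiable.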
+
+TEST(Jet, FmaxJetWithScalar) {
+ Fenv env;
+ // Clear all exceptions to ensure none are set by the following function
+ // calls.
+ std::feclearexcept(FE_ALL_EXCEPT);
+
+ EXPECT_THAT(fmax(x, y.a), IsAlmostEqualTo(x));
+ EXPECT_THAT(fmax(y.a, x), IsAlmostEqualTo(x));
+ EXPECT_THAT(fmax(y, x.a), IsAlmostEqualTo(J{x.a}));
+ EXPECT_THAT(fmax(x.a, y), IsAlmostEqualTo(J{x.a}));
+
+ // Average the Jet and scalar cast to a Jet on equality (of scalar parts).
+ const J average = (x + J{x.a}) * 0.5;
+ EXPECT_THAT(fmax(x, x.a), IsAlmostEqualTo(average));
+ EXPECT_THAT(fmax(x.a, x), IsAlmostEqualTo(average));
+
+ // Follow convention of fmax(): treat NANs as missing values.
+ EXPECT_THAT(fmax(x, std::numeric_limits<double>::quiet_NaN()),
+ IsAlmostEqualTo(x));
+ EXPECT_THAT(fmax(std::numeric_limits<double>::quiet_NaN(), x),
+ IsAlmostEqualTo(x));
+ const J nan_scalar_part(std::numeric_limits<double>::quiet_NaN(), 2 * x.v);
+ EXPECT_THAT(fmax(nan_scalar_part, x.a), IsAlmostEqualTo(J{x.a}));
+ EXPECT_THAT(fmax(x.a, nan_scalar_part), IsAlmostEqualTo(J{x.a}));
+
+#ifndef CERES_NO_FENV_ACCESS
+ EXPECT_EQ(std::fetestexcept(FE_ALL_EXCEPT & ~FE_INEXACT), 0);
+#endif
+}
+
+TEST(Jet, FminJetWithJet) {
+ Fenv env;
+ // Clear all exceptions to ensure none are set by the following function
+ // calls.
+ std::feclearexcept(FE_ALL_EXCEPT);
+
+ EXPECT_THAT(fmin(x, y), IsAlmostEqualTo(y));
+ EXPECT_THAT(fmin(y, x), IsAlmostEqualTo(y));
+
+ // Average the Jets on equality (of scalar parts).
+ const J scalar_part_only_equal_to_x = J(x.a, 2 * x.v);
+ const J average = (x + scalar_part_only_equal_to_x) * 0.5;
+ EXPECT_THAT(fmin(x, scalar_part_only_equal_to_x), IsAlmostEqualTo(average));
+ EXPECT_THAT(fmin(scalar_part_only_equal_to_x, x), IsAlmostEqualTo(average));
+
+ // Follow convention of fmin(): treat NANs as missing values.
+ const J nan_scalar_part(std::numeric_limits<double>::quiet_NaN(), 2 * x.v);
+ EXPECT_THAT(fmin(x, nan_scalar_part), IsAlmostEqualTo(x));
+ EXPECT_THAT(fmin(nan_scalar_part, x), IsAlmostEqualTo(x));
+
+#ifndef CERES_NO_FENV_ACCESS
+ EXPECT_EQ(std::fetestexcept(FE_ALL_EXCEPT & ~FE_INEXACT), 0);
+#endif
+}
+
+TEST(Jet, FminJetWithScalar) {
+ Fenv env;
+ // Clear all exceptions to ensure none are set by the following function
+ // calls.
+ std::feclearexcept(FE_ALL_EXCEPT);
+
+ EXPECT_THAT(fmin(x, y.a), IsAlmostEqualTo(J{y.a}));
+ EXPECT_THAT(fmin(y.a, x), IsAlmostEqualTo(J{y.a}));
+ EXPECT_THAT(fmin(y, x.a), IsAlmostEqualTo(y));
+ EXPECT_THAT(fmin(x.a, y), IsAlmostEqualTo(y));
+
+ // Average the Jet and scalar cast to a Jet on equality (of scalar parts).
+ const J average = (x + J{x.a}) * 0.5;
+ EXPECT_THAT(fmin(x, x.a), IsAlmostEqualTo(average));
+ EXPECT_THAT(fmin(x.a, x), IsAlmostEqualTo(average));
+
+ // Follow convention of fmin(): treat NANs as missing values.
+ EXPECT_THAT(fmin(x, std::numeric_limits<double>::quiet_NaN()),
+ IsAlmostEqualTo(x));
+ EXPECT_THAT(fmin(std::numeric_limits<double>::quiet_NaN(), x),
+ IsAlmostEqualTo(x));
+ const J nan_scalar_part(std::numeric_limits<double>::quiet_NaN(), 2 * x.v);
+ EXPECT_THAT(fmin(nan_scalar_part, x.a), IsAlmostEqualTo(J{x.a}));
+ EXPECT_THAT(fmin(x.a, nan_scalar_part), IsAlmostEqualTo(J{x.a}));
+
+#ifndef CERES_NO_FENV_ACCESS
+ EXPECT_EQ(std::fetestexcept(FE_ALL_EXCEPT & ~FE_INEXACT), 0);
+#endif
+}
+
+TEST(Jet, Fdim) {
+ Fenv env;
+ // Clear all exceptions to ensure none are set by the following function
+ // calls.
+ std::feclearexcept(FE_ALL_EXCEPT);
+
+ const J zero{};
+ const J diff = x - y;
+ const J diffx = x - J{y.a};
+ const J diffy = J{x.a} - y;
+
+ EXPECT_THAT(fdim(x, y), IsAlmostEqualTo(diff));
+ EXPECT_THAT(fdim(y, x), IsAlmostEqualTo(zero));
+ EXPECT_THAT(fdim(x, y.a), IsAlmostEqualTo(diffx));
+ EXPECT_THAT(fdim(y.a, x), IsAlmostEqualTo(J{zero.a}));
+ EXPECT_THAT(fdim(x.a, y), IsAlmostEqualTo(diffy));
+ EXPECT_THAT(fdim(y, x.a), IsAlmostEqualTo(zero));
+ EXPECT_TRUE(isnan(fdim(x, std::numeric_limits<J>::quiet_NaN())));
+ EXPECT_TRUE(isnan(fdim(std::numeric_limits<J>::quiet_NaN(), x)));
+ EXPECT_TRUE(isnan(fdim(x, std::numeric_limits<double>::quiet_NaN())));
+ EXPECT_TRUE(isnan(fdim(std::numeric_limits<double>::quiet_NaN(), x)));
+
+#ifndef CERES_NO_FENV_ACCESS
+ EXPECT_EQ(std::fetestexcept(FE_ALL_EXCEPT & ~FE_INEXACT), 0);
+#endif
+}
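fdim(a, b) is the positive difference max(a - b, 0): it returns a - b when a > b and +0 otherwise, and propagates NaN when either argument is NaN, which is what the diff/zero/isnan expectations above encode for the Jet/Jet, Jet/scalar, and scalar/Jet argument combinations.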
+
+TEST(Jet, CopySign) {
+ { // copysign(x, +1)
+ J z = copysign(x, J{+1});
+ EXPECT_THAT(z, IsAlmostEqualTo(x));
+ EXPECT_TRUE(isfinite(z.v[0])) << z;
+ EXPECT_TRUE(isfinite(z.v[1])) << z;
+ }
+ { // copysign(x, -1)
+ J z = copysign(x, J{-1});
+ EXPECT_THAT(z, IsAlmostEqualTo(-x));
+ EXPECT_TRUE(isfinite(z.v[0])) << z;
+ EXPECT_TRUE(isfinite(z.v[1])) << z;
+ }
+ { // copysign(-x, +1)
+
+ J z = copysign(-x, J{+1});
+ EXPECT_THAT(z, IsAlmostEqualTo(x));
+ EXPECT_TRUE(isfinite(z.v[0])) << z;
+ EXPECT_TRUE(isfinite(z.v[1])) << z;
+ }
+ { // copysign(-x, -1)
+ J z = copysign(-x, J{-1});
+ EXPECT_THAT(z, IsAlmostEqualTo(-x));
+ EXPECT_TRUE(isfinite(z.v[0])) << z;
+ EXPECT_TRUE(isfinite(z.v[1])) << z;
+ }
+ { // copysign(-0, +1)
+ J z = copysign(MakeJet(-0, 1, 2), J{+1});
+ EXPECT_THAT(z, IsAlmostEqualTo(MakeJet(+0, 1, 2)));
+ EXPECT_FALSE(std::signbit(z.a)) << z;
+ EXPECT_TRUE(isfinite(z.v[0])) << z;
+ EXPECT_TRUE(isfinite(z.v[1])) << z;
+ }
+ { // copysign(-0, -1)
+ J z = copysign(MakeJet(-0, 1, 2), J{-1});
+ EXPECT_THAT(z, IsAlmostEqualTo(MakeJet(-0, -1, -2)));
+ EXPECT_TRUE(std::signbit(z.a)) << z;
+ EXPECT_TRUE(isfinite(z.v[0])) << z;
+ EXPECT_TRUE(isfinite(z.v[1])) << z;
+ }
+ { // copysign(+0, -1)
+ J z = copysign(MakeJet(+0, 1, 2), J{-1});
+ EXPECT_THAT(z, IsAlmostEqualTo(MakeJet(-0, -1, -2)));
+ EXPECT_TRUE(std::signbit(z.a)) << z;
+ EXPECT_TRUE(isfinite(z.v[0])) << z;
+ EXPECT_TRUE(isfinite(z.v[1])) << z;
+ }
+ { // copysign(+0, +1)
+ J z = copysign(MakeJet(+0, 1, 2), J{+1});
+ EXPECT_THAT(z, IsAlmostEqualTo(MakeJet(+0, 1, 2)));
+ EXPECT_FALSE(std::signbit(z.a)) << z;
+ EXPECT_TRUE(isfinite(z.v[0])) << z;
+ EXPECT_TRUE(isfinite(z.v[1])) << z;
+ }
+ { // copysign(+0, +0)
+ J z = copysign(MakeJet(+0, 1, 2), J{+0});
+ EXPECT_FALSE(std::signbit(z.a)) << z;
+ EXPECT_TRUE(isnan(z.v[0])) << z;
+ EXPECT_TRUE(isnan(z.v[1])) << z;
+ }
+ { // copysign(+0, -0)
+ J z = copysign(MakeJet(+0, 1, 2), J{-0});
+ EXPECT_FALSE(std::signbit(z.a)) << z;
+ EXPECT_TRUE(isnan(z.v[0])) << z;
+ EXPECT_TRUE(isnan(z.v[1])) << z;
+ }
+ { // copysign(-0, +0)
+ J z = copysign(MakeJet(-0, 1, 2), J{+0});
+ EXPECT_FALSE(std::signbit(z.a)) << z;
+ EXPECT_TRUE(isnan(z.v[0])) << z;
+ EXPECT_TRUE(isnan(z.v[1])) << z;
+ }
+ { // copysign(-0, -0)
+ J z = copysign(MakeJet(-0, 1, 2), J{-0});
+ EXPECT_FALSE(std::signbit(z.a)) << z;
+ EXPECT_TRUE(isnan(z.v[0])) << z;
+ EXPECT_TRUE(isnan(z.v[1])) << z;
+ }
+ { // copysign(1, -nan)
+ J z = copysign(MakeJet(1, 2, 3),
+ -J{std::numeric_limits<double>::quiet_NaN()});
+ EXPECT_TRUE(std::signbit(z.a)) << z;
+ EXPECT_TRUE(std::signbit(z.v[0])) << z;
+ EXPECT_TRUE(std::signbit(z.v[1])) << z;
+ EXPECT_FALSE(isnan(z.v[0])) << z;
+ EXPECT_FALSE(isnan(z.v[1])) << z;
+ }
+ { // copysign(1, +nan)
+ J z = copysign(MakeJet(1, 2, 3),
+ +J{std::numeric_limits<double>::quiet_NaN()});
+ EXPECT_FALSE(std::signbit(z.a)) << z;
+ EXPECT_FALSE(std::signbit(z.v[0])) << z;
+ EXPECT_FALSE(std::signbit(z.v[1])) << z;
+ EXPECT_FALSE(isnan(z.v[0])) << z;
+ EXPECT_FALSE(isnan(z.v[1])) << z;
}
}
@@ -738,56 +967,188 @@
M << x, y, z, w;
v << x, z;
- // Check that M * v == (v^T * M^T)^T
+ // M * v == (v^T * M^T)^T
r1 = M * v;
r2 = (v.transpose() * M.transpose()).transpose();
- ExpectJetsClose(r1(0), r2(0));
- ExpectJetsClose(r1(1), r2(1));
+ EXPECT_THAT(r1(0), IsAlmostEqualTo(r2(0)));
+ EXPECT_THAT(r1(1), IsAlmostEqualTo(r2(1)));
}
-TEST(JetTraitsTest, ClassificationMixed) {
- Jet<double, 3> a(5.5, 0);
- a.v[0] = std::numeric_limits<double>::quiet_NaN();
- a.v[1] = std::numeric_limits<double>::infinity();
- a.v[2] = -std::numeric_limits<double>::infinity();
- EXPECT_FALSE(IsFinite(a));
- EXPECT_FALSE(IsNormal(a));
- EXPECT_TRUE(IsInfinite(a));
- EXPECT_TRUE(IsNaN(a));
+TEST(Jet, ScalarComparison) {
+ Jet<double, 1> zero{0.0};
+ zero.v << std::numeric_limits<double>::infinity();
+
+ Jet<double, 1> one{1.0};
+ one.v << std::numeric_limits<double>::quiet_NaN();
+
+ Jet<double, 1> two{2.0};
+ two.v << std::numeric_limits<double>::min() / 2;
+
+ Jet<double, 1> three{3.0};
+
+ auto inf = std::numeric_limits<Jet<double, 1>>::infinity();
+ auto nan = std::numeric_limits<Jet<double, 1>>::quiet_NaN();
+ inf.v << 1.2;
+ nan.v << 3.4;
+
+ std::feclearexcept(FE_ALL_EXCEPT);
+
+ EXPECT_FALSE(islessgreater(zero, zero));
+ EXPECT_FALSE(islessgreater(zero, zero.a));
+ EXPECT_FALSE(islessgreater(zero.a, zero));
+
+ EXPECT_TRUE(isgreaterequal(three, three));
+ EXPECT_TRUE(isgreaterequal(three, three.a));
+ EXPECT_TRUE(isgreaterequal(three.a, three));
+
+ EXPECT_TRUE(isgreater(three, two));
+ EXPECT_TRUE(isgreater(three, two.a));
+ EXPECT_TRUE(isgreater(three.a, two));
+
+ EXPECT_TRUE(islessequal(one, one));
+ EXPECT_TRUE(islessequal(one, one.a));
+ EXPECT_TRUE(islessequal(one.a, one));
+
+ EXPECT_TRUE(isless(one, two));
+ EXPECT_TRUE(isless(one, two.a));
+ EXPECT_TRUE(isless(one.a, two));
+
+ EXPECT_FALSE(isunordered(inf, one));
+ EXPECT_FALSE(isunordered(inf, one.a));
+ EXPECT_FALSE(isunordered(inf.a, one));
+
+ EXPECT_TRUE(isunordered(nan, two));
+ EXPECT_TRUE(isunordered(nan, two.a));
+ EXPECT_TRUE(isunordered(nan.a, two));
+
+ EXPECT_TRUE(isunordered(inf, nan));
+ EXPECT_TRUE(isunordered(inf, nan.a));
+ EXPECT_TRUE(isunordered(inf.a, nan.a));
+
+ EXPECT_EQ(std::fetestexcept(FE_ALL_EXCEPT & ~FE_INEXACT), 0);
+}
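islessgreater, isgreater, isgreaterequal, isless, islessequal, and isunordered are the quiet comparison functions from <cmath>: unlike the built-in relational operators, they do not raise FE_INVALID for quiet NaN operands, which is what the final fetestexcept(FE_ALL_EXCEPT & ~FE_INEXACT) == 0 check verifies, here for the Jet/Jet, Jet/scalar, and scalar/Jet overloads.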
+
+TEST(Jet, Nested2XScalarComparison) {
+ Jet<J0d, 1> zero{J0d{0.0}};
+ zero.v << std::numeric_limits<J0d>::infinity();
+
+ Jet<J0d, 1> one{J0d{1.0}};
+ one.v << std::numeric_limits<J0d>::quiet_NaN();
+
+ Jet<J0d, 1> two{J0d{2.0}};
+ two.v << std::numeric_limits<J0d>::min() / J0d{2};
+
+ Jet<J0d, 1> three{J0d{3.0}};
+
+ auto inf = std::numeric_limits<Jet<J0d, 1>>::infinity();
+ auto nan = std::numeric_limits<Jet<J0d, 1>>::quiet_NaN();
+ inf.v << J0d{1.2};
+ nan.v << J0d{3.4};
+
+ std::feclearexcept(FE_ALL_EXCEPT);
+
+ EXPECT_FALSE(islessgreater(zero, zero));
+ EXPECT_FALSE(islessgreater(zero, zero.a));
+ EXPECT_FALSE(islessgreater(zero.a, zero));
+ EXPECT_FALSE(islessgreater(zero, zero.a.a));
+ EXPECT_FALSE(islessgreater(zero.a.a, zero));
+
+ EXPECT_TRUE(isgreaterequal(three, three));
+ EXPECT_TRUE(isgreaterequal(three, three.a));
+ EXPECT_TRUE(isgreaterequal(three.a, three));
+ EXPECT_TRUE(isgreaterequal(three, three.a.a));
+ EXPECT_TRUE(isgreaterequal(three.a.a, three));
+
+ EXPECT_TRUE(isgreater(three, two));
+ EXPECT_TRUE(isgreater(three, two.a));
+ EXPECT_TRUE(isgreater(three.a, two));
+ EXPECT_TRUE(isgreater(three, two.a.a));
+ EXPECT_TRUE(isgreater(three.a.a, two));
+
+ EXPECT_TRUE(islessequal(one, one));
+ EXPECT_TRUE(islessequal(one, one.a));
+ EXPECT_TRUE(islessequal(one.a, one));
+ EXPECT_TRUE(islessequal(one, one.a.a));
+ EXPECT_TRUE(islessequal(one.a.a, one));
+
+ EXPECT_TRUE(isless(one, two));
+ EXPECT_TRUE(isless(one, two.a));
+ EXPECT_TRUE(isless(one.a, two));
+ EXPECT_TRUE(isless(one, two.a.a));
+ EXPECT_TRUE(isless(one.a.a, two));
+
+ EXPECT_FALSE(isunordered(inf, one));
+ EXPECT_FALSE(isunordered(inf, one.a));
+ EXPECT_FALSE(isunordered(inf.a, one));
+ EXPECT_FALSE(isunordered(inf, one.a.a));
+ EXPECT_FALSE(isunordered(inf.a.a, one));
+
+ EXPECT_TRUE(isunordered(nan, two));
+ EXPECT_TRUE(isunordered(nan, two.a));
+ EXPECT_TRUE(isunordered(nan.a, two));
+ EXPECT_TRUE(isunordered(nan, two.a.a));
+ EXPECT_TRUE(isunordered(nan.a.a, two));
+
+ EXPECT_TRUE(isunordered(inf, nan));
+ EXPECT_TRUE(isunordered(inf, nan.a));
+ EXPECT_TRUE(isunordered(inf.a, nan));
+ EXPECT_TRUE(isunordered(inf, nan.a.a));
+ EXPECT_TRUE(isunordered(inf.a.a, nan));
+
+ EXPECT_EQ(std::fetestexcept(FE_ALL_EXCEPT & ~FE_INEXACT), 0);
}
TEST(JetTraitsTest, ClassificationNaN) {
- Jet<double, 3> a(5.5, 0);
- a.v[0] = std::numeric_limits<double>::quiet_NaN();
- a.v[1] = 0.0;
- a.v[2] = 0.0;
- EXPECT_FALSE(IsFinite(a));
- EXPECT_FALSE(IsNormal(a));
- EXPECT_FALSE(IsInfinite(a));
- EXPECT_TRUE(IsNaN(a));
+ Jet<double, 1> a(std::numeric_limits<double>::quiet_NaN());
+ a.v << std::numeric_limits<double>::infinity();
+ EXPECT_EQ(fpclassify(a), FP_NAN);
+ EXPECT_FALSE(isfinite(a));
+ EXPECT_FALSE(isinf(a));
+ EXPECT_FALSE(isnormal(a));
+ EXPECT_FALSE(signbit(a));
+ EXPECT_TRUE(isnan(a));
}
TEST(JetTraitsTest, ClassificationInf) {
- Jet<double, 3> a(5.5, 0);
- a.v[0] = std::numeric_limits<double>::infinity();
- a.v[1] = 0.0;
- a.v[2] = 0.0;
- EXPECT_FALSE(IsFinite(a));
- EXPECT_FALSE(IsNormal(a));
- EXPECT_TRUE(IsInfinite(a));
- EXPECT_FALSE(IsNaN(a));
+ Jet<double, 1> a(-std::numeric_limits<double>::infinity());
+ a.v << std::numeric_limits<double>::quiet_NaN();
+ EXPECT_EQ(fpclassify(a), FP_INFINITE);
+ EXPECT_FALSE(isfinite(a));
+ EXPECT_FALSE(isnan(a));
+ EXPECT_FALSE(isnormal(a));
+ EXPECT_TRUE(signbit(a));
+ EXPECT_TRUE(isinf(a));
}
TEST(JetTraitsTest, ClassificationFinite) {
- Jet<double, 3> a(5.5, 0);
- a.v[0] = 100.0;
- a.v[1] = 1.0;
- a.v[2] = 3.14159;
- EXPECT_TRUE(IsFinite(a));
- EXPECT_TRUE(IsNormal(a));
- EXPECT_FALSE(IsInfinite(a));
- EXPECT_FALSE(IsNaN(a));
+ Jet<double, 1> a(-5.5);
+ a.v << std::numeric_limits<double>::quiet_NaN();
+ EXPECT_EQ(fpclassify(a), FP_NORMAL);
+ EXPECT_FALSE(isinf(a));
+ EXPECT_FALSE(isnan(a));
+ EXPECT_TRUE(signbit(a));
+ EXPECT_TRUE(isfinite(a));
+ EXPECT_TRUE(isnormal(a));
+}
+
+TEST(JetTraitsTest, ClassificationScalar) {
+ EXPECT_EQ(fpclassify(J0d{+0.0}), FP_ZERO);
+ EXPECT_EQ(fpclassify(J0d{-0.0}), FP_ZERO);
+ EXPECT_EQ(fpclassify(J0d{1.234}), FP_NORMAL);
+ EXPECT_EQ(fpclassify(J0d{std::numeric_limits<double>::min() / 2}),
+ FP_SUBNORMAL);
+ EXPECT_EQ(fpclassify(J0d{std::numeric_limits<double>::quiet_NaN()}), FP_NAN);
+}
+
+TEST(JetTraitsTest, Nested2XClassificationScalar) {
+ EXPECT_EQ(fpclassify(J0<J0d>{J0d{+0.0}}), FP_ZERO);
+ EXPECT_EQ(fpclassify(J0<J0d>{J0d{-0.0}}), FP_ZERO);
+ EXPECT_EQ(fpclassify(J0<J0d>{J0d{1.234}}), FP_NORMAL);
+ EXPECT_EQ(fpclassify(J0<J0d>{J0d{std::numeric_limits<double>::min() / 2}}),
+ FP_SUBNORMAL);
+ EXPECT_EQ(fpclassify(J0<J0d>{J0d{std::numeric_limits<double>::quiet_NaN()}}),
+ FP_NAN);
}
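Note that in all three classification tests the derivative part is deliberately set to NaN or infinity while the expectations reflect only the scalar part: fpclassify, isfinite, isinf, isnan, isnormal, and signbit for a Jet are evidently driven by a.a alone, with the dual part ignored.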
// The following test ensures that Jets have all the appropriate Eigen
@@ -854,7 +1215,7 @@
const J sum = a.sum();
const J sum2 = a(0) + a(1);
- ExpectJetsClose(sum, sum2);
+ EXPECT_THAT(sum, IsAlmostEqualTo(sum2));
}
TEST(JetTraitsTest, MatrixScalarBinaryOps) {
@@ -869,22 +1230,22 @@
M << x, y, z, w;
v << 0.6, -2.1;
- // Check that M * v == M * v.cast<J>().
+ // M * v == M * v.cast<J>().
const Eigen::Matrix<J, 2, 1> r1 = M * v;
const Eigen::Matrix<J, 2, 1> r2 = M * v.cast<J>();
- ExpectJetsClose(r1(0), r2(0));
- ExpectJetsClose(r1(1), r2(1));
+ EXPECT_THAT(r1(0), IsAlmostEqualTo(r2(0)));
+ EXPECT_THAT(r1(1), IsAlmostEqualTo(r2(1)));
- // Check that M * a == M * T(a).
+ // M * a == M * T(a).
const double a = 3.1;
const Eigen::Matrix<J, 2, 2> r3 = M * a;
const Eigen::Matrix<J, 2, 2> r4 = M * J(a);
- ExpectJetsClose(r3(0, 0), r4(0, 0));
- ExpectJetsClose(r3(1, 0), r4(1, 0));
- ExpectJetsClose(r3(0, 1), r4(0, 1));
- ExpectJetsClose(r3(1, 1), r4(1, 1));
+ EXPECT_THAT(r3(0, 0), IsAlmostEqualTo(r4(0, 0)));
+ EXPECT_THAT(r3(0, 1), IsAlmostEqualTo(r4(0, 1)));
+ EXPECT_THAT(r3(1, 0), IsAlmostEqualTo(r4(1, 0)));
+ EXPECT_THAT(r3(1, 1), IsAlmostEqualTo(r4(1, 1)));
}
TEST(JetTraitsTest, ArrayScalarUnaryOps) {
@@ -895,7 +1256,7 @@
const J sum = a.sum();
const J sum2 = a(0) + a(1);
- ExpectJetsClose(sum, sum2);
+ EXPECT_THAT(sum, IsAlmostEqualTo(sum2));
}
TEST(JetTraitsTest, ArrayScalarBinaryOps) {
@@ -908,25 +1269,25 @@
a << x, y;
b << 0.6, -2.1;
- // Check that a * b == a * b.cast<T>()
+ // a * b == a * b.cast<T>()
const Eigen::Array<J, 2, 1> r1 = a * b;
const Eigen::Array<J, 2, 1> r2 = a * b.cast<J>();
- ExpectJetsClose(r1(0), r2(0));
- ExpectJetsClose(r1(1), r2(1));
+ EXPECT_THAT(r1(0), IsAlmostEqualTo(r2(0)));
+ EXPECT_THAT(r1(1), IsAlmostEqualTo(r2(1)));
- // Check that a * c == a * T(c).
+ // a * c == a * T(c).
const double c = 3.1;
const Eigen::Array<J, 2, 1> r3 = a * c;
const Eigen::Array<J, 2, 1> r4 = a * J(c);
- ExpectJetsClose(r3(0), r3(0));
- ExpectJetsClose(r4(1), r4(1));
+ EXPECT_THAT(r3(0), IsAlmostEqualTo(r4(0)));
+ EXPECT_THAT(r3(1), IsAlmostEqualTo(r4(1)));
}
-TEST(Jet, nested3x) {
- typedef Jet<J, 2> JJ;
- typedef Jet<JJ, 2> JJJ;
+TEST(Jet, Nested3X) {
+ using JJ = Jet<J, 2>;
+ using JJJ = Jet<JJ, 2>;
JJJ x;
x.a = JJ(J(1, 0), 0);
@@ -947,5 +1308,92 @@
ExpectClose(e.v[0].v[0].v[0], kE, kTolerance);
}
-} // namespace internal
-} // namespace ceres
+#if GTEST_HAS_TYPED_TEST
+
+using Types = testing::Types<std::int16_t,
+ std::uint16_t,
+ std::int32_t,
+ std::uint32_t,
+ std::int64_t,
+ std::uint64_t,
+ float,
+ double,
+ long double>;
+
+template <typename T>
+class JetTest : public testing::Test {};
+
+TYPED_TEST_SUITE(JetTest, Types);
+
+TYPED_TEST(JetTest, Comparison) {
+ using Scalar = TypeParam;
+
+ EXPECT_EQ(J0<Scalar>{0}, J0<Scalar>{0});
+ EXPECT_GE(J0<Scalar>{3}, J0<Scalar>{3});
+ EXPECT_GT(J0<Scalar>{3}, J0<Scalar>{2});
+ EXPECT_LE(J0<Scalar>{1}, J0<Scalar>{1});
+ EXPECT_LT(J0<Scalar>{1}, J0<Scalar>{2});
+ EXPECT_NE(J0<Scalar>{1}, J0<Scalar>{2});
+}
+
+TYPED_TEST(JetTest, ScalarComparison) {
+ using Scalar = TypeParam;
+
+ EXPECT_EQ(J0d{0.0}, Scalar{0});
+ EXPECT_GE(J0d{3.0}, Scalar{3});
+ EXPECT_GT(J0d{3.0}, Scalar{2});
+ EXPECT_LE(J0d{1.0}, Scalar{1});
+ EXPECT_LT(J0d{1.0}, Scalar{2});
+ EXPECT_NE(J0d{1.0}, Scalar{2});
+
+ EXPECT_EQ(Scalar{0}, J0d{0.0});
+ EXPECT_GE(Scalar{1}, J0d{1.0});
+ EXPECT_GT(Scalar{2}, J0d{1.0});
+ EXPECT_LE(Scalar{3}, J0d{3.0});
+ EXPECT_LT(Scalar{2}, J0d{3.0});
+ EXPECT_NE(Scalar{2}, J0d{1.0});
+}
+
+TYPED_TEST(JetTest, Nested2XComparison) {
+ using Scalar = TypeParam;
+
+ EXPECT_EQ(J0<J0d>{J0d{0.0}}, Scalar{0});
+ EXPECT_GE(J0<J0d>{J0d{3.0}}, Scalar{3});
+ EXPECT_GT(J0<J0d>{J0d{3.0}}, Scalar{2});
+ EXPECT_LE(J0<J0d>{J0d{1.0}}, Scalar{1});
+ EXPECT_LT(J0<J0d>{J0d{1.0}}, Scalar{2});
+ EXPECT_NE(J0<J0d>{J0d{1.0}}, Scalar{2});
+
+ EXPECT_EQ(Scalar{0}, J0<J0d>{J0d{0.0}});
+ EXPECT_GE(Scalar{1}, J0<J0d>{J0d{1.0}});
+ EXPECT_GT(Scalar{2}, J0<J0d>{J0d{1.0}});
+ EXPECT_LE(Scalar{3}, J0<J0d>{J0d{3.0}});
+ EXPECT_LT(Scalar{2}, J0<J0d>{J0d{3.0}});
+ EXPECT_NE(Scalar{2}, J0<J0d>{J0d{1.0}});
+}
+
+TYPED_TEST(JetTest, Nested3XComparison) {
+ using Scalar = TypeParam;
+
+ EXPECT_EQ(J0<J0<J0d>>{J0<J0d>{J0d{0.0}}}, Scalar{0});
+ EXPECT_GE(J0<J0<J0d>>{J0<J0d>{J0d{3.0}}}, Scalar{3});
+ EXPECT_GT(J0<J0<J0d>>{J0<J0d>{J0d{3.0}}}, Scalar{2});
+ EXPECT_LE(J0<J0<J0d>>{J0<J0d>{J0d{1.0}}}, Scalar{1});
+ EXPECT_LT(J0<J0<J0d>>{J0<J0d>{J0d{1.0}}}, Scalar{2});
+ EXPECT_NE(J0<J0<J0d>>{J0<J0d>{J0d{1.0}}}, Scalar{2});
+
+ EXPECT_EQ(Scalar{0}, J0<J0<J0d>>{J0<J0d>{J0d{0.0}}});
+ EXPECT_GE(Scalar{1}, J0<J0<J0d>>{J0<J0d>{J0d{1.0}}});
+ EXPECT_GT(Scalar{2}, J0<J0<J0d>>{J0<J0d>{J0d{1.0}}});
+ EXPECT_LE(Scalar{3}, J0<J0<J0d>>{J0<J0d>{J0d{3.0}}});
+ EXPECT_LT(Scalar{2}, J0<J0<J0d>>{J0<J0d>{J0d{3.0}}});
+ EXPECT_NE(Scalar{2}, J0<J0<J0d>>{J0<J0d>{J0d{1.0}}});
+}
+
+#endif // GTEST_HAS_TYPED_TEST
+
+} // namespace ceres::internal
+
+#ifdef _MSC_VER
+#pragma float_control(pop)
+#endif
diff --git a/internal/ceres/jet_traits_test.cc b/internal/ceres/jet_traits_test.cc
new file mode 100644
index 0000000..0631784
--- /dev/null
+++ b/internal/ceres/jet_traits_test.cc
@@ -0,0 +1,106 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sergiu.deitsch@gmail.com (Sergiu Deitsch)
+
+#include "ceres/internal/jet_traits.h"
+
+#include <Eigen/Core>
+#include <type_traits>
+#include <utility>
+
+namespace ceres::internal {
+
+using J = Jet<double, 2>;
+// Don't care about the dual part for scalar part categorization and comparison
+// tests
+template <typename T>
+using J0 = Jet<T, 0>;
+using J0d = J0<double>;
+
+// Extract the ranks of given types
+using Ranks001 = Ranks_t<Jet<double, 0>, double, Jet<double, 1>>;
+using Ranks1 = Ranks_t<Jet<double, 1>>;
+using Ranks110 = Ranks_t<Jet<double, 1>, Jet<double, 1>, double>;
+using Ranks023 = Ranks_t<double, Jet<double, 2>, Jet<double, 3>>;
+using EmptyRanks = Ranks_t<>;
+
+// Ensure extracted ranks match the expected integer sequence
+static_assert(
+ std::is_same<Ranks001, std::integer_sequence<int, 0, 0, 1>>::value,
+ "ranks do not match");
+static_assert(std::is_same<Ranks1, std::integer_sequence<int, 1>>::value,
+ "ranks do not match");
+static_assert(
+ std::is_same<Ranks110, std::integer_sequence<int, 1, 1, 0>>::value,
+ "ranks do not match");
+static_assert(
+ std::is_same<Ranks023, std::integer_sequence<int, 0, 2, 3>>::value,
+ "ranks do not match");
+static_assert(std::is_same<EmptyRanks, std::integer_sequence<int>>::value,
+ "ranks sequence is not empty");
+
+// Extract the underlying floating-point type
+static_assert(std::is_same<UnderlyingScalar_t<double>, double>::value,
+ "underlying type is not a double");
+static_assert(std::is_same<UnderlyingScalar_t<J0d>, double>::value,
+ "underlying type is not a double");
+static_assert(std::is_same<UnderlyingScalar_t<J0<J0d>>, double>::value,
+ "underlying type is not a double");
+static_assert(std::is_same<UnderlyingScalar_t<J0<J0<J0d>>>, double>::value,
+ "underlying type is not a double");
+
+static_assert(CompatibleJetOperands_v<Jet<double, 1>, Jet<double, 1>>,
+ "Jets must be compatible");
+static_assert(CompatibleJetOperands_v<Jet<double, 1>, double>,
+ "Jet and scalar must be compatible");
+static_assert(CompatibleJetOperands_v<Jet<double, 2>>,
+ "single Jet must be compatible");
+static_assert(!CompatibleJetOperands_v<Jet<double, 1>, double, Jet<double, 2>>,
+ "Jets and scalar must not be compatible");
+static_assert(!CompatibleJetOperands_v<double, double>,
+ "scalars must not be compatible");
+static_assert(!CompatibleJetOperands_v<double>,
+ "single scalar must not be compatible");
+static_assert(!CompatibleJetOperands_v<>,
+ "empty arguments must not be compatible");
+
+static_assert(!PromotableJetOperands_v<double>,
+ "single scalar must not be Jet promotable");
+static_assert(!PromotableJetOperands_v<double, float, int>,
+ "multiple scalars must not be Jet promotable");
+static_assert(PromotableJetOperands_v<J0d, float, int>,
+ "Jet and several scalars must be promotable");
+static_assert(PromotableJetOperands_v<J0<J0d>, float, int>,
+ "nested Jet and several scalars must be promotable");
+static_assert(!PromotableJetOperands_v<Eigen::Array<double, 2, 3>, float, int>,
+ "Eigen::Array must not be Jet promotable");
+static_assert(!PromotableJetOperands_v<Eigen::Matrix<double, 3, 2>, float, int>,
+ "Eigen::Matrix must not be Jet promotable");
+
+} // namespace ceres::internal
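Taken together, the assertions above pin down the trait semantics: the rank of a plain scalar (and of Jet<T, 0>) is 0 while the rank of Jet<T, N> is N; operands are compatible only when at least one of them is a Jet and all Jets among them share the same rank, with plain scalars allowed alongside; and promotability additionally admits mixed scalar types (float, int) next to a possibly nested Jet, but never applies to scalar-only or Eigen array/matrix operand lists.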
diff --git a/internal/ceres/lapack.cc b/internal/ceres/lapack.cc
deleted file mode 100644
index a159ec7..0000000
--- a/internal/ceres/lapack.cc
+++ /dev/null
@@ -1,190 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
-
-#include "ceres/lapack.h"
-
-#include "ceres/internal/port.h"
-#include "ceres/linear_solver.h"
-#include "glog/logging.h"
-
-#ifndef CERES_NO_LAPACK
-// C interface to the LAPACK Cholesky factorization and triangular solve.
-extern "C" void dpotrf_(char* uplo, int* n, double* a, int* lda, int* info);
-
-extern "C" void dpotrs_(char* uplo,
- int* n,
- int* nrhs,
- double* a,
- int* lda,
- double* b,
- int* ldb,
- int* info);
-
-extern "C" void dgels_(char* uplo,
- int* m,
- int* n,
- int* nrhs,
- double* a,
- int* lda,
- double* b,
- int* ldb,
- double* work,
- int* lwork,
- int* info);
-#endif
-
-namespace ceres {
-namespace internal {
-
-LinearSolverTerminationType LAPACK::SolveInPlaceUsingCholesky(
- int num_rows,
- const double* in_lhs,
- double* rhs_and_solution,
- std::string* message) {
-#ifdef CERES_NO_LAPACK
- LOG(FATAL) << "Ceres was built without a BLAS library.";
- return LINEAR_SOLVER_FATAL_ERROR;
-#else
- char uplo = 'L';
- int n = num_rows;
- int info = 0;
- int nrhs = 1;
- double* lhs = const_cast<double*>(in_lhs);
-
- dpotrf_(&uplo, &n, lhs, &n, &info);
- if (info < 0) {
- LOG(FATAL) << "Congratulations, you found a bug in Ceres."
- << "Please report it."
- << "LAPACK::dpotrf fatal error."
- << "Argument: " << -info << " is invalid.";
- return LINEAR_SOLVER_FATAL_ERROR;
- }
-
- if (info > 0) {
- *message = StringPrintf(
- "LAPACK::dpotrf numerical failure. "
- "The leading minor of order %d is not positive definite.",
- info);
- return LINEAR_SOLVER_FAILURE;
- }
-
- dpotrs_(&uplo, &n, &nrhs, lhs, &n, rhs_and_solution, &n, &info);
- if (info < 0) {
- LOG(FATAL) << "Congratulations, you found a bug in Ceres."
- << "Please report it."
- << "LAPACK::dpotrs fatal error."
- << "Argument: " << -info << " is invalid.";
- return LINEAR_SOLVER_FATAL_ERROR;
- }
-
- *message = "Success";
- return LINEAR_SOLVER_SUCCESS;
-#endif
-}
-
-int LAPACK::EstimateWorkSizeForQR(int num_rows, int num_cols) {
-#ifdef CERES_NO_LAPACK
- LOG(FATAL) << "Ceres was built without a LAPACK library.";
- return -1;
-#else
- char trans = 'N';
- int nrhs = 1;
- int lwork = -1;
- double work;
- int info = 0;
- dgels_(&trans,
- &num_rows,
- &num_cols,
- &nrhs,
- NULL,
- &num_rows,
- NULL,
- &num_rows,
- &work,
- &lwork,
- &info);
-
- if (info < 0) {
- LOG(FATAL) << "Congratulations, you found a bug in Ceres."
- << "Please report it."
- << "LAPACK::dgels fatal error."
- << "Argument: " << -info << " is invalid.";
- }
- return static_cast<int>(work);
-#endif
-}
-
-LinearSolverTerminationType LAPACK::SolveInPlaceUsingQR(
- int num_rows,
- int num_cols,
- const double* in_lhs,
- int work_size,
- double* work,
- double* rhs_and_solution,
- std::string* message) {
-#ifdef CERES_NO_LAPACK
- LOG(FATAL) << "Ceres was built without a LAPACK library.";
- return LINEAR_SOLVER_FATAL_ERROR;
-#else
- char trans = 'N';
- int m = num_rows;
- int n = num_cols;
- int nrhs = 1;
- int lda = num_rows;
- int ldb = num_rows;
- int info = 0;
- double* lhs = const_cast<double*>(in_lhs);
-
- dgels_(&trans,
- &m,
- &n,
- &nrhs,
- lhs,
- &lda,
- rhs_and_solution,
- &ldb,
- work,
- &work_size,
- &info);
-
- if (info < 0) {
- LOG(FATAL) << "Congratulations, you found a bug in Ceres."
- << "Please report it."
- << "LAPACK::dgels fatal error."
- << "Argument: " << -info << " is invalid.";
- }
-
- *message = "Success.";
- return LINEAR_SOLVER_SUCCESS;
-#endif
-}
-
-} // namespace internal
-} // namespace ceres
diff --git a/internal/ceres/lapack.h b/internal/ceres/lapack.h
deleted file mode 100644
index 5c5bf8b..0000000
--- a/internal/ceres/lapack.h
+++ /dev/null
@@ -1,101 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
-
-#ifndef CERES_INTERNAL_LAPACK_H_
-#define CERES_INTERNAL_LAPACK_H_
-
-#include <string>
-
-#include "ceres/internal/port.h"
-#include "ceres/linear_solver.h"
-
-namespace ceres {
-namespace internal {
-
-class LAPACK {
- public:
- // Solve
- //
- // lhs * solution = rhs
- //
- // using a Cholesky factorization. Here
- // lhs is a symmetric positive definite matrix. It is assumed to be
- // column major and only the lower triangular part of the matrix is
- // referenced.
- //
- // This function uses the LAPACK dpotrf and dpotrs routines.
- //
- // The return value and the message string together describe whether
- // the solver terminated successfully or not and if so, what was the
- // reason for failure.
- static LinearSolverTerminationType SolveInPlaceUsingCholesky(
- int num_rows,
- const double* lhs,
- double* rhs_and_solution,
- std::string* message);
-
- // The SolveUsingQR function requires a buffer for its temporary
- // computation. This function given the size of the lhs matrix will
- // return the size of the buffer needed.
- static int EstimateWorkSizeForQR(int num_rows, int num_cols);
-
- // Solve
- //
- // lhs * solution = rhs
- //
- // using a dense QR factorization. lhs is an arbitrary (possibly
- // rectangular) matrix with full column rank.
- //
- // work is an array of size work_size that this routine uses for its
- // temporary storage. The optimal size of this array can be obtained
- // by calling EstimateWorkSizeForQR.
- //
- // When calling, rhs_and_solution contains the rhs, and upon return
- // the first num_col entries are the solution.
- //
- // This function uses the LAPACK dgels routine.
- //
- // The return value and the message string together describe whether
- // the solver terminated successfully or not and if so, what was the
- // reason for failure.
- static LinearSolverTerminationType SolveInPlaceUsingQR(
- int num_rows,
- int num_cols,
- const double* lhs,
- int work_size,
- double* work,
- double* rhs_and_solution,
- std::string* message);
-};
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_INTERNAL_LAPACK_H_
diff --git a/internal/ceres/levenberg_marquardt_strategy.cc b/internal/ceres/levenberg_marquardt_strategy.cc
index cb0e937..37bc6f4 100644
--- a/internal/ceres/levenberg_marquardt_strategy.cc
+++ b/internal/ceres/levenberg_marquardt_strategy.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,13 +38,13 @@
#include "ceres/internal/eigen.h"
#include "ceres/linear_least_squares_problems.h"
#include "ceres/linear_solver.h"
+#include "ceres/parallel_vector_ops.h"
#include "ceres/sparse_matrix.h"
#include "ceres/trust_region_strategy.h"
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
LevenbergMarquardtStrategy::LevenbergMarquardtStrategy(
const TrustRegionStrategy::Options& options)
@@ -54,14 +54,16 @@
min_diagonal_(options.min_lm_diagonal),
max_diagonal_(options.max_lm_diagonal),
decrease_factor_(2.0),
- reuse_diagonal_(false) {
+ reuse_diagonal_(false),
+ context_(options.context),
+ num_threads_(options.num_threads) {
CHECK(linear_solver_ != nullptr);
CHECK_GT(min_diagonal_, 0.0);
CHECK_LE(min_diagonal_, max_diagonal_);
CHECK_GT(max_radius_, 0.0);
}
-LevenbergMarquardtStrategy::~LevenbergMarquardtStrategy() {}
+LevenbergMarquardtStrategy::~LevenbergMarquardtStrategy() = default;
TrustRegionStrategy::Summary LevenbergMarquardtStrategy::ComputeStep(
const TrustRegionStrategy::PerSolveOptions& per_solve_options,
@@ -78,14 +80,18 @@
diagonal_.resize(num_parameters, 1);
}
- jacobian->SquaredColumnNorm(diagonal_.data());
- for (int i = 0; i < num_parameters; ++i) {
- diagonal_[i] =
- std::min(std::max(diagonal_[i], min_diagonal_), max_diagonal_);
- }
+ jacobian->SquaredColumnNorm(diagonal_.data(), context_, num_threads_);
+ ParallelAssign(context_,
+ num_threads_,
+ diagonal_,
+ diagonal_.array().max(min_diagonal_).min(max_diagonal_));
}
- lm_diagonal_ = (diagonal_ / radius_).array().sqrt();
+ if (lm_diagonal_.size() == 0) {
+ lm_diagonal_.resize(num_parameters);
+ }
+ ParallelAssign(
+ context_, num_threads_, lm_diagonal_, (diagonal_ / radius_).cwiseSqrt());
LinearSolver::PerSolveOptions solve_options;
solve_options.D = lm_diagonal_.data();
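The rewritten block computes the same quantity as before, now with parallel primitives: per parameter i, diagonal_[i] is the squared norm of the i-th Jacobian column clamped to [min_lm_diagonal, max_lm_diagonal], and lm_diagonal_[i] = sqrt(diagonal_[i] / radius_). In scalar form (illustration only; jtj_diagonal stands for the squared column norms produced by SquaredColumnNorm):

    // d_i = sqrt(clamp(||J.col(i)||^2, min_diag, max_diag) / radius)
    lm_diagonal_[i] = std::sqrt(
        std::clamp(jtj_diagonal[i], min_diagonal_, max_diagonal_) / radius_);

This vector is then handed to the linear solver through solve_options.D, which effectively augments the least-squares problem with the Levenberg-Marquardt regularizer ||D y||^2.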
@@ -99,7 +105,7 @@
// Invalidate the output array lm_step, so that we can detect if
// the linear solver generated numerical garbage. This is known
// to happen for the DENSE_QR and then DENSE_SCHUR solver when
- // the Jacobin is severely rank deficient and mu is too small.
+ // the Jacobian is severely rank deficient and mu is too small.
InvalidateArray(num_parameters, step);
// Instead of solving Jx = -r, solve Jy = r.
@@ -108,17 +114,21 @@
LinearSolver::Summary linear_solver_summary =
linear_solver_->Solve(jacobian, residuals, solve_options, step);
- if (linear_solver_summary.termination_type == LINEAR_SOLVER_FATAL_ERROR) {
+ if (linear_solver_summary.termination_type ==
+ LinearSolverTerminationType::FATAL_ERROR) {
LOG(WARNING) << "Linear solver fatal error: "
<< linear_solver_summary.message;
- } else if (linear_solver_summary.termination_type == LINEAR_SOLVER_FAILURE) {
+ } else if (linear_solver_summary.termination_type ==
+ LinearSolverTerminationType::FAILURE) {
LOG(WARNING) << "Linear solver failure. Failed to compute a step: "
<< linear_solver_summary.message;
} else if (!IsArrayValid(num_parameters, step)) {
LOG(WARNING) << "Linear solver failure. Failed to compute a finite step.";
- linear_solver_summary.termination_type = LINEAR_SOLVER_FAILURE;
+ linear_solver_summary.termination_type =
+ LinearSolverTerminationType::FAILURE;
} else {
- VectorRef(step, num_parameters) *= -1.0;
+ VectorRef step_vec(step, num_parameters);
+ ParallelAssign(context_, num_threads_, step_vec, -step_vec);
}
reuse_diagonal_ = true;
@@ -153,7 +163,7 @@
reuse_diagonal_ = false;
}
-void LevenbergMarquardtStrategy::StepRejected(double step_quality) {
+void LevenbergMarquardtStrategy::StepRejected(double /*step_quality*/) {
radius_ = radius_ / decrease_factor_;
decrease_factor_ *= 2.0;
reuse_diagonal_ = true;
@@ -161,5 +171,4 @@
double LevenbergMarquardtStrategy::Radius() const { return radius_; }
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/levenberg_marquardt_strategy.h b/internal/ceres/levenberg_marquardt_strategy.h
index 12cd463..1b341c1 100644
--- a/internal/ceres/levenberg_marquardt_strategy.h
+++ b/internal/ceres/levenberg_marquardt_strategy.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,24 +31,26 @@
#ifndef CERES_INTERNAL_LEVENBERG_MARQUARDT_STRATEGY_H_
#define CERES_INTERNAL_LEVENBERG_MARQUARDT_STRATEGY_H_
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/trust_region_strategy.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
+
+class ContextImpl;
// Levenberg-Marquardt step computation and trust region sizing
// strategy based on on "Methods for Nonlinear Least Squares" by
// K. Madsen, H.B. Nielsen and O. Tingleff. Available to download from
//
// http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/3215/pdf/imm3215.pdf
-class CERES_EXPORT_INTERNAL LevenbergMarquardtStrategy
+class CERES_NO_EXPORT LevenbergMarquardtStrategy final
: public TrustRegionStrategy {
public:
explicit LevenbergMarquardtStrategy(
const TrustRegionStrategy::Options& options);
- virtual ~LevenbergMarquardtStrategy();
+ ~LevenbergMarquardtStrategy() override;
// TrustRegionStrategy interface
TrustRegionStrategy::Summary ComputeStep(
@@ -81,9 +83,12 @@
// allocations in every iteration and reuse when a step fails and
// ComputeStep is called again.
Vector lm_diagonal_; // lm_diagonal_ = sqrt(diagonal_ / radius_);
+ ContextImpl* context_;
+ int num_threads_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_LEVENBERG_MARQUARDT_STRATEGY_H_
diff --git a/internal/ceres/levenberg_marquardt_strategy_test.cc b/internal/ceres/levenberg_marquardt_strategy_test.cc
index 500f269..ca69f28 100644
--- a/internal/ceres/levenberg_marquardt_strategy_test.cc
+++ b/internal/ceres/levenberg_marquardt_strategy_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -58,8 +58,6 @@
RegularizationCheckingLinearSolver(const int num_cols, const double* diagonal)
: num_cols_(num_cols), diagonal_(diagonal) {}
- virtual ~RegularizationCheckingLinearSolver() {}
-
private:
LinearSolver::Summary SolveImpl(
DenseSparseMatrix* A,
@@ -71,7 +69,7 @@
EXPECT_NEAR(per_solve_options.D[i], diagonal_[i], kTolerance)
<< i << " " << per_solve_options.D[i] << " " << diagonal_[i];
}
- return LinearSolver::Summary();
+ return {};
}
const int num_cols_;
@@ -87,7 +85,7 @@
// We need a non-null pointer here, so anything should do.
std::unique_ptr<LinearSolver> linear_solver(
- new RegularizationCheckingLinearSolver(0, NULL));
+ new RegularizationCheckingLinearSolver(0, nullptr));
options.linear_solver = linear_solver.get();
LevenbergMarquardtStrategy lms(options);
@@ -132,8 +130,8 @@
diagonal[0] = options.min_lm_diagonal;
diagonal[1] = 2.0;
diagonal[2] = options.max_lm_diagonal;
- for (int i = 0; i < 3; ++i) {
- diagonal[i] = sqrt(diagonal[i] / options.initial_radius);
+ for (double& diagonal_entry : diagonal) {
+ diagonal_entry = sqrt(diagonal_entry / options.initial_radius);
}
RegularizationCheckingLinearSolver linear_solver(3, diagonal);
@@ -149,7 +147,7 @@
// are versions of glog which are not in the google namespace.
using namespace google;
-#if defined(_MSC_VER)
+#if defined(GLOG_NO_ABBREVIATED_SEVERITIES)
// Use GLOG_WARNING to support MSVC if GLOG_NO_ABBREVIATED_SEVERITIES
// is defined.
EXPECT_CALL(log,
@@ -161,7 +159,7 @@
TrustRegionStrategy::Summary summary =
lms.ComputeStep(pso, &dsm, &residual, x);
- EXPECT_EQ(summary.termination_type, LINEAR_SOLVER_FAILURE);
+ EXPECT_EQ(summary.termination_type, LinearSolverTerminationType::FAILURE);
}
}
diff --git a/internal/ceres/line_search.cc b/internal/ceres/line_search.cc
index 7e871a2..eb2c7c9 100644
--- a/internal/ceres/line_search.cc
+++ b/internal/ceres/line_search.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,7 +33,11 @@
#include <algorithm>
#include <cmath>
#include <iomanip>
-#include <iostream> // NOLINT
+#include <map>
+#include <memory>
+#include <ostream> // NOLINT
+#include <string>
+#include <vector>
#include "ceres/evaluator.h"
#include "ceres/function_sample.h"
@@ -44,48 +48,41 @@
#include "ceres/wall_time.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::map;
-using std::ostream;
-using std::string;
-using std::vector;
+namespace ceres::internal {
namespace {
// Precision used for floating point values in error message output.
const int kErrorMessageNumericPrecision = 8;
} // namespace
-ostream& operator<<(ostream& os, const FunctionSample& sample);
+std::ostream& operator<<(std::ostream& os, const FunctionSample& sample);
// Convenience stream operator for pushing FunctionSamples into log messages.
-ostream& operator<<(ostream& os, const FunctionSample& sample) {
+std::ostream& operator<<(std::ostream& os, const FunctionSample& sample) {
os << sample.ToDebugString();
return os;
}
+LineSearch::~LineSearch() = default;
+
LineSearch::LineSearch(const LineSearch::Options& options)
: options_(options) {}
-LineSearch* LineSearch::Create(const LineSearchType line_search_type,
- const LineSearch::Options& options,
- string* error) {
- LineSearch* line_search = NULL;
+std::unique_ptr<LineSearch> LineSearch::Create(
+ const LineSearchType line_search_type,
+ const LineSearch::Options& options,
+ std::string* error) {
switch (line_search_type) {
case ceres::ARMIJO:
- line_search = new ArmijoLineSearch(options);
- break;
+ return std::make_unique<ArmijoLineSearch>(options);
case ceres::WOLFE:
- line_search = new WolfeLineSearch(options);
- break;
+ return std::make_unique<WolfeLineSearch>(options);
default:
- *error = string("Invalid line search algorithm type: ") +
+ *error = std::string("Invalid line search algorithm type: ") +
LineSearchTypeToString(line_search_type) +
- string(", unable to create line search.");
- return NULL;
+ std::string(", unable to create line search.");
}
- return line_search;
+ return nullptr;
}
LineSearchFunction::LineSearchFunction(Evaluator* evaluator)
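With Create now returning std::unique_ptr<LineSearch>, ownership passes directly to the caller. A minimal sketch of the calling pattern (options is assumed to be a populated LineSearch::Options; error reporting via glog as used elsewhere in this file):

    std::string error;
    std::unique_ptr<LineSearch> line_search =
        LineSearch::Create(ceres::WOLFE, options, &error);
    if (line_search == nullptr) {
      LOG(ERROR) << error;  // e.g. an unknown line search type was requested
    }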
@@ -119,13 +116,13 @@
}
output->vector_x_is_valid = true;
- double* gradient = NULL;
+ double* gradient = nullptr;
if (evaluate_gradient) {
output->vector_gradient.resize(direction_.rows(), 1);
gradient = output->vector_gradient.data();
}
const bool eval_status = evaluator_->Evaluate(
- output->vector_x.data(), &(output->value), NULL, gradient, NULL);
+ output->vector_x.data(), &(output->value), nullptr, gradient, nullptr);
if (!eval_status || !std::isfinite(output->value)) {
return;
@@ -150,7 +147,7 @@
}
void LineSearchFunction::ResetTimeStatistics() {
- const map<string, CallStatistics> evaluator_statistics =
+ const std::map<std::string, CallStatistics> evaluator_statistics =
evaluator_->Statistics();
initial_evaluator_residual_time_in_seconds =
@@ -166,7 +163,7 @@
void LineSearchFunction::TimeStatistics(
double* cost_evaluation_time_in_seconds,
double* gradient_evaluation_time_in_seconds) const {
- const map<string, CallStatistics> evaluator_time_statistics =
+ const std::map<std::string, CallStatistics> evaluator_time_statistics =
evaluator_->Statistics();
*cost_evaluation_time_in_seconds =
FindWithDefault(
@@ -243,18 +240,18 @@
// Select step size by interpolating the function and gradient values
// and minimizing the corresponding polynomial.
- vector<FunctionSample> samples;
+ std::vector<FunctionSample> samples;
samples.push_back(lowerbound);
if (interpolation_type == QUADRATIC) {
// Two point interpolation using function values and the
// gradient at the lower bound.
- samples.push_back(FunctionSample(current.x, current.value));
+ samples.emplace_back(current.x, current.value);
if (previous.value_is_valid) {
// Three point interpolation, using function values and the
// gradient at the lower bound.
- samples.push_back(FunctionSample(previous.x, previous.value));
+ samples.emplace_back(previous.x, previous.value);
}
} else if (interpolation_type == CUBIC) {
// Two point interpolation using the function values and the gradients.
@@ -427,7 +424,7 @@
// shrank the bracket width until it was below our minimum tolerance.
// As these are 'artificial' constraints, and we would otherwise fail to
// produce a valid point when ArmijoLineSearch would succeed, we return the
- // point with the lowest cost found thus far which satsifies the Armijo
+ // point with the lowest cost found thus far which satisfies the Armijo
// condition (but not the Wolfe conditions).
summary->optimal_point = bracket_low;
summary->success = true;
@@ -449,8 +446,8 @@
// defined by bracket_low & bracket_high, which satisfy:
//
// 1. The interval bounded by step sizes: bracket_low.x & bracket_high.x
- // contains step sizes that satsify the strong Wolfe conditions.
- // 2. bracket_low.x is of all the step sizes evaluated *which satisifed the
+ // contains step sizes that satisfy the strong Wolfe conditions.
+ // 2. bracket_low.x is of all the step sizes evaluated *which satisfied the
// Armijo sufficient decrease condition*, the one which generated the
// smallest function value, i.e. bracket_low.value <
// f(all other steps satisfying Armijo).
@@ -494,7 +491,7 @@
// Or, searching was stopped due to an 'artificial' constraint, i.e. not
// a condition imposed / required by the underlying algorithm, but instead an
// engineering / implementation consideration. But a step which exceeds the
-// minimum step size, and satsifies the Armijo condition was still found,
+// minimum step size, and satisfies the Armijo condition was still found,
// and should thus be used [zoom not required].
//
// Returns false if no step size > minimum step size was found which
@@ -518,7 +515,7 @@
// As we require the gradient to evaluate the Wolfe condition, we always
// calculate it together with the value, irrespective of the interpolation
// type. As opposed to only calculating the gradient after the Armijo
- // condition is satisifed, as the computational saving from this approach
+ // condition is satisfied, as the computational saving from this approach
// would be slight (perhaps even negative due to the extra call). Also,
// always calculating the value & gradient together protects against us
// reporting invalid solutions if the cost function returns slightly different
@@ -821,7 +818,7 @@
// As we require the gradient to evaluate the Wolfe condition, we always
// calculate it together with the value, irrespective of the interpolation
// type. As opposed to only calculating the gradient after the Armijo
- // condition is satisifed, as the computational saving from this approach
+ // condition is satisfied, as the computational saving from this approach
// would be slight (perhaps even negative due to the extra call). Also,
// always calculating the value & gradient together protects against us
// reporting invalid solutions if the cost function returns slightly
@@ -883,5 +880,4 @@
return true;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/line_search.h b/internal/ceres/line_search.h
index 634c971..acf85c0 100644
--- a/internal/ceres/line_search.h
+++ b/internal/ceres/line_search.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,16 +33,16 @@
#ifndef CERES_INTERNAL_LINE_SEARCH_H_
#define CERES_INTERNAL_LINE_SEARCH_H_
+#include <memory>
#include <string>
#include <vector>
#include "ceres/function_sample.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class Evaluator;
class LineSearchFunction;
@@ -57,11 +57,11 @@
// sufficient decrease condition. Depending on the particular
// condition used, we get a variety of different line search
// algorithms, e.g., Armijo, Wolfe etc.
-class LineSearch {
+class CERES_NO_EXPORT LineSearch {
public:
struct Summary;
- struct Options {
+ struct CERES_NO_EXPORT Options {
// Degree of the polynomial used to approximate the objective
// function.
LineSearchInterpolationType interpolation_type = CUBIC;
@@ -161,11 +161,12 @@
};
explicit LineSearch(const LineSearch::Options& options);
- virtual ~LineSearch() {}
+ virtual ~LineSearch();
- static LineSearch* Create(const LineSearchType line_search_type,
- const LineSearch::Options& options,
- std::string* error);
+ static std::unique_ptr<LineSearch> Create(
+ const LineSearchType line_search_type,
+ const LineSearch::Options& options,
+ std::string* error);
// Perform the line search.
//
@@ -208,7 +209,7 @@
// In practice, this object provides access to the objective
// function value and the directional derivative of the underlying
// optimization problem along a specific search direction.
-class LineSearchFunction {
+class CERES_NO_EXPORT LineSearchFunction {
public:
explicit LineSearchFunction(Evaluator* evaluator);
void Init(const Vector& position, const Vector& direction);
@@ -257,10 +258,9 @@
// minFunc package by Mark Schmidt.
//
// For more details: http://www.di.ens.fr/~mschmidt/Software/minFunc.html
-class ArmijoLineSearch : public LineSearch {
+class CERES_NO_EXPORT ArmijoLineSearch final : public LineSearch {
public:
explicit ArmijoLineSearch(const LineSearch::Options& options);
- virtual ~ArmijoLineSearch() {}
private:
void DoSearch(double step_size_estimate,
@@ -276,10 +276,9 @@
//
// [1] Nocedal J., Wright S., Numerical Optimization, 2nd Ed., Springer, 1999.
// [2] http://www.di.ens.fr/~mschmidt/Software/minFunc.html.
-class WolfeLineSearch : public LineSearch {
+class CERES_NO_EXPORT WolfeLineSearch final : public LineSearch {
public:
explicit WolfeLineSearch(const LineSearch::Options& options);
- virtual ~WolfeLineSearch() {}
// Returns true iff either a valid point, or valid bracket are found.
bool BracketingPhase(const FunctionSample& initial_position,
@@ -302,7 +301,6 @@
Summary* summary) const final;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_LINE_SEARCH_H_
diff --git a/internal/ceres/line_search_direction.cc b/internal/ceres/line_search_direction.cc
index 48e6c98..62fcc81 100644
--- a/internal/ceres/line_search_direction.cc
+++ b/internal/ceres/line_search_direction.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,26 +30,28 @@
#include "ceres/line_search_direction.h"
+#include <memory>
+
#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
#include "ceres/line_search_minimizer.h"
#include "ceres/low_rank_inverse_hessian.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-class SteepestDescent : public LineSearchDirection {
+class CERES_NO_EXPORT SteepestDescent final : public LineSearchDirection {
public:
- virtual ~SteepestDescent() {}
- bool NextDirection(const LineSearchMinimizer::State& previous,
+ bool NextDirection(const LineSearchMinimizer::State& /*previous*/,
const LineSearchMinimizer::State& current,
- Vector* search_direction) {
+ Vector* search_direction) override {
*search_direction = -current.gradient;
return true;
}
};
-class NonlinearConjugateGradient : public LineSearchDirection {
+class CERES_NO_EXPORT NonlinearConjugateGradient final
+ : public LineSearchDirection {
public:
NonlinearConjugateGradient(const NonlinearConjugateGradientType type,
const double function_tolerance)
@@ -57,7 +59,7 @@
bool NextDirection(const LineSearchMinimizer::State& previous,
const LineSearchMinimizer::State& current,
- Vector* search_direction) {
+ Vector* search_direction) override {
double beta = 0.0;
Vector gradient_change;
switch (type_) {
@@ -95,7 +97,7 @@
const double function_tolerance_;
};
-class LBFGS : public LineSearchDirection {
+class CERES_NO_EXPORT LBFGS final : public LineSearchDirection {
public:
LBFGS(const int num_parameters,
const int max_lbfgs_rank,
@@ -105,11 +107,9 @@
use_approximate_eigenvalue_bfgs_scaling),
is_positive_definite_(true) {}
- virtual ~LBFGS() {}
-
bool NextDirection(const LineSearchMinimizer::State& previous,
const LineSearchMinimizer::State& current,
- Vector* search_direction) {
+ Vector* search_direction) override {
CHECK(is_positive_definite_)
<< "Ceres bug: NextDirection() called on L-BFGS after inverse Hessian "
<< "approximation has become indefinite, please contact the "
@@ -120,8 +120,8 @@
current.gradient - previous.gradient);
search_direction->setZero();
- low_rank_inverse_hessian_.RightMultiply(current.gradient.data(),
- search_direction->data());
+ low_rank_inverse_hessian_.RightMultiplyAndAccumulate(
+ current.gradient.data(), search_direction->data());
*search_direction *= -1.0;
if (search_direction->dot(current.gradient) >= 0.0) {
@@ -141,7 +141,7 @@
bool is_positive_definite_;
};
-class BFGS : public LineSearchDirection {
+class CERES_NO_EXPORT BFGS final : public LineSearchDirection {
public:
BFGS(const int num_parameters, const bool use_approximate_eigenvalue_scaling)
: num_parameters_(num_parameters),
@@ -161,11 +161,9 @@
inverse_hessian_ = Matrix::Identity(num_parameters, num_parameters);
}
- virtual ~BFGS() {}
-
bool NextDirection(const LineSearchMinimizer::State& previous,
const LineSearchMinimizer::State& current,
- Vector* search_direction) {
+ Vector* search_direction) override {
CHECK(is_positive_definite_)
<< "Ceres bug: NextDirection() called on BFGS after inverse Hessian "
<< "approximation has become indefinite, please contact the "
@@ -243,7 +241,7 @@
//
// The original origin of this rescaling trick is somewhat unclear, the
// earliest reference appears to be Oren [1], however it is widely
- // discussed without specific attributation in various texts including
+ // discussed without specific attribution in various texts including
// [2] (p143).
//
// [1] Oren S.S., Self-scaling variable metric (SSVM) algorithms
@@ -338,33 +336,34 @@
bool is_positive_definite_;
};
-LineSearchDirection* LineSearchDirection::Create(
+LineSearchDirection::~LineSearchDirection() = default;
+
+std::unique_ptr<LineSearchDirection> LineSearchDirection::Create(
const LineSearchDirection::Options& options) {
if (options.type == STEEPEST_DESCENT) {
- return new SteepestDescent;
+ return std::make_unique<SteepestDescent>();
}
if (options.type == NONLINEAR_CONJUGATE_GRADIENT) {
- return new NonlinearConjugateGradient(
+ return std::make_unique<NonlinearConjugateGradient>(
options.nonlinear_conjugate_gradient_type, options.function_tolerance);
}
if (options.type == ceres::LBFGS) {
- return new ceres::internal::LBFGS(
+ return std::make_unique<ceres::internal::LBFGS>(
options.num_parameters,
options.max_lbfgs_rank,
options.use_approximate_eigenvalue_bfgs_scaling);
}
if (options.type == ceres::BFGS) {
- return new ceres::internal::BFGS(
+ return std::make_unique<ceres::internal::BFGS>(
options.num_parameters,
options.use_approximate_eigenvalue_bfgs_scaling);
}
LOG(ERROR) << "Unknown line search direction type: " << options.type;
- return NULL;
+ return nullptr;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/line_search_direction.h b/internal/ceres/line_search_direction.h
index 2fcf472..6716840 100644
--- a/internal/ceres/line_search_direction.h
+++ b/internal/ceres/line_search_direction.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,41 +31,35 @@
#ifndef CERES_INTERNAL_LINE_SEARCH_DIRECTION_H_
#define CERES_INTERNAL_LINE_SEARCH_DIRECTION_H_
+#include <memory>
+
#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
#include "ceres/line_search_minimizer.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-class LineSearchDirection {
+class CERES_NO_EXPORT LineSearchDirection {
public:
struct Options {
- Options()
- : num_parameters(0),
- type(LBFGS),
- nonlinear_conjugate_gradient_type(FLETCHER_REEVES),
- function_tolerance(1e-12),
- max_lbfgs_rank(20),
- use_approximate_eigenvalue_bfgs_scaling(true) {}
-
- int num_parameters;
- LineSearchDirectionType type;
- NonlinearConjugateGradientType nonlinear_conjugate_gradient_type;
- double function_tolerance;
- int max_lbfgs_rank;
- bool use_approximate_eigenvalue_bfgs_scaling;
+ int num_parameters{0};
+ LineSearchDirectionType type{LBFGS};
+ NonlinearConjugateGradientType nonlinear_conjugate_gradient_type{
+ FLETCHER_REEVES};
+ double function_tolerance{1e-12};
+ int max_lbfgs_rank{20};
+ bool use_approximate_eigenvalue_bfgs_scaling{true};
};
- static LineSearchDirection* Create(const Options& options);
+ static std::unique_ptr<LineSearchDirection> Create(const Options& options);
- virtual ~LineSearchDirection() {}
+ virtual ~LineSearchDirection();
virtual bool NextDirection(const LineSearchMinimizer::State& previous,
const LineSearchMinimizer::State& current,
Vector* search_direction) = 0;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_LINE_SEARCH_DIRECTION_H_
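// Editorial sketch, not part of the patch: how the refactored interface above
// is meant to be used. Options now has in-class default member initializers
// and Create() returns a std::unique_ptr. The states and gradient values
// below are purely illustrative.
#include <memory>

#include "ceres/internal/eigen.h"
#include "ceres/line_search_direction.h"
#include "ceres/line_search_minimizer.h"

namespace ceres::internal {

void LineSearchDirectionSketch() {
  LineSearchDirection::Options options;  // Defaults: LBFGS, rank 20, etc.
  options.num_parameters = 2;
  options.type = STEEPEST_DESCENT;
  std::unique_ptr<LineSearchDirection> direction =
      LineSearchDirection::Create(options);

  // The minimizer normally fills these states in from Evaluator::Evaluate.
  LineSearchMinimizer::State previous(2, 2);
  LineSearchMinimizer::State current(2, 2);
  current.gradient << 1.0, -2.0;

  Vector search_direction(2);
  direction->NextDirection(previous, current, &search_direction);
  // For STEEPEST_DESCENT this is exactly -current.gradient.
}

}  // namespace ceres::internal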
diff --git a/internal/ceres/line_search_minimizer.cc b/internal/ceres/line_search_minimizer.cc
index ea1c507..58a4bf9 100644
--- a/internal/ceres/line_search_minimizer.cc
+++ b/internal/ceres/line_search_minimizer.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,7 +30,7 @@
//
// Generic loop for line search based optimization algorithms.
//
-// This is primarily inpsired by the minFunc packaged written by Mark
+// This is primarily inspired by the minFunc package written by Mark
// Schmidt.
//
// http://www.di.ens.fr/~mschmidt/Software/minFunc.html
@@ -51,7 +51,7 @@
#include "ceres/array_utils.h"
#include "ceres/evaluator.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/line_search.h"
#include "ceres/line_search_direction.h"
#include "ceres/stringprintf.h"
@@ -59,8 +59,7 @@
#include "ceres/wall_time.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
namespace {
bool EvaluateGradientNorms(Evaluator* evaluator,
@@ -171,8 +170,8 @@
line_search_direction_options.max_lbfgs_rank = options.max_lbfgs_rank;
line_search_direction_options.use_approximate_eigenvalue_bfgs_scaling =
options.use_approximate_eigenvalue_bfgs_scaling;
- std::unique_ptr<LineSearchDirection> line_search_direction(
- LineSearchDirection::Create(line_search_direction_options));
+ std::unique_ptr<LineSearchDirection> line_search_direction =
+ LineSearchDirection::Create(line_search_direction_options);
LineSearchFunction line_search_function(evaluator);
@@ -280,8 +279,8 @@
<< options.max_num_line_search_direction_restarts
<< " [max].";
}
- line_search_direction.reset(
- LineSearchDirection::Create(line_search_direction_options));
+ line_search_direction =
+ LineSearchDirection::Create(line_search_direction_options);
current_state.search_direction = -current_state.gradient;
}
@@ -473,5 +472,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/line_search_minimizer.h b/internal/ceres/line_search_minimizer.h
index 79e8dc9..f3621d9 100644
--- a/internal/ceres/line_search_minimizer.h
+++ b/internal/ceres/line_search_minimizer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,21 +32,21 @@
#define CERES_INTERNAL_LINE_SEARCH_MINIMIZER_H_
#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
#include "ceres/minimizer.h"
#include "ceres/solver.h"
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Generic line search minimization algorithm.
//
// For example usage, see SolverImpl::Minimize.
-class LineSearchMinimizer : public Minimizer {
+class CERES_NO_EXPORT LineSearchMinimizer final : public Minimizer {
public:
struct State {
- State(int num_parameters, int num_effective_parameters)
+ State(int /*num_parameters*/, int num_effective_parameters)
: cost(0.0),
gradient(num_effective_parameters),
gradient_squared_norm(0.0),
@@ -63,13 +63,11 @@
double step_size;
};
- ~LineSearchMinimizer() {}
void Minimize(const Minimizer::Options& options,
double* parameters,
Solver::Summary* summary) final;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_LINE_SEARCH_MINIMIZER_H_
diff --git a/internal/ceres/line_search_minimizer_test.cc b/internal/ceres/line_search_minimizer_test.cc
index 2ef27b9..3b15ae8 100644
--- a/internal/ceres/line_search_minimizer_test.cc
+++ b/internal/ceres/line_search_minimizer_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,8 +35,7 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class QuadraticFirstOrderFunction : public ceres::FirstOrderFunction {
public:
@@ -44,7 +43,7 @@
double* cost,
double* gradient) const final {
cost[0] = parameters[0] * parameters[0];
- if (gradient != NULL) {
+ if (gradient != nullptr) {
gradient[0] = 2.0 * parameters[0];
}
return true;
@@ -62,5 +61,4 @@
EXPECT_NEAR(summary.final_cost, 0.0, std::numeric_limits<double>::epsilon());
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/line_search_preprocessor.cc b/internal/ceres/line_search_preprocessor.cc
index 6a69425..3109c48 100644
--- a/internal/ceres/line_search_preprocessor.cc
+++ b/internal/ceres/line_search_preprocessor.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -41,8 +41,7 @@
#include "ceres/program.h"
#include "ceres/wall_time.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
namespace {
bool IsProgramValid(const Program& program, std::string* error) {
@@ -63,15 +62,13 @@
pp->evaluator_options.context = pp->problem->context();
pp->evaluator_options.evaluation_callback =
pp->reduced_program->mutable_evaluation_callback();
- pp->evaluator.reset(Evaluator::Create(
- pp->evaluator_options, pp->reduced_program.get(), &pp->error));
- return (pp->evaluator.get() != NULL);
+ pp->evaluator = Evaluator::Create(
+ pp->evaluator_options, pp->reduced_program.get(), &pp->error);
+ return (pp->evaluator.get() != nullptr);
}
} // namespace
-LineSearchPreprocessor::~LineSearchPreprocessor() {}
-
bool LineSearchPreprocessor::Preprocess(const Solver::Options& options,
ProblemImpl* problem,
PreprocessedProblem* pp) {
@@ -85,10 +82,10 @@
return false;
}
- pp->reduced_program.reset(program->CreateReducedProgram(
- &pp->removed_parameter_blocks, &pp->fixed_cost, &pp->error));
+ pp->reduced_program = program->CreateReducedProgram(
+ &pp->removed_parameter_blocks, &pp->fixed_cost, &pp->error);
- if (pp->reduced_program.get() == NULL) {
+ if (pp->reduced_program.get() == nullptr) {
return false;
}
@@ -104,5 +101,4 @@
return true;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/line_search_preprocessor.h b/internal/ceres/line_search_preprocessor.h
index bd426c7..0ffdba1 100644
--- a/internal/ceres/line_search_preprocessor.h
+++ b/internal/ceres/line_search_preprocessor.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,21 +31,21 @@
#ifndef CERES_INTERNAL_LINE_SEARCH_PREPROCESSOR_H_
#define CERES_INTERNAL_LINE_SEARCH_PREPROCESSOR_H_
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/preprocessor.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-class CERES_EXPORT_INTERNAL LineSearchPreprocessor : public Preprocessor {
+class CERES_NO_EXPORT LineSearchPreprocessor final : public Preprocessor {
public:
- virtual ~LineSearchPreprocessor();
bool Preprocess(const Solver::Options& options,
ProblemImpl* problem,
PreprocessedProblem* preprocessed_problem) final;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_LINE_SEARCH_PREPROCESSOR_H_
diff --git a/internal/ceres/line_search_preprocessor_test.cc b/internal/ceres/line_search_preprocessor_test.cc
index 68860c5..e002a4b 100644
--- a/internal/ceres/line_search_preprocessor_test.cc
+++ b/internal/ceres/line_search_preprocessor_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,8 +37,7 @@
#include "ceres/solver.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST(LineSearchPreprocessor, ZeroProblem) {
ProblemImpl problem;
@@ -77,7 +76,7 @@
public:
bool Evaluate(double const* const* parameters,
double* residuals,
- double** jacobians) const {
+ double** jacobians) const override {
return false;
}
};
@@ -85,7 +84,7 @@
TEST(LineSearchPreprocessor, RemoveParameterBlocksFailed) {
ProblemImpl problem;
double x = 3.0;
- problem.AddResidualBlock(new FailingCostFunction, NULL, &x);
+ problem.AddResidualBlock(new FailingCostFunction, nullptr, &x);
problem.SetParameterBlockConstant(&x);
Solver::Options options;
options.minimizer_type = LINE_SEARCH;
@@ -111,7 +110,7 @@
public:
bool Evaluate(double const* const* parameters,
double* residuals,
- double** jacobians) const {
+ double** jacobians) const override {
return true;
}
};
@@ -121,8 +120,8 @@
double x = 1.0;
double y = 1.0;
double z = 1.0;
- problem.AddResidualBlock(new DummyCostFunction<1, 1, 1>, NULL, &x, &y);
- problem.AddResidualBlock(new DummyCostFunction<1, 1, 1>, NULL, &y, &z);
+ problem.AddResidualBlock(new DummyCostFunction<1, 1, 1>, nullptr, &x, &y);
+ problem.AddResidualBlock(new DummyCostFunction<1, 1, 1>, nullptr, &y, &z);
Solver::Options options;
options.minimizer_type = LINE_SEARCH;
@@ -131,8 +130,7 @@
PreprocessedProblem pp;
EXPECT_TRUE(preprocessor.Preprocess(options, &problem, &pp));
EXPECT_EQ(pp.evaluator_options.linear_solver_type, CGNR);
- EXPECT_TRUE(pp.evaluator.get() != NULL);
+ EXPECT_TRUE(pp.evaluator.get() != nullptr);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/linear_least_squares_problems.cc b/internal/ceres/linear_least_squares_problems.cc
index 299051c..36cffec 100644
--- a/internal/ceres/linear_least_squares_problems.cc
+++ b/internal/ceres/linear_least_squares_problems.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -44,12 +44,10 @@
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::string;
-
-LinearLeastSquaresProblem* CreateLinearLeastSquaresProblemFromId(int id) {
+std::unique_ptr<LinearLeastSquaresProblem>
+CreateLinearLeastSquaresProblemFromId(int id) {
switch (id) {
case 0:
return LinearLeastSquaresProblem0();
@@ -61,10 +59,14 @@
return LinearLeastSquaresProblem3();
case 4:
return LinearLeastSquaresProblem4();
+ case 5:
+ return LinearLeastSquaresProblem5();
+ case 6:
+ return LinearLeastSquaresProblem6();
default:
LOG(FATAL) << "Unknown problem id requested " << id;
}
- return NULL;
+ return nullptr;
}
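// Editorial sketch, not part of the patch: with the unique_ptr-returning
// factory, call sites own the problem directly instead of wrapping a raw
// pointer. The problem id is illustrative.
#include <memory>

#include "ceres/linear_least_squares_problems.h"

namespace ceres::internal {

void ProblemFactorySketch() {
  std::unique_ptr<LinearLeastSquaresProblem> problem =
      CreateLinearLeastSquaresProblemFromId(2);
  // problem->A, problem->b, problem->D and, where provided, the ground truth
  // problem->x are released automatically when problem goes out of scope.
}

}  // namespace ceres::internal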
/*
@@ -85,15 +87,15 @@
x_D = [1.78448275;
2.82327586;]
*/
-LinearLeastSquaresProblem* LinearLeastSquaresProblem0() {
- LinearLeastSquaresProblem* problem = new LinearLeastSquaresProblem;
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem0() {
+ auto problem = std::make_unique<LinearLeastSquaresProblem>();
- TripletSparseMatrix* A = new TripletSparseMatrix(3, 2, 6);
- problem->b.reset(new double[3]);
- problem->D.reset(new double[2]);
+ auto A = std::make_unique<TripletSparseMatrix>(3, 2, 6);
+ problem->b = std::make_unique<double[]>(3);
+ problem->D = std::make_unique<double[]>(2);
- problem->x.reset(new double[2]);
- problem->x_D.reset(new double[2]);
+ problem->x = std::make_unique<double[]>(2);
+ problem->x_D = std::make_unique<double[]>(2);
int* Ai = A->mutable_rows();
int* Aj = A->mutable_cols();
@@ -115,7 +117,7 @@
Ax[4] = 6;
Ax[5] = -10;
A->set_num_nonzeros(6);
- problem->A.reset(A);
+ problem->A = std::move(A);
problem->b[0] = 8;
problem->b[1] = 18;
@@ -159,13 +161,15 @@
12 0 1 17 1
0 30 1 1 37]
+ cond(A'A) = 200.36
+
S = [ 42.3419 -1.4000 -11.5806
-1.4000 2.6000 1.0000
- 11.5806 1.0000 31.1935]
+ -11.5806 1.0000 31.1935]
r = [ 4.3032
5.4000
- 5.0323]
+ 4.0323]
S\r = [ 0.2102
2.1367
@@ -181,17 +185,25 @@
// BlockSparseMatrix version of this problem.
// TripletSparseMatrix version.
-LinearLeastSquaresProblem* LinearLeastSquaresProblem1() {
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem1() {
int num_rows = 6;
int num_cols = 5;
- LinearLeastSquaresProblem* problem = new LinearLeastSquaresProblem;
- TripletSparseMatrix* A =
- new TripletSparseMatrix(num_rows, num_cols, num_rows * num_cols);
- problem->b.reset(new double[num_rows]);
- problem->D.reset(new double[num_cols]);
+ auto problem = std::make_unique<LinearLeastSquaresProblem>();
+
+ auto A = std::make_unique<TripletSparseMatrix>(
+ num_rows, num_cols, num_rows * num_cols);
+ problem->b = std::make_unique<double[]>(num_rows);
+ problem->D = std::make_unique<double[]>(num_cols);
problem->num_eliminate_blocks = 2;
+ problem->x = std::make_unique<double[]>(num_cols);
+ problem->x[0] = -2.3061;
+ problem->x[1] = 0.3172;
+ problem->x[2] = 0.2102;
+ problem->x[3] = 2.1367;
+ problem->x[4] = 0.1388;
+
int* rows = A->mutable_rows();
int* cols = A->mutable_cols();
double* values = A->mutable_values();
@@ -271,7 +283,7 @@
A->set_num_nonzeros(nnz);
CHECK(A->IsValid());
- problem->A.reset(A);
+ problem->A = std::move(A);
for (int i = 0; i < num_cols; ++i) {
problem->D.get()[i] = 1;
@@ -285,21 +297,28 @@
}
// BlockSparseMatrix version
-LinearLeastSquaresProblem* LinearLeastSquaresProblem2() {
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem2() {
int num_rows = 6;
int num_cols = 5;
- LinearLeastSquaresProblem* problem = new LinearLeastSquaresProblem;
+ auto problem = std::make_unique<LinearLeastSquaresProblem>();
- problem->b.reset(new double[num_rows]);
- problem->D.reset(new double[num_cols]);
+ problem->b = std::make_unique<double[]>(num_rows);
+ problem->D = std::make_unique<double[]>(num_cols);
problem->num_eliminate_blocks = 2;
- CompressedRowBlockStructure* bs = new CompressedRowBlockStructure;
- std::unique_ptr<double[]> values(new double[num_rows * num_cols]);
+ problem->x = std::make_unique<double[]>(num_cols);
+ problem->x[0] = -2.3061;
+ problem->x[1] = 0.3172;
+ problem->x[2] = 0.2102;
+ problem->x[3] = 2.1367;
+ problem->x[4] = 0.1388;
+
+ auto* bs = new CompressedRowBlockStructure;
+ auto values = std::make_unique<double[]>(num_rows * num_cols);
for (int c = 0; c < num_cols; ++c) {
- bs->cols.push_back(Block());
+ bs->cols.emplace_back();
bs->cols.back().size = 1;
bs->cols.back().position = c;
}
@@ -311,12 +330,12 @@
values[nnz++] = 1;
values[nnz++] = 2;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 0;
- row.cells.push_back(Cell(0, 0));
- row.cells.push_back(Cell(2, 1));
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(2, 1);
}
// Row 2
@@ -324,12 +343,12 @@
values[nnz++] = 3;
values[nnz++] = 4;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 1;
- row.cells.push_back(Cell(0, 2));
- row.cells.push_back(Cell(3, 3));
+ row.cells.emplace_back(0, 2);
+ row.cells.emplace_back(3, 3);
}
// Row 3
@@ -337,12 +356,12 @@
values[nnz++] = 5;
values[nnz++] = 6;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 2;
- row.cells.push_back(Cell(1, 4));
- row.cells.push_back(Cell(4, 5));
+ row.cells.emplace_back(1, 4);
+ row.cells.emplace_back(4, 5);
}
// Row 4
@@ -350,12 +369,12 @@
values[nnz++] = 7;
values[nnz++] = 8;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 3;
- row.cells.push_back(Cell(1, 6));
- row.cells.push_back(Cell(2, 7));
+ row.cells.emplace_back(1, 6);
+ row.cells.emplace_back(2, 7);
}
// Row 5
@@ -363,12 +382,12 @@
values[nnz++] = 9;
values[nnz++] = 1;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 4;
- row.cells.push_back(Cell(1, 8));
- row.cells.push_back(Cell(2, 9));
+ row.cells.emplace_back(1, 8);
+ row.cells.emplace_back(2, 9);
}
// Row 6
@@ -377,16 +396,16 @@
values[nnz++] = 1;
values[nnz++] = 1;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 5;
- row.cells.push_back(Cell(2, 10));
- row.cells.push_back(Cell(3, 11));
- row.cells.push_back(Cell(4, 12));
+ row.cells.emplace_back(2, 10);
+ row.cells.emplace_back(3, 11);
+ row.cells.emplace_back(4, 12);
}
- BlockSparseMatrix* A = new BlockSparseMatrix(bs);
+ auto A = std::make_unique<BlockSparseMatrix>(bs);
memcpy(A->mutable_values(), values.get(), nnz * sizeof(*A->values()));
for (int i = 0; i < num_cols; ++i) {
@@ -397,7 +416,7 @@
problem->b.get()[i] = i;
}
- problem->A.reset(A);
+ problem->A = std::move(A);
return problem;
}
@@ -418,21 +437,21 @@
5]
*/
// BlockSparseMatrix version
-LinearLeastSquaresProblem* LinearLeastSquaresProblem3() {
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem3() {
int num_rows = 5;
int num_cols = 2;
- LinearLeastSquaresProblem* problem = new LinearLeastSquaresProblem;
+ auto problem = std::make_unique<LinearLeastSquaresProblem>();
- problem->b.reset(new double[num_rows]);
- problem->D.reset(new double[num_cols]);
+ problem->b = std::make_unique<double[]>(num_rows);
+ problem->D = std::make_unique<double[]>(num_cols);
problem->num_eliminate_blocks = 2;
- CompressedRowBlockStructure* bs = new CompressedRowBlockStructure;
- std::unique_ptr<double[]> values(new double[num_rows * num_cols]);
+ auto* bs = new CompressedRowBlockStructure;
+ auto values = std::make_unique<double[]>(num_rows * num_cols);
for (int c = 0; c < num_cols; ++c) {
- bs->cols.push_back(Block());
+ bs->cols.emplace_back();
bs->cols.back().size = 1;
bs->cols.back().position = c;
}
@@ -442,54 +461,54 @@
// Row 1
{
values[nnz++] = 1;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 0;
- row.cells.push_back(Cell(0, 0));
+ row.cells.emplace_back(0, 0);
}
// Row 2
{
values[nnz++] = 3;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 1;
- row.cells.push_back(Cell(0, 1));
+ row.cells.emplace_back(0, 1);
}
// Row 3
{
values[nnz++] = 5;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 2;
- row.cells.push_back(Cell(1, 2));
+ row.cells.emplace_back(1, 2);
}
// Row 4
{
values[nnz++] = 7;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 3;
- row.cells.push_back(Cell(1, 3));
+ row.cells.emplace_back(1, 3);
}
// Row 5
{
values[nnz++] = 9;
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 4;
- row.cells.push_back(Cell(1, 4));
+ row.cells.emplace_back(1, 4);
}
- BlockSparseMatrix* A = new BlockSparseMatrix(bs);
+ auto A = std::make_unique<BlockSparseMatrix>(bs);
memcpy(A->mutable_values(), values.get(), nnz * sizeof(*A->values()));
for (int i = 0; i < num_cols; ++i) {
@@ -500,7 +519,7 @@
problem->b.get()[i] = i;
}
- problem->A.reset(A);
+ problem->A = std::move(A);
return problem;
}
@@ -525,29 +544,29 @@
//
// NOTE: This problem is too small and rank deficient to be solved without
// the diagonal regularization.
-LinearLeastSquaresProblem* LinearLeastSquaresProblem4() {
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem4() {
int num_rows = 3;
int num_cols = 7;
- LinearLeastSquaresProblem* problem = new LinearLeastSquaresProblem;
+ auto problem = std::make_unique<LinearLeastSquaresProblem>();
- problem->b.reset(new double[num_rows]);
- problem->D.reset(new double[num_cols]);
+ problem->b = std::make_unique<double[]>(num_rows);
+ problem->D = std::make_unique<double[]>(num_cols);
problem->num_eliminate_blocks = 1;
- CompressedRowBlockStructure* bs = new CompressedRowBlockStructure;
- std::unique_ptr<double[]> values(new double[num_rows * num_cols]);
+ auto* bs = new CompressedRowBlockStructure;
+ auto values = std::make_unique<double[]>(num_rows * num_cols);
// Column block structure
- bs->cols.push_back(Block());
+ bs->cols.emplace_back();
bs->cols.back().size = 2;
bs->cols.back().position = 0;
- bs->cols.push_back(Block());
+ bs->cols.emplace_back();
bs->cols.back().size = 3;
bs->cols.back().position = 2;
- bs->cols.push_back(Block());
+ bs->cols.emplace_back();
bs->cols.back().size = 2;
bs->cols.back().position = 5;
@@ -555,18 +574,18 @@
// Row 1 & 2
{
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 2;
row.block.position = 0;
- row.cells.push_back(Cell(0, nnz));
+ row.cells.emplace_back(0, nnz);
values[nnz++] = 1;
values[nnz++] = 2;
values[nnz++] = 1;
values[nnz++] = 4;
- row.cells.push_back(Cell(2, nnz));
+ row.cells.emplace_back(2, nnz);
values[nnz++] = 1;
values[nnz++] = 1;
values[nnz++] = 5;
@@ -575,22 +594,22 @@
// Row 3
{
- bs->rows.push_back(CompressedRow());
+ bs->rows.emplace_back();
CompressedRow& row = bs->rows.back();
row.block.size = 1;
row.block.position = 2;
- row.cells.push_back(Cell(1, nnz));
+ row.cells.emplace_back(1, nnz);
values[nnz++] = 9;
values[nnz++] = 0;
values[nnz++] = 0;
- row.cells.push_back(Cell(2, nnz));
+ row.cells.emplace_back(2, nnz);
values[nnz++] = 3;
values[nnz++] = 1;
}
- BlockSparseMatrix* A = new BlockSparseMatrix(bs);
+ auto A = std::make_unique<BlockSparseMatrix>(bs);
memcpy(A->mutable_values(), values.get(), nnz * sizeof(*A->values()));
for (int i = 0; i < num_cols; ++i) {
@@ -601,7 +620,308 @@
problem->b.get()[i] = i;
}
- problem->A.reset(A);
+ problem->A = std::move(A);
+ return problem;
+}
+
+/*
+A problem with block-diagonal F'F.
+
+ A = [-1  0 | 0 0 2
+        3  0 | 0 0 4
+        0 -1 | 0 1 0
+        0 -3 | 0 1 0
+        0 -1 | 3 0 0
+        0 -2 | 1 0 0]
+
+ b = [0
+ 1
+ 2
+ 3
+ 4
+ 5]
+
+ c = A'* b = [  3
+              -25
+               17
+                5
+                4]
+
+ A'A = [10 0 0 0 10
+ 0 15 -5 -4 0
+ 0 -5 10 0 0
+ 0 -4 0 2 0
+ 10 0 0 0 20]
+
+ cond(A'A) = 41.402
+
+ S = [ 8.3333 -1.3333 0
+ -1.3333 0.9333 0
+ 0 0 10.0000]
+
+ r = [ 8.6667
+ -1.6667
+ 1.0000]
+
+ S\r = [ 0.9778
+ -0.3889
+ 0.1000]
+
+ A\b = [ 0.2
+ -1.4444
+ 0.9777
+ -0.3888
+ 0.1]
+*/
+
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem5() {
+ int num_rows = 6;
+ int num_cols = 5;
+
+ auto problem = std::make_unique<LinearLeastSquaresProblem>();
+ problem->b = std::make_unique<double[]>(num_rows);
+ problem->D = std::make_unique<double[]>(num_cols);
+ problem->num_eliminate_blocks = 2;
+
+  // Ground truth solution x = A \ b, per the block comment above.
+ problem->x = std::make_unique<double[]>(num_cols);
+ problem->x[0] = 0.2;
+ problem->x[1] = -1.4444;
+ problem->x[2] = 0.9777;
+ problem->x[3] = -0.3888;
+ problem->x[4] = 0.1;
+
+ auto* bs = new CompressedRowBlockStructure;
+ auto values = std::make_unique<double[]>(num_rows * num_cols);
+
+ for (int c = 0; c < num_cols; ++c) {
+ bs->cols.emplace_back();
+ bs->cols.back().size = 1;
+ bs->cols.back().position = c;
+ }
+
+ int nnz = 0;
+
+ // Row 1
+ {
+ values[nnz++] = -1;
+ values[nnz++] = 2;
+
+ bs->rows.emplace_back();
+ CompressedRow& row = bs->rows.back();
+ row.block.size = 1;
+ row.block.position = 0;
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(4, 1);
+ }
+
+ // Row 2
+ {
+ values[nnz++] = 3;
+ values[nnz++] = 4;
+
+ bs->rows.emplace_back();
+ CompressedRow& row = bs->rows.back();
+ row.block.size = 1;
+ row.block.position = 1;
+ row.cells.emplace_back(0, 2);
+ row.cells.emplace_back(4, 3);
+ }
+
+ // Row 3
+ {
+ values[nnz++] = -1;
+ values[nnz++] = 1;
+
+ bs->rows.emplace_back();
+ CompressedRow& row = bs->rows.back();
+ row.block.size = 1;
+ row.block.position = 2;
+ row.cells.emplace_back(1, 4);
+ row.cells.emplace_back(3, 5);
+ }
+
+ // Row 4
+ {
+ values[nnz++] = -3;
+ values[nnz++] = 1;
+
+ bs->rows.emplace_back();
+ CompressedRow& row = bs->rows.back();
+ row.block.size = 1;
+ row.block.position = 3;
+ row.cells.emplace_back(1, 6);
+ row.cells.emplace_back(3, 7);
+ }
+
+ // Row 5
+ {
+ values[nnz++] = -1;
+ values[nnz++] = 3;
+
+ bs->rows.emplace_back();
+ CompressedRow& row = bs->rows.back();
+ row.block.size = 1;
+ row.block.position = 4;
+ row.cells.emplace_back(1, 8);
+ row.cells.emplace_back(2, 9);
+ }
+
+ // Row 6
+ {
+ // values[nnz++] = 2;
+ values[nnz++] = -2;
+ values[nnz++] = 1;
+
+ bs->rows.emplace_back();
+ CompressedRow& row = bs->rows.back();
+ row.block.size = 1;
+ row.block.position = 5;
+ // row.cells.emplace_back(0, 10);
+ row.cells.emplace_back(1, 10);
+ row.cells.emplace_back(2, 11);
+ }
+
+ auto A = std::make_unique<BlockSparseMatrix>(bs);
+ memcpy(A->mutable_values(), values.get(), nnz * sizeof(*A->values()));
+
+ for (int i = 0; i < num_cols; ++i) {
+ problem->D.get()[i] = 1;
+ }
+
+ for (int i = 0; i < num_rows; ++i) {
+ problem->b.get()[i] = i;
+ }
+
+ problem->A = std::move(A);
+
+ return problem;
+}
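// Editorial sketch, not part of the patch: one way to sanity-check the ground
// truth stored in problem->x for the new problem 5 against its matrix and
// right hand side, using the same ToDenseMatrix() helper the dump routines
// below rely on. The tolerance reflects the ~4 digits quoted in the comment.
#include "Eigen/QR"
#include "ceres/internal/eigen.h"
#include "ceres/linear_least_squares_problems.h"
#include "glog/logging.h"

namespace ceres::internal {

void CheckProblem5GroundTruth() {
  auto problem = LinearLeastSquaresProblem5();
  Matrix A_dense;
  problem->A->ToDenseMatrix(&A_dense);
  ConstVectorRef b(problem->b.get(), A_dense.rows());
  ConstVectorRef x(problem->x.get(), A_dense.cols());
  // Unregularized least-squares solution of min |Ax - b|^2.
  Vector x_ls = A_dense.colPivHouseholderQr().solve(b);
  CHECK_LT((x_ls - x).norm(), 1e-3);
}

}  // namespace ceres::internal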
+
+/*
+ A = [1 2 0 0 0 1 1
+ 1 4 0 0 0 5 6
+ 3 4 0 0 0 7 8
+ 5 6 0 0 0 9 0
+ 0 0 9 0 0 3 1]
+
+ b = [0
+ 1
+ 2
+ 3
+ 4]
+*/
+// BlockSparseMatrix version
+//
+// This problem has the unique property that it has two different
+// sized f-blocks, but only one of them occurs in the rows involving
+// the one e-block. So performing Schur elimination on this problem
+// tests the Schur Eliminator's ability to handle non-e-block rows
+// correctly when their structure does not conform to the static
+// structure determined by DetectStructure.
+//
+// Additionally, in this problem the first row of the last row block of E is
+// larger than the number of row blocks in E.
+//
+// NOTE: This problem is too small and rank deficient to be solved without
+// the diagonal regularization.
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem6() {
+ int num_rows = 5;
+ int num_cols = 7;
+
+ auto problem = std::make_unique<LinearLeastSquaresProblem>();
+
+ problem->b = std::make_unique<double[]>(num_rows);
+ problem->D = std::make_unique<double[]>(num_cols);
+ problem->num_eliminate_blocks = 1;
+
+ auto* bs = new CompressedRowBlockStructure;
+ auto values = std::make_unique<double[]>(num_rows * num_cols);
+
+ // Column block structure
+ bs->cols.emplace_back();
+ bs->cols.back().size = 2;
+ bs->cols.back().position = 0;
+
+ bs->cols.emplace_back();
+ bs->cols.back().size = 3;
+ bs->cols.back().position = 2;
+
+ bs->cols.emplace_back();
+ bs->cols.back().size = 2;
+ bs->cols.back().position = 5;
+
+ int nnz = 0;
+
+ // Row 1 & 2
+ {
+ bs->rows.emplace_back();
+ CompressedRow& row = bs->rows.back();
+ row.block.size = 2;
+ row.block.position = 0;
+
+ row.cells.emplace_back(0, nnz);
+ values[nnz++] = 1;
+ values[nnz++] = 2;
+ values[nnz++] = 1;
+ values[nnz++] = 4;
+
+ row.cells.emplace_back(2, nnz);
+ values[nnz++] = 1;
+ values[nnz++] = 1;
+ values[nnz++] = 5;
+ values[nnz++] = 6;
+ }
+
+ // Row 3 & 4
+ {
+ bs->rows.emplace_back();
+ CompressedRow& row = bs->rows.back();
+ row.block.size = 2;
+ row.block.position = 2;
+
+ row.cells.emplace_back(0, nnz);
+ values[nnz++] = 3;
+ values[nnz++] = 4;
+ values[nnz++] = 5;
+ values[nnz++] = 6;
+
+ row.cells.emplace_back(2, nnz);
+ values[nnz++] = 7;
+ values[nnz++] = 8;
+ values[nnz++] = 9;
+ values[nnz++] = 0;
+ }
+
+ // Row 5
+ {
+ bs->rows.emplace_back();
+ CompressedRow& row = bs->rows.back();
+ row.block.size = 1;
+ row.block.position = 4;
+
+ row.cells.emplace_back(1, nnz);
+ values[nnz++] = 9;
+ values[nnz++] = 0;
+ values[nnz++] = 0;
+
+ row.cells.emplace_back(2, nnz);
+ values[nnz++] = 3;
+ values[nnz++] = 1;
+ }
+
+ auto A = std::make_unique<BlockSparseMatrix>(bs);
+ memcpy(A->mutable_values(), values.get(), nnz * sizeof(*A->values()));
+
+ for (int i = 0; i < num_cols; ++i) {
+ problem->D.get()[i] = (i + 1) * 100;
+ }
+
+ for (int i = 0; i < num_rows; ++i) {
+ problem->b.get()[i] = i;
+ }
+
+ problem->A = std::move(A);
return problem;
}
@@ -610,27 +930,27 @@
const double* D,
const double* b,
const double* x,
- int num_eliminate_blocks) {
+ int /*num_eliminate_blocks*/) {
CHECK(A != nullptr);
Matrix AA;
A->ToDenseMatrix(&AA);
LOG(INFO) << "A^T: \n" << AA.transpose();
- if (D != NULL) {
+ if (D != nullptr) {
LOG(INFO) << "A's appended diagonal:\n" << ConstVectorRef(D, A->num_cols());
}
- if (b != NULL) {
+ if (b != nullptr) {
LOG(INFO) << "b: \n" << ConstVectorRef(b, A->num_rows());
}
- if (x != NULL) {
+ if (x != nullptr) {
LOG(INFO) << "x: \n" << ConstVectorRef(x, A->num_cols());
}
return true;
}
-void WriteArrayToFileOrDie(const string& filename,
+void WriteArrayToFileOrDie(const std::string& filename,
const double* x,
const int size) {
CHECK(x != nullptr);
@@ -643,23 +963,23 @@
fclose(fptr);
}
-bool DumpLinearLeastSquaresProblemToTextFile(const string& filename_base,
+bool DumpLinearLeastSquaresProblemToTextFile(const std::string& filename_base,
const SparseMatrix* A,
const double* D,
const double* b,
const double* x,
- int num_eliminate_blocks) {
+ int /*num_eliminate_blocks*/) {
CHECK(A != nullptr);
LOG(INFO) << "writing to: " << filename_base << "*";
- string matlab_script;
+ std::string matlab_script;
StringAppendF(&matlab_script,
"function lsqp = load_trust_region_problem()\n");
StringAppendF(&matlab_script, "lsqp.num_rows = %d;\n", A->num_rows());
StringAppendF(&matlab_script, "lsqp.num_cols = %d;\n", A->num_cols());
{
- string filename = filename_base + "_A.txt";
+ std::string filename = filename_base + "_A.txt";
FILE* fptr = fopen(filename.c_str(), "w");
CHECK(fptr != nullptr);
A->ToTextFile(fptr);
@@ -673,34 +993,34 @@
A->num_cols());
}
- if (D != NULL) {
- string filename = filename_base + "_D.txt";
+ if (D != nullptr) {
+ std::string filename = filename_base + "_D.txt";
WriteArrayToFileOrDie(filename, D, A->num_cols());
StringAppendF(
&matlab_script, "lsqp.D = load('%s', '-ascii');\n", filename.c_str());
}
- if (b != NULL) {
- string filename = filename_base + "_b.txt";
+ if (b != nullptr) {
+ std::string filename = filename_base + "_b.txt";
WriteArrayToFileOrDie(filename, b, A->num_rows());
StringAppendF(
&matlab_script, "lsqp.b = load('%s', '-ascii');\n", filename.c_str());
}
- if (x != NULL) {
- string filename = filename_base + "_x.txt";
+ if (x != nullptr) {
+ std::string filename = filename_base + "_x.txt";
WriteArrayToFileOrDie(filename, x, A->num_cols());
StringAppendF(
&matlab_script, "lsqp.x = load('%s', '-ascii');\n", filename.c_str());
}
- string matlab_filename = filename_base + ".m";
+ std::string matlab_filename = filename_base + ".m";
WriteStringToFileOrDie(matlab_script, matlab_filename);
return true;
}
} // namespace
-bool DumpLinearLeastSquaresProblem(const string& filename_base,
+bool DumpLinearLeastSquaresProblem(const std::string& filename_base,
DumpFormatType dump_format_type,
const SparseMatrix* A,
const double* D,
@@ -721,5 +1041,4 @@
return true;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/linear_least_squares_problems.h b/internal/ceres/linear_least_squares_problems.h
index cddaa9f..9d01add 100644
--- a/internal/ceres/linear_least_squares_problems.h
+++ b/internal/ceres/linear_least_squares_problems.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,23 +35,23 @@
#include <string>
#include <vector>
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/sparse_matrix.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Structure defining a linear least squares problem and if possible
// ground truth solutions. To be used by various LinearSolver tests.
-struct CERES_EXPORT_INTERNAL LinearLeastSquaresProblem {
- LinearLeastSquaresProblem() : num_eliminate_blocks(0) {}
+struct CERES_NO_EXPORT LinearLeastSquaresProblem {
+ LinearLeastSquaresProblem() = default;
std::unique_ptr<SparseMatrix> A;
std::unique_ptr<double[]> b;
std::unique_ptr<double[]> D;
// If using the schur eliminator then how many of the variable
// blocks are e_type blocks.
- int num_eliminate_blocks;
+ int num_eliminate_blocks{0};
// Solution to min_x |Ax - b|^2
std::unique_ptr<double[]> x;
@@ -60,17 +60,27 @@
};
// Factories for linear least squares problem.
-CERES_EXPORT_INTERNAL LinearLeastSquaresProblem*
+CERES_NO_EXPORT std::unique_ptr<LinearLeastSquaresProblem>
CreateLinearLeastSquaresProblemFromId(int id);
-LinearLeastSquaresProblem* LinearLeastSquaresProblem0();
-LinearLeastSquaresProblem* LinearLeastSquaresProblem1();
-LinearLeastSquaresProblem* LinearLeastSquaresProblem2();
-LinearLeastSquaresProblem* LinearLeastSquaresProblem3();
-LinearLeastSquaresProblem* LinearLeastSquaresProblem4();
+CERES_NO_EXPORT
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem0();
+CERES_NO_EXPORT
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem1();
+CERES_NO_EXPORT
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem2();
+CERES_NO_EXPORT
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem3();
+CERES_NO_EXPORT
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem4();
+CERES_NO_EXPORT
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem5();
+CERES_NO_EXPORT
+std::unique_ptr<LinearLeastSquaresProblem> LinearLeastSquaresProblem6();
// Write the linear least squares problem to disk. The exact format
// depends on dump_format_type.
+CERES_NO_EXPORT
bool DumpLinearLeastSquaresProblem(const std::string& filename_base,
DumpFormatType dump_format_type,
const SparseMatrix* A,
@@ -78,7 +88,8 @@
const double* b,
const double* x,
int num_eliminate_blocks);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_LINEAR_LEAST_SQUARES_PROBLEMS_H_
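// Editorial sketch, not part of the patch: dumping one of the canned problems
// to disk. TEXTFILE is assumed to be the DumpFormatType value from
// ceres/types.h that produces the *_A.txt/*_b.txt/... files plus the MATLAB
// loader script assembled in the .cc file; the output path is illustrative.
#include "ceres/linear_least_squares_problems.h"
#include "ceres/types.h"

namespace ceres::internal {

void DumpProblemSketch() {
  auto problem = CreateLinearLeastSquaresProblemFromId(5);
  DumpLinearLeastSquaresProblem("/tmp/lsqp",
                                TEXTFILE,
                                problem->A.get(),
                                problem->D.get(),
                                problem->b.get(),
                                problem->x.get(),
                                problem->num_eliminate_blocks);
}

}  // namespace ceres::internal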
diff --git a/internal/ceres/linear_operator.cc b/internal/ceres/linear_operator.cc
index 548c724..f4c2c5e 100644
--- a/internal/ceres/linear_operator.cc
+++ b/internal/ceres/linear_operator.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,10 +30,34 @@
#include "ceres/linear_operator.h"
-namespace ceres {
-namespace internal {
+#include <glog/logging.h>
-LinearOperator::~LinearOperator() {}
+namespace ceres::internal {
-} // namespace internal
-} // namespace ceres
+void LinearOperator::RightMultiplyAndAccumulate(const double* x,
+ double* y,
+ ContextImpl* context,
+ int num_threads) const {
+ (void)context;
+ if (num_threads != 1) {
+ VLOG(3) << "Parallel right product is not supported by linear operator "
+ "implementation";
+ }
+ RightMultiplyAndAccumulate(x, y);
+}
+
+void LinearOperator::LeftMultiplyAndAccumulate(const double* x,
+ double* y,
+ ContextImpl* context,
+ int num_threads) const {
+ (void)context;
+ if (num_threads != 1) {
+ VLOG(3) << "Parallel left product is not supported by linear operator "
+ "implementation";
+ }
+ LeftMultiplyAndAccumulate(x, y);
+}
+
+LinearOperator::~LinearOperator() = default;
+
+} // namespace ceres::internal
diff --git a/internal/ceres/linear_operator.h b/internal/ceres/linear_operator.h
index 9c59fc3..aafc584 100644
--- a/internal/ceres/linear_operator.h
+++ b/internal/ceres/linear_operator.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,28 +33,59 @@
#ifndef CERES_INTERNAL_LINEAR_OPERATOR_H_
#define CERES_INTERNAL_LINEAR_OPERATOR_H_
-#include "ceres/internal/port.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
+
+class ContextImpl;
// This is an abstract base class for linear operators. It supports
// access to size information and left and right multiply operators.
-class CERES_EXPORT_INTERNAL LinearOperator {
+class CERES_NO_EXPORT LinearOperator {
public:
virtual ~LinearOperator();
// y = y + Ax;
- virtual void RightMultiply(const double* x, double* y) const = 0;
+ virtual void RightMultiplyAndAccumulate(const double* x, double* y) const = 0;
+ virtual void RightMultiplyAndAccumulate(const double* x,
+ double* y,
+ ContextImpl* context,
+ int num_threads) const;
// y = y + A'x;
- virtual void LeftMultiply(const double* x, double* y) const = 0;
+ virtual void LeftMultiplyAndAccumulate(const double* x, double* y) const = 0;
+ virtual void LeftMultiplyAndAccumulate(const double* x,
+ double* y,
+ ContextImpl* context,
+ int num_threads) const;
+
+ virtual void RightMultiplyAndAccumulate(const Vector& x, Vector& y) const {
+ RightMultiplyAndAccumulate(x.data(), y.data());
+ }
+
+ virtual void LeftMultiplyAndAccumulate(const Vector& x, Vector& y) const {
+ LeftMultiplyAndAccumulate(x.data(), y.data());
+ }
+
+ virtual void RightMultiplyAndAccumulate(const Vector& x,
+ Vector& y,
+ ContextImpl* context,
+ int num_threads) const {
+ RightMultiplyAndAccumulate(x.data(), y.data(), context, num_threads);
+ }
+
+ virtual void LeftMultiplyAndAccumulate(const Vector& x,
+ Vector& y,
+ ContextImpl* context,
+ int num_threads) const {
+ LeftMultiplyAndAccumulate(x.data(), y.data(), context, num_threads);
+ }
virtual int num_rows() const = 0;
virtual int num_cols() const = 0;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_LINEAR_OPERATOR_H_
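// Editorial sketch, not part of the patch: the minimal surface a concrete
// LinearOperator now has to provide is the pair of pointer-based
// *AndAccumulate overloads plus the size accessors; the Vector and threaded
// overloads above forward to them by default. The diagonal operator itself is
// a made-up example.
#include <utility>

#include "ceres/internal/eigen.h"
#include "ceres/linear_operator.h"

namespace ceres::internal {

class DiagonalOperator final : public LinearOperator {
 public:
  explicit DiagonalOperator(Vector d) : d_(std::move(d)) {}

  // y += D x
  void RightMultiplyAndAccumulate(const double* x, double* y) const override {
    const int n = static_cast<int>(d_.size());
    for (int i = 0; i < n; ++i) y[i] += d_[i] * x[i];
  }
  // y += D' x; D is symmetric, so this is the same product.
  void LeftMultiplyAndAccumulate(const double* x, double* y) const override {
    RightMultiplyAndAccumulate(x, y);
  }
  int num_rows() const override { return static_cast<int>(d_.size()); }
  int num_cols() const override { return static_cast<int>(d_.size()); }

 private:
  Vector d_;
};

}  // namespace ceres::internal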
diff --git a/internal/ceres/linear_solver.cc b/internal/ceres/linear_solver.cc
index 6cae248..4ba0b75 100644
--- a/internal/ceres/linear_solver.cc
+++ b/internal/ceres/linear_solver.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,20 +30,22 @@
#include "ceres/linear_solver.h"
+#include <memory>
+
#include "ceres/cgnr_solver.h"
#include "ceres/dense_normal_cholesky_solver.h"
#include "ceres/dense_qr_solver.h"
#include "ceres/dynamic_sparse_normal_cholesky_solver.h"
+#include "ceres/internal/config.h"
#include "ceres/iterative_schur_complement_solver.h"
#include "ceres/schur_complement_solver.h"
#include "ceres/sparse_normal_cholesky_solver.h"
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-LinearSolver::~LinearSolver() {}
+LinearSolver::~LinearSolver() = default;
LinearSolverType LinearSolver::LinearSolverForZeroEBlocks(
LinearSolverType linear_solver_type) {
@@ -69,52 +71,59 @@
return linear_solver_type;
}
-LinearSolver* LinearSolver::Create(const LinearSolver::Options& options) {
- CHECK(options.context != NULL);
+std::unique_ptr<LinearSolver> LinearSolver::Create(
+ const LinearSolver::Options& options) {
+ CHECK(options.context != nullptr);
switch (options.type) {
- case CGNR:
- return new CgnrSolver(options);
+ case CGNR: {
+#ifndef CERES_NO_CUDA
+ if (options.sparse_linear_algebra_library_type == CUDA_SPARSE) {
+ std::string error;
+ return CudaCgnrSolver::Create(options, &error);
+ }
+#endif
+ return std::make_unique<CgnrSolver>(options);
+ } break;
case SPARSE_NORMAL_CHOLESKY:
#if defined(CERES_NO_SPARSE)
- return NULL;
+ return nullptr;
#else
if (options.dynamic_sparsity) {
- return new DynamicSparseNormalCholeskySolver(options);
+ return std::make_unique<DynamicSparseNormalCholeskySolver>(options);
}
- return new SparseNormalCholeskySolver(options);
+ return std::make_unique<SparseNormalCholeskySolver>(options);
#endif
case SPARSE_SCHUR:
#if defined(CERES_NO_SPARSE)
- return NULL;
+ return nullptr;
#else
- return new SparseSchurComplementSolver(options);
+ return std::make_unique<SparseSchurComplementSolver>(options);
#endif
case DENSE_SCHUR:
- return new DenseSchurComplementSolver(options);
+ return std::make_unique<DenseSchurComplementSolver>(options);
case ITERATIVE_SCHUR:
if (options.use_explicit_schur_complement) {
- return new SparseSchurComplementSolver(options);
+ return std::make_unique<SparseSchurComplementSolver>(options);
} else {
- return new IterativeSchurComplementSolver(options);
+ return std::make_unique<IterativeSchurComplementSolver>(options);
}
case DENSE_QR:
- return new DenseQRSolver(options);
+ return std::make_unique<DenseQRSolver>(options);
case DENSE_NORMAL_CHOLESKY:
- return new DenseNormalCholeskySolver(options);
+ return std::make_unique<DenseNormalCholeskySolver>(options);
default:
LOG(FATAL) << "Unknown linear solver type :" << options.type;
- return NULL; // MSVC doesn't understand that LOG(FATAL) never returns.
+ return nullptr; // MSVC doesn't understand that LOG(FATAL) never returns.
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
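// Editorial sketch, not part of the patch: constructing a solver through the
// unique_ptr factory above. Only the ContextImpl setup is assumed beyond what
// this file already uses; no Solve() call is shown.
#include <memory>

#include "ceres/context_impl.h"
#include "ceres/linear_solver.h"
#include "glog/logging.h"

namespace ceres::internal {

void LinearSolverFactorySketch() {
  ContextImpl context;
  LinearSolver::Options options;
  options.type = DENSE_QR;
  options.context = &context;

  std::unique_ptr<LinearSolver> solver = LinearSolver::Create(options);
  CHECK(solver != nullptr);
  // After a Solve() call, Summary::termination_type is one of the scoped
  // LinearSolverTerminationType values and can be streamed directly via the
  // operator<< defined in linear_solver.h.
}

}  // namespace ceres::internal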
diff --git a/internal/ceres/linear_solver.h b/internal/ceres/linear_solver.h
index 49c6527..1d5338f 100644
--- a/internal/ceres/linear_solver.h
+++ b/internal/ceres/linear_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,6 +36,7 @@
#include <cstddef>
#include <map>
+#include <memory>
#include <string>
#include <vector>
@@ -45,44 +46,87 @@
#include "ceres/context_impl.h"
#include "ceres/dense_sparse_matrix.h"
#include "ceres/execution_summary.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/triplet_sparse_matrix.h"
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-enum LinearSolverTerminationType {
+enum class LinearSolverTerminationType {
// Termination criterion was met.
- LINEAR_SOLVER_SUCCESS,
+ SUCCESS,
// Solver ran for max_num_iterations and terminated before the
// termination tolerance could be satisfied.
- LINEAR_SOLVER_NO_CONVERGENCE,
+ NO_CONVERGENCE,
// Solver was terminated due to numerical problems, generally due to
// the linear system being poorly conditioned.
- LINEAR_SOLVER_FAILURE,
+ FAILURE,
// Solver failed with a fatal error that cannot be recovered from,
// e.g. CHOLMOD ran out of memory when computing the symbolic or
// numeric factorization or an underlying library was called with
// the wrong arguments.
- LINEAR_SOLVER_FATAL_ERROR
+ FATAL_ERROR
};
+inline std::ostream& operator<<(std::ostream& s,
+ LinearSolverTerminationType type) {
+ switch (type) {
+ case LinearSolverTerminationType::SUCCESS:
+ s << "LINEAR_SOLVER_SUCCESS";
+ break;
+ case LinearSolverTerminationType::NO_CONVERGENCE:
+ s << "LINEAR_SOLVER_NO_CONVERGENCE";
+ break;
+ case LinearSolverTerminationType::FAILURE:
+ s << "LINEAR_SOLVER_FAILURE";
+ break;
+ case LinearSolverTerminationType::FATAL_ERROR:
+ s << "LINEAR_SOLVER_FATAL_ERROR";
+ break;
+ default:
+ s << "UNKNOWN LinearSolverTerminationType";
+ }
+ return s;
+}
+
// This enum controls the fill-reducing ordering a sparse linear
// algebra library should use before computing a sparse factorization
// (usually Cholesky).
-enum OrderingType {
+//
+// TODO(sameeragarwal): Add support for nested dissection
+enum class OrderingType {
NATURAL, // Do not re-order the matrix. This is useful when the
// matrix has been ordered using a fill-reducing ordering
// already.
- AMD // Use the Approximate Minimum Degree algorithm to re-order
- // the matrix.
+
+ AMD, // Use the Approximate Minimum Degree algorithm to re-order
+ // the matrix.
+
+ NESDIS, // Use the Nested Dissection algorithm to re-order the matrix.
};
+inline std::ostream& operator<<(std::ostream& s, OrderingType type) {
+ switch (type) {
+ case OrderingType::NATURAL:
+ s << "NATURAL";
+ break;
+ case OrderingType::AMD:
+ s << "AMD";
+ break;
+ case OrderingType::NESDIS:
+ s << "NESDIS";
+ break;
+ default:
+ s << "UNKNOWN OrderingType";
+ }
+ return s;
+}
+
class LinearOperator;
// Abstract base class for objects that implement algorithms for
@@ -101,7 +145,7 @@
// The Options struct configures the LinearSolver object for its
// lifetime. The PerSolveOptions struct is used to specify options for
// a particular Solve call.
-class CERES_EXPORT_INTERNAL LinearSolver {
+class CERES_NO_EXPORT LinearSolver {
public:
struct Options {
LinearSolverType type = SPARSE_NORMAL_CHOLESKY;
@@ -110,9 +154,9 @@
DenseLinearAlgebraLibraryType dense_linear_algebra_library_type = EIGEN;
SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type =
SUITE_SPARSE;
+ OrderingType ordering_type = OrderingType::NATURAL;
// See solver.h for information about these flags.
- bool use_postordering = false;
bool dynamic_sparsity = false;
bool use_explicit_schur_complement = false;
@@ -121,6 +165,23 @@
int min_num_iterations = 1;
int max_num_iterations = 1;
+ // Maximum number of iterations performed by SCHUR_POWER_SERIES_EXPANSION.
+ // This value controls the maximum number of iterations whether it is used
+ // as a preconditioner or just to initialize the solution for
+ // ITERATIVE_SCHUR.
+ int max_num_spse_iterations = 5;
+
+ // Use SCHUR_POWER_SERIES_EXPANSION to initialize the solution for
+ // ITERATIVE_SCHUR. This option can be set true regardless of what
+ // preconditioner is being used.
+ bool use_spse_initialization = false;
+
+ // When use_spse_initialization is true, this parameter along with
+ // max_num_spse_iterations controls the number of
+ // SCHUR_POWER_SERIES_EXPANSION iterations performed for initialization. It
+ // is not used to control the preconditioner.
+ double spse_tolerance = 0.1;
+
// If possible, how many threads can the solver use.
int num_threads = 1;
@@ -259,7 +320,8 @@
struct Summary {
double residual_norm = -1.0;
int num_iterations = -1;
- LinearSolverTerminationType termination_type = LINEAR_SOLVER_FAILURE;
+ LinearSolverTerminationType termination_type =
+ LinearSolverTerminationType::FAILURE;
std::string message;
};
@@ -284,11 +346,11 @@
// issues. Further, this calls are not expected to be frequent or
// performance sensitive.
virtual std::map<std::string, CallStatistics> Statistics() const {
- return std::map<std::string, CallStatistics>();
+ return {};
}
// Factory
- static LinearSolver* Create(const Options& options);
+ static std::unique_ptr<LinearSolver> Create(const Options& options);
};
// This templated subclass of LinearSolver serves as a base class for
@@ -301,12 +363,11 @@
template <typename MatrixType>
class TypedLinearSolver : public LinearSolver {
public:
- virtual ~TypedLinearSolver() {}
- virtual LinearSolver::Summary Solve(
+ LinearSolver::Summary Solve(
LinearOperator* A,
const double* b,
const LinearSolver::PerSolveOptions& per_solve_options,
- double* x) {
+ double* x) override {
ScopedExecutionTimer total_time("LinearSolver::Solve", &execution_summary_);
CHECK(A != nullptr);
CHECK(b != nullptr);
@@ -314,7 +375,7 @@
return SolveImpl(down_cast<MatrixType*>(A), b, per_solve_options, x);
}
- virtual std::map<std::string, CallStatistics> Statistics() const {
+ std::map<std::string, CallStatistics> Statistics() const override {
return execution_summary_.statistics();
}
@@ -328,16 +389,17 @@
ExecutionSummary execution_summary_;
};
-// Linear solvers that depend on acccess to the low level structure of
+// Linear solvers that depend on access to the low level structure of
// a SparseMatrix.
// clang-format off
-typedef TypedLinearSolver<BlockSparseMatrix> BlockSparseMatrixSolver; // NOLINT
-typedef TypedLinearSolver<CompressedRowSparseMatrix> CompressedRowSparseMatrixSolver; // NOLINT
-typedef TypedLinearSolver<DenseSparseMatrix> DenseSparseMatrixSolver; // NOLINT
-typedef TypedLinearSolver<TripletSparseMatrix> TripletSparseMatrixSolver; // NOLINT
+using BlockSparseMatrixSolver = TypedLinearSolver<BlockSparseMatrix>; // NOLINT
+using CompressedRowSparseMatrixSolver = TypedLinearSolver<CompressedRowSparseMatrix>; // NOLINT
+using DenseSparseMatrixSolver = TypedLinearSolver<DenseSparseMatrix>; // NOLINT
+using TripletSparseMatrixSolver = TypedLinearSolver<TripletSparseMatrix>; // NOLINT
// clang-format on
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_LINEAR_SOLVER_H_
diff --git a/internal/ceres/local_parameterization.cc b/internal/ceres/local_parameterization.cc
deleted file mode 100644
index 62947f0..0000000
--- a/internal/ceres/local_parameterization.cc
+++ /dev/null
@@ -1,349 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
-
-#include "ceres/local_parameterization.h"
-
-#include <algorithm>
-
-#include "Eigen/Geometry"
-#include "ceres/internal/eigen.h"
-#include "ceres/internal/fixed_array.h"
-#include "ceres/internal/householder_vector.h"
-#include "ceres/rotation.h"
-#include "glog/logging.h"
-
-namespace ceres {
-
-using std::vector;
-
-LocalParameterization::~LocalParameterization() {}
-
-bool LocalParameterization::MultiplyByJacobian(const double* x,
- const int num_rows,
- const double* global_matrix,
- double* local_matrix) const {
- if (LocalSize() == 0) {
- return true;
- }
-
- Matrix jacobian(GlobalSize(), LocalSize());
- if (!ComputeJacobian(x, jacobian.data())) {
- return false;
- }
-
- MatrixRef(local_matrix, num_rows, LocalSize()) =
- ConstMatrixRef(global_matrix, num_rows, GlobalSize()) * jacobian;
- return true;
-}
-
-IdentityParameterization::IdentityParameterization(const int size)
- : size_(size) {
- CHECK_GT(size, 0);
-}
-
-bool IdentityParameterization::Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const {
- VectorRef(x_plus_delta, size_) =
- ConstVectorRef(x, size_) + ConstVectorRef(delta, size_);
- return true;
-}
-
-bool IdentityParameterization::ComputeJacobian(const double* x,
- double* jacobian) const {
- MatrixRef(jacobian, size_, size_).setIdentity();
- return true;
-}
-
-bool IdentityParameterization::MultiplyByJacobian(const double* x,
- const int num_cols,
- const double* global_matrix,
- double* local_matrix) const {
- std::copy(
- global_matrix, global_matrix + num_cols * GlobalSize(), local_matrix);
- return true;
-}
-
-SubsetParameterization::SubsetParameterization(
- int size, const vector<int>& constant_parameters)
- : local_size_(size - constant_parameters.size()), constancy_mask_(size, 0) {
- if (constant_parameters.empty()) {
- return;
- }
-
- vector<int> constant = constant_parameters;
- std::sort(constant.begin(), constant.end());
- CHECK_GE(constant.front(), 0) << "Indices indicating constant parameter must "
- "be greater than equal to zero.";
- CHECK_LT(constant.back(), size)
- << "Indices indicating constant parameter must be less than the size "
- << "of the parameter block.";
- CHECK(std::adjacent_find(constant.begin(), constant.end()) == constant.end())
- << "The set of constant parameters cannot contain duplicates";
- for (int i = 0; i < constant_parameters.size(); ++i) {
- constancy_mask_[constant_parameters[i]] = 1;
- }
-}
-
-bool SubsetParameterization::Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const {
- const int global_size = GlobalSize();
- for (int i = 0, j = 0; i < global_size; ++i) {
- if (constancy_mask_[i]) {
- x_plus_delta[i] = x[i];
- } else {
- x_plus_delta[i] = x[i] + delta[j++];
- }
- }
- return true;
-}
-
-bool SubsetParameterization::ComputeJacobian(const double* x,
- double* jacobian) const {
- if (local_size_ == 0) {
- return true;
- }
-
- const int global_size = GlobalSize();
- MatrixRef m(jacobian, global_size, local_size_);
- m.setZero();
- for (int i = 0, j = 0; i < global_size; ++i) {
- if (!constancy_mask_[i]) {
- m(i, j++) = 1.0;
- }
- }
- return true;
-}
-
-bool SubsetParameterization::MultiplyByJacobian(const double* x,
- const int num_cols,
- const double* global_matrix,
- double* local_matrix) const {
- if (local_size_ == 0) {
- return true;
- }
-
- const int global_size = GlobalSize();
- for (int col = 0; col < num_cols; ++col) {
- for (int i = 0, j = 0; i < global_size; ++i) {
- if (!constancy_mask_[i]) {
- local_matrix[col * local_size_ + j++] =
- global_matrix[col * global_size + i];
- }
- }
- }
- return true;
-}
-
-bool QuaternionParameterization::Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const {
- const double norm_delta =
- sqrt(delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2]);
- if (norm_delta > 0.0) {
- const double sin_delta_by_delta = (sin(norm_delta) / norm_delta);
- double q_delta[4];
- q_delta[0] = cos(norm_delta);
- q_delta[1] = sin_delta_by_delta * delta[0];
- q_delta[2] = sin_delta_by_delta * delta[1];
- q_delta[3] = sin_delta_by_delta * delta[2];
- QuaternionProduct(q_delta, x, x_plus_delta);
- } else {
- for (int i = 0; i < 4; ++i) {
- x_plus_delta[i] = x[i];
- }
- }
- return true;
-}
-
-bool QuaternionParameterization::ComputeJacobian(const double* x,
- double* jacobian) const {
- // clang-format off
- jacobian[0] = -x[1]; jacobian[1] = -x[2]; jacobian[2] = -x[3];
- jacobian[3] = x[0]; jacobian[4] = x[3]; jacobian[5] = -x[2];
- jacobian[6] = -x[3]; jacobian[7] = x[0]; jacobian[8] = x[1];
- jacobian[9] = x[2]; jacobian[10] = -x[1]; jacobian[11] = x[0];
- // clang-format on
- return true;
-}
-
-bool EigenQuaternionParameterization::Plus(const double* x_ptr,
- const double* delta,
- double* x_plus_delta_ptr) const {
- Eigen::Map<Eigen::Quaterniond> x_plus_delta(x_plus_delta_ptr);
- Eigen::Map<const Eigen::Quaterniond> x(x_ptr);
-
- const double norm_delta =
- sqrt(delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2]);
- if (norm_delta > 0.0) {
- const double sin_delta_by_delta = sin(norm_delta) / norm_delta;
-
- // Note, in the constructor w is first.
- Eigen::Quaterniond delta_q(cos(norm_delta),
- sin_delta_by_delta * delta[0],
- sin_delta_by_delta * delta[1],
- sin_delta_by_delta * delta[2]);
- x_plus_delta = delta_q * x;
- } else {
- x_plus_delta = x;
- }
-
- return true;
-}
-
-bool EigenQuaternionParameterization::ComputeJacobian(const double* x,
- double* jacobian) const {
- // clang-format off
- jacobian[0] = x[3]; jacobian[1] = x[2]; jacobian[2] = -x[1];
- jacobian[3] = -x[2]; jacobian[4] = x[3]; jacobian[5] = x[0];
- jacobian[6] = x[1]; jacobian[7] = -x[0]; jacobian[8] = x[3];
- jacobian[9] = -x[0]; jacobian[10] = -x[1]; jacobian[11] = -x[2];
- // clang-format on
- return true;
-}
-
-HomogeneousVectorParameterization::HomogeneousVectorParameterization(int size)
- : size_(size) {
- CHECK_GT(size_, 1) << "The size of the homogeneous vector needs to be "
- << "greater than 1.";
-}
-
-bool HomogeneousVectorParameterization::Plus(const double* x_ptr,
- const double* delta_ptr,
- double* x_plus_delta_ptr) const {
- ConstVectorRef x(x_ptr, size_);
- ConstVectorRef delta(delta_ptr, size_ - 1);
- VectorRef x_plus_delta(x_plus_delta_ptr, size_);
-
- const double norm_delta = delta.norm();
-
- if (norm_delta == 0.0) {
- x_plus_delta = x;
- return true;
- }
-
- // Map the delta from the minimum representation to the over parameterized
- // homogeneous vector. See section A6.9.2 on page 624 of Hartley & Zisserman
- // (2nd Edition) for a detailed description. Note there is a typo on Page
-  // 625, line 4, so check the book errata.
- const double norm_delta_div_2 = 0.5 * norm_delta;
- const double sin_delta_by_delta =
- std::sin(norm_delta_div_2) / norm_delta_div_2;
-
- Vector y(size_);
- y.head(size_ - 1) = 0.5 * sin_delta_by_delta * delta;
- y(size_ - 1) = std::cos(norm_delta_div_2);
-
- Vector v(size_);
- double beta;
-
- // NOTE: The explicit template arguments are needed here because
- // ComputeHouseholderVector is templated and some versions of MSVC
- // have trouble deducing the type of v automatically.
- internal::ComputeHouseholderVector<ConstVectorRef, double, Eigen::Dynamic>(
- x, &v, &beta);
-
- // Apply the delta update to remain on the unit sphere. See section A6.9.3
- // on page 625 of Hartley & Zisserman (2nd Edition) for a detailed
- // description.
- x_plus_delta = x.norm() * (y - v * (beta * (v.transpose() * y)));
-
- return true;
-}
-
-bool HomogeneousVectorParameterization::ComputeJacobian(
- const double* x_ptr, double* jacobian_ptr) const {
- ConstVectorRef x(x_ptr, size_);
- MatrixRef jacobian(jacobian_ptr, size_, size_ - 1);
-
- Vector v(size_);
- double beta;
-
- // NOTE: The explicit template arguments are needed here because
- // ComputeHouseholderVector is templated and some versions of MSVC
- // have trouble deducing the type of v automatically.
- internal::ComputeHouseholderVector<ConstVectorRef, double, Eigen::Dynamic>(
- x, &v, &beta);
-
- // The Jacobian is equal to J = 0.5 * H.leftCols(size_ - 1) where H is the
- // Householder matrix (H = I - beta * v * v').
- for (int i = 0; i < size_ - 1; ++i) {
- jacobian.col(i) = -0.5 * beta * v(i) * v;
- jacobian.col(i)(i) += 0.5;
- }
- jacobian *= x.norm();
-
- return true;
-}
-
-bool ProductParameterization::Plus(const double* x,
- const double* delta,
- double* x_plus_delta) const {
- int x_cursor = 0;
- int delta_cursor = 0;
- for (const auto& param : local_params_) {
- if (!param->Plus(
- x + x_cursor, delta + delta_cursor, x_plus_delta + x_cursor)) {
- return false;
- }
- delta_cursor += param->LocalSize();
- x_cursor += param->GlobalSize();
- }
-
- return true;
-}
-
-bool ProductParameterization::ComputeJacobian(const double* x,
- double* jacobian_ptr) const {
- MatrixRef jacobian(jacobian_ptr, GlobalSize(), LocalSize());
- jacobian.setZero();
- internal::FixedArray<double> buffer(buffer_size_);
-
- int x_cursor = 0;
- int delta_cursor = 0;
- for (const auto& param : local_params_) {
- const int local_size = param->LocalSize();
- const int global_size = param->GlobalSize();
-
- if (!param->ComputeJacobian(x + x_cursor, buffer.data())) {
- return false;
- }
- jacobian.block(x_cursor, delta_cursor, global_size, local_size) =
- MatrixRef(buffer.data(), global_size, local_size);
-
- delta_cursor += local_size;
- x_cursor += global_size;
- }
-
- return true;
-}
-
-} // namespace ceres
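
For reference, the HomogeneousVectorParameterization::Plus removed above implements the update described in Hartley & Zisserman: the tangent step delta is lifted to the unit sphere with a half-angle map and then rotated into the frame of x by the Householder reflection computed from x. In the notation of the code (v, beta from ComputeHouseholderVector):

\[
y = \begin{bmatrix} \dfrac{\sin(\|\delta\|/2)}{\|\delta\|/2}\,\dfrac{\delta}{2} \\ \cos(\|\delta\|/2) \end{bmatrix},
\qquad
x \oplus \delta = \|x\|\,\bigl(I - \beta\, v v^\top\bigr)\, y .
\]

Since the Householder reflector is orthogonal and \(\|y\| = 1\), the result keeps the norm of x, which is the property the x.norm() scaling in ComputeJacobian relies on.
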
diff --git a/internal/ceres/local_parameterization_test.cc b/internal/ceres/local_parameterization_test.cc
deleted file mode 100644
index ec8e660..0000000
--- a/internal/ceres/local_parameterization_test.cc
+++ /dev/null
@@ -1,953 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
-
-#include "ceres/local_parameterization.h"
-
-#include <cmath>
-#include <limits>
-#include <memory>
-
-#include "Eigen/Geometry"
-#include "ceres/autodiff_local_parameterization.h"
-#include "ceres/internal/autodiff.h"
-#include "ceres/internal/eigen.h"
-#include "ceres/internal/householder_vector.h"
-#include "ceres/random.h"
-#include "ceres/rotation.h"
-#include "gtest/gtest.h"
-
-namespace ceres {
-namespace internal {
-
-TEST(IdentityParameterization, EverythingTest) {
- IdentityParameterization parameterization(3);
- EXPECT_EQ(parameterization.GlobalSize(), 3);
- EXPECT_EQ(parameterization.LocalSize(), 3);
-
- double x[3] = {1.0, 2.0, 3.0};
- double delta[3] = {0.0, 1.0, 2.0};
- double x_plus_delta[3] = {0.0, 0.0, 0.0};
- parameterization.Plus(x, delta, x_plus_delta);
- EXPECT_EQ(x_plus_delta[0], 1.0);
- EXPECT_EQ(x_plus_delta[1], 3.0);
- EXPECT_EQ(x_plus_delta[2], 5.0);
-
- double jacobian[9];
- parameterization.ComputeJacobian(x, jacobian);
- int k = 0;
- for (int i = 0; i < 3; ++i) {
- for (int j = 0; j < 3; ++j, ++k) {
- EXPECT_EQ(jacobian[k], (i == j) ? 1.0 : 0.0);
- }
- }
-
- Matrix global_matrix = Matrix::Ones(10, 3);
- Matrix local_matrix = Matrix::Zero(10, 3);
- parameterization.MultiplyByJacobian(
- x, 10, global_matrix.data(), local_matrix.data());
- EXPECT_EQ((local_matrix - global_matrix).norm(), 0.0);
-}
-
-TEST(SubsetParameterization, EmptyConstantParameters) {
- std::vector<int> constant_parameters;
- SubsetParameterization parameterization(3, constant_parameters);
- EXPECT_EQ(parameterization.GlobalSize(), 3);
- EXPECT_EQ(parameterization.LocalSize(), 3);
- double x[3] = {1, 2, 3};
- double delta[3] = {4, 5, 6};
- double x_plus_delta[3] = {-1, -2, -3};
- parameterization.Plus(x, delta, x_plus_delta);
- EXPECT_EQ(x_plus_delta[0], x[0] + delta[0]);
- EXPECT_EQ(x_plus_delta[1], x[1] + delta[1]);
- EXPECT_EQ(x_plus_delta[2], x[2] + delta[2]);
-
- Matrix jacobian(3, 3);
- Matrix expected_jacobian(3, 3);
- expected_jacobian.setIdentity();
- parameterization.ComputeJacobian(x, jacobian.data());
- EXPECT_EQ(jacobian, expected_jacobian);
-
- Matrix global_matrix(3, 5);
- global_matrix.setRandom();
- Matrix local_matrix(3, 5);
- parameterization.MultiplyByJacobian(
- x, 5, global_matrix.data(), local_matrix.data());
- EXPECT_EQ(global_matrix, local_matrix);
-}
-
-TEST(SubsetParameterization, NegativeParameterIndexDeathTest) {
- std::vector<int> constant_parameters;
- constant_parameters.push_back(-1);
- EXPECT_DEATH_IF_SUPPORTED(
- SubsetParameterization parameterization(2, constant_parameters),
- "greater than equal to zero");
-}
-
-TEST(SubsetParameterization, GreaterThanSizeParameterIndexDeathTest) {
- std::vector<int> constant_parameters;
- constant_parameters.push_back(2);
- EXPECT_DEATH_IF_SUPPORTED(
- SubsetParameterization parameterization(2, constant_parameters),
- "less than the size");
-}
-
-TEST(SubsetParameterization, DuplicateParametersDeathTest) {
- std::vector<int> constant_parameters;
- constant_parameters.push_back(1);
- constant_parameters.push_back(1);
- EXPECT_DEATH_IF_SUPPORTED(
- SubsetParameterization parameterization(2, constant_parameters),
- "duplicates");
-}
-
-TEST(SubsetParameterization,
- ProductParameterizationWithZeroLocalSizeSubsetParameterization1) {
- std::vector<int> constant_parameters;
- constant_parameters.push_back(0);
- LocalParameterization* subset_param =
- new SubsetParameterization(1, constant_parameters);
- LocalParameterization* identity_param = new IdentityParameterization(2);
- ProductParameterization product_param(subset_param, identity_param);
- EXPECT_EQ(product_param.GlobalSize(), 3);
- EXPECT_EQ(product_param.LocalSize(), 2);
- double x[] = {1.0, 1.0, 1.0};
- double delta[] = {2.0, 3.0};
- double x_plus_delta[] = {0.0, 0.0, 0.0};
- EXPECT_TRUE(product_param.Plus(x, delta, x_plus_delta));
- EXPECT_EQ(x_plus_delta[0], x[0]);
- EXPECT_EQ(x_plus_delta[1], x[1] + delta[0]);
- EXPECT_EQ(x_plus_delta[2], x[2] + delta[1]);
-
- Matrix actual_jacobian(3, 2);
- EXPECT_TRUE(product_param.ComputeJacobian(x, actual_jacobian.data()));
-}
-
-TEST(SubsetParameterization,
- ProductParameterizationWithZeroLocalSizeSubsetParameterization2) {
- std::vector<int> constant_parameters;
- constant_parameters.push_back(0);
- LocalParameterization* subset_param =
- new SubsetParameterization(1, constant_parameters);
- LocalParameterization* identity_param = new IdentityParameterization(2);
- ProductParameterization product_param(identity_param, subset_param);
- EXPECT_EQ(product_param.GlobalSize(), 3);
- EXPECT_EQ(product_param.LocalSize(), 2);
- double x[] = {1.0, 1.0, 1.0};
- double delta[] = {2.0, 3.0};
- double x_plus_delta[] = {0.0, 0.0, 0.0};
- EXPECT_TRUE(product_param.Plus(x, delta, x_plus_delta));
- EXPECT_EQ(x_plus_delta[0], x[0] + delta[0]);
- EXPECT_EQ(x_plus_delta[1], x[1] + delta[1]);
- EXPECT_EQ(x_plus_delta[2], x[2]);
-
- Matrix actual_jacobian(3, 2);
- EXPECT_TRUE(product_param.ComputeJacobian(x, actual_jacobian.data()));
-}
-
-TEST(SubsetParameterization, NormalFunctionTest) {
- const int kGlobalSize = 4;
- const int kLocalSize = 3;
-
- double x[kGlobalSize] = {1.0, 2.0, 3.0, 4.0};
- for (int i = 0; i < kGlobalSize; ++i) {
- std::vector<int> constant_parameters;
- constant_parameters.push_back(i);
- SubsetParameterization parameterization(kGlobalSize, constant_parameters);
- double delta[kLocalSize] = {1.0, 2.0, 3.0};
- double x_plus_delta[kGlobalSize] = {0.0, 0.0, 0.0};
-
- parameterization.Plus(x, delta, x_plus_delta);
- int k = 0;
- for (int j = 0; j < kGlobalSize; ++j) {
- if (j == i) {
- EXPECT_EQ(x_plus_delta[j], x[j]);
- } else {
- EXPECT_EQ(x_plus_delta[j], x[j] + delta[k++]);
- }
- }
-
- double jacobian[kGlobalSize * kLocalSize];
- parameterization.ComputeJacobian(x, jacobian);
- int delta_cursor = 0;
- int jacobian_cursor = 0;
- for (int j = 0; j < kGlobalSize; ++j) {
- if (j != i) {
- for (int k = 0; k < kLocalSize; ++k, jacobian_cursor++) {
- EXPECT_EQ(jacobian[jacobian_cursor], delta_cursor == k ? 1.0 : 0.0);
- }
- ++delta_cursor;
- } else {
- for (int k = 0; k < kLocalSize; ++k, jacobian_cursor++) {
- EXPECT_EQ(jacobian[jacobian_cursor], 0.0);
- }
- }
- }
-
- Matrix global_matrix = Matrix::Ones(10, kGlobalSize);
- for (int row = 0; row < kGlobalSize; ++row) {
- for (int col = 0; col < kGlobalSize; ++col) {
- global_matrix(row, col) = col;
- }
- }
-
- Matrix local_matrix = Matrix::Zero(10, kLocalSize);
- parameterization.MultiplyByJacobian(
- x, 10, global_matrix.data(), local_matrix.data());
- Matrix expected_local_matrix =
- global_matrix * MatrixRef(jacobian, kGlobalSize, kLocalSize);
- EXPECT_EQ((local_matrix - expected_local_matrix).norm(), 0.0);
- }
-}
-
-// Functor needed to implement automatically differentiated Plus for
-// quaternions.
-struct QuaternionPlus {
- template <typename T>
- bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
- const T squared_norm_delta =
- delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
-
- T q_delta[4];
- if (squared_norm_delta > T(0.0)) {
- T norm_delta = sqrt(squared_norm_delta);
- const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
- q_delta[0] = cos(norm_delta);
- q_delta[1] = sin_delta_by_delta * delta[0];
- q_delta[2] = sin_delta_by_delta * delta[1];
- q_delta[3] = sin_delta_by_delta * delta[2];
- } else {
- // We do not just use q_delta = [1,0,0,0] here because that is a
- // constant and when used for automatic differentiation will
- // lead to a zero derivative. Instead we take a first order
- // approximation and evaluate it at zero.
- q_delta[0] = T(1.0);
- q_delta[1] = delta[0];
- q_delta[2] = delta[1];
- q_delta[3] = delta[2];
- }
-
- QuaternionProduct(q_delta, x, x_plus_delta);
- return true;
- }
-};
-
-template <typename Parameterization, typename Plus>
-void QuaternionParameterizationTestHelper(const double* x,
- const double* delta,
- const double* x_plus_delta_ref) {
- const int kGlobalSize = 4;
- const int kLocalSize = 3;
-
- const double kTolerance = 1e-14;
-
- double x_plus_delta[kGlobalSize] = {0.0, 0.0, 0.0, 0.0};
- Parameterization parameterization;
- parameterization.Plus(x, delta, x_plus_delta);
- for (int i = 0; i < kGlobalSize; ++i) {
-    EXPECT_NEAR(x_plus_delta[i], x_plus_delta_ref[i], kTolerance);
- }
-
- const double x_plus_delta_norm = sqrt(
- x_plus_delta[0] * x_plus_delta[0] + x_plus_delta[1] * x_plus_delta[1] +
- x_plus_delta[2] * x_plus_delta[2] + x_plus_delta[3] * x_plus_delta[3]);
-
- EXPECT_NEAR(x_plus_delta_norm, 1.0, kTolerance);
-
- double jacobian_ref[12];
- double zero_delta[kLocalSize] = {0.0, 0.0, 0.0};
- const double* parameters[2] = {x, zero_delta};
- double* jacobian_array[2] = {NULL, jacobian_ref};
-
- // Autodiff jacobian at delta_x = 0.
- internal::AutoDifferentiate<kGlobalSize,
- StaticParameterDims<kGlobalSize, kLocalSize>>(
- Plus(), parameters, kGlobalSize, x_plus_delta, jacobian_array);
-
- double jacobian[12];
- parameterization.ComputeJacobian(x, jacobian);
- for (int i = 0; i < 12; ++i) {
- EXPECT_TRUE(IsFinite(jacobian[i]));
- EXPECT_NEAR(jacobian[i], jacobian_ref[i], kTolerance)
- << "Jacobian mismatch: i = " << i << "\n Expected \n"
- << ConstMatrixRef(jacobian_ref, kGlobalSize, kLocalSize)
- << "\n Actual \n"
- << ConstMatrixRef(jacobian, kGlobalSize, kLocalSize);
- }
-
- Matrix global_matrix = Matrix::Random(10, kGlobalSize);
- Matrix local_matrix = Matrix::Zero(10, kLocalSize);
- parameterization.MultiplyByJacobian(
- x, 10, global_matrix.data(), local_matrix.data());
- Matrix expected_local_matrix =
- global_matrix * MatrixRef(jacobian, kGlobalSize, kLocalSize);
- EXPECT_NEAR((local_matrix - expected_local_matrix).norm(),
- 0.0,
- 10.0 * std::numeric_limits<double>::epsilon());
-}
-
-template <int N>
-void Normalize(double* x) {
- VectorRef(x, N).normalize();
-}
-
-TEST(QuaternionParameterization, ZeroTest) {
- double x[4] = {0.5, 0.5, 0.5, 0.5};
- double delta[3] = {0.0, 0.0, 0.0};
- double q_delta[4] = {1.0, 0.0, 0.0, 0.0};
- double x_plus_delta[4] = {0.0, 0.0, 0.0, 0.0};
- QuaternionProduct(q_delta, x, x_plus_delta);
- QuaternionParameterizationTestHelper<QuaternionParameterization,
- QuaternionPlus>(x, delta, x_plus_delta);
-}
-
-TEST(QuaternionParameterization, NearZeroTest) {
- double x[4] = {0.52, 0.25, 0.15, 0.45};
- Normalize<4>(x);
-
- double delta[3] = {0.24, 0.15, 0.10};
- for (int i = 0; i < 3; ++i) {
- delta[i] = delta[i] * 1e-14;
- }
-
- double q_delta[4];
- q_delta[0] = 1.0;
- q_delta[1] = delta[0];
- q_delta[2] = delta[1];
- q_delta[3] = delta[2];
-
- double x_plus_delta[4] = {0.0, 0.0, 0.0, 0.0};
- QuaternionProduct(q_delta, x, x_plus_delta);
- QuaternionParameterizationTestHelper<QuaternionParameterization,
- QuaternionPlus>(x, delta, x_plus_delta);
-}
-
-TEST(QuaternionParameterization, AwayFromZeroTest) {
- double x[4] = {0.52, 0.25, 0.15, 0.45};
- Normalize<4>(x);
-
- double delta[3] = {0.24, 0.15, 0.10};
- const double delta_norm =
- sqrt(delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2]);
- double q_delta[4];
- q_delta[0] = cos(delta_norm);
- q_delta[1] = sin(delta_norm) / delta_norm * delta[0];
- q_delta[2] = sin(delta_norm) / delta_norm * delta[1];
- q_delta[3] = sin(delta_norm) / delta_norm * delta[2];
-
- double x_plus_delta[4] = {0.0, 0.0, 0.0, 0.0};
- QuaternionProduct(q_delta, x, x_plus_delta);
- QuaternionParameterizationTestHelper<QuaternionParameterization,
- QuaternionPlus>(x, delta, x_plus_delta);
-}
-
-// Functor needed to implement automatically differentiated Plus for
-// Eigen's quaternion.
-struct EigenQuaternionPlus {
- template <typename T>
- bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
- const T norm_delta =
- sqrt(delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2]);
-
- Eigen::Quaternion<T> q_delta;
- if (norm_delta > T(0.0)) {
- const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
- q_delta.coeffs() << sin_delta_by_delta * delta[0],
- sin_delta_by_delta * delta[1], sin_delta_by_delta * delta[2],
- cos(norm_delta);
- } else {
- // We do not just use q_delta = [0,0,0,1] here because that is a
- // constant and when used for automatic differentiation will
- // lead to a zero derivative. Instead we take a first order
- // approximation and evaluate it at zero.
- q_delta.coeffs() << delta[0], delta[1], delta[2], T(1.0);
- }
-
- Eigen::Map<Eigen::Quaternion<T>> x_plus_delta_ref(x_plus_delta);
- Eigen::Map<const Eigen::Quaternion<T>> x_ref(x);
- x_plus_delta_ref = q_delta * x_ref;
- return true;
- }
-};
-
-TEST(EigenQuaternionParameterization, ZeroTest) {
- Eigen::Quaterniond x(0.5, 0.5, 0.5, 0.5);
- double delta[3] = {0.0, 0.0, 0.0};
- Eigen::Quaterniond q_delta(1.0, 0.0, 0.0, 0.0);
- Eigen::Quaterniond x_plus_delta = q_delta * x;
- QuaternionParameterizationTestHelper<EigenQuaternionParameterization,
- EigenQuaternionPlus>(
- x.coeffs().data(), delta, x_plus_delta.coeffs().data());
-}
-
-TEST(EigenQuaternionParameterization, NearZeroTest) {
- Eigen::Quaterniond x(0.52, 0.25, 0.15, 0.45);
- x.normalize();
-
- double delta[3] = {0.24, 0.15, 0.10};
- for (int i = 0; i < 3; ++i) {
- delta[i] = delta[i] * 1e-14;
- }
-
- // Note: w is first in the constructor.
- Eigen::Quaterniond q_delta(1.0, delta[0], delta[1], delta[2]);
-
- Eigen::Quaterniond x_plus_delta = q_delta * x;
- QuaternionParameterizationTestHelper<EigenQuaternionParameterization,
- EigenQuaternionPlus>(
- x.coeffs().data(), delta, x_plus_delta.coeffs().data());
-}
-
-TEST(EigenQuaternionParameterization, AwayFromZeroTest) {
- Eigen::Quaterniond x(0.52, 0.25, 0.15, 0.45);
- x.normalize();
-
- double delta[3] = {0.24, 0.15, 0.10};
- const double delta_norm =
- sqrt(delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2]);
-
- // Note: w is first in the constructor.
- Eigen::Quaterniond q_delta(cos(delta_norm),
- sin(delta_norm) / delta_norm * delta[0],
- sin(delta_norm) / delta_norm * delta[1],
- sin(delta_norm) / delta_norm * delta[2]);
-
- Eigen::Quaterniond x_plus_delta = q_delta * x;
- QuaternionParameterizationTestHelper<EigenQuaternionParameterization,
- EigenQuaternionPlus>(
- x.coeffs().data(), delta, x_plus_delta.coeffs().data());
-}
-
-// Functor needed to implement automatically differentiated Plus for
-// homogeneous vectors.
-template <int Dim>
-struct HomogeneousVectorParameterizationPlus {
- template <typename Scalar>
- bool operator()(const Scalar* p_x,
- const Scalar* p_delta,
- Scalar* p_x_plus_delta) const {
- Eigen::Map<const Eigen::Matrix<Scalar, Dim, 1>> x(p_x);
- Eigen::Map<const Eigen::Matrix<Scalar, Dim - 1, 1>> delta(p_delta);
- Eigen::Map<Eigen::Matrix<Scalar, Dim, 1>> x_plus_delta(p_x_plus_delta);
-
- const Scalar squared_norm_delta = delta.squaredNorm();
-
- Eigen::Matrix<Scalar, Dim, 1> y;
- Scalar one_half(0.5);
- if (squared_norm_delta > Scalar(0.0)) {
- Scalar norm_delta = sqrt(squared_norm_delta);
- Scalar norm_delta_div_2 = 0.5 * norm_delta;
- const Scalar sin_delta_by_delta =
- sin(norm_delta_div_2) / norm_delta_div_2;
- y.template head<Dim - 1>() = sin_delta_by_delta * one_half * delta;
- y[Dim - 1] = cos(norm_delta_div_2);
-
- } else {
- // We do not just use y = [0,0,0,1] here because that is a
- // constant and when used for automatic differentiation will
- // lead to a zero derivative. Instead we take a first order
- // approximation and evaluate it at zero.
- y.template head<Dim - 1>() = delta * one_half;
- y[Dim - 1] = Scalar(1.0);
- }
-
- Eigen::Matrix<Scalar, Dim, 1> v;
- Scalar beta;
-
- // NOTE: The explicit template arguments are needed here because
- // ComputeHouseholderVector is templated and some versions of MSVC
- // have trouble deducing the type of v automatically.
- internal::ComputeHouseholderVector<
- Eigen::Map<const Eigen::Matrix<Scalar, Dim, 1>>,
- Scalar,
- Dim>(x, &v, &beta);
-
- x_plus_delta = x.norm() * (y - v * (beta * v.dot(y)));
-
- return true;
- }
-};
-
-static void HomogeneousVectorParameterizationHelper(const double* x,
- const double* delta) {
- const double kTolerance = 1e-14;
-
- HomogeneousVectorParameterization homogeneous_vector_parameterization(4);
-
- // Ensure the update maintains the norm.
- double x_plus_delta[4] = {0.0, 0.0, 0.0, 0.0};
- homogeneous_vector_parameterization.Plus(x, delta, x_plus_delta);
-
- const double x_plus_delta_norm = sqrt(
- x_plus_delta[0] * x_plus_delta[0] + x_plus_delta[1] * x_plus_delta[1] +
- x_plus_delta[2] * x_plus_delta[2] + x_plus_delta[3] * x_plus_delta[3]);
-
- const double x_norm =
- sqrt(x[0] * x[0] + x[1] * x[1] + x[2] * x[2] + x[3] * x[3]);
-
- EXPECT_NEAR(x_plus_delta_norm, x_norm, kTolerance);
-
- // Autodiff jacobian at delta_x = 0.
- AutoDiffLocalParameterization<HomogeneousVectorParameterizationPlus<4>, 4, 3>
- autodiff_jacobian;
-
- double jacobian_autodiff[12];
- double jacobian_analytic[12];
-
- homogeneous_vector_parameterization.ComputeJacobian(x, jacobian_analytic);
- autodiff_jacobian.ComputeJacobian(x, jacobian_autodiff);
-
- for (int i = 0; i < 12; ++i) {
- EXPECT_TRUE(ceres::IsFinite(jacobian_analytic[i]));
- EXPECT_NEAR(jacobian_analytic[i], jacobian_autodiff[i], kTolerance)
- << "Jacobian mismatch: i = " << i << ", " << jacobian_analytic[i] << " "
- << jacobian_autodiff[i];
- }
-}
-
-TEST(HomogeneousVectorParameterization, ZeroTest) {
- double x[4] = {0.0, 0.0, 0.0, 1.0};
- Normalize<4>(x);
- double delta[3] = {0.0, 0.0, 0.0};
-
- HomogeneousVectorParameterizationHelper(x, delta);
-}
-
-TEST(HomogeneousVectorParameterization, NearZeroTest1) {
- double x[4] = {1e-5, 1e-5, 1e-5, 1.0};
- Normalize<4>(x);
- double delta[3] = {0.0, 1.0, 0.0};
-
- HomogeneousVectorParameterizationHelper(x, delta);
-}
-
-TEST(HomogeneousVectorParameterization, NearZeroTest2) {
- double x[4] = {0.001, 0.0, 0.0, 0.0};
- double delta[3] = {0.0, 1.0, 0.0};
-
- HomogeneousVectorParameterizationHelper(x, delta);
-}
-
-TEST(HomogeneousVectorParameterization, AwayFromZeroTest1) {
- double x[4] = {0.52, 0.25, 0.15, 0.45};
- Normalize<4>(x);
- double delta[3] = {0.0, 1.0, -0.5};
-
- HomogeneousVectorParameterizationHelper(x, delta);
-}
-
-TEST(HomogeneousVectorParameterization, AwayFromZeroTest2) {
- double x[4] = {0.87, -0.25, -0.34, 0.45};
- Normalize<4>(x);
- double delta[3] = {0.0, 0.0, -0.5};
-
- HomogeneousVectorParameterizationHelper(x, delta);
-}
-
-TEST(HomogeneousVectorParameterization, AwayFromZeroTest3) {
- double x[4] = {0.0, 0.0, 0.0, 2.0};
- double delta[3] = {0.0, 0.0, 0};
-
- HomogeneousVectorParameterizationHelper(x, delta);
-}
-
-TEST(HomogeneousVectorParameterization, AwayFromZeroTest4) {
- double x[4] = {0.2, -1.0, 0.0, 2.0};
- double delta[3] = {1.4, 0.0, -0.5};
-
- HomogeneousVectorParameterizationHelper(x, delta);
-}
-
-TEST(HomogeneousVectorParameterization, AwayFromZeroTest5) {
- double x[4] = {2.0, 0.0, 0.0, 0.0};
- double delta[3] = {1.4, 0.0, -0.5};
-
- HomogeneousVectorParameterizationHelper(x, delta);
-}
-
-TEST(HomogeneousVectorParameterization, DeathTests) {
- EXPECT_DEATH_IF_SUPPORTED(HomogeneousVectorParameterization x(1), "size");
-}
-
-// Functor needed to implement automatically differentiated Plus for
-// line parameterization.
-template <int AmbientSpaceDim>
-struct LineParameterizationPlus {
- template <typename Scalar>
- bool operator()(const Scalar* p_x,
- const Scalar* p_delta,
- Scalar* p_x_plus_delta) const {
- static constexpr int kTangetSpaceDim = AmbientSpaceDim - 1;
- Eigen::Map<const Eigen::Matrix<Scalar, AmbientSpaceDim, 1>> origin_point(
- p_x);
- Eigen::Map<const Eigen::Matrix<Scalar, AmbientSpaceDim, 1>> dir(
- p_x + AmbientSpaceDim);
- Eigen::Map<const Eigen::Matrix<Scalar, kTangetSpaceDim, 1>>
- delta_origin_point(p_delta);
- Eigen::Map<Eigen::Matrix<Scalar, AmbientSpaceDim, 1>>
- origin_point_plus_delta(p_x_plus_delta);
-
- HomogeneousVectorParameterizationPlus<AmbientSpaceDim> dir_plus;
- dir_plus(dir.data(),
- p_delta + kTangetSpaceDim,
- p_x_plus_delta + AmbientSpaceDim);
-
- Eigen::Matrix<Scalar, AmbientSpaceDim, 1> v;
- Scalar beta;
-
- // NOTE: The explicit template arguments are needed here because
- // ComputeHouseholderVector is templated and some versions of MSVC
- // have trouble deducing the type of v automatically.
- internal::ComputeHouseholderVector<
- Eigen::Map<const Eigen::Matrix<Scalar, AmbientSpaceDim, 1>>,
- Scalar,
- AmbientSpaceDim>(dir, &v, &beta);
-
- Eigen::Matrix<Scalar, AmbientSpaceDim, 1> y;
- y << 0.5 * delta_origin_point, Scalar(0.0);
- origin_point_plus_delta = origin_point + y - v * (beta * v.dot(y));
-
- return true;
- }
-};
-
-template <int AmbientSpaceDim>
-static void LineParameterizationHelper(const double* x_ptr,
- const double* delta) {
- const double kTolerance = 1e-14;
-
- static constexpr int ParameterDim = 2 * AmbientSpaceDim;
- static constexpr int TangientParameterDim = 2 * (AmbientSpaceDim - 1);
-
- LineParameterization<AmbientSpaceDim> line_parameterization;
-
- using ParameterVector = Eigen::Matrix<double, ParameterDim, 1>;
- ParameterVector x_plus_delta = ParameterVector::Zero();
- line_parameterization.Plus(x_ptr, delta, x_plus_delta.data());
-
- // Ensure the update maintains the norm for the line direction.
- Eigen::Map<const ParameterVector> x(x_ptr);
- const double dir_plus_delta_norm =
- x_plus_delta.template tail<AmbientSpaceDim>().norm();
- const double dir_norm = x.template tail<AmbientSpaceDim>().norm();
- EXPECT_NEAR(dir_plus_delta_norm, dir_norm, kTolerance);
-
- // Ensure the update of the origin point is perpendicular to the line
- // direction.
- const double dot_prod_val = x.template tail<AmbientSpaceDim>().dot(
- x_plus_delta.template head<AmbientSpaceDim>() -
- x.template head<AmbientSpaceDim>());
- EXPECT_NEAR(dot_prod_val, 0.0, kTolerance);
-
- // Autodiff jacobian at delta_x = 0.
- AutoDiffLocalParameterization<LineParameterizationPlus<AmbientSpaceDim>,
- ParameterDim,
- TangientParameterDim>
- autodiff_jacobian;
-
- using JacobianMatrix = Eigen::
- Matrix<double, ParameterDim, TangientParameterDim, Eigen::RowMajor>;
- constexpr double kNaN = std::numeric_limits<double>::quiet_NaN();
- JacobianMatrix jacobian_autodiff = JacobianMatrix::Constant(kNaN);
- JacobianMatrix jacobian_analytic = JacobianMatrix::Constant(kNaN);
-
- autodiff_jacobian.ComputeJacobian(x_ptr, jacobian_autodiff.data());
- line_parameterization.ComputeJacobian(x_ptr, jacobian_analytic.data());
-
- EXPECT_FALSE(jacobian_autodiff.hasNaN());
- EXPECT_FALSE(jacobian_analytic.hasNaN());
- EXPECT_TRUE(jacobian_autodiff.isApprox(jacobian_analytic))
- << "auto diff:\n"
- << jacobian_autodiff << "\n"
- << "analytic diff:\n"
- << jacobian_analytic;
-}
-
-TEST(LineParameterization, ZeroTest3D) {
- double x[6] = {0.0, 0.0, 0.0, 0.0, 0.0, 1.0};
- double delta[4] = {0.0, 0.0, 0.0, 0.0};
-
- LineParameterizationHelper<3>(x, delta);
-}
-
-TEST(LineParameterization, ZeroTest4D) {
- double x[8] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0};
- double delta[6] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0};
-
- LineParameterizationHelper<4>(x, delta);
-}
-
-TEST(LineParameterization, ZeroOriginPointTest3D) {
- double x[6] = {0.0, 0.0, 0.0, 0.0, 0.0, 1.0};
- double delta[4] = {0.0, 0.0, 1.0, 2.0};
-
- LineParameterizationHelper<3>(x, delta);
-}
-
-TEST(LineParameterization, ZeroOriginPointTest4D) {
- double x[8] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0};
- double delta[6] = {0.0, 0.0, 0.0, 1.0, 2.0, 3.0};
-
- LineParameterizationHelper<4>(x, delta);
-}
-
-TEST(LineParameterization, ZeroDirTest3D) {
- double x[6] = {0.0, 0.0, 0.0, 0.0, 0.0, 1.0};
- double delta[4] = {3.0, 2.0, 0.0, 0.0};
-
- LineParameterizationHelper<3>(x, delta);
-}
-
-TEST(LineParameterization, ZeroDirTest4D) {
- double x[8] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0};
- double delta[6] = {3.0, 2.0, 1.0, 0.0, 0.0, 0.0};
-
- LineParameterizationHelper<4>(x, delta);
-}
-
-TEST(LineParameterization, AwayFromZeroTest3D1) {
- Eigen::Matrix<double, 6, 1> x;
- x.head<3>() << 1.54, 2.32, 1.34;
- x.tail<3>() << 0.52, 0.25, 0.15;
- x.tail<3>().normalize();
-
- double delta[4] = {4.0, 7.0, 1.0, -0.5};
-
- LineParameterizationHelper<3>(x.data(), delta);
-}
-
-TEST(LineParameterization, AwayFromZeroTest4D1) {
- Eigen::Matrix<double, 8, 1> x;
- x.head<4>() << 1.54, 2.32, 1.34, 3.23;
- x.tail<4>() << 0.52, 0.25, 0.15, 0.45;
- x.tail<4>().normalize();
-
- double delta[6] = {4.0, 7.0, -3.0, 0.0, 1.0, -0.5};
-
- LineParameterizationHelper<4>(x.data(), delta);
-}
-
-TEST(LineParameterization, AwayFromZeroTest3D2) {
- Eigen::Matrix<double, 6, 1> x;
- x.head<3>() << 7.54, -2.81, 8.63;
- x.tail<3>() << 2.52, 5.25, 4.15;
-
- double delta[4] = {4.0, 7.0, 1.0, -0.5};
-
- LineParameterizationHelper<3>(x.data(), delta);
-}
-
-TEST(LineParameterization, AwayFromZeroTest4D2) {
- Eigen::Matrix<double, 8, 1> x;
- x.head<4>() << 7.54, -2.81, 8.63, 6.93;
- x.tail<4>() << 2.52, 5.25, 4.15, 1.45;
-
- double delta[6] = {4.0, 7.0, -3.0, 2.0, 1.0, -0.5};
-
- LineParameterizationHelper<4>(x.data(), delta);
-}
-
-class ProductParameterizationTest : public ::testing::Test {
- protected:
- void SetUp() final {
- const int global_size1 = 5;
- std::vector<int> constant_parameters1;
- constant_parameters1.push_back(2);
- param1_.reset(
- new SubsetParameterization(global_size1, constant_parameters1));
-
- const int global_size2 = 3;
- std::vector<int> constant_parameters2;
- constant_parameters2.push_back(0);
- constant_parameters2.push_back(1);
- param2_.reset(
- new SubsetParameterization(global_size2, constant_parameters2));
-
- const int global_size3 = 4;
- std::vector<int> constant_parameters3;
- constant_parameters3.push_back(1);
- param3_.reset(
- new SubsetParameterization(global_size3, constant_parameters3));
-
- const int global_size4 = 2;
- std::vector<int> constant_parameters4;
- constant_parameters4.push_back(1);
- param4_.reset(
- new SubsetParameterization(global_size4, constant_parameters4));
- }
-
- std::unique_ptr<LocalParameterization> param1_;
- std::unique_ptr<LocalParameterization> param2_;
- std::unique_ptr<LocalParameterization> param3_;
- std::unique_ptr<LocalParameterization> param4_;
-};
-
-TEST_F(ProductParameterizationTest, LocalAndGlobalSize2) {
- LocalParameterization* param1 = param1_.release();
- LocalParameterization* param2 = param2_.release();
-
- ProductParameterization product_param(param1, param2);
- EXPECT_EQ(product_param.LocalSize(),
- param1->LocalSize() + param2->LocalSize());
- EXPECT_EQ(product_param.GlobalSize(),
- param1->GlobalSize() + param2->GlobalSize());
-}
-
-TEST_F(ProductParameterizationTest, LocalAndGlobalSize3) {
- LocalParameterization* param1 = param1_.release();
- LocalParameterization* param2 = param2_.release();
- LocalParameterization* param3 = param3_.release();
-
- ProductParameterization product_param(param1, param2, param3);
- EXPECT_EQ(product_param.LocalSize(),
- param1->LocalSize() + param2->LocalSize() + param3->LocalSize());
- EXPECT_EQ(product_param.GlobalSize(),
- param1->GlobalSize() + param2->GlobalSize() + param3->GlobalSize());
-}
-
-TEST_F(ProductParameterizationTest, LocalAndGlobalSize4) {
- LocalParameterization* param1 = param1_.release();
- LocalParameterization* param2 = param2_.release();
- LocalParameterization* param3 = param3_.release();
- LocalParameterization* param4 = param4_.release();
-
- ProductParameterization product_param(param1, param2, param3, param4);
- EXPECT_EQ(product_param.LocalSize(),
- param1->LocalSize() + param2->LocalSize() + param3->LocalSize() +
- param4->LocalSize());
- EXPECT_EQ(product_param.GlobalSize(),
- param1->GlobalSize() + param2->GlobalSize() + param3->GlobalSize() +
- param4->GlobalSize());
-}
-
-TEST_F(ProductParameterizationTest, Plus) {
- LocalParameterization* param1 = param1_.release();
- LocalParameterization* param2 = param2_.release();
- LocalParameterization* param3 = param3_.release();
- LocalParameterization* param4 = param4_.release();
-
- ProductParameterization product_param(param1, param2, param3, param4);
- std::vector<double> x(product_param.GlobalSize(), 0.0);
- std::vector<double> delta(product_param.LocalSize(), 0.0);
- std::vector<double> x_plus_delta_expected(product_param.GlobalSize(), 0.0);
- std::vector<double> x_plus_delta(product_param.GlobalSize(), 0.0);
-
- for (int i = 0; i < product_param.GlobalSize(); ++i) {
- x[i] = RandNormal();
- }
-
- for (int i = 0; i < product_param.LocalSize(); ++i) {
- delta[i] = RandNormal();
- }
-
- EXPECT_TRUE(product_param.Plus(&x[0], &delta[0], &x_plus_delta_expected[0]));
- int x_cursor = 0;
- int delta_cursor = 0;
-
- EXPECT_TRUE(param1->Plus(
- &x[x_cursor], &delta[delta_cursor], &x_plus_delta[x_cursor]));
- x_cursor += param1->GlobalSize();
- delta_cursor += param1->LocalSize();
-
- EXPECT_TRUE(param2->Plus(
- &x[x_cursor], &delta[delta_cursor], &x_plus_delta[x_cursor]));
- x_cursor += param2->GlobalSize();
- delta_cursor += param2->LocalSize();
-
- EXPECT_TRUE(param3->Plus(
- &x[x_cursor], &delta[delta_cursor], &x_plus_delta[x_cursor]));
- x_cursor += param3->GlobalSize();
- delta_cursor += param3->LocalSize();
-
- EXPECT_TRUE(param4->Plus(
- &x[x_cursor], &delta[delta_cursor], &x_plus_delta[x_cursor]));
- x_cursor += param4->GlobalSize();
- delta_cursor += param4->LocalSize();
-
- for (int i = 0; i < x.size(); ++i) {
- EXPECT_EQ(x_plus_delta[i], x_plus_delta_expected[i]);
- }
-}
-
-TEST_F(ProductParameterizationTest, ComputeJacobian) {
- LocalParameterization* param1 = param1_.release();
- LocalParameterization* param2 = param2_.release();
- LocalParameterization* param3 = param3_.release();
- LocalParameterization* param4 = param4_.release();
-
- ProductParameterization product_param(param1, param2, param3, param4);
- std::vector<double> x(product_param.GlobalSize(), 0.0);
-
- for (int i = 0; i < product_param.GlobalSize(); ++i) {
- x[i] = RandNormal();
- }
-
- Matrix jacobian =
- Matrix::Random(product_param.GlobalSize(), product_param.LocalSize());
- EXPECT_TRUE(product_param.ComputeJacobian(&x[0], jacobian.data()));
- int x_cursor = 0;
- int delta_cursor = 0;
-
- Matrix jacobian1(param1->GlobalSize(), param1->LocalSize());
- EXPECT_TRUE(param1->ComputeJacobian(&x[x_cursor], jacobian1.data()));
- jacobian.block(
- x_cursor, delta_cursor, param1->GlobalSize(), param1->LocalSize()) -=
- jacobian1;
- x_cursor += param1->GlobalSize();
- delta_cursor += param1->LocalSize();
-
- Matrix jacobian2(param2->GlobalSize(), param2->LocalSize());
- EXPECT_TRUE(param2->ComputeJacobian(&x[x_cursor], jacobian2.data()));
- jacobian.block(
- x_cursor, delta_cursor, param2->GlobalSize(), param2->LocalSize()) -=
- jacobian2;
- x_cursor += param2->GlobalSize();
- delta_cursor += param2->LocalSize();
-
- Matrix jacobian3(param3->GlobalSize(), param3->LocalSize());
- EXPECT_TRUE(param3->ComputeJacobian(&x[x_cursor], jacobian3.data()));
- jacobian.block(
- x_cursor, delta_cursor, param3->GlobalSize(), param3->LocalSize()) -=
- jacobian3;
- x_cursor += param3->GlobalSize();
- delta_cursor += param3->LocalSize();
-
- Matrix jacobian4(param4->GlobalSize(), param4->LocalSize());
- EXPECT_TRUE(param4->ComputeJacobian(&x[x_cursor], jacobian4.data()));
- jacobian.block(
- x_cursor, delta_cursor, param4->GlobalSize(), param4->LocalSize()) -=
- jacobian4;
- x_cursor += param4->GlobalSize();
- delta_cursor += param4->LocalSize();
-
- EXPECT_NEAR(jacobian.norm(), 0.0, std::numeric_limits<double>::epsilon());
-}
-
-} // namespace internal
-} // namespace ceres
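
The ProductParameterization tests deleted above exercise a simple cursor-walk: each component consumes GlobalSize() entries of x and LocalSize() entries of delta in order. A small standalone sketch of that composition pattern (the Block struct is illustrative, not a Ceres type):

    #include <functional>
    #include <vector>

    // One component of a product: sizes plus a Plus() callback.
    // Illustrative only; Ceres stores LocalParameterization pointers instead.
    struct Block {
      int global_size;
      int local_size;
      std::function<void(const double*, const double*, double*)> plus;
    };

    // Applies each component's Plus over consecutive slices of x and delta,
    // mirroring how ProductParameterization::Plus walks its cursors.
    void ProductPlus(const std::vector<Block>& blocks,
                     const double* x, const double* delta,
                     double* x_plus_delta) {
      int x_cursor = 0;
      int delta_cursor = 0;
      for (const Block& b : blocks) {
        b.plus(x + x_cursor, delta + delta_cursor, x_plus_delta + x_cursor);
        x_cursor += b.global_size;
        delta_cursor += b.local_size;
      }
    }
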
diff --git a/internal/ceres/loss_function.cc b/internal/ceres/loss_function.cc
index 353f29a..82563c8 100644
--- a/internal/ceres/loss_function.cc
+++ b/internal/ceres/loss_function.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,6 +39,8 @@
namespace ceres {
+LossFunction::~LossFunction() = default;
+
void TrivialLoss::Evaluate(double s, double rho[3]) const {
rho[0] = s;
rho[1] = 1.0;
@@ -161,7 +163,7 @@
}
void ScaledLoss::Evaluate(double s, double rho[3]) const {
- if (rho_.get() == NULL) {
+ if (rho_.get() == nullptr) {
rho[0] = a_ * s;
rho[1] = a_;
rho[2] = 0.0;
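
The ScaledLoss branch touched above follows the general LossFunction::Evaluate contract: rho[0] is the robustified value rho(s), rho[1] its first derivative, rho[2] its second derivative. A small illustrative stand-in for that branch (not the Ceres class itself, just the a_ * s case shown in the hunk):

    #include <cstdio>

    // Illustrative stand-in mirroring ScaledLoss::Evaluate when no inner
    // loss is wrapped: rho(s) = a * s, rho'(s) = a, rho''(s) = 0.
    struct ToyScaledLoss {
      explicit ToyScaledLoss(double a) : a_(a) {}
      void Evaluate(double s, double rho[3]) const {
        rho[0] = a_ * s;
        rho[1] = a_;
        rho[2] = 0.0;
      }
      double a_;
    };

    int main() {
      double rho[3];
      ToyScaledLoss(2.0).Evaluate(3.0, rho);
      std::printf("%g %g %g\n", rho[0], rho[1], rho[2]);  // prints: 6 2 0
      return 0;
    }
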
diff --git a/internal/ceres/loss_function_test.cc b/internal/ceres/loss_function_test.cc
index 638c0c9..1fd492b 100644
--- a/internal/ceres/loss_function_test.cc
+++ b/internal/ceres/loss_function_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -195,7 +195,7 @@
// construction with the call to AssertLossFunctionIsValid() because Apple's
// GCC is unable to eliminate the copy of ScaledLoss, which is not copyable.
{
- ScaledLoss scaled_loss(NULL, 6, TAKE_OWNERSHIP);
+ ScaledLoss scaled_loss(nullptr, 6, TAKE_OWNERSHIP);
AssertLossFunctionIsValid(scaled_loss, 0.323);
}
{
@@ -265,17 +265,17 @@
EXPECT_NEAR(rho[i], rho_gold[i], 1e-12);
}
- // Set to NULL
+ // Set to nullptr
TrivialLoss loss_function4;
- loss_function_wrapper.Reset(NULL, TAKE_OWNERSHIP);
+ loss_function_wrapper.Reset(nullptr, TAKE_OWNERSHIP);
loss_function_wrapper.Evaluate(s, rho);
loss_function4.Evaluate(s, rho_gold);
for (int i = 0; i < 3; ++i) {
EXPECT_NEAR(rho[i], rho_gold[i], 1e-12);
}
- // Set to NULL, not taking ownership
- loss_function_wrapper.Reset(NULL, DO_NOT_TAKE_OWNERSHIP);
+ // Set to nullptr, not taking ownership
+ loss_function_wrapper.Reset(nullptr, DO_NOT_TAKE_OWNERSHIP);
loss_function_wrapper.Evaluate(s, rho);
loss_function4.Evaluate(s, rho_gold);
for (int i = 0; i < 3; ++i) {
diff --git a/internal/ceres/low_rank_inverse_hessian.cc b/internal/ceres/low_rank_inverse_hessian.cc
index c73e5db..14559b6 100644
--- a/internal/ceres/low_rank_inverse_hessian.cc
+++ b/internal/ceres/low_rank_inverse_hessian.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,10 +35,7 @@
#include "ceres/internal/eigen.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::list;
+namespace ceres::internal {
// The (L)BFGS algorithm explicitly requires that the secant equation:
//
@@ -117,8 +114,8 @@
return true;
}
-void LowRankInverseHessian::RightMultiply(const double* x_ptr,
- double* y_ptr) const {
+void LowRankInverseHessian::RightMultiplyAndAccumulate(const double* x_ptr,
+ double* y_ptr) const {
ConstVectorRef gradient(x_ptr, num_parameters_);
VectorRef search_direction(y_ptr, num_parameters_);
@@ -127,9 +124,7 @@
const int num_corrections = indices_.size();
Vector alpha(num_corrections);
- for (list<int>::const_reverse_iterator it = indices_.rbegin();
- it != indices_.rend();
- ++it) {
+ for (auto it = indices_.rbegin(); it != indices_.rend(); ++it) {
const double alpha_i = delta_x_history_.col(*it).dot(search_direction) /
delta_x_dot_delta_gradient_(*it);
search_direction -= alpha_i * delta_gradient_history_.col(*it);
@@ -161,7 +156,7 @@
//
// The original origin of this rescaling trick is somewhat unclear, the
// earliest reference appears to be Oren [1], however it is widely discussed
- // without specific attributation in various texts including [2] (p143/178).
+ // without specific attribution in various texts including [2] (p143/178).
//
// [1] Oren S.S., Self-scaling variable metric (SSVM) algorithms Part II:
// Implementation and experiments, Management Science,
@@ -181,5 +176,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/low_rank_inverse_hessian.h b/internal/ceres/low_rank_inverse_hessian.h
index 0028a98..72f6f65 100644
--- a/internal/ceres/low_rank_inverse_hessian.h
+++ b/internal/ceres/low_rank_inverse_hessian.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,10 +37,10 @@
#include <list>
#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_operator.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// LowRankInverseHessian is a positive definite approximation to the
// Hessian using the limited memory variant of the
@@ -59,12 +59,12 @@
// Byrd, R. H.; Nocedal, J.; Schnabel, R. B. (1994).
// "Representations of Quasi-Newton Matrices and their use in
// Limited Memory Methods". Mathematical Programming 63 (4):
-class LowRankInverseHessian : public LinearOperator {
+class CERES_NO_EXPORT LowRankInverseHessian final : public LinearOperator {
public:
// num_parameters is the row/column size of the Hessian.
// max_num_corrections is the rank of the Hessian approximation.
// use_approximate_eigenvalue_scaling controls whether the initial
- // inverse Hessian used during Right/LeftMultiply() is scaled by
+ // inverse Hessian used during Right/LeftMultiplyAndAccumulate() is scaled by
// the approximate eigenvalue of the true inverse Hessian at the
// current operating point.
// The approximation uses:
@@ -73,7 +73,6 @@
LowRankInverseHessian(int num_parameters,
int max_num_corrections,
bool use_approximate_eigenvalue_scaling);
- virtual ~LowRankInverseHessian() {}
// Update the low rank approximation. delta_x is the change in the
// domain of Hessian, and delta_gradient is the change in the
@@ -84,9 +83,9 @@
bool Update(const Vector& delta_x, const Vector& delta_gradient);
// LinearOperator interface
- void RightMultiply(const double* x, double* y) const final;
- void LeftMultiply(const double* x, double* y) const final {
- RightMultiply(x, y);
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final;
+ void LeftMultiplyAndAccumulate(const double* x, double* y) const final {
+ RightMultiplyAndAccumulate(x, y);
}
int num_rows() const final { return num_parameters_; }
int num_cols() const final { return num_parameters_; }
@@ -102,7 +101,6 @@
std::list<int> indices_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_LOW_RANK_INVERSE_HESSIAN_H_
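
The renamed RightMultiplyAndAccumulate in low_rank_inverse_hessian.cc is, in essence, the standard L-BFGS two-loop recursion (Nocedal & Wright, Algorithm 7.4). A compact standalone sketch of that recursion with Eigen, using a plain dense history rather than the class's history matrices and index list (names here are illustrative, not the LowRankInverseHessian members):

    #include <vector>
    #include "Eigen/Dense"

    // Two-loop recursion: given correction pairs (s_i, y_i), oldest first,
    // with rho_i = 1 / (y_i . s_i), compute H * g where H approximates the
    // inverse Hessian and H0 = gamma * I is the initial scaling.
    Eigen::VectorXd TwoLoopRecursion(const std::vector<Eigen::VectorXd>& s,
                                     const std::vector<Eigen::VectorXd>& y,
                                     double gamma,
                                     Eigen::VectorXd q) {
      const int m = static_cast<int>(s.size());
      std::vector<double> alpha(m), rho(m);
      for (int i = m - 1; i >= 0; --i) {  // newest to oldest
        rho[i] = 1.0 / y[i].dot(s[i]);
        alpha[i] = rho[i] * s[i].dot(q);
        q -= alpha[i] * y[i];
      }
      Eigen::VectorXd r = gamma * q;      // apply the initial H0 scaling
      for (int i = 0; i < m; ++i) {       // oldest to newest
        const double beta = rho[i] * y[i].dot(r);
        r += (alpha[i] - beta) * s[i];
      }
      return r;                           // approximately H * g
    }
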
diff --git a/internal/ceres/manifold.cc b/internal/ceres/manifold.cc
new file mode 100644
index 0000000..c4895fd
--- /dev/null
+++ b/internal/ceres/manifold.cc
@@ -0,0 +1,316 @@
+#include "ceres/manifold.h"
+
+#include <algorithm>
+#include <cmath>
+
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/fixed_array.h"
+#include "glog/logging.h"
+
+namespace ceres {
+namespace {
+
+struct CeresQuaternionOrder {
+ static constexpr int kW = 0;
+ static constexpr int kX = 1;
+ static constexpr int kY = 2;
+ static constexpr int kZ = 3;
+};
+
+struct EigenQuaternionOrder {
+ static constexpr int kW = 3;
+ static constexpr int kX = 0;
+ static constexpr int kY = 1;
+ static constexpr int kZ = 2;
+};
+
+template <typename Order>
+inline void QuaternionPlusImpl(const double* x,
+ const double* delta,
+ double* x_plus_delta) {
+ // x_plus_delta = QuaternionProduct(q_delta, x), where q_delta is the
+ // quaternion constructed from delta.
+ const double norm_delta = std::hypot(delta[0], delta[1], delta[2]);
+
+ if (std::fpclassify(norm_delta) == FP_ZERO) {
+ // No change in rotation: return the quaternion as is.
+ std::copy_n(x, 4, x_plus_delta);
+ return;
+ }
+
+ const double sin_delta_by_delta = (std::sin(norm_delta) / norm_delta);
+ double q_delta[4];
+ q_delta[Order::kW] = std::cos(norm_delta);
+ q_delta[Order::kX] = sin_delta_by_delta * delta[0];
+ q_delta[Order::kY] = sin_delta_by_delta * delta[1];
+ q_delta[Order::kZ] = sin_delta_by_delta * delta[2];
+
+ x_plus_delta[Order::kW] =
+ q_delta[Order::kW] * x[Order::kW] - q_delta[Order::kX] * x[Order::kX] -
+ q_delta[Order::kY] * x[Order::kY] - q_delta[Order::kZ] * x[Order::kZ];
+ x_plus_delta[Order::kX] =
+ q_delta[Order::kW] * x[Order::kX] + q_delta[Order::kX] * x[Order::kW] +
+ q_delta[Order::kY] * x[Order::kZ] - q_delta[Order::kZ] * x[Order::kY];
+ x_plus_delta[Order::kY] =
+ q_delta[Order::kW] * x[Order::kY] - q_delta[Order::kX] * x[Order::kZ] +
+ q_delta[Order::kY] * x[Order::kW] + q_delta[Order::kZ] * x[Order::kX];
+ x_plus_delta[Order::kZ] =
+ q_delta[Order::kW] * x[Order::kZ] + q_delta[Order::kX] * x[Order::kY] -
+ q_delta[Order::kY] * x[Order::kX] + q_delta[Order::kZ] * x[Order::kW];
+}
+
+template <typename Order>
+inline void QuaternionPlusJacobianImpl(const double* x, double* jacobian_ptr) {
+ Eigen::Map<Eigen::Matrix<double, 4, 3, Eigen::RowMajor>> jacobian(
+ jacobian_ptr);
+
+ jacobian(Order::kW, 0) = -x[Order::kX];
+ jacobian(Order::kW, 1) = -x[Order::kY];
+ jacobian(Order::kW, 2) = -x[Order::kZ];
+ jacobian(Order::kX, 0) = x[Order::kW];
+ jacobian(Order::kX, 1) = x[Order::kZ];
+ jacobian(Order::kX, 2) = -x[Order::kY];
+ jacobian(Order::kY, 0) = -x[Order::kZ];
+ jacobian(Order::kY, 1) = x[Order::kW];
+ jacobian(Order::kY, 2) = x[Order::kX];
+ jacobian(Order::kZ, 0) = x[Order::kY];
+ jacobian(Order::kZ, 1) = -x[Order::kX];
+ jacobian(Order::kZ, 2) = x[Order::kW];
+}
+
+template <typename Order>
+inline void QuaternionMinusImpl(const double* y,
+ const double* x,
+ double* y_minus_x) {
+ // ambient_y_minus_x = QuaternionProduct(y, -x) where -x is the conjugate of
+ // x.
+ double ambient_y_minus_x[4];
+ ambient_y_minus_x[Order::kW] =
+ y[Order::kW] * x[Order::kW] + y[Order::kX] * x[Order::kX] +
+ y[Order::kY] * x[Order::kY] + y[Order::kZ] * x[Order::kZ];
+ ambient_y_minus_x[Order::kX] =
+ -y[Order::kW] * x[Order::kX] + y[Order::kX] * x[Order::kW] -
+ y[Order::kY] * x[Order::kZ] + y[Order::kZ] * x[Order::kY];
+ ambient_y_minus_x[Order::kY] =
+ -y[Order::kW] * x[Order::kY] + y[Order::kX] * x[Order::kZ] +
+ y[Order::kY] * x[Order::kW] - y[Order::kZ] * x[Order::kX];
+ ambient_y_minus_x[Order::kZ] =
+ -y[Order::kW] * x[Order::kZ] - y[Order::kX] * x[Order::kY] +
+ y[Order::kY] * x[Order::kX] + y[Order::kZ] * x[Order::kW];
+
+ const double u_norm = std::hypot(ambient_y_minus_x[Order::kX],
+ ambient_y_minus_x[Order::kY],
+ ambient_y_minus_x[Order::kZ]);
+ if (std::fpclassify(u_norm) != FP_ZERO) {
+ const double theta = std::atan2(u_norm, ambient_y_minus_x[Order::kW]);
+ y_minus_x[0] = theta * ambient_y_minus_x[Order::kX] / u_norm;
+ y_minus_x[1] = theta * ambient_y_minus_x[Order::kY] / u_norm;
+ y_minus_x[2] = theta * ambient_y_minus_x[Order::kZ] / u_norm;
+ } else {
+ std::fill_n(y_minus_x, 3, 0.0);
+ }
+}
+
+template <typename Order>
+inline void QuaternionMinusJacobianImpl(const double* x, double* jacobian_ptr) {
+ Eigen::Map<Eigen::Matrix<double, 3, 4, Eigen::RowMajor>> jacobian(
+ jacobian_ptr);
+
+ jacobian(0, Order::kW) = -x[Order::kX];
+ jacobian(0, Order::kX) = x[Order::kW];
+ jacobian(0, Order::kY) = -x[Order::kZ];
+ jacobian(0, Order::kZ) = x[Order::kY];
+ jacobian(1, Order::kW) = -x[Order::kY];
+ jacobian(1, Order::kX) = x[Order::kZ];
+ jacobian(1, Order::kY) = x[Order::kW];
+ jacobian(1, Order::kZ) = -x[Order::kX];
+ jacobian(2, Order::kW) = -x[Order::kZ];
+ jacobian(2, Order::kX) = -x[Order::kY];
+ jacobian(2, Order::kY) = x[Order::kX];
+ jacobian(2, Order::kZ) = x[Order::kW];
+}
+
+} // namespace
+
+Manifold::~Manifold() = default;
+
+bool Manifold::RightMultiplyByPlusJacobian(const double* x,
+ const int num_rows,
+ const double* ambient_matrix,
+ double* tangent_matrix) const {
+ const int tangent_size = TangentSize();
+ if (tangent_size == 0) {
+ return true;
+ }
+
+ const int ambient_size = AmbientSize();
+ Matrix plus_jacobian(ambient_size, tangent_size);
+ if (!PlusJacobian(x, plus_jacobian.data())) {
+ return false;
+ }
+
+ MatrixRef(tangent_matrix, num_rows, tangent_size) =
+ ConstMatrixRef(ambient_matrix, num_rows, ambient_size) * plus_jacobian;
+ return true;
+}
+
+SubsetManifold::SubsetManifold(const int size,
+ const std::vector<int>& constant_parameters)
+
+ : tangent_size_(size - constant_parameters.size()),
+ constancy_mask_(size, false) {
+ if (constant_parameters.empty()) {
+ return;
+ }
+
+ std::vector<int> constant = constant_parameters;
+ std::sort(constant.begin(), constant.end());
+ CHECK_GE(constant.front(), 0) << "Indices indicating constant parameter must "
+ "be greater than equal to zero.";
+ CHECK_LT(constant.back(), size)
+ << "Indices indicating constant parameter must be less than the size "
+ << "of the parameter block.";
+ CHECK(std::adjacent_find(constant.begin(), constant.end()) == constant.end())
+ << "The set of constant parameters cannot contain duplicates";
+
+ for (auto index : constant_parameters) {
+ constancy_mask_[index] = true;
+ }
+}
+
+int SubsetManifold::AmbientSize() const { return constancy_mask_.size(); }
+
+int SubsetManifold::TangentSize() const { return tangent_size_; }
+
+bool SubsetManifold::Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const {
+ const int ambient_size = AmbientSize();
+ for (int i = 0, j = 0; i < ambient_size; ++i) {
+ if (constancy_mask_[i]) {
+ x_plus_delta[i] = x[i];
+ } else {
+ x_plus_delta[i] = x[i] + delta[j++];
+ }
+ }
+ return true;
+}
+
+bool SubsetManifold::PlusJacobian(const double* /*x*/,
+ double* plus_jacobian) const {
+ if (tangent_size_ == 0) {
+ return true;
+ }
+
+ const int ambient_size = AmbientSize();
+ MatrixRef m(plus_jacobian, ambient_size, tangent_size_);
+ m.setZero();
+ for (int r = 0, c = 0; r < ambient_size; ++r) {
+ if (!constancy_mask_[r]) {
+ m(r, c++) = 1.0;
+ }
+ }
+ return true;
+}
+
+bool SubsetManifold::RightMultiplyByPlusJacobian(const double* /*x*/,
+ const int num_rows,
+ const double* ambient_matrix,
+ double* tangent_matrix) const {
+ if (tangent_size_ == 0) {
+ return true;
+ }
+
+ const int ambient_size = AmbientSize();
+ for (int r = 0; r < num_rows; ++r) {
+ for (int idx = 0, c = 0; idx < ambient_size; ++idx) {
+ if (!constancy_mask_[idx]) {
+ tangent_matrix[r * tangent_size_ + c++] =
+ ambient_matrix[r * ambient_size + idx];
+ }
+ }
+ }
+ return true;
+}
+
+bool SubsetManifold::Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const {
+ if (tangent_size_ == 0) {
+ return true;
+ }
+
+ const int ambient_size = AmbientSize();
+ for (int i = 0, j = 0; i < ambient_size; ++i) {
+ if (!constancy_mask_[i]) {
+ y_minus_x[j++] = y[i] - x[i];
+ }
+ }
+ return true;
+}
+
+bool SubsetManifold::MinusJacobian(const double* /*x*/,
+ double* minus_jacobian) const {
+ const int ambient_size = AmbientSize();
+ MatrixRef m(minus_jacobian, tangent_size_, ambient_size);
+ m.setZero();
+ for (int c = 0, r = 0; c < ambient_size; ++c) {
+ if (!constancy_mask_[c]) {
+ m(r++, c) = 1.0;
+ }
+ }
+ return true;
+}
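+
+// For the same example (ambient size 3, coordinate 1 constant), MinusJacobian
+// produces the 2 x 3 transpose of the plus Jacobian:
+//
+//   [1 0 0]
+//   [0 0 1]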
+
+bool QuaternionManifold::Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const {
+ QuaternionPlusImpl<CeresQuaternionOrder>(x, delta, x_plus_delta);
+ return true;
+}
+
+bool QuaternionManifold::PlusJacobian(const double* x, double* jacobian) const {
+ QuaternionPlusJacobianImpl<CeresQuaternionOrder>(x, jacobian);
+ return true;
+}
+
+bool QuaternionManifold::Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const {
+ QuaternionMinusImpl<CeresQuaternionOrder>(y, x, y_minus_x);
+ return true;
+}
+
+bool QuaternionManifold::MinusJacobian(const double* x,
+ double* jacobian) const {
+ QuaternionMinusJacobianImpl<CeresQuaternionOrder>(x, jacobian);
+ return true;
+}
+
+bool EigenQuaternionManifold::Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const {
+ QuaternionPlusImpl<EigenQuaternionOrder>(x, delta, x_plus_delta);
+ return true;
+}
+
+bool EigenQuaternionManifold::PlusJacobian(const double* x,
+ double* jacobian) const {
+ QuaternionPlusJacobianImpl<EigenQuaternionOrder>(x, jacobian);
+ return true;
+}
+
+bool EigenQuaternionManifold::Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const {
+ QuaternionMinusImpl<EigenQuaternionOrder>(y, x, y_minus_x);
+ return true;
+}
+
+bool EigenQuaternionManifold::MinusJacobian(const double* x,
+ double* jacobian) const {
+ QuaternionMinusJacobianImpl<EigenQuaternionOrder>(x, jacobian);
+ return true;
+}
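+
+// Illustrative usage sketch (not from this file; assumes the public Problem
+// API): a quaternion parameter block stored in Ceres order (w, x, y, z) can
+// be given this manifold directly, e.g.
+//
+//   double q[4] = {1.0, 0.0, 0.0, 0.0};  // identity rotation
+//   ceres::Problem problem;
+//   problem.AddParameterBlock(q, 4, new ceres::QuaternionManifold);
+//
+// For quaternions stored in Eigen's (x, y, z, w) memory layout, use
+// EigenQuaternionManifold instead.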
+
+} // namespace ceres
diff --git a/internal/ceres/manifold_test.cc b/internal/ceres/manifold_test.cc
new file mode 100644
index 0000000..788e865
--- /dev/null
+++ b/internal/ceres/manifold_test.cc
@@ -0,0 +1,1055 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#include "ceres/manifold.h"
+
+#include <cmath>
+#include <limits>
+#include <memory>
+#include <utility>
+
+#include "Eigen/Geometry"
+#include "ceres/constants.h"
+#include "ceres/dynamic_numeric_diff_cost_function.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/port.h"
+#include "ceres/line_manifold.h"
+#include "ceres/manifold_test_utils.h"
+#include "ceres/numeric_diff_options.h"
+#include "ceres/product_manifold.h"
+#include "ceres/rotation.h"
+#include "ceres/sphere_manifold.h"
+#include "ceres/types.h"
+#include "gmock/gmock.h"
+#include "gtest/gtest.h"
+
+namespace ceres::internal {
+
+constexpr int kNumTrials = 1000;
+constexpr double kTolerance = 1e-9;
+
+TEST(EuclideanManifold, StaticNormalFunctionTest) {
+ EuclideanManifold<3> manifold;
+ EXPECT_EQ(manifold.AmbientSize(), 3);
+ EXPECT_EQ(manifold.TangentSize(), 3);
+
+ Vector zero_tangent = Vector::Zero(manifold.TangentSize());
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(manifold.AmbientSize());
+ const Vector y = Vector::Random(manifold.AmbientSize());
+ Vector delta = Vector::Random(manifold.TangentSize());
+ Vector x_plus_delta = Vector::Zero(manifold.AmbientSize());
+
+ manifold.Plus(x.data(), delta.data(), x_plus_delta.data());
+ EXPECT_NEAR((x_plus_delta - x - delta).norm() / (x + delta).norm(),
+ 0.0,
+ kTolerance);
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(EuclideanManifold, DynamicNormalFunctionTest) {
+ EuclideanManifold<DYNAMIC> manifold(3);
+ EXPECT_EQ(manifold.AmbientSize(), 3);
+ EXPECT_EQ(manifold.TangentSize(), 3);
+
+ Vector zero_tangent = Vector::Zero(manifold.TangentSize());
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(manifold.AmbientSize());
+ const Vector y = Vector::Random(manifold.AmbientSize());
+ Vector delta = Vector::Random(manifold.TangentSize());
+ Vector x_plus_delta = Vector::Zero(manifold.AmbientSize());
+
+ manifold.Plus(x.data(), delta.data(), x_plus_delta.data());
+ EXPECT_NEAR((x_plus_delta - x - delta).norm() / (x + delta).norm(),
+ 0.0,
+ kTolerance);
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(SubsetManifold, EmptyConstantParameters) {
+ SubsetManifold manifold(3, {});
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(3);
+ const Vector y = Vector::Random(3);
+ Vector delta = Vector::Random(3);
+ Vector x_plus_delta = Vector::Zero(3);
+
+ manifold.Plus(x.data(), delta.data(), x_plus_delta.data());
+ EXPECT_NEAR((x_plus_delta - x - delta).norm() / (x + delta).norm(),
+ 0.0,
+ kTolerance);
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(SubsetManifold, NegativeParameterIndexDeathTest) {
+  EXPECT_DEATH_IF_SUPPORTED(SubsetManifold manifold(2, {-1}),
+                            "greater than or equal to zero");
+}
+
+TEST(SubsetManifold, GreaterThanSizeParameterIndexDeathTest) {
+ EXPECT_DEATH_IF_SUPPORTED(SubsetManifold manifold(2, {2}),
+ "less than the size");
+}
+
+TEST(SubsetManifold, DuplicateParametersDeathTest) {
+ EXPECT_DEATH_IF_SUPPORTED(SubsetManifold manifold(2, {1, 1}), "duplicates");
+}
+
+TEST(SubsetManifold, NormalFunctionTest) {
+ const int kAmbientSize = 4;
+ const int kTangentSize = 3;
+
+ for (int i = 0; i < kAmbientSize; ++i) {
+ SubsetManifold manifold_with_ith_parameter_constant(kAmbientSize, {i});
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(kAmbientSize);
+ Vector y = Vector::Random(kAmbientSize);
+ // x and y must have the same i^th coordinate to be on the manifold.
+ y[i] = x[i];
+ Vector delta = Vector::Random(kTangentSize);
+ Vector x_plus_delta = Vector::Zero(kAmbientSize);
+
+ x_plus_delta.setZero();
+ manifold_with_ith_parameter_constant.Plus(
+ x.data(), delta.data(), x_plus_delta.data());
+ int k = 0;
+ for (int j = 0; j < kAmbientSize; ++j) {
+ if (j == i) {
+ EXPECT_EQ(x_plus_delta[j], x[j]);
+ } else {
+ EXPECT_EQ(x_plus_delta[j], x[j] + delta[k++]);
+ }
+ }
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(
+ manifold_with_ith_parameter_constant, x, delta, y, kTolerance);
+ }
+ }
+}
+
+TEST(ProductManifold, Size2) {
+ SubsetManifold manifold1(5, {2});
+ SubsetManifold manifold2(3, {0, 1});
+ ProductManifold<SubsetManifold, SubsetManifold> manifold(manifold1,
+ manifold2);
+
+ EXPECT_EQ(manifold.AmbientSize(),
+ manifold1.AmbientSize() + manifold2.AmbientSize());
+ EXPECT_EQ(manifold.TangentSize(),
+ manifold1.TangentSize() + manifold2.TangentSize());
+}
+
+TEST(ProductManifold, Size3) {
+ SubsetManifold manifold1(5, {2});
+ SubsetManifold manifold2(3, {0, 1});
+ SubsetManifold manifold3(4, {1});
+
+ ProductManifold<SubsetManifold, SubsetManifold, SubsetManifold> manifold(
+ manifold1, manifold2, manifold3);
+
+ EXPECT_EQ(manifold.AmbientSize(),
+ manifold1.AmbientSize() + manifold2.AmbientSize() +
+ manifold3.AmbientSize());
+ EXPECT_EQ(manifold.TangentSize(),
+ manifold1.TangentSize() + manifold2.TangentSize() +
+ manifold3.TangentSize());
+}
+
+TEST(ProductManifold, Size4) {
+ SubsetManifold manifold1(5, {2});
+ SubsetManifold manifold2(3, {0, 1});
+ SubsetManifold manifold3(4, {1});
+ SubsetManifold manifold4(2, {0});
+
+ ProductManifold<SubsetManifold,
+ SubsetManifold,
+ SubsetManifold,
+ SubsetManifold>
+ manifold(manifold1, manifold2, manifold3, manifold4);
+
+ EXPECT_EQ(manifold.AmbientSize(),
+ manifold1.AmbientSize() + manifold2.AmbientSize() +
+ manifold3.AmbientSize() + manifold4.AmbientSize());
+ EXPECT_EQ(manifold.TangentSize(),
+ manifold1.TangentSize() + manifold2.TangentSize() +
+ manifold3.TangentSize() + manifold4.TangentSize());
+}
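+
+// Illustrative sketch (not part of the test suite): a typical use of
+// ProductManifold is a rigid-body pose made of a rotation and a translation:
+//
+//   ProductManifold<QuaternionManifold, EuclideanManifold<3>> pose_manifold;
+//   // pose_manifold.AmbientSize() == 7, pose_manifold.TangentSize() == 6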
+
+TEST(ProductManifold, NormalFunctionTest) {
+ SubsetManifold manifold1(5, {2});
+ SubsetManifold manifold2(3, {0, 1});
+ SubsetManifold manifold3(4, {1});
+ SubsetManifold manifold4(2, {0});
+
+ ProductManifold<SubsetManifold,
+ SubsetManifold,
+ SubsetManifold,
+ SubsetManifold>
+ manifold(manifold1, manifold2, manifold3, manifold4);
+
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(manifold.AmbientSize());
+ Vector delta = Vector::Random(manifold.TangentSize());
+ Vector x_plus_delta = Vector::Zero(manifold.AmbientSize());
+ Vector x_plus_delta_expected = Vector::Zero(manifold.AmbientSize());
+
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), x_plus_delta.data()));
+
+ int ambient_cursor = 0;
+ int tangent_cursor = 0;
+
+ EXPECT_TRUE(manifold1.Plus(&x[ambient_cursor],
+ &delta[tangent_cursor],
+ &x_plus_delta_expected[ambient_cursor]));
+ ambient_cursor += manifold1.AmbientSize();
+ tangent_cursor += manifold1.TangentSize();
+
+ EXPECT_TRUE(manifold2.Plus(&x[ambient_cursor],
+ &delta[tangent_cursor],
+ &x_plus_delta_expected[ambient_cursor]));
+ ambient_cursor += manifold2.AmbientSize();
+ tangent_cursor += manifold2.TangentSize();
+
+ EXPECT_TRUE(manifold3.Plus(&x[ambient_cursor],
+ &delta[tangent_cursor],
+ &x_plus_delta_expected[ambient_cursor]));
+ ambient_cursor += manifold3.AmbientSize();
+ tangent_cursor += manifold3.TangentSize();
+
+ EXPECT_TRUE(manifold4.Plus(&x[ambient_cursor],
+ &delta[tangent_cursor],
+ &x_plus_delta_expected[ambient_cursor]));
+ ambient_cursor += manifold4.AmbientSize();
+ tangent_cursor += manifold4.TangentSize();
+
+ for (int i = 0; i < x.size(); ++i) {
+ EXPECT_EQ(x_plus_delta[i], x_plus_delta_expected[i]);
+ }
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(
+ manifold, x, delta, x_plus_delta, kTolerance);
+ }
+}
+
+TEST(ProductManifold, ZeroTangentSizeAndEuclidean) {
+ SubsetManifold subset_manifold(1, {0});
+ EuclideanManifold<2> euclidean_manifold;
+ ProductManifold<SubsetManifold, EuclideanManifold<2>> manifold(
+ subset_manifold, euclidean_manifold);
+ EXPECT_EQ(manifold.AmbientSize(), 3);
+ EXPECT_EQ(manifold.TangentSize(), 2);
+
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(3);
+ Vector y = Vector::Random(3);
+ y[0] = x[0];
+ Vector delta = Vector::Random(2);
+ Vector x_plus_delta = Vector::Zero(3);
+
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), x_plus_delta.data()));
+
+ EXPECT_EQ(x_plus_delta[0], x[0]);
+ EXPECT_EQ(x_plus_delta[1], x[1] + delta[0]);
+ EXPECT_EQ(x_plus_delta[2], x[2] + delta[1]);
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(ProductManifold, EuclideanAndZeroTangentSize) {
+ SubsetManifold subset_manifold(1, {0});
+ EuclideanManifold<2> euclidean_manifold;
+ ProductManifold<EuclideanManifold<2>, SubsetManifold> manifold(
+ euclidean_manifold, subset_manifold);
+ EXPECT_EQ(manifold.AmbientSize(), 3);
+ EXPECT_EQ(manifold.TangentSize(), 2);
+
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(3);
+ Vector y = Vector::Random(3);
+ y[2] = x[2];
+ Vector delta = Vector::Random(2);
+ Vector x_plus_delta = Vector::Zero(3);
+
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), x_plus_delta.data()));
+ EXPECT_EQ(x_plus_delta[0], x[0] + delta[0]);
+ EXPECT_EQ(x_plus_delta[1], x[1] + delta[1]);
+ EXPECT_EQ(x_plus_delta[2], x[2]);
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+struct CopyableManifold : ceres::Manifold {
+ CopyableManifold() = default;
+ CopyableManifold(const CopyableManifold&) = default;
+ // Do not care about copy-assignment
+ CopyableManifold& operator=(const CopyableManifold&) = delete;
+ // Not moveable
+ CopyableManifold(CopyableManifold&&) = delete;
+ CopyableManifold& operator=(CopyableManifold&&) = delete;
+
+ int AmbientSize() const override { return 3; }
+ int TangentSize() const override { return 2; }
+
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const override {
+ return true;
+ }
+
+ bool PlusJacobian(const double* x, double* jacobian) const override {
+ return true;
+ }
+
+ bool RightMultiplyByPlusJacobian(const double* x,
+ const int num_rows,
+ const double* ambient_matrix,
+ double* tangent_matrix) const override {
+ return true;
+ }
+
+ bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const override {
+ return true;
+ }
+
+ bool MinusJacobian(const double* x, double* jacobian) const override {
+ return true;
+ }
+};
+
+struct MoveableManifold : ceres::Manifold {
+ MoveableManifold() = default;
+ MoveableManifold(MoveableManifold&&) = default;
+ // Do not care about move-assignment
+ MoveableManifold& operator=(MoveableManifold&&) = delete;
+ // Not copyable
+ MoveableManifold(const MoveableManifold&) = delete;
+ MoveableManifold& operator=(const MoveableManifold&) = delete;
+
+ int AmbientSize() const override { return 3; }
+ int TangentSize() const override { return 2; }
+
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const override {
+ return true;
+ }
+
+ bool PlusJacobian(const double* x, double* jacobian) const override {
+ return true;
+ }
+
+ bool RightMultiplyByPlusJacobian(const double* x,
+ const int num_rows,
+ const double* ambient_matrix,
+ double* tangent_matrix) const override {
+ return true;
+ }
+
+ bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const override {
+ return true;
+ }
+
+ bool MinusJacobian(const double* x, double* jacobian) const override {
+ return true;
+ }
+};
+
+TEST(ProductManifold, CopyableOnly) {
+ ProductManifold<CopyableManifold, EuclideanManifold<3>> manifold1{
+ CopyableManifold{}, EuclideanManifold<3>{}};
+
+ CopyableManifold inner2;
+ ProductManifold<CopyableManifold, EuclideanManifold<3>> manifold2{
+ inner2, EuclideanManifold<3>{}};
+
+ EXPECT_EQ(manifold1.AmbientSize(), manifold2.AmbientSize());
+ EXPECT_EQ(manifold1.TangentSize(), manifold2.TangentSize());
+}
+
+TEST(ProductManifold, MoveableOnly) {
+ ProductManifold<MoveableManifold, EuclideanManifold<3>> manifold1{
+ MoveableManifold{}, EuclideanManifold<3>{}};
+
+ MoveableManifold inner2;
+ ProductManifold<MoveableManifold, EuclideanManifold<3>> manifold2{
+ std::move(inner2), EuclideanManifold<3>{}};
+
+ EXPECT_EQ(manifold1.AmbientSize(), manifold2.AmbientSize());
+ EXPECT_EQ(manifold1.TangentSize(), manifold2.TangentSize());
+}
+
+TEST(ProductManifold, CopyableOrMoveable) {
+ const CopyableManifold inner12{};
+ ProductManifold<MoveableManifold, CopyableManifold> manifold1{
+ MoveableManifold{}, inner12};
+
+ MoveableManifold inner21;
+ CopyableManifold inner22;
+ ProductManifold<MoveableManifold, CopyableManifold> manifold2{
+ std::move(inner21), inner22};
+
+ EXPECT_EQ(manifold1.AmbientSize(), manifold2.AmbientSize());
+ EXPECT_EQ(manifold1.TangentSize(), manifold2.TangentSize());
+}
+
+struct NonDefaultConstructibleManifold : ceres::Manifold {
+ NonDefaultConstructibleManifold(int, int) {}
+ int AmbientSize() const override { return 4; }
+ int TangentSize() const override { return 3; }
+
+ bool Plus(const double* x,
+ const double* delta,
+ double* x_plus_delta) const override {
+ return true;
+ }
+
+ bool PlusJacobian(const double* x, double* jacobian) const override {
+ return true;
+ }
+
+ bool RightMultiplyByPlusJacobian(const double* x,
+ const int num_rows,
+ const double* ambient_matrix,
+ double* tangent_matrix) const override {
+ return true;
+ }
+
+ bool Minus(const double* y,
+ const double* x,
+ double* y_minus_x) const override {
+ return true;
+ }
+
+ bool MinusJacobian(const double* x, double* jacobian) const override {
+ return true;
+ }
+};
+
+TEST(ProductManifold, NonDefaultConstructible) {
+ ProductManifold<NonDefaultConstructibleManifold, QuaternionManifold>
+ manifold1{NonDefaultConstructibleManifold{1, 2}, QuaternionManifold{}};
+ ProductManifold<QuaternionManifold, NonDefaultConstructibleManifold>
+ manifold2{QuaternionManifold{}, NonDefaultConstructibleManifold{1, 2}};
+
+ EXPECT_EQ(manifold1.AmbientSize(), manifold2.AmbientSize());
+ EXPECT_EQ(manifold1.TangentSize(), manifold2.TangentSize());
+}
+
+TEST(ProductManifold, DefaultConstructible) {
+ ProductManifold<EuclideanManifold<3>, SphereManifold<4>> manifold1;
+ ProductManifold<SphereManifold<4>, EuclideanManifold<3>> manifold2;
+
+ EXPECT_EQ(manifold1.AmbientSize(), manifold2.AmbientSize());
+ EXPECT_EQ(manifold1.TangentSize(), manifold2.TangentSize());
+}
+
+TEST(ProductManifold, Pointers) {
+ auto p = std::make_unique<QuaternionManifold>();
+ auto q = std::make_shared<EuclideanManifold<3>>();
+
+ ProductManifold<std::unique_ptr<Manifold>,
+ EuclideanManifold<3>,
+ std::shared_ptr<EuclideanManifold<3>>>
+ manifold1{
+ std::make_unique<QuaternionManifold>(), EuclideanManifold<3>{}, q};
+ ProductManifold<QuaternionManifold*,
+ EuclideanManifold<3>,
+ std::shared_ptr<EuclideanManifold<3>>>
+ manifold2{p.get(), EuclideanManifold<3>{}, q};
+
+ EXPECT_EQ(manifold1.AmbientSize(), manifold2.AmbientSize());
+ EXPECT_EQ(manifold1.TangentSize(), manifold2.TangentSize());
+}
+
+TEST(QuaternionManifold, PlusPiBy2) {
+ QuaternionManifold manifold;
+ Vector x = Vector::Zero(4);
+ x[0] = 1.0;
+
+ for (int i = 0; i < 3; ++i) {
+ Vector delta = Vector::Zero(3);
+ delta[i] = constants::pi / 2;
+ Vector x_plus_delta = Vector::Zero(4);
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), x_plus_delta.data()));
+
+ // Expect that the element corresponding to pi/2 is +/- 1. All other
+ // elements should be zero.
+ for (int j = 0; j < 4; ++j) {
+ if (i == (j - 1)) {
+ EXPECT_LT(std::abs(x_plus_delta[j]) - 1,
+ std::numeric_limits<double>::epsilon())
+ << "\ndelta = " << delta.transpose()
+ << "\nx_plus_delta = " << x_plus_delta.transpose()
+ << "\n expected the " << j
+ << "th element of x_plus_delta to be +/- 1.";
+ } else {
+ EXPECT_LT(std::abs(x_plus_delta[j]),
+ std::numeric_limits<double>::epsilon())
+ << "\ndelta = " << delta.transpose()
+ << "\nx_plus_delta = " << x_plus_delta.transpose()
+ << "\n expected the " << j << "th element of x_plus_delta to be 0.";
+ }
+ }
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(
+ manifold, x, delta, x_plus_delta, kTolerance);
+ }
+}
+
+// Computes the expected value of QuaternionManifold::Plus via functions in
+// rotation.h and compares it to the one computed by QuaternionManifold::Plus.
+MATCHER_P2(QuaternionManifoldPlusIsCorrectAt, x, delta, "") {
+  // This multiplication by 2 is needed because AngleAxisToQuaternion uses
+  // |delta|/2 as the angle of rotation, whereas the implementation of
+  // QuaternionManifold, for historical reasons, uses |delta|.
+ const Vector two_delta = delta * 2;
+ Vector delta_q(4);
+ AngleAxisToQuaternion(two_delta.data(), delta_q.data());
+
+ Vector expected(4);
+ QuaternionProduct(delta_q.data(), x.data(), expected.data());
+ Vector actual(4);
+ EXPECT_TRUE(arg.Plus(x.data(), delta.data(), actual.data()));
+
+ const double n = (actual - expected).norm();
+ const double d = expected.norm();
+ const double diffnorm = n / d;
+ if (diffnorm > kTolerance) {
+ *result_listener << "\nx: " << x.transpose()
+ << "\ndelta: " << delta.transpose()
+ << "\nexpected: " << expected.transpose()
+ << "\nactual: " << actual.transpose()
+ << "\ndiff: " << (expected - actual).transpose()
+ << "\ndiffnorm : " << diffnorm;
+ return false;
+ }
+ return true;
+}
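+
+// In symbols, the matcher above checks that
+//
+//   Plus(x, delta) == AngleAxisToQuaternion(2 * delta) * x
+//
+// where '*' is the quaternion (Hamilton) product in Ceres' w-x-y-z storage
+// order, up to a relative error of kTolerance.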
+
+static Vector RandomQuaternion() {
+ Vector x = Vector::Random(4);
+ x.normalize();
+ return x;
+}
+
+TEST(QuaternionManifold, GenericDelta) {
+ QuaternionManifold manifold;
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = RandomQuaternion();
+ const Vector y = RandomQuaternion();
+ Vector delta = Vector::Random(3);
+ EXPECT_THAT(manifold, QuaternionManifoldPlusIsCorrectAt(x, delta));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(QuaternionManifold, SmallDelta) {
+ QuaternionManifold manifold;
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = RandomQuaternion();
+ const Vector y = RandomQuaternion();
+ Vector delta = Vector::Random(3);
+ delta.normalize();
+ delta *= 1e-6;
+ EXPECT_THAT(manifold, QuaternionManifoldPlusIsCorrectAt(x, delta));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(QuaternionManifold, DeltaJustBelowPi) {
+ QuaternionManifold manifold;
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = RandomQuaternion();
+ const Vector y = RandomQuaternion();
+ Vector delta = Vector::Random(3);
+ delta.normalize();
+ delta *= (constants::pi - 1e-6);
+ EXPECT_THAT(manifold, QuaternionManifoldPlusIsCorrectAt(x, delta));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+// Computes the expected value of EigenQuaternionManifold::Plus using Eigen
+// and compares it to the one computed by EigenQuaternionManifold::Plus.
+MATCHER_P2(EigenQuaternionManifoldPlusIsCorrectAt, x, delta, "") {
+  // This multiplication by 2 is needed because AngleAxisToQuaternion uses
+  // |delta|/2 as the angle of rotation, whereas the implementation of
+  // EigenQuaternionManifold, for historical reasons, uses |delta|.
+ const Vector two_delta = delta * 2;
+ Vector delta_q(4);
+ AngleAxisToQuaternion(two_delta.data(), delta_q.data());
+ Eigen::Quaterniond delta_eigen_q(
+ delta_q[0], delta_q[1], delta_q[2], delta_q[3]);
+
+ Eigen::Map<const Eigen::Quaterniond> x_eigen_q(x.data());
+
+ Eigen::Quaterniond expected = delta_eigen_q * x_eigen_q;
+ double actual[4];
+ EXPECT_TRUE(arg.Plus(x.data(), delta.data(), actual));
+ Eigen::Map<Eigen::Quaterniond> actual_eigen_q(actual);
+
+ const double n = (actual_eigen_q.coeffs() - expected.coeffs()).norm();
+ const double d = expected.norm();
+ const double diffnorm = n / d;
+ if (diffnorm > kTolerance) {
+ *result_listener
+ << "\nx: " << x.transpose() << "\ndelta: " << delta.transpose()
+ << "\nexpected: " << expected.coeffs().transpose()
+ << "\nactual: " << actual_eigen_q.coeffs().transpose() << "\ndiff: "
+ << (expected.coeffs() - actual_eigen_q.coeffs()).transpose()
+ << "\ndiffnorm : " << diffnorm;
+ return false;
+ }
+ return true;
+}
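+
+// Note on storage orders (added for clarity): AngleAxisToQuaternion returns
+// the delta quaternion with the scalar first (w, x, y, z), which matches the
+// argument order of the Eigen::Quaterniond(w, x, y, z) constructor used
+// above, whereas Eigen::Map<const Eigen::Quaterniond> reads x directly in
+// Eigen's x-y-z-w memory layout.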
+
+TEST(EigenQuaternionManifold, GenericDelta) {
+ EigenQuaternionManifold manifold;
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = RandomQuaternion();
+ const Vector y = RandomQuaternion();
+ Vector delta = Vector::Random(3);
+ EXPECT_THAT(manifold, EigenQuaternionManifoldPlusIsCorrectAt(x, delta));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(EigenQuaternionManifold, SmallDelta) {
+ EigenQuaternionManifold manifold;
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = RandomQuaternion();
+ const Vector y = RandomQuaternion();
+ Vector delta = Vector::Random(3);
+ delta.normalize();
+ delta *= 1e-6;
+ EXPECT_THAT(manifold, EigenQuaternionManifoldPlusIsCorrectAt(x, delta));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(EigenQuaternionManifold, DeltaJustBelowPi) {
+ EigenQuaternionManifold manifold;
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = RandomQuaternion();
+ const Vector y = RandomQuaternion();
+ Vector delta = Vector::Random(3);
+ delta.normalize();
+ delta *= (constants::pi - 1e-6);
+ EXPECT_THAT(manifold, EigenQuaternionManifoldPlusIsCorrectAt(x, delta));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+using Eigen::Vector2d;
+using Eigen::Vector3d;
+using Eigen::Vector4d;
+using Vector6d = Eigen::Matrix<double, 6, 1>;
+using Vector8d = Eigen::Matrix<double, 8, 1>;
+
+TEST(SphereManifold, ZeroTest) {
+ Vector4d x{0.0, 0.0, 0.0, 1.0};
+ Vector3d delta = Vector3d::Zero();
+ Vector4d y = Vector4d::Zero();
+
+ SphereManifold<4> manifold;
+ manifold.Plus(x.data(), delta.data(), y.data());
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+}
+
+TEST(SphereManifold, NearZeroTest1) {
+ Vector4d x{1e-5, 1e-5, 1e-5, 1.0};
+ x.normalize();
+ Vector3d delta{0.0, 1.0, 0.0};
+ Vector4d y = Vector4d::Zero();
+
+ SphereManifold<4> manifold;
+ manifold.Plus(x.data(), delta.data(), y.data());
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+}
+
+TEST(SphereManifold, NearZeroTest2) {
+ Vector4d x{0.01, 0.0, 0.0, 0.0};
+ Vector3d delta{0.0, 1.0, 0.0};
+ Vector4d y = Vector4d::Zero();
+ SphereManifold<4> manifold;
+ manifold.Plus(x.data(), delta.data(), y.data());
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+}
+
+TEST(SphereManifold, Plus2DTest) {
+ Eigen::Vector2d x{0.0, 1.0};
+ SphereManifold<2> manifold;
+
+ {
+ double delta[1]{constants::pi / 4};
+ Eigen::Vector2d y = Eigen::Vector2d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta, y.data()));
+ const Eigen::Vector2d gtY(std::sqrt(2.0) / 2.0, std::sqrt(2.0) / 2.0);
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ double delta[1]{constants::pi / 2};
+ Eigen::Vector2d y = Eigen::Vector2d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta, y.data()));
+ const Eigen::Vector2d gtY = Eigen::Vector2d::UnitX();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ double delta[1]{constants::pi};
+ Eigen::Vector2d y = Eigen::Vector2d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta, y.data()));
+ const Eigen::Vector2d gtY = -Eigen::Vector2d::UnitY();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ double delta[1]{2.0 * constants::pi};
+ Eigen::Vector2d y = Eigen::Vector2d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta, y.data()));
+ const Eigen::Vector2d gtY = Eigen::Vector2d::UnitY();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+}
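+
+// A reading of the cases above (added for clarity): for SphereManifold<2> the
+// single tangent coordinate acts as an angle, i.e. |delta| is the arc length
+// travelled along the unit circle starting from x, with 2*pi wrapping back to
+// the starting point.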
+
+TEST(SphereManifold, Plus3DTest) {
+ Eigen::Vector3d x{0.0, 0.0, 1.0};
+ SphereManifold<3> manifold;
+
+ {
+ Eigen::Vector2d delta{constants::pi / 2, 0.0};
+ Eigen::Vector3d y = Eigen::Vector3d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ const Eigen::Vector3d gtY = Eigen::Vector3d::UnitX();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ Eigen::Vector2d delta{constants::pi, 0.0};
+ Eigen::Vector3d y = Eigen::Vector3d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ const Eigen::Vector3d gtY = -Eigen::Vector3d::UnitZ();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ Eigen::Vector2d delta{2.0 * constants::pi, 0.0};
+ Eigen::Vector3d y = Eigen::Vector3d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ const Eigen::Vector3d gtY = Eigen::Vector3d::UnitZ();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ Eigen::Vector2d delta{0.0, constants::pi / 2};
+ Eigen::Vector3d y = Eigen::Vector3d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ const Eigen::Vector3d gtY = Eigen::Vector3d::UnitY();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ Eigen::Vector2d delta{0.0, constants::pi};
+ Eigen::Vector3d y = Eigen::Vector3d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ const Eigen::Vector3d gtY = -Eigen::Vector3d::UnitZ();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ Eigen::Vector2d delta{0.0, 2.0 * constants::pi};
+ Eigen::Vector3d y = Eigen::Vector3d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ const Eigen::Vector3d gtY = Eigen::Vector3d::UnitZ();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ Eigen::Vector2d delta =
+ Eigen::Vector2d(1, 1).normalized() * constants::pi / 2;
+ Eigen::Vector3d y = Eigen::Vector3d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ const Eigen::Vector3d gtY(std::sqrt(2.0) / 2.0, std::sqrt(2.0) / 2.0, 0.0);
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ Eigen::Vector2d delta = Eigen::Vector2d(1, 1).normalized() * constants::pi;
+ Eigen::Vector3d y = Eigen::Vector3d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ const Eigen::Vector3d gtY = -Eigen::Vector3d::UnitZ();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+}
+
+TEST(SphereManifold, Minus2DTest) {
+ Eigen::Vector2d x{1.0, 0.0};
+ SphereManifold<2> manifold;
+
+ {
+ double delta[1];
+ const Eigen::Vector2d y(std::sqrt(2.0) / 2.0, std::sqrt(2.0) / 2.0);
+ const double gtDelta{constants::pi / 4};
+ EXPECT_TRUE(manifold.Minus(y.data(), x.data(), delta));
+ EXPECT_LT(std::abs(delta[0] - gtDelta), kTolerance);
+ }
+
+ {
+ double delta[1];
+ const Eigen::Vector2d y(-1, 0);
+ const double gtDelta{constants::pi};
+ EXPECT_TRUE(manifold.Minus(y.data(), x.data(), delta));
+ EXPECT_LT(std::abs(delta[0] - gtDelta), kTolerance);
+ }
+}
+
+TEST(SphereManifold, Minus3DTest) {
+ Eigen::Vector3d x{1.0, 0.0, 0.0};
+ SphereManifold<3> manifold;
+
+ {
+ Eigen::Vector2d delta;
+ const Eigen::Vector3d y(std::sqrt(2.0) / 2.0, 0.0, std::sqrt(2.0) / 2.0);
+ const Eigen::Vector2d gtDelta(constants::pi / 4, 0.0);
+ EXPECT_TRUE(manifold.Minus(y.data(), x.data(), delta.data()));
+ EXPECT_LT((delta - gtDelta).norm(), kTolerance);
+ }
+
+ {
+ Eigen::Vector2d delta;
+ const Eigen::Vector3d y(-1, 0, 0);
+ const Eigen::Vector2d gtDelta(0.0, constants::pi);
+ EXPECT_TRUE(manifold.Minus(y.data(), x.data(), delta.data()));
+ EXPECT_LT((delta - gtDelta).norm(), kTolerance);
+ }
+}
+
+TEST(SphereManifold, DeathTests) {
+ EXPECT_DEATH_IF_SUPPORTED(SphereManifold<Eigen::Dynamic> x(1), "size");
+}
+
+TEST(SphereManifold, NormalFunctionTest) {
+ SphereManifold<4> manifold;
+ EXPECT_EQ(manifold.AmbientSize(), 4);
+ EXPECT_EQ(manifold.TangentSize(), 3);
+
+ Vector zero_tangent = Vector::Zero(manifold.TangentSize());
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(manifold.AmbientSize());
+ Vector y = Vector::Random(manifold.AmbientSize());
+ Vector delta = Vector::Random(manifold.TangentSize());
+
+ if (x.norm() == 0.0 || y.norm() == 0.0) {
+ continue;
+ }
+
+    // x and y need to have the same length.
+ y *= x.norm() / y.norm();
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(SphereManifold, NormalFunctionTestDynamic) {
+ SphereManifold<ceres::DYNAMIC> manifold(5);
+ EXPECT_EQ(manifold.AmbientSize(), 5);
+ EXPECT_EQ(manifold.TangentSize(), 4);
+
+ Vector zero_tangent = Vector::Zero(manifold.TangentSize());
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ const Vector x = Vector::Random(manifold.AmbientSize());
+ Vector y = Vector::Random(manifold.AmbientSize());
+ Vector delta = Vector::Random(manifold.TangentSize());
+
+ if (x.norm() == 0.0 || y.norm() == 0.0) {
+ continue;
+ }
+
+    // x and y need to have the same length.
+ y *= x.norm() / y.norm();
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(LineManifold, ZeroTest3D) {
+ const Vector6d x = Vector6d::Unit(5);
+ const Vector4d delta = Vector4d::Zero();
+ Vector6d y = Vector6d::Zero();
+
+ LineManifold<3> manifold;
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+}
+
+TEST(LineManifold, ZeroTest4D) {
+ const Vector8d x = Vector8d::Unit(7);
+ const Vector6d delta = Vector6d::Zero();
+ Vector8d y = Vector8d::Zero();
+
+ LineManifold<4> manifold;
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+}
+
+TEST(LineManifold, ZeroOriginPointTest3D) {
+ const Vector6d x = Vector6d::Unit(5);
+ Vector4d delta;
+ delta << 0.0, 0.0, 1.0, 2.0;
+ Vector6d y = Vector6d::Zero();
+
+ LineManifold<3> manifold;
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+}
+
+TEST(LineManifold, ZeroOriginPointTest4D) {
+ const Vector8d x = Vector8d::Unit(7);
+ Vector6d delta;
+ delta << 0.0, 0.0, 0.0, 0.5, 1.0, 1.5;
+ Vector8d y = Vector8d::Zero();
+
+ LineManifold<4> manifold;
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+}
+
+TEST(LineManifold, ZeroDirTest3D) {
+ Vector6d x = Vector6d::Unit(5);
+ Vector4d delta;
+ delta << 3.0, 2.0, 0.0, 0.0;
+ Vector6d y = Vector6d::Zero();
+
+ LineManifold<3> manifold;
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+}
+
+TEST(LineManifold, ZeroDirTest4D) {
+ Vector8d x = Vector8d::Unit(7);
+ Vector6d delta;
+ delta << 3.0, 2.0, 1.0, 0.0, 0.0, 0.0;
+ Vector8d y = Vector8d::Zero();
+
+ LineManifold<4> manifold;
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+}
+
+TEST(LineManifold, Plus) {
+ Vector6d x = Vector6d::Unit(5);
+ LineManifold<3> manifold;
+
+ {
+ Vector4d delta{0.0, 2.0, constants::pi / 2, 0.0};
+ Vector6d y = Vector6d::Random();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ Vector6d gtY;
+ gtY << 2.0 * Vector3d::UnitY(), Vector3d::UnitX();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ Vector4d delta{3.0, 0.0, 0.0, constants::pi / 2};
+ Vector6d y = Vector6d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ Vector6d gtY;
+ gtY << 3.0 * Vector3d::UnitX(), Vector3d::UnitY();
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+
+ {
+ Vector4d delta;
+ delta << Vector2d(1.0, 2.0),
+ Vector2d(1, 1).normalized() * constants::pi / 2;
+ Vector6d y = Vector6d::Zero();
+ EXPECT_TRUE(manifold.Plus(x.data(), delta.data(), y.data()));
+ Vector6d gtY;
+ gtY << Vector3d(1.0, 2.0, 0.0),
+ Vector3d(std::sqrt(2.0) / 2.0, std::sqrt(2.0) / 2.0, 0.0);
+ EXPECT_LT((y - gtY).norm(), kTolerance);
+ }
+}
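+
+// A reading of the cases above (hedged, based on these tests only): for
+// LineManifold<3> the six ambient parameters are an origin point followed by
+// a unit direction, the first two tangent coordinates translate the origin
+// within the plane orthogonal to the direction, and the last two rotate the
+// direction as in SphereManifold.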
+
+TEST(LineManifold, NormalFunctionTest) {
+ LineManifold<3> manifold;
+ EXPECT_EQ(manifold.AmbientSize(), 6);
+ EXPECT_EQ(manifold.TangentSize(), 4);
+
+ Vector zero_tangent = Vector::Zero(manifold.TangentSize());
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ Vector x = Vector::Random(manifold.AmbientSize());
+ Vector y = Vector::Random(manifold.AmbientSize());
+ Vector delta = Vector::Random(manifold.TangentSize());
+
+ if (x.tail<3>().norm() == 0.0) {
+ continue;
+ }
+
+ x.tail<3>().normalize();
+ manifold.Plus(x.data(), delta.data(), y.data());
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+TEST(LineManifold, NormalFunctionTestDynamic) {
+ LineManifold<ceres::DYNAMIC> manifold(3);
+ EXPECT_EQ(manifold.AmbientSize(), 6);
+ EXPECT_EQ(manifold.TangentSize(), 4);
+
+ Vector zero_tangent = Vector::Zero(manifold.TangentSize());
+ for (int trial = 0; trial < kNumTrials; ++trial) {
+ Vector x = Vector::Random(manifold.AmbientSize());
+ Vector y = Vector::Random(manifold.AmbientSize());
+ Vector delta = Vector::Random(manifold.TangentSize());
+
+ if (x.tail<3>().norm() == 0.0) {
+ continue;
+ }
+
+ x.tail<3>().normalize();
+ manifold.Plus(x.data(), delta.data(), y.data());
+
+ EXPECT_THAT_MANIFOLD_INVARIANTS_HOLD(manifold, x, delta, y, kTolerance);
+ }
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/map_util.h b/internal/ceres/map_util.h
index 6e310f8..aee2bf5 100644
--- a/internal/ceres/map_util.h
+++ b/internal/ceres/map_util.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,7 +35,7 @@
#include <utility>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "glog/logging.h"
namespace ceres {
@@ -121,7 +121,7 @@
void InsertOrDie(Collection* const collection,
const typename Collection::value_type::first_type& key,
const typename Collection::value_type::second_type& data) {
- typedef typename Collection::value_type value_type;
+ using value_type = typename Collection::value_type;
CHECK(collection->insert(value_type(key, data)).second)
<< "duplicate key: " << key;
}
diff --git a/internal/ceres/miniglog/glog/logging.h b/internal/ceres/miniglog/glog/logging.h
index 98d2b68..5604bd4 100644
--- a/internal/ceres/miniglog/glog/logging.h
+++ b/internal/ceres/miniglog/glog/logging.h
@@ -69,7 +69,7 @@
// The following CHECK macros are defined:
//
// CHECK(condition) - fails if condition is false and logs condition.
-// CHECK_NOTNULL(variable) - fails if the variable is NULL.
+// CHECK_NOTNULL(variable) - fails if the variable is nullptr.
//
// The following binary check macros are also defined :
//
@@ -105,12 +105,8 @@
#include <string>
#include <vector>
-// For appropriate definition of CERES_EXPORT macro.
-// clang-format off
-#include "ceres/internal/port.h"
-// clang-format on
-
#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
// Log severity level constants.
// clang-format off
@@ -124,7 +120,7 @@
namespace google {
-typedef int LogSeverity;
+using LogSeverity = int;
// clang-format off
const int INFO = ::INFO;
const int WARNING = ::WARNING;
@@ -138,7 +134,7 @@
// This implementation is not thread safe.
class CERES_EXPORT LogSink {
public:
- virtual ~LogSink() {}
+ virtual ~LogSink() = default;
virtual void send(LogSeverity severity,
const char* full_filename,
const char* base_filename,
@@ -152,7 +148,7 @@
// Global set of log sinks. The actual object is defined in logging.cc.
extern CERES_EXPORT std::set<LogSink*> log_sinks_global;
-inline void InitGoogleLogging(char* argv) {
+inline void InitGoogleLogging(const char* /* argv */) {
// Do nothing; this is ignored.
}
@@ -294,7 +290,6 @@
// is not used" and "statement has no effect".
class CERES_EXPORT LoggerVoidify {
public:
- LoggerVoidify() {}
// This has to be an operator with a precedence lower than << but
// higher than ?:
void operator&(const std::ostream& s) {}
@@ -407,7 +402,7 @@
// and smart pointers.
template <typename T>
T& CheckNotNullCommon(const char* file, int line, const char* names, T& t) {
- if (t == NULL) {
+ if (t == nullptr) {
LogMessageFatal(file, line, std::string(names));
}
return t;
@@ -425,17 +420,17 @@
// Check that a pointer is not null.
#define CHECK_NOTNULL(val) \
- CheckNotNull(__FILE__, __LINE__, "'" #val "' Must be non NULL", (val))
+ CheckNotNull(__FILE__, __LINE__, "'" #val "' Must be non nullptr", (val))
#ifndef NDEBUG
// Debug only version of CHECK_NOTNULL
#define DCHECK_NOTNULL(val) \
- CheckNotNull(__FILE__, __LINE__, "'" #val "' Must be non NULL", (val))
+ CheckNotNull(__FILE__, __LINE__, "'" #val "' Must be non nullptr", (val))
#else
// Optimized version - generates no code.
#define DCHECK_NOTNULL(val) \
if (false) \
- CheckNotNull(__FILE__, __LINE__, "'" #val "' Must be non NULL", (val))
+ CheckNotNull(__FILE__, __LINE__, "'" #val "' Must be non nullptr", (val))
#endif // NDEBUG
#include "ceres/internal/reenable_warnings.h"
diff --git a/internal/ceres/minimizer.cc b/internal/ceres/minimizer.cc
index b96e0c9..5317388 100644
--- a/internal/ceres/minimizer.cc
+++ b/internal/ceres/minimizer.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,28 +30,29 @@
#include "ceres/minimizer.h"
+#include <memory>
+
#include "ceres/line_search_minimizer.h"
#include "ceres/trust_region_minimizer.h"
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-Minimizer* Minimizer::Create(MinimizerType minimizer_type) {
+std::unique_ptr<Minimizer> Minimizer::Create(MinimizerType minimizer_type) {
if (minimizer_type == TRUST_REGION) {
- return new TrustRegionMinimizer;
+ return std::make_unique<TrustRegionMinimizer>();
}
if (minimizer_type == LINE_SEARCH) {
- return new LineSearchMinimizer;
+ return std::make_unique<LineSearchMinimizer>();
}
LOG(FATAL) << "Unknown minimizer_type: " << minimizer_type;
- return NULL;
+ return nullptr;
}
-Minimizer::~Minimizer() {}
+Minimizer::~Minimizer() = default;
bool Minimizer::RunCallbacks(const Minimizer::Options& options,
const IterationSummary& iteration_summary,
@@ -87,5 +88,4 @@
return false;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/minimizer.h b/internal/ceres/minimizer.h
index 246550d..be7290e 100644
--- a/internal/ceres/minimizer.h
+++ b/internal/ceres/minimizer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,21 +35,22 @@
#include <string>
#include <vector>
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/iteration_callback.h"
#include "ceres/solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class Evaluator;
class SparseMatrix;
class TrustRegionStrategy;
class CoordinateDescentMinimizer;
class LinearSolver;
+class ContextImpl;
// Interface for non-linear least squares solvers.
-class CERES_EXPORT_INTERNAL Minimizer {
+class CERES_NO_EXPORT Minimizer {
public:
// Options struct to control the behaviour of the Minimizer. Please
// see solver.h for detailed information about the meaning and
@@ -113,6 +114,7 @@
int max_num_iterations;
double max_solver_time_in_seconds;
int num_threads;
+ ContextImpl* context = nullptr;
// Number of times the linear solver should be retried in case of
// numerical failure. The retries are done by exponentially scaling up
@@ -178,7 +180,7 @@
std::shared_ptr<CoordinateDescentMinimizer> inner_iteration_minimizer;
};
- static Minimizer* Create(MinimizerType minimizer_type);
+ static std::unique_ptr<Minimizer> Create(MinimizerType minimizer_type);
static bool RunCallbacks(const Options& options,
const IterationSummary& iteration_summary,
Solver::Summary* summary);
@@ -192,7 +194,8 @@
Solver::Summary* summary) = 0;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_MINIMIZER_H_
diff --git a/internal/ceres/minimizer_test.cc b/internal/ceres/minimizer_test.cc
index 3de4abe..10fb9e5 100644
--- a/internal/ceres/minimizer_test.cc
+++ b/internal/ceres/minimizer_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,12 +34,10 @@
#include "ceres/solver.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class FakeIterationCallback : public IterationCallback {
public:
- virtual ~FakeIterationCallback() {}
CallbackReturnType operator()(const IterationSummary& summary) final {
return SOLVER_CONTINUE;
}
@@ -62,7 +60,6 @@
class AbortingIterationCallback : public IterationCallback {
public:
- virtual ~AbortingIterationCallback() {}
CallbackReturnType operator()(const IterationSummary& summary) final {
return SOLVER_ABORT;
}
@@ -80,7 +77,6 @@
class SucceedingIterationCallback : public IterationCallback {
public:
- virtual ~SucceedingIterationCallback() {}
CallbackReturnType operator()(const IterationSummary& summary) final {
return SOLVER_TERMINATE_SUCCESSFULLY;
}
@@ -97,5 +93,4 @@
"User callback returned SOLVER_TERMINATE_SUCCESSFULLY.");
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/normal_prior.cc b/internal/ceres/normal_prior.cc
index 4a62132..c8a7a27 100644
--- a/internal/ceres/normal_prior.cc
+++ b/internal/ceres/normal_prior.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,6 +31,7 @@
#include "ceres/normal_prior.h"
#include <cstddef>
+#include <utility>
#include <vector>
#include "ceres/internal/eigen.h"
@@ -39,7 +40,7 @@
namespace ceres {
-NormalPrior::NormalPrior(const Matrix& A, const Vector& b) : A_(A), b_(b) {
+NormalPrior::NormalPrior(const Matrix& A, Vector b) : A_(A), b_(std::move(b)) {
CHECK_GT(b_.rows(), 0);
CHECK_GT(A_.rows(), 0);
CHECK_EQ(b_.rows(), A.cols());
@@ -54,9 +55,9 @@
VectorRef r(residuals, num_residuals());
// The following line should read
// r = A_ * (p - b_);
- // The extra eval is to get around a bug in the eigen library.
+ // The extra eval is to get around a bug in the Eigen library.
r = A_ * (p - b_).eval();
- if ((jacobians != NULL) && (jacobians[0] != NULL)) {
+ if ((jacobians != nullptr) && (jacobians[0] != nullptr)) {
MatrixRef(jacobians[0], num_residuals(), parameter_block_sizes()[0]) = A_;
}
return true;
diff --git a/internal/ceres/normal_prior_test.cc b/internal/ceres/normal_prior_test.cc
index 518c18e..369ff27 100644
--- a/internal/ceres/normal_prior_test.cc
+++ b/internal/ceres/normal_prior_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,46 +30,31 @@
#include "ceres/normal_prior.h"
+#include <algorithm>
#include <cstddef>
+#include <random>
#include "ceres/internal/eigen.h"
-#include "ceres/random.h"
#include "gtest/gtest.h"
namespace ceres {
namespace internal {
-namespace {
-
-void RandomVector(Vector* v) {
- for (int r = 0; r < v->rows(); ++r) (*v)[r] = 2 * RandDouble() - 1;
-}
-
-void RandomMatrix(Matrix* m) {
- for (int r = 0; r < m->rows(); ++r) {
- for (int c = 0; c < m->cols(); ++c) {
- (*m)(r, c) = 2 * RandDouble() - 1;
- }
- }
-}
-
-} // namespace
-
TEST(NormalPriorTest, ResidualAtRandomPosition) {
- srand(5);
-
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> distribution(-1.0, 1.0);
+ auto randu = [&distribution, &prng] { return distribution(prng); };
for (int num_rows = 1; num_rows < 5; ++num_rows) {
for (int num_cols = 1; num_cols < 5; ++num_cols) {
Vector b(num_cols);
- RandomVector(&b);
-
+ b.setRandom();
Matrix A(num_rows, num_cols);
- RandomMatrix(&A);
+ A.setRandom();
- double* x = new double[num_cols];
- for (int i = 0; i < num_cols; ++i) x[i] = 2 * RandDouble() - 1;
+ auto* x = new double[num_cols];
+ std::generate_n(x, num_cols, randu);
- double* jacobian = new double[num_rows * num_cols];
+ auto* jacobian = new double[num_rows * num_cols];
Vector residuals(num_rows);
NormalPrior prior(A, b);
@@ -92,21 +77,21 @@
}
TEST(NormalPriorTest, ResidualAtRandomPositionNullJacobians) {
- srand(5);
-
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> distribution(-1.0, 1.0);
+ auto randu = [&distribution, &prng] { return distribution(prng); };
for (int num_rows = 1; num_rows < 5; ++num_rows) {
for (int num_cols = 1; num_cols < 5; ++num_cols) {
Vector b(num_cols);
- RandomVector(&b);
-
+ b.setRandom();
Matrix A(num_rows, num_cols);
- RandomMatrix(&A);
+ A.setRandom();
- double* x = new double[num_cols];
- for (int i = 0; i < num_cols; ++i) x[i] = 2 * RandDouble() - 1;
+ auto* x = new double[num_cols];
+ std::generate_n(x, num_cols, randu);
double* jacobians[1];
- jacobians[0] = NULL;
+ jacobians[0] = nullptr;
Vector residuals(num_rows);
@@ -118,7 +103,7 @@
(residuals - A * (VectorRef(x, num_cols) - b)).squaredNorm();
EXPECT_NEAR(residual_diff_norm, 0, 1e-10);
- prior.Evaluate(&x, residuals.data(), NULL);
+ prior.Evaluate(&x, residuals.data(), nullptr);
// Compare the norm of the residual
residual_diff_norm =
(residuals - A * (VectorRef(x, num_cols) - b)).squaredNorm();
diff --git a/internal/ceres/numeric_diff_cost_function_test.cc b/internal/ceres/numeric_diff_cost_function_test.cc
index a5f7a15..0c9074a 100644
--- a/internal/ceres/numeric_diff_cost_function_test.cc
+++ b/internal/ceres/numeric_diff_cost_function_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2024 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,6 +35,7 @@
#include <array>
#include <cmath>
#include <memory>
+#include <random>
#include <string>
#include <vector>
@@ -45,104 +46,105 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST(NumericDiffCostFunction, EasyCaseFunctorCentralDifferences) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<EasyFunctor,
- CENTRAL,
- 3, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(new EasyFunctor));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<EasyFunctor,
+ CENTRAL,
+ 3, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(new EasyFunctor);
EasyFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, CENTRAL);
}
TEST(NumericDiffCostFunction, EasyCaseFunctorForwardDifferences) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<EasyFunctor,
- FORWARD,
- 3, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(new EasyFunctor));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<EasyFunctor,
+ FORWARD,
+ 3, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(new EasyFunctor);
EasyFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, FORWARD);
}
TEST(NumericDiffCostFunction, EasyCaseFunctorRidders) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<EasyFunctor,
- RIDDERS,
- 3, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(new EasyFunctor));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<EasyFunctor,
+ RIDDERS,
+ 3, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(new EasyFunctor);
EasyFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, RIDDERS);
}
TEST(NumericDiffCostFunction, EasyCaseCostFunctionCentralDifferences) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(
- new NumericDiffCostFunction<EasyCostFunction,
- CENTRAL,
- 3, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(new EasyCostFunction, TAKE_OWNERSHIP));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<EasyCostFunction,
+ CENTRAL,
+ 3, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(new EasyCostFunction,
+ TAKE_OWNERSHIP);
EasyFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, CENTRAL);
}
TEST(NumericDiffCostFunction, EasyCaseCostFunctionForwardDifferences) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(
- new NumericDiffCostFunction<EasyCostFunction,
- FORWARD,
- 3, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(new EasyCostFunction, TAKE_OWNERSHIP));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<EasyCostFunction,
+ FORWARD,
+ 3, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(new EasyCostFunction,
+ TAKE_OWNERSHIP);
EasyFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, FORWARD);
}
TEST(NumericDiffCostFunction, EasyCaseCostFunctionRidders) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(
- new NumericDiffCostFunction<EasyCostFunction,
- RIDDERS,
- 3, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(new EasyCostFunction, TAKE_OWNERSHIP));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<EasyCostFunction,
+ RIDDERS,
+ 3, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(new EasyCostFunction,
+ TAKE_OWNERSHIP);
+
EasyFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, RIDDERS);
}
TEST(NumericDiffCostFunction, TranscendentalCaseFunctorCentralDifferences) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<TranscendentalFunctor,
- CENTRAL,
- 2, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(new TranscendentalFunctor));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<TranscendentalFunctor,
+ CENTRAL,
+ 2, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(new TranscendentalFunctor);
TranscendentalFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, CENTRAL);
}
TEST(NumericDiffCostFunction, TranscendentalCaseFunctorForwardDifferences) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<TranscendentalFunctor,
- FORWARD,
- 2, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(new TranscendentalFunctor));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<TranscendentalFunctor,
+ FORWARD,
+ 2, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(new TranscendentalFunctor);
+
TranscendentalFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, FORWARD);
}
@@ -153,43 +155,43 @@
// Using a smaller initial step size to overcome oscillatory function
// behavior.
options.ridders_relative_initial_step_size = 1e-3;
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<TranscendentalFunctor,
+ RIDDERS,
+ 2, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(
+ new TranscendentalFunctor, TAKE_OWNERSHIP, 2, options);
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<TranscendentalFunctor,
- RIDDERS,
- 2, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(
- new TranscendentalFunctor, TAKE_OWNERSHIP, 2, options));
TranscendentalFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, RIDDERS);
}
TEST(NumericDiffCostFunction,
TranscendentalCaseCostFunctionCentralDifferences) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<TranscendentalCostFunction,
- CENTRAL,
- 2, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(
- new TranscendentalCostFunction, TAKE_OWNERSHIP));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<TranscendentalCostFunction,
+ CENTRAL,
+ 2, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(
+ new TranscendentalCostFunction, TAKE_OWNERSHIP);
TranscendentalFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, CENTRAL);
}
TEST(NumericDiffCostFunction,
TranscendentalCaseCostFunctionForwardDifferences) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<TranscendentalCostFunction,
- FORWARD,
- 2, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(
- new TranscendentalCostFunction, TAKE_OWNERSHIP));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<TranscendentalCostFunction,
+ FORWARD,
+ 2, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(
+ new TranscendentalCostFunction, TAKE_OWNERSHIP);
TranscendentalFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, FORWARD);
}
@@ -201,14 +203,14 @@
// behavior.
options.ridders_relative_initial_step_size = 1e-3;
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<TranscendentalCostFunction,
- RIDDERS,
- 2, // number of residuals
- 5, // size of x1
- 5 // size of x2
- >(
- new TranscendentalCostFunction, TAKE_OWNERSHIP, 2, options));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<TranscendentalCostFunction,
+ RIDDERS,
+ 2, // number of residuals
+ 5, // size of x1
+ 5 // size of x2
+ >>(
+ new TranscendentalCostFunction, TAKE_OWNERSHIP, 2, options);
TranscendentalFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, RIDDERS);
}
@@ -230,122 +232,123 @@
// templates are instantiated for various shapes of the Jacobian
// matrix.
TEST(NumericDiffCostFunction, EigenRowMajorColMajorTest) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(
- new NumericDiffCostFunction<SizeTestingCostFunction<1, 1>, CENTRAL, 1, 1>(
- new SizeTestingCostFunction<1, 1>, ceres::TAKE_OWNERSHIP));
+ std::unique_ptr<CostFunction> cost_function = std::make_unique<
+ NumericDiffCostFunction<SizeTestingCostFunction<1, 1>, CENTRAL, 1, 1>>(
+ new SizeTestingCostFunction<1, 1>, ceres::TAKE_OWNERSHIP);
- cost_function.reset(
- new NumericDiffCostFunction<SizeTestingCostFunction<2, 1>, CENTRAL, 2, 1>(
- new SizeTestingCostFunction<2, 1>, ceres::TAKE_OWNERSHIP));
+ cost_function = std::make_unique<
+ NumericDiffCostFunction<SizeTestingCostFunction<2, 1>, CENTRAL, 2, 1>>(
+ new SizeTestingCostFunction<2, 1>, ceres::TAKE_OWNERSHIP);
- cost_function.reset(
- new NumericDiffCostFunction<SizeTestingCostFunction<1, 2>, CENTRAL, 1, 2>(
- new SizeTestingCostFunction<1, 2>, ceres::TAKE_OWNERSHIP));
+ cost_function = std::make_unique<
+ NumericDiffCostFunction<SizeTestingCostFunction<1, 2>, CENTRAL, 1, 2>>(
+ new SizeTestingCostFunction<1, 2>, ceres::TAKE_OWNERSHIP);
- cost_function.reset(
- new NumericDiffCostFunction<SizeTestingCostFunction<2, 2>, CENTRAL, 2, 2>(
- new SizeTestingCostFunction<2, 2>, ceres::TAKE_OWNERSHIP));
+ cost_function = std::make_unique<
+ NumericDiffCostFunction<SizeTestingCostFunction<2, 2>, CENTRAL, 2, 2>>(
+ new SizeTestingCostFunction<2, 2>, ceres::TAKE_OWNERSHIP);
- cost_function.reset(
- new NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 1, 1>(
- new EasyFunctor, TAKE_OWNERSHIP, 1));
+ cost_function = std::make_unique<
+ NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 1, 1>>(
+ new EasyFunctor, TAKE_OWNERSHIP, 1);
- cost_function.reset(
- new NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 1, 1>(
- new EasyFunctor, TAKE_OWNERSHIP, 2));
+ cost_function = std::make_unique<
+ NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 1, 1>>(
+ new EasyFunctor, TAKE_OWNERSHIP, 2);
- cost_function.reset(
- new NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 1, 2>(
- new EasyFunctor, TAKE_OWNERSHIP, 1));
+ cost_function = std::make_unique<
+ NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 1, 2>>(
+ new EasyFunctor, TAKE_OWNERSHIP, 1);
- cost_function.reset(
- new NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 1, 2>(
- new EasyFunctor, TAKE_OWNERSHIP, 2));
+ cost_function = std::make_unique<
+ NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 1, 2>>(
+ new EasyFunctor, TAKE_OWNERSHIP, 2);
- cost_function.reset(
- new NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 2, 1>(
- new EasyFunctor, TAKE_OWNERSHIP, 1));
+ cost_function = std::make_unique<
+ NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 2, 1>>(
+ new EasyFunctor, TAKE_OWNERSHIP, 1);
- cost_function.reset(
- new NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 2, 1>(
- new EasyFunctor, TAKE_OWNERSHIP, 2));
+ cost_function = std::make_unique<
+ NumericDiffCostFunction<EasyFunctor, CENTRAL, ceres::DYNAMIC, 2, 1>>(
+ new EasyFunctor, TAKE_OWNERSHIP, 2);
}
TEST(NumericDiffCostFunction,
EasyCaseFunctorCentralDifferencesAndDynamicNumResiduals) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(
- new NumericDiffCostFunction<EasyFunctor,
- CENTRAL,
- ceres::DYNAMIC,
- 5, // size of x1
- 5 // size of x2
- >(new EasyFunctor, TAKE_OWNERSHIP, 3));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<EasyFunctor,
+ CENTRAL,
+ ceres::DYNAMIC,
+ 5, // size of x1
+ 5 // size of x2
+ >>(
+ new EasyFunctor, TAKE_OWNERSHIP, 3);
EasyFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function, CENTRAL);
}
TEST(NumericDiffCostFunction, ExponentialFunctorRidders) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<ExponentialFunctor,
- RIDDERS,
- 1, // number of residuals
- 1 // size of x1
- >(new ExponentialFunctor));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<ExponentialFunctor,
+ RIDDERS,
+ 1, // number of residuals
+ 1 // size of x1
+ >>(new ExponentialFunctor);
ExponentialFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function);
}
TEST(NumericDiffCostFunction, ExponentialCostFunctionRidders) {
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(
- new NumericDiffCostFunction<ExponentialCostFunction,
- RIDDERS,
- 1, // number of residuals
- 1 // size of x1
- >(new ExponentialCostFunction));
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<ExponentialCostFunction,
+ RIDDERS,
+ 1, // number of residuals
+ 1 // size of x1
+ >>(new ExponentialCostFunction);
ExponentialFunctor functor;
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function);
}
TEST(NumericDiffCostFunction, RandomizedFunctorRidders) {
- std::unique_ptr<CostFunction> cost_function;
+ std::mt19937 prng;
NumericDiffOptions options;
// Larger initial step size is chosen to produce robust results in the
// presence of random noise.
options.ridders_relative_initial_step_size = 10.0;
- cost_function.reset(new NumericDiffCostFunction<RandomizedFunctor,
- RIDDERS,
- 1, // number of residuals
- 1 // size of x1
- >(
- new RandomizedFunctor(kNoiseFactor, kRandomSeed),
- TAKE_OWNERSHIP,
- 1,
- options));
- RandomizedFunctor functor(kNoiseFactor, kRandomSeed);
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<RandomizedFunctor,
+ RIDDERS,
+ 1, // number of residuals
+ 1 // size of x1
+ >>(
+ new RandomizedFunctor(kNoiseFactor, prng),
+ TAKE_OWNERSHIP,
+ 1,
+ options);
+ RandomizedFunctor functor(kNoiseFactor, prng);
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function);
}
TEST(NumericDiffCostFunction, RandomizedCostFunctionRidders) {
- std::unique_ptr<CostFunction> cost_function;
+ std::mt19937 prng;
NumericDiffOptions options;
// Larger initial step size is chosen to produce robust results in the
// presence of random noise.
options.ridders_relative_initial_step_size = 10.0;
- cost_function.reset(new NumericDiffCostFunction<RandomizedCostFunction,
- RIDDERS,
- 1, // number of residuals
- 1 // size of x1
- >(
- new RandomizedCostFunction(kNoiseFactor, kRandomSeed),
- TAKE_OWNERSHIP,
- 1,
- options));
- RandomizedFunctor functor(kNoiseFactor, kRandomSeed);
+ auto cost_function =
+ std::make_unique<NumericDiffCostFunction<RandomizedCostFunction,
+ RIDDERS,
+ 1, // number of residuals
+ 1 // size of x1
+ >>(
+ new RandomizedCostFunction(kNoiseFactor, prng),
+ TAKE_OWNERSHIP,
+ 1,
+ options);
+
+ RandomizedFunctor functor(kNoiseFactor, prng);
functor.ExpectCostFunctionEvaluationIsNearlyCorrect(*cost_function);
}
@@ -363,15 +366,15 @@
double* parameters[] = {&parameter};
double* jacobians[] = {jacobian};
- std::unique_ptr<CostFunction> cost_function(
- new NumericDiffCostFunction<OnlyFillsOneOutputFunctor, CENTRAL, 2, 1>(
- new OnlyFillsOneOutputFunctor));
+ auto cost_function = std::make_unique<
+ NumericDiffCostFunction<OnlyFillsOneOutputFunctor, CENTRAL, 2, 1>>(
+ new OnlyFillsOneOutputFunctor);
InvalidateArray(2, jacobian);
InvalidateArray(2, residuals);
EXPECT_TRUE(cost_function->Evaluate(parameters, residuals, jacobians));
EXPECT_FALSE(IsArrayValid(2, residuals));
InvalidateArray(2, residuals);
- EXPECT_TRUE(cost_function->Evaluate(parameters, residuals, NULL));
+ EXPECT_TRUE(cost_function->Evaluate(parameters, residuals, nullptr));
// We are only testing residuals here, because the Jacobians are
// computed using finite differencing from the residuals, so unless
// we introduce a validation step after every evaluation of
@@ -385,12 +388,9 @@
constexpr int kX1 = 5;
constexpr int kX2 = 5;
- std::unique_ptr<CostFunction> cost_function;
- cost_function.reset(new NumericDiffCostFunction<EasyFunctor,
- CENTRAL,
- kNumResiduals,
- kX1,
- kX2>(new EasyFunctor));
+ auto cost_function = std::make_unique<
+ NumericDiffCostFunction<EasyFunctor, CENTRAL, kNumResiduals, kX1, kX2>>(
+ new EasyFunctor);
// Prepare the parameters and residuals.
std::array<double, kX1> x1{1e-64, 2.0, 3.0, 4.0, 5.0};
@@ -437,5 +437,28 @@
}
}
-} // namespace internal
-} // namespace ceres
+struct MultiArgFunctor {
+ explicit MultiArgFunctor(int a, double c) {}
+ template <class T>
+ bool operator()(const T* params, T* residuals) const noexcept {
+ return false;
+ }
+};
+
+TEST(NumericDiffCostFunction, ArgumentForwarding) {
+ auto cost_function1 = std::make_unique<
+ NumericDiffCostFunction<EasyFunctor, CENTRAL, 3, 5, 5>>();
+ auto cost_function2 =
+ std::make_unique<NumericDiffCostFunction<MultiArgFunctor, CENTRAL, 1, 1>>(
+ 1, 2);
+}
+
+TEST(NumericDiffCostFunction, UniquePtrCtor) {
+ auto cost_function1 =
+ std::make_unique<NumericDiffCostFunction<EasyFunctor, CENTRAL, 3, 5, 5>>(
+ std::make_unique<EasyFunctor>());
+ auto cost_function2 = std::make_unique<
+ NumericDiffCostFunction<EasyFunctor, CENTRAL, 3, 5, 5>>();
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/numeric_diff_first_order_function_test.cc b/internal/ceres/numeric_diff_first_order_function_test.cc
new file mode 100644
index 0000000..ff57e2d
--- /dev/null
+++ b/internal/ceres/numeric_diff_first_order_function_test.cc
@@ -0,0 +1,101 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: sameeragarwal@google.com (Sameer Agarwal)
+
+#include "ceres/numeric_diff_first_order_function.h"
+
+#include <memory>
+
+#include "ceres/array_utils.h"
+#include "ceres/first_order_function.h"
+#include "gtest/gtest.h"
+
+namespace ceres::internal {
+
+class QuadraticCostFunctor {
+ public:
+ explicit QuadraticCostFunctor(double a) : a_(a) {}
+ bool operator()(const double* const x, double* cost) const {
+ cost[0] = x[0] * x[1] + x[2] * x[3] - a_;
+ return true;
+ }
+
+ private:
+ double a_;
+};
+
+TEST(NumericDiffFirstOrderFunction, BilinearDifferentiationTestStatic) {
+ auto function = std::make_unique<
+ NumericDiffFirstOrderFunction<QuadraticCostFunctor, CENTRAL, 4>>(
+ new QuadraticCostFunctor(1.0));
+
+ double parameters[4] = {1.0, 2.0, 3.0, 4.0};
+ double gradient[4];
+ double cost;
+
+ function->Evaluate(parameters, &cost, nullptr);
+ EXPECT_EQ(cost, 13.0);
+
+ cost = -1.0;
+ function->Evaluate(parameters, &cost, gradient);
+
+ EXPECT_EQ(cost, 13.0);
+
+ const double kTolerance = 1e-9;
+ EXPECT_NEAR(gradient[0], parameters[1], kTolerance);
+ EXPECT_NEAR(gradient[1], parameters[0], kTolerance);
+ EXPECT_NEAR(gradient[2], parameters[3], kTolerance);
+ EXPECT_NEAR(gradient[3], parameters[2], kTolerance);
+}
+
+TEST(NumericDiffFirstOrderFunction, BilinearDifferentiationTestDynamic) {
+ auto function = std::make_unique<
+ NumericDiffFirstOrderFunction<QuadraticCostFunctor, CENTRAL>>(
+ new QuadraticCostFunctor(1.0), 4);
+
+ double parameters[4] = {1.0, 2.0, 3.0, 4.0};
+ double gradient[4];
+ double cost;
+
+ function->Evaluate(parameters, &cost, nullptr);
+ EXPECT_EQ(cost, 13.0);
+
+ cost = -1.0;
+ function->Evaluate(parameters, &cost, gradient);
+
+ EXPECT_EQ(cost, 13.0);
+
+ const double kTolerance = 1e-9;
+ EXPECT_NEAR(gradient[0], parameters[1], kTolerance);
+ EXPECT_NEAR(gradient[1], parameters[0], kTolerance);
+ EXPECT_NEAR(gradient[2], parameters[3], kTolerance);
+ EXPECT_NEAR(gradient[3], parameters[2], kTolerance);
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/numeric_diff_test_utils.cc b/internal/ceres/numeric_diff_test_utils.cc
index d833bbb..0aa1778 100644
--- a/internal/ceres/numeric_diff_test_utils.cc
+++ b/internal/ceres/numeric_diff_test_utils.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,8 +39,7 @@
#include "ceres/types.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
bool EasyFunctor::operator()(const double* x1,
const double* x2,
@@ -149,9 +148,9 @@
};
// clang-format on
- for (int k = 0; k < kTests.size(); ++k) {
- double* x1 = &(kTests[k].x1[0]);
- double* x2 = &(kTests[k].x2[0]);
+ for (auto& test : kTests) {
+ double* x1 = &(test.x1[0]);
+ double* x2 = &(test.x2[0]);
double* parameters[] = {x1, x2};
double dydx1[10];
@@ -207,8 +206,8 @@
// Minimal tolerance w.r.t. the cost function and the tests.
const double kTolerance = 2e-14;
- for (int k = 0; k < kTests.size(); ++k) {
- double* parameters[] = {&kTests[k]};
+ for (double& test : kTests) {
+ double* parameters[] = {&test};
double dydx;
double* jacobians[1] = {&dydx};
double residual;
@@ -216,7 +215,7 @@
ASSERT_TRUE(
cost_function.Evaluate(&parameters[0], &residual, &jacobians[0]));
- double expected_result = exp(kTests[k]);
+ double expected_result = exp(test);
// Expect residual to be close to exp(x).
ExpectClose(residual, expected_result, kTolerance);
@@ -227,14 +226,7 @@
}
bool RandomizedFunctor::operator()(const double* x1, double* residuals) const {
- double random_value =
- static_cast<double>(rand()) / static_cast<double>(RAND_MAX);
-
- // Normalize noise to [-factor, factor].
- random_value *= 2.0;
- random_value -= 1.0;
- random_value *= noise_factor_;
-
+ double random_value = uniform_distribution_(*prng_);
residuals[0] = x1[0] * x1[0] + random_value;
return true;
}
@@ -245,11 +237,8 @@
const double kTolerance = 2e-4;
- // Initialize random number generator with given seed.
- srand(random_seed_);
-
- for (int k = 0; k < kTests.size(); ++k) {
- double* parameters[] = {&kTests[k]};
+ for (double& test : kTests) {
+ double* parameters[] = {&test};
double dydx;
double* jacobians[1] = {&dydx};
double residual;
@@ -258,12 +247,11 @@
cost_function.Evaluate(&parameters[0], &residual, &jacobians[0]));
// Expect residual to be close to x^2 w.r.t. noise factor.
- ExpectClose(residual, kTests[k] * kTests[k], noise_factor_);
+ ExpectClose(residual, test * test, noise_factor_);
// Check evaluated differences. (dy/dx = ~2x)
- ExpectClose(dydx, 2 * kTests[k], kTolerance);
+ ExpectClose(dydx, 2 * test, kTolerance);
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/numeric_diff_test_utils.h b/internal/ceres/numeric_diff_test_utils.h
index 392636e..e258ceb 100644
--- a/internal/ceres/numeric_diff_test_utils.h
+++ b/internal/ceres/numeric_diff_test_utils.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,13 +31,14 @@
#ifndef CERES_INTERNAL_NUMERIC_DIFF_TEST_UTILS_H_
#define CERES_INTERNAL_NUMERIC_DIFF_TEST_UTILS_H_
+#include <random>
+
#include "ceres/cost_function.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/sized_cost_function.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Noise factor for randomized cost function.
static constexpr double kNoiseFactor = 0.01;
@@ -48,7 +49,7 @@
// y1 = x1'x2 -> dy1/dx1 = x2, dy1/dx2 = x1
// y2 = (x1'x2)^2 -> dy2/dx1 = 2 * x2 * (x1'x2), dy2/dx2 = 2 * x1 * (x1'x2)
// y3 = x2'x2 -> dy3/dx1 = 0, dy3/dx2 = 2 * x2
-class CERES_EXPORT_INTERNAL EasyFunctor {
+class CERES_NO_EXPORT EasyFunctor {
public:
bool operator()(const double* x1, const double* x2, double* residuals) const;
void ExpectCostFunctionEvaluationIsNearlyCorrect(
@@ -72,14 +73,14 @@
//
// dy1/dx1 = x2 * cos(x1'x2), dy1/dx2 = x1 * cos(x1'x2)
// dy2/dx1 = -x2 * exp(-x1'x2 / 10) / 10, dy2/dx2 = -x2 * exp(-x1'x2 / 10) / 10
-class CERES_EXPORT TranscendentalFunctor {
+class CERES_NO_EXPORT TranscendentalFunctor {
public:
bool operator()(const double* x1, const double* x2, double* residuals) const;
void ExpectCostFunctionEvaluationIsNearlyCorrect(
const CostFunction& cost_function, NumericDiffMethodType method) const;
};
-class CERES_EXPORT_INTERNAL TranscendentalCostFunction
+class CERES_NO_EXPORT TranscendentalCostFunction
: public SizedCostFunction<2, 5, 5> {
public:
bool Evaluate(double const* const* parameters,
@@ -93,7 +94,7 @@
};
// y = exp(x), dy/dx = exp(x)
-class CERES_EXPORT_INTERNAL ExponentialFunctor {
+class CERES_NO_EXPORT ExponentialFunctor {
public:
bool operator()(const double* x1, double* residuals) const;
void ExpectCostFunctionEvaluationIsNearlyCorrect(
@@ -115,10 +116,12 @@
// Test adaptive numeric differentiation by synthetically adding random noise
// to a functor.
// y = x^2 + [random noise], dy/dx ~ 2x
-class CERES_EXPORT_INTERNAL RandomizedFunctor {
+class CERES_NO_EXPORT RandomizedFunctor {
public:
- RandomizedFunctor(double noise_factor, unsigned int random_seed)
- : noise_factor_(noise_factor), random_seed_(random_seed) {}
+ RandomizedFunctor(double noise_factor, std::mt19937& prng)
+ : noise_factor_(noise_factor),
+ prng_(&prng),
+ uniform_distribution_{-noise_factor, noise_factor} {}
bool operator()(const double* x1, double* residuals) const;
void ExpectCostFunctionEvaluationIsNearlyCorrect(
@@ -126,14 +129,16 @@
private:
double noise_factor_;
- unsigned int random_seed_;
+ // Store the generator as a pointer so that the instance it points to can be
+ // modified from the const call operator.
+ std::mt19937* prng_;
+ mutable std::uniform_real_distribution<> uniform_distribution_;
};
-class CERES_EXPORT_INTERNAL RandomizedCostFunction
- : public SizedCostFunction<1, 1> {
+class CERES_NO_EXPORT RandomizedCostFunction : public SizedCostFunction<1, 1> {
public:
- RandomizedCostFunction(double noise_factor, unsigned int random_seed)
- : functor_(noise_factor, random_seed) {}
+ RandomizedCostFunction(double noise_factor, std::mt19937& prng)
+ : functor_(noise_factor, prng) {}
bool Evaluate(double const* const* parameters,
double* residuals,
@@ -145,7 +150,6 @@
RandomizedFunctor functor_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_NUMERIC_DIFF_TEST_UTILS_H_
diff --git a/internal/ceres/ordered_groups_test.cc b/internal/ceres/ordered_groups_test.cc
index d613a41..d376b4d 100644
--- a/internal/ceres/ordered_groups_test.cc
+++ b/internal/ceres/ordered_groups_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,8 +35,7 @@
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST(OrderedGroups, EmptyOrderedGroupBehavesCorrectly) {
ParameterBlockOrdering ordering;
@@ -229,5 +228,4 @@
// No non-zero groups left.
EXPECT_DEATH_IF_SUPPORTED(ordering.MinNonZeroGroup(), "NumGroups");
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/pair_hash.h b/internal/ceres/pair_hash.h
index abbedcc..64882cd 100644
--- a/internal/ceres/pair_hash.h
+++ b/internal/ceres/pair_hash.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,13 +33,14 @@
#ifndef CERES_INTERNAL_PAIR_HASH_H_
#define CERES_INTERNAL_PAIR_HASH_H_
+#include <cstddef>
#include <cstdint>
+#include <functional>
#include <utility>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
#if defined(_WIN32) && !defined(__MINGW64__) && !defined(__MINGW32__)
#define GG_LONGLONG(x) x##I64
@@ -110,7 +111,6 @@
}
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_PAIR_HASH_H_
diff --git a/internal/ceres/parallel_for.h b/internal/ceres/parallel_for.h
index b64bd31..11db1fb 100644
--- a/internal/ceres/parallel_for.h
+++ b/internal/ceres/parallel_for.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -26,45 +26,161 @@
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
-// Author: vitus@google.com (Michael Vitus)
+// Authors: vitus@google.com (Michael Vitus),
+// dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
-#ifndef CERES_INTERNAL_PARALLEL_FOR_
-#define CERES_INTERNAL_PARALLEL_FOR_
+#ifndef CERES_INTERNAL_PARALLEL_FOR_H_
+#define CERES_INTERNAL_PARALLEL_FOR_H_
-#include <functional>
+#include <mutex>
+#include <vector>
#include "ceres/context_impl.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
+#include "ceres/parallel_invoke.h"
+#include "ceres/partition_range_for_parallel_for.h"
+#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-// Returns the maximum number of threads supported by the threading backend
-// Ceres was compiled with.
-int MaxNumThreadsAvailable();
+// Returns a lock on the mutex, or a default-constructed (unlocked) lock when
+// num_threads = 1, so single-threaded execution skips synchronization.
+inline decltype(auto) MakeConditionalLock(const int num_threads,
+ std::mutex& m) {
+ return (num_threads == 1) ? std::unique_lock<std::mutex>{}
+ : std::unique_lock<std::mutex>{m};
+}
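+//
+// Example (sketch, assuming a std::mutex m and a double shared_sum are in
+// scope; names are illustrative only):
+//
+//   auto lock = MakeConditionalLock(num_threads, m);
+//   shared_sum += local_sum;  // Mutex is held only if num_threads > 1.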
// Execute the function for every element in the range [start, end) with at most
// num_threads. It will execute all the work on the calling thread if
-// num_threads is 1.
-CERES_EXPORT_INTERNAL void ParallelFor(
- ContextImpl* context,
- int start,
- int end,
- int num_threads,
- const std::function<void(int)>& function);
+// num_threads or (end - start) is equal to 1.
+// Depending on the function signature, it will be supplied with either a loop
+// index or a range of loop indices; the function can also be supplied with a
+// thread_id.
+// The following function signatures are supported:
+// - Functions accepting a single loop index:
+// - [](int index) { ... }
+// - [](int thread_id, int index) { ... }
+// - Functions accepting a range of loop indices:
+// - [](std::tuple<int, int> index) { ... }
+// - [](int thread_id, std::tuple<int, int> index) { ... }
+//
+// When distributing workload between threads, it is assumed that each loop
+// iteration takes approximately equal time to complete.
+template <typename F>
+void ParallelFor(ContextImpl* context,
+ int start,
+ int end,
+ int num_threads,
+ F&& function,
+ int min_block_size = 1) {
+ CHECK_GT(num_threads, 0);
+ if (start >= end) {
+ return;
+ }
-// Execute the function for every element in the range [start, end) with at most
-// num_threads. It will execute all the work on the calling thread if
-// num_threads is 1. Each invocation of function() will be passed a thread_id
-// in [0, num_threads) that is guaranteed to be distinct from the value passed
-// to any concurrent execution of function().
-CERES_EXPORT_INTERNAL void ParallelFor(
- ContextImpl* context,
- int start,
- int end,
- int num_threads,
- const std::function<void(int thread_id, int i)>& function);
-} // namespace internal
-} // namespace ceres
+ if (num_threads == 1 || end - start < min_block_size * 2) {
+ InvokeOnSegment(0, std::make_tuple(start, end), std::forward<F>(function));
+ return;
+ }
+
+ CHECK(context != nullptr);
+ ParallelInvoke(context,
+ start,
+ end,
+ num_threads,
+ std::forward<F>(function),
+ min_block_size);
+}
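+
+// Example (sketch, assuming a ContextImpl context, an int num_threads and a
+// std::vector<double> values are in scope; names are illustrative only):
+//
+//   // Per-index form.
+//   ParallelFor(&context, 0, static_cast<int>(values.size()), num_threads,
+//               [&values](int i) { values[i] *= 2.0; });
+//
+//   // Range form.
+//   ParallelFor(&context, 0, static_cast<int>(values.size()), num_threads,
+//               [&values](std::tuple<int, int> range) {
+//                 auto [start, end] = range;
+//                 for (int i = start; i < end; ++i) values[i] *= 2.0;
+//               });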
+
+// Execute function for every element in the range [start, end) with at most
+// num_threads, using user-provided partitions array.
+// When distributing workload between threads, it is assumed that each segment
+// bounded by adjacent elements of partitions array takes approximately equal
+// time to process.
+template <typename F>
+void ParallelFor(ContextImpl* context,
+ int start,
+ int end,
+ int num_threads,
+ F&& function,
+ const std::vector<int>& partitions) {
+ CHECK_GT(num_threads, 0);
+ if (start >= end) {
+ return;
+ }
+ CHECK_EQ(partitions.front(), start);
+ CHECK_EQ(partitions.back(), end);
+ if (num_threads == 1 || end - start <= num_threads) {
+ ParallelFor(context, start, end, num_threads, std::forward<F>(function));
+ return;
+ }
+ CHECK_GT(partitions.size(), 1);
+ const int num_partitions = partitions.size() - 1;
+ ParallelFor(context,
+ 0,
+ num_partitions,
+ num_threads,
+ [&function, &partitions](int thread_id,
+ std::tuple<int, int> partition_ids) {
+ // partition_ids is a range of partition indices
+ const auto [partition_start, partition_end] = partition_ids;
+ // Execution over several adjacent segments is equivalent
+ // to execution over union of those segments (which is also a
+ // contiguous segment)
+ const int range_start = partitions[partition_start];
+ const int range_end = partitions[partition_end];
+ // Range of original loop indices
+ const auto range = std::make_tuple(range_start, range_end);
+ InvokeOnSegment(thread_id, range, function);
+ });
+}
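+
+// Example (sketch, assuming context and num_threads as above): process a
+// 16-element range in two caller-chosen segments:
+//
+//   const std::vector<int> partitions = {0, 8, 16};
+//   ParallelFor(&context, 0, 16, num_threads,
+//               [](std::tuple<int, int> range) { /* process [start, end) */ },
+//               partitions);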
+
+// Execute function for every element in the range [start, end) with at most
+// num_threads, taking into account user-provided integer cumulative costs of
+// iterations. Cumulative costs of iteration for indices in range [0, end) are
+// stored in objects from cumulative_cost_data. User-provided
+// cumulative_cost_fun returns non-decreasing integer values corresponding to
+// inclusive cumulative cost of loop iterations, provided with a reference to
+// user-defined object. Only indices from [start, end) will be referenced. This
+// routine assumes that cumulative_cost_fun is non-decreasing (in other words,
+// all costs are non-negative).
+// When distributing workload between threads, input range of loop indices will
+// be partitioned into disjoint contiguous intervals, with the maximal cost
+// being minimized.
+// For example, with iteration costs of [1, 1, 5, 3, 1, 4] cumulative_cost_fun
+// should return [1, 2, 7, 10, 11, 15], and with num_threads = 4 this range
+// will be split into segments [0, 2) [2, 3) [3, 5) [5, 6) with costs
+// [2, 5, 4, 4].
+template <typename F, typename CumulativeCostData, typename CumulativeCostFun>
+void ParallelFor(ContextImpl* context,
+ int start,
+ int end,
+ int num_threads,
+ F&& function,
+ const CumulativeCostData* cumulative_cost_data,
+ CumulativeCostFun&& cumulative_cost_fun) {
+ CHECK_GT(num_threads, 0);
+ if (start >= end) {
+ return;
+ }
+ if (num_threads == 1 || end - start <= num_threads) {
+ ParallelFor(context, start, end, num_threads, std::forward<F>(function));
+ return;
+ }
+ // Creating several partitions allows us to tolerate imperfections of
+ // partitioning and user-supplied iteration costs up to a certain extent
+ constexpr int kNumPartitionsPerThread = 4;
+ const int kMaxPartitions = num_threads * kNumPartitionsPerThread;
+ const auto& partitions = PartitionRangeForParallelFor(
+ start,
+ end,
+ kMaxPartitions,
+ cumulative_cost_data,
+ std::forward<CumulativeCostFun>(cumulative_cost_fun));
+ CHECK_GT(partitions.size(), 1);
+ ParallelFor(
+ context, start, end, num_threads, std::forward<F>(function), partitions);
+}
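+
+// Example (sketch, assuming context and num_threads as above), reusing the
+// costs from the comment: per-iteration costs {1, 1, 5, 3, 1, 4} give
+// cumulative costs {1, 2, 7, 10, 11, 15}:
+//
+//   const std::vector<int> cumulative_costs = {1, 2, 7, 10, 11, 15};
+//   ParallelFor(&context, 0, 6, num_threads,
+//               [](int i) { /* iteration i */ },
+//               cumulative_costs.data(),
+//               [](int c) { return c; });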
+} // namespace ceres::internal
#endif // CERES_INTERNAL_PARALLEL_FOR_H_
diff --git a/internal/ceres/parallel_for_benchmark.cc b/internal/ceres/parallel_for_benchmark.cc
new file mode 100644
index 0000000..3bfdb87
--- /dev/null
+++ b/internal/ceres/parallel_for_benchmark.cc
@@ -0,0 +1,76 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+
+#include "benchmark/benchmark.h"
+#include "ceres/context_impl.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/parallel_for.h"
+#include "glog/logging.h"
+
+namespace ceres::internal {
+
+// A parallel for loop with a very small amount of work per iteration and a
+// small number of iterations benchmarks the performance of task scheduling.
+static void SchedulerBenchmark(benchmark::State& state) {
+ const int vector_size = static_cast<int>(state.range(0));
+ const int num_threads = static_cast<int>(state.range(1));
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ Vector x = Vector::Random(vector_size);
+ for (auto _ : state) {
+ ParallelFor(
+ &context, 0, vector_size, num_threads, [&x](int id) { x[id] = 0.; });
+ }
+ CHECK_EQ(x.squaredNorm(), 0.);
+}
+BENCHMARK(SchedulerBenchmark)
+ ->Args({128, 1})
+ ->Args({128, 2})
+ ->Args({128, 4})
+ ->Args({128, 8})
+ ->Args({128, 16})
+ ->Args({256, 1})
+ ->Args({256, 2})
+ ->Args({256, 4})
+ ->Args({256, 8})
+ ->Args({256, 16})
+ ->Args({1024, 1})
+ ->Args({1024, 2})
+ ->Args({1024, 4})
+ ->Args({1024, 8})
+ ->Args({1024, 16})
+ ->Args({4096, 1})
+ ->Args({4096, 2})
+ ->Args({4096, 4})
+ ->Args({4096, 8})
+ ->Args({4096, 16});
+
+} // namespace ceres::internal
+
+BENCHMARK_MAIN();
diff --git a/internal/ceres/parallel_for_cxx.cc b/internal/ceres/parallel_for_cxx.cc
deleted file mode 100644
index 4da40c0..0000000
--- a/internal/ceres/parallel_for_cxx.cc
+++ /dev/null
@@ -1,245 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: vitus@google.com (Michael Vitus)
-
-// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
-
-#ifdef CERES_USE_CXX_THREADS
-
-#include <cmath>
-#include <condition_variable>
-#include <memory>
-#include <mutex>
-
-#include "ceres/concurrent_queue.h"
-#include "ceres/parallel_for.h"
-#include "ceres/scoped_thread_token.h"
-#include "ceres/thread_token_provider.h"
-#include "glog/logging.h"
-
-namespace ceres {
-namespace internal {
-namespace {
-// This class creates a thread safe barrier which will block until a
-// pre-specified number of threads call Finished. This allows us to block the
-// main thread until all the parallel threads are finished processing all the
-// work.
-class BlockUntilFinished {
- public:
- explicit BlockUntilFinished(int num_total)
- : num_finished_(0), num_total_(num_total) {}
-
- // Increment the number of jobs that have finished and signal the blocking
- // thread if all jobs have finished.
- void Finished() {
- std::lock_guard<std::mutex> lock(mutex_);
- ++num_finished_;
- CHECK_LE(num_finished_, num_total_);
- if (num_finished_ == num_total_) {
- condition_.notify_one();
- }
- }
-
- // Block until all threads have signaled they are finished.
- void Block() {
- std::unique_lock<std::mutex> lock(mutex_);
- condition_.wait(lock, [&]() { return num_finished_ == num_total_; });
- }
-
- private:
- std::mutex mutex_;
- std::condition_variable condition_;
- // The current number of jobs finished.
- int num_finished_;
- // The total number of jobs.
- int num_total_;
-};
-
-// Shared state between the parallel tasks. Each thread will use this
-// information to get the next block of work to be performed.
-struct SharedState {
- SharedState(int start, int end, int num_work_items)
- : start(start),
- end(end),
- num_work_items(num_work_items),
- i(0),
- thread_token_provider(num_work_items),
- block_until_finished(num_work_items) {}
-
- // The start and end index of the for loop.
- const int start;
- const int end;
- // The number of blocks that need to be processed.
- const int num_work_items;
-
- // The next block of work to be assigned to a worker. The parallel for loop
- // range is split into num_work_items blocks of work, i.e. a single block of
- // work is:
- // for (int j = start + i; j < end; j += num_work_items) { ... }.
- int i;
- std::mutex mutex_i;
-
- // Provides a unique thread ID among all active threads working on the same
- // group of tasks. Thread-safe.
- ThreadTokenProvider thread_token_provider;
-
- // Used to signal when all the work has been completed. Thread safe.
- BlockUntilFinished block_until_finished;
-};
-
-} // namespace
-
-int MaxNumThreadsAvailable() { return ThreadPool::MaxNumThreadsAvailable(); }
-
-// See ParallelFor (below) for more details.
-void ParallelFor(ContextImpl* context,
- int start,
- int end,
- int num_threads,
- const std::function<void(int)>& function) {
- CHECK_GT(num_threads, 0);
- CHECK(context != NULL);
- if (end <= start) {
- return;
- }
-
- // Fast path for when it is single threaded.
- if (num_threads == 1) {
- for (int i = start; i < end; ++i) {
- function(i);
- }
- return;
- }
-
- ParallelFor(
- context, start, end, num_threads, [&function](int /*thread_id*/, int i) {
- function(i);
- });
-}
-
-// This implementation uses a fixed size max worker pool with a shared task
-// queue. The problem of executing the function for the interval of [start, end)
-// is broken up into at most num_threads blocks and added to the thread pool. To
-// avoid deadlocks, the calling thread is allowed to steal work from the worker
-// pool. This is implemented via a shared state between the tasks. In order for
-// the calling thread or thread pool to get a block of work, it will query the
-// shared state for the next block of work to be done. If there is nothing left,
-// it will return. We will exit the ParallelFor call when all of the work has
-// been done, not when all of the tasks have been popped off the task queue.
-//
-// A unique thread ID among all active tasks will be acquired once for each
-// block of work. This avoids the significant performance penalty for acquiring
-// it on every iteration of the for loop. The thread ID is guaranteed to be in
-// [0, num_threads).
-//
-// A performance analysis has shown this implementation is onpar with OpenMP and
-// TBB.
-void ParallelFor(ContextImpl* context,
- int start,
- int end,
- int num_threads,
- const std::function<void(int thread_id, int i)>& function) {
- CHECK_GT(num_threads, 0);
- CHECK(context != NULL);
- if (end <= start) {
- return;
- }
-
- // Fast path for when it is single threaded.
- if (num_threads == 1) {
- // Even though we only have one thread, use the thread token provider to
- // guarantee the exact same behavior when running with multiple threads.
- ThreadTokenProvider thread_token_provider(num_threads);
- const ScopedThreadToken scoped_thread_token(&thread_token_provider);
- const int thread_id = scoped_thread_token.token();
- for (int i = start; i < end; ++i) {
- function(thread_id, i);
- }
- return;
- }
-
- // We use a std::shared_ptr because the main thread can finish all
- // the work before the tasks have been popped off the queue. So the
- // shared state needs to exist for the duration of all the tasks.
- const int num_work_items = std::min((end - start), num_threads);
- std::shared_ptr<SharedState> shared_state(
- new SharedState(start, end, num_work_items));
-
- // A function which tries to perform a chunk of work. This returns false if
- // there is no work to be done.
- auto task_function = [shared_state, &function]() {
- int i = 0;
- {
- // Get the next available chunk of work to be performed. If there is no
- // work, return false.
- std::lock_guard<std::mutex> lock(shared_state->mutex_i);
- if (shared_state->i >= shared_state->num_work_items) {
- return false;
- }
- i = shared_state->i;
- ++shared_state->i;
- }
-
- const ScopedThreadToken scoped_thread_token(
- &shared_state->thread_token_provider);
- const int thread_id = scoped_thread_token.token();
-
- // Perform each task.
- for (int j = shared_state->start + i; j < shared_state->end;
- j += shared_state->num_work_items) {
- function(thread_id, j);
- }
- shared_state->block_until_finished.Finished();
- return true;
- };
-
- // Add all the tasks to the thread pool.
- for (int i = 0; i < num_work_items; ++i) {
- // Note we are taking the task_function as value so the shared_state
- // shared pointer is copied and the ref count is increased. This is to
- // prevent it from being deleted when the main thread finishes all the
- // work and exits before the threads finish.
- context->thread_pool.AddTask([task_function]() { task_function(); });
- }
-
- // Try to do any available work on the main thread. This may steal work from
- // the thread pool, but when there is no work left the thread pool tasks
- // will be no-ops.
- while (task_function()) {
- }
-
- // Wait until all tasks have finished.
- shared_state->block_until_finished.Block();
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_USE_CXX_THREADS
diff --git a/internal/ceres/parallel_for_nothreads.cc b/internal/ceres/parallel_for_nothreads.cc
deleted file mode 100644
index d036569..0000000
--- a/internal/ceres/parallel_for_nothreads.cc
+++ /dev/null
@@ -1,78 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: alexs.mac@gmail.com (Alex Stewart)
-
-// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
-
-#ifdef CERES_NO_THREADS
-
-#include "ceres/parallel_for.h"
-#include "glog/logging.h"
-
-namespace ceres {
-namespace internal {
-
-int MaxNumThreadsAvailable() { return 1; }
-
-void ParallelFor(ContextImpl* context,
- int start,
- int end,
- int num_threads,
- const std::function<void(int)>& function) {
- CHECK_GT(num_threads, 0);
- CHECK(context != NULL);
- if (end <= start) {
- return;
- }
- for (int i = start; i < end; ++i) {
- function(i);
- }
-}
-
-void ParallelFor(ContextImpl* context,
- int start,
- int end,
- int num_threads,
- const std::function<void(int thread_id, int i)>& function) {
- CHECK_GT(num_threads, 0);
- CHECK(context != NULL);
- if (end <= start) {
- return;
- }
- const int thread_id = 0;
- for (int i = start; i < end; ++i) {
- function(thread_id, i);
- }
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_NO_THREADS
diff --git a/internal/ceres/parallel_for_openmp.cc b/internal/ceres/parallel_for_openmp.cc
deleted file mode 100644
index eb9d905..0000000
--- a/internal/ceres/parallel_for_openmp.cc
+++ /dev/null
@@ -1,85 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: vitus@google.com (Michael Vitus)
-
-// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
-
-#if defined(CERES_USE_OPENMP)
-
-#include "ceres/parallel_for.h"
-#include "ceres/scoped_thread_token.h"
-#include "ceres/thread_token_provider.h"
-#include "glog/logging.h"
-#include "omp.h"
-
-namespace ceres {
-namespace internal {
-
-int MaxNumThreadsAvailable() { return omp_get_max_threads(); }
-
-void ParallelFor(ContextImpl* context,
- int start,
- int end,
- int num_threads,
- const std::function<void(int)>& function) {
- CHECK_GT(num_threads, 0);
- CHECK(context != NULL);
- if (end <= start) {
- return;
- }
-
-#ifdef CERES_USE_OPENMP
-#pragma omp parallel for num_threads(num_threads) \
- schedule(dynamic) if (num_threads > 1)
-#endif // CERES_USE_OPENMP
- for (int i = start; i < end; ++i) {
- function(i);
- }
-}
-
-void ParallelFor(ContextImpl* context,
- int start,
- int end,
- int num_threads,
- const std::function<void(int thread_id, int i)>& function) {
- CHECK(context != NULL);
-
- ThreadTokenProvider thread_token_provider(num_threads);
- ParallelFor(context, start, end, num_threads, [&](int i) {
- const ScopedThreadToken scoped_thread_token(&thread_token_provider);
- const int thread_id = scoped_thread_token.token();
- function(thread_id, i);
- });
-}
-
-} // namespace internal
-} // namespace ceres
-
-#endif // defined(CERES_USE_OPENMP)
diff --git a/internal/ceres/parallel_for_test.cc b/internal/ceres/parallel_for_test.cc
index 434f993..46f5a0f 100644
--- a/internal/ceres/parallel_for_test.cc
+++ b/internal/ceres/parallel_for_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,26 +28,26 @@
//
// Author: vitus@google.com (Michael Vitus)
-// This include must come before any #ifndef check on Ceres compile options.
-// clang-format off
-#include "ceres/internal/port.h"
-// clang-format on
-
#include "ceres/parallel_for.h"
+#include <atomic>
#include <cmath>
#include <condition_variable>
#include <mutex>
+#include <numeric>
+#include <random>
#include <thread>
+#include <tuple>
#include <vector>
#include "ceres/context_impl.h"
+#include "ceres/internal/config.h"
+#include "ceres/parallel_vector_ops.h"
#include "glog/logging.h"
#include "gmock/gmock.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
using testing::ElementsAreArray;
using testing::UnorderedElementsAreArray;
@@ -73,6 +73,51 @@
}
}
+// Tests parallel for loop with ranges
+TEST(ParallelForWithRange, NumThreads) {
+ ContextImpl context;
+ context.EnsureMinimumThreads(/*num_threads=*/2);
+
+ const int size = 16;
+ std::vector<int> expected_results(size, 0);
+ for (int i = 0; i < size; ++i) {
+ expected_results[i] = std::sqrt(i);
+ }
+
+ for (int num_threads = 1; num_threads <= 8; ++num_threads) {
+ std::vector<int> values(size, 0);
+ ParallelFor(
+ &context, 0, size, num_threads, [&values](std::tuple<int, int> range) {
+ auto [start, end] = range;
+ for (int i = start; i < end; ++i) values[i] = std::sqrt(i);
+ });
+ EXPECT_THAT(values, ElementsAreArray(expected_results));
+ }
+}
+
+// Tests parallel for loop with ranges and lower bound on minimal range size
+TEST(ParallelForWithRange, MinimalSize) {
+ ContextImpl context;
+ constexpr int kNumThreads = 4;
+ constexpr int kMinBlockSize = 5;
+ context.EnsureMinimumThreads(kNumThreads);
+
+ for (int size = kMinBlockSize; size <= 25; ++size) {
+ std::atomic<bool> failed(false);
+ ParallelFor(
+ &context,
+ 0,
+ size,
+ kNumThreads,
+ [&failed, kMinBlockSize](std::tuple<int, int> range) {
+ auto [start, end] = range;
+ if (end - start < kMinBlockSize) failed = true;
+ },
+ kMinBlockSize);
+ EXPECT_EQ(failed, false);
+ }
+}
+
// Tests the parallel for loop with the thread ID interface computes the correct
// result for various number of threads.
TEST(ParallelForWithThreadId, NumThreads) {
@@ -132,8 +177,6 @@
}
}
-// This test is only valid when multithreading support is enabled.
-#ifndef CERES_NO_THREADS
TEST(ParallelForWithThreadId, UniqueThreadIds) {
// Ensure the hardware supports more than 1 thread to ensure the test will
// pass.
@@ -165,7 +208,289 @@
EXPECT_THAT(x, UnorderedElementsAreArray({0, 1}));
}
-#endif // CERES_NO_THREADS
-} // namespace internal
-} // namespace ceres
+// Helper function for partition tests
+bool BruteForcePartition(
+ int* costs, int start, int end, int max_partitions, int max_cost);
+// Basic test that MaxPartitionCostIsFeasible and BruteForcePartition agree on
+// simple test cases
+TEST(GuidedParallelFor, MaxPartitionCostIsFeasible) {
+ std::vector<int> costs, cumulative_costs, partition;
+ costs = {1, 2, 3, 5, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0};
+ cumulative_costs.resize(costs.size());
+ std::partial_sum(costs.begin(), costs.end(), cumulative_costs.begin());
+ const auto dummy_getter = [](const int v) { return v; };
+
+ // [1, 2, 3] [5], [0 ... 0, 7, 0, ... 0]
+ EXPECT_TRUE(MaxPartitionCostIsFeasible(0,
+ costs.size(),
+ 3,
+ 7,
+ 0,
+ cumulative_costs.data(),
+ dummy_getter,
+ &partition));
+ EXPECT_TRUE(BruteForcePartition(costs.data(), 0, costs.size(), 3, 7));
+ // [1, 2, 3, 5, 0 ... 0, 7, 0, ... 0]
+ EXPECT_TRUE(MaxPartitionCostIsFeasible(0,
+ costs.size(),
+ 3,
+ 18,
+ 0,
+ cumulative_costs.data(),
+ dummy_getter,
+ &partition));
+ EXPECT_TRUE(BruteForcePartition(costs.data(), 0, costs.size(), 3, 18));
+ // Impossible since there is item of cost 7
+ EXPECT_FALSE(MaxPartitionCostIsFeasible(0,
+ costs.size(),
+ 3,
+ 6,
+ 0,
+ cumulative_costs.data(),
+ dummy_getter,
+ &partition));
+ EXPECT_FALSE(BruteForcePartition(costs.data(), 0, costs.size(), 3, 6));
+ // Impossible
+ EXPECT_FALSE(MaxPartitionCostIsFeasible(0,
+ costs.size(),
+ 2,
+ 10,
+ 0,
+ cumulative_costs.data(),
+ dummy_getter,
+ &partition));
+ EXPECT_FALSE(BruteForcePartition(costs.data(), 0, costs.size(), 2, 10));
+}
+
+// Randomized tests for MaxPartitionCostIsFeasible
+TEST(GuidedParallelFor, MaxPartitionCostIsFeasibleRandomized) {
+ std::vector<int> costs, cumulative_costs, partition;
+ const auto dummy_getter = [](const int v) { return v; };
+
+ // Random tests
+ const int kNumTests = 1000;
+ const int kMaxElements = 32;
+ const int kMaxPartitions = 16;
+ const int kMaxElCost = 8;
+ std::mt19937 rng;
+ std::uniform_int_distribution<int> rng_N(1, kMaxElements);
+ std::uniform_int_distribution<int> rng_M(1, kMaxPartitions);
+ std::uniform_int_distribution<int> rng_e(0, kMaxElCost);
+ for (int t = 0; t < kNumTests; ++t) {
+ const int N = rng_N(rng);
+ const int M = rng_M(rng);
+ int total = 0;
+ costs.clear();
+ for (int i = 0; i < N; ++i) {
+ costs.push_back(rng_e(rng));
+ total += costs.back();
+ }
+
+ cumulative_costs.resize(N);
+ std::partial_sum(costs.begin(), costs.end(), cumulative_costs.begin());
+
+ std::uniform_int_distribution<int> rng_seg(0, N - 1);
+ int start = rng_seg(rng);
+ int end = rng_seg(rng);
+ if (start > end) std::swap(start, end);
+ ++end;
+
+ int first_admissible = 0;
+ for (int threshold = 1; threshold <= total; ++threshold) {
+ const bool bruteforce =
+ BruteForcePartition(costs.data(), start, end, M, threshold);
+ if (bruteforce && !first_admissible) {
+ first_admissible = threshold;
+ }
+ const bool binary_search =
+ MaxPartitionCostIsFeasible(start,
+ end,
+ M,
+ threshold,
+ start ? cumulative_costs[start - 1] : 0,
+ cumulative_costs.data(),
+ dummy_getter,
+ &partition);
+ EXPECT_EQ(bruteforce, binary_search);
+ EXPECT_LE(partition.size(), M + 1);
+ // check partition itself
+ if (binary_search) {
+ ASSERT_GT(partition.size(), 1);
+ EXPECT_EQ(partition.front(), start);
+ EXPECT_EQ(partition.back(), end);
+
+ const int num_partitions = partition.size() - 1;
+ EXPECT_LE(num_partitions, M);
+ for (int j = 0; j < num_partitions; ++j) {
+ int total = 0;
+ for (int k = partition[j]; k < partition[j + 1]; ++k) {
+ EXPECT_LT(k, end);
+ EXPECT_GE(k, start);
+ total += costs[k];
+ }
+ EXPECT_LE(total, threshold);
+ }
+ }
+ }
+ }
+}
+
+TEST(GuidedParallelFor, PartitionRangeForParallelFor) {
+ std::vector<int> costs, cumulative_costs, partition;
+ const auto dummy_getter = [](const int v) { return v; };
+
+ // Random tests
+ const int kNumTests = 1000;
+ const int kMaxElements = 32;
+ const int kMaxPartitions = 16;
+ const int kMaxElCost = 8;
+ std::mt19937 rng;
+ std::uniform_int_distribution<int> rng_N(1, kMaxElements);
+ std::uniform_int_distribution<int> rng_M(1, kMaxPartitions);
+ std::uniform_int_distribution<int> rng_e(0, kMaxElCost);
+ for (int t = 0; t < kNumTests; ++t) {
+ const int N = rng_N(rng);
+ const int M = rng_M(rng);
+ int total = 0;
+ costs.clear();
+ for (int i = 0; i < N; ++i) {
+ costs.push_back(rng_e(rng));
+ total += costs.back();
+ }
+
+ cumulative_costs.resize(N);
+ std::partial_sum(costs.begin(), costs.end(), cumulative_costs.begin());
+
+ std::uniform_int_distribution<int> rng_seg(0, N - 1);
+ int start = rng_seg(rng);
+ int end = rng_seg(rng);
+ if (start > end) std::swap(start, end);
+ ++end;
+
+ int first_admissible = 0;
+ for (int threshold = 1; threshold <= total; ++threshold) {
+ const bool bruteforce =
+ BruteForcePartition(costs.data(), start, end, M, threshold);
+ if (bruteforce) {
+ first_admissible = threshold;
+ break;
+ }
+ }
+ EXPECT_TRUE(first_admissible != 0 || total == 0);
+ partition = PartitionRangeForParallelFor(
+ start, end, M, cumulative_costs.data(), dummy_getter);
+ ASSERT_GT(partition.size(), 1);
+ EXPECT_EQ(partition.front(), start);
+ EXPECT_EQ(partition.back(), end);
+
+ const int num_partitions = partition.size() - 1;
+ EXPECT_LE(num_partitions, M);
+ for (int j = 0; j < num_partitions; ++j) {
+ int total = 0;
+ for (int k = partition[j]; k < partition[j + 1]; ++k) {
+ EXPECT_LT(k, end);
+ EXPECT_GE(k, start);
+ total += costs[k];
+ }
+ EXPECT_LE(total, first_admissible);
+ }
+ }
+}
+
+// Recursively try to partition the range into segments whose total cost is
+// at most max_cost.
+bool BruteForcePartition(
+ int* costs, int start, int end, int max_partitions, int max_cost) {
+ if (start == end) return true;
+ if (start < end && max_partitions == 0) return false;
+ int total_cost = 0;
+ for (int last_curr = start + 1; last_curr <= end; ++last_curr) {
+ total_cost += costs[last_curr - 1];
+ if (total_cost > max_cost) break;
+ if (BruteForcePartition(
+ costs, last_curr, end, max_partitions - 1, max_cost))
+ return true;
+ }
+ return false;
+}
+
+// Tests if the guided parallel for loop computes the correct result for
+// various numbers of threads.
+TEST(GuidedParallelFor, NumThreads) {
+ ContextImpl context;
+ context.EnsureMinimumThreads(/*num_threads=*/2);
+
+ const int size = 16;
+ std::vector<int> expected_results(size, 0);
+ for (int i = 0; i < size; ++i) {
+ expected_results[i] = std::sqrt(i);
+ }
+
+ std::vector<int> costs, cumulative_costs;
+ for (int i = 1; i <= size; ++i) {
+ int cost = i * i;
+ costs.push_back(cost);
+ if (i == 1) {
+ cumulative_costs.push_back(cost);
+ } else {
+ cumulative_costs.push_back(cost + cumulative_costs.back());
+ }
+ }
+
+ for (int num_threads = 1; num_threads <= 8; ++num_threads) {
+ std::vector<int> values(size, 0);
+ ParallelFor(
+ &context,
+ 0,
+ size,
+ num_threads,
+ [&values](int i) { values[i] = std::sqrt(i); },
+ cumulative_costs.data(),
+ [](const int v) { return v; });
+ EXPECT_THAT(values, ElementsAreArray(expected_results));
+ }
+}
+
+TEST(ParallelAssign, D2MulX) {
+ const int kVectorSize = 1024 * 1024;
+ const int kMaxNumThreads = 8;
+ const double kEpsilon = 1e-16;
+
+ const Vector D_full = Vector::Random(kVectorSize * 2);
+ const ConstVectorRef D(D_full.data() + kVectorSize, kVectorSize);
+ const Vector x = Vector::Random(kVectorSize);
+ const Vector y_expected = D.array().square() * x.array();
+ ContextImpl context;
+ context.EnsureMinimumThreads(kMaxNumThreads);
+
+ for (int num_threads = 1; num_threads <= kMaxNumThreads; ++num_threads) {
+ Vector y_observed(kVectorSize);
+ ParallelAssign(
+ &context, num_threads, y_observed, D.array().square() * x.array());
+
+ // We might get non-bit-exact result due to different precision in scalar
+ // and vector code. For example, in x86 mode mingw might emit x87
+ // instructions for scalar code, thus making bit-exact check fail
+ EXPECT_NEAR((y_expected - y_observed).squaredNorm(),
+ 0.,
+ kEpsilon * y_expected.squaredNorm());
+ }
+}
+
+TEST(ParallelAssign, SetZero) {
+ const int kVectorSize = 1024 * 1024;
+ const int kMaxNumThreads = 8;
+
+ ContextImpl context;
+ context.EnsureMinimumThreads(kMaxNumThreads);
+
+ for (int num_threads = 1; num_threads <= kMaxNumThreads; ++num_threads) {
+ Vector x = Vector::Random(kVectorSize);
+ ParallelSetZero(&context, num_threads, x);
+
+ CHECK_EQ(x.squaredNorm(), 0.);
+ }
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/parallel_invoke.cc b/internal/ceres/parallel_invoke.cc
new file mode 100644
index 0000000..0e387c5
--- /dev/null
+++ b/internal/ceres/parallel_invoke.cc
@@ -0,0 +1,77 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: vitus@google.com (Michael Vitus)
+
+#include <algorithm>
+#include <atomic>
+#include <cmath>
+#include <condition_variable>
+#include <memory>
+#include <mutex>
+#include <tuple>
+
+#include "ceres/internal/config.h"
+#include "ceres/parallel_for.h"
+#include "ceres/parallel_vector_ops.h"
+#include "glog/logging.h"
+
+namespace ceres::internal {
+
+BlockUntilFinished::BlockUntilFinished(int num_total_jobs)
+ : num_total_jobs_finished_(0), num_total_jobs_(num_total_jobs) {}
+
+void BlockUntilFinished::Finished(int num_jobs_finished) {
+ if (num_jobs_finished == 0) return;
+ std::lock_guard<std::mutex> lock(mutex_);
+ num_total_jobs_finished_ += num_jobs_finished;
+ CHECK_LE(num_total_jobs_finished_, num_total_jobs_);
+ if (num_total_jobs_finished_ == num_total_jobs_) {
+ condition_.notify_one();
+ }
+}
+
+void BlockUntilFinished::Block() {
+ std::unique_lock<std::mutex> lock(mutex_);
+ condition_.wait(
+ lock, [this]() { return num_total_jobs_finished_ == num_total_jobs_; });
+}
+
+ParallelInvokeState::ParallelInvokeState(int start,
+ int end,
+ int num_work_blocks)
+ : start(start),
+ end(end),
+ num_work_blocks(num_work_blocks),
+ base_block_size((end - start) / num_work_blocks),
+ num_base_p1_sized_blocks((end - start) % num_work_blocks),
+ block_id(0),
+ thread_id(0),
+ block_until_finished(num_work_blocks) {}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/parallel_invoke.h b/internal/ceres/parallel_invoke.h
new file mode 100644
index 0000000..398f8f2
--- /dev/null
+++ b/internal/ceres/parallel_invoke.h
@@ -0,0 +1,272 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: vitus@google.com (Michael Vitus),
+// dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#ifndef CERES_INTERNAL_PARALLEL_INVOKE_H_
+#define CERES_INTERNAL_PARALLEL_INVOKE_H_
+
+#include <atomic>
+#include <condition_variable>
+#include <memory>
+#include <mutex>
+#include <tuple>
+#include <type_traits>
+
+namespace ceres::internal {
+
+// InvokeWithThreadId handles passing thread_id to the function
+template <typename F, typename... Args>
+void InvokeWithThreadId(int thread_id, F&& function, Args&&... args) {
+ constexpr bool kPassThreadId = std::is_invocable_v<F, int, Args...>;
+
+ if constexpr (kPassThreadId) {
+ function(thread_id, std::forward<Args>(args)...);
+ } else {
+ function(std::forward<Args>(args)...);
+ }
+}
+
+// InvokeOnSegment either runs a loop over segment indices or passes it to the
+// function
+template <typename F>
+void InvokeOnSegment(int thread_id, std::tuple<int, int> range, F&& function) {
+ constexpr bool kExplicitLoop =
+ std::is_invocable_v<F, int> || std::is_invocable_v<F, int, int>;
+
+ if constexpr (kExplicitLoop) {
+ const auto [start, end] = range;
+ for (int i = start; i != end; ++i) {
+ InvokeWithThreadId(thread_id, std::forward<F>(function), i);
+ }
+ } else {
+ InvokeWithThreadId(thread_id, std::forward<F>(function), range);
+ }
+}
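+
+// For illustration (not part of the original header), the helpers above accept
+// all of the following callable forms:
+//   [](int i) { ... }                               // once per index
+//   [](int thread_id, int i) { ... }                // once per index, with id
+//   [](std::tuple<int, int> range) { ... }          // once per block
+//   [](int thread_id, std::tuple<int, int> range) { ... }  // per block, with id
+// InvokeOnSegment selects between the per-index and per-block forms via
+// std::is_invocable_v, and InvokeWithThreadId decides whether to pass the
+// thread id.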
+
+// This class creates a thread safe barrier which will block until a
+// pre-specified number of threads call Finished. This allows us to block the
+// main thread until all the parallel threads are finished processing all the
+// work.
+class BlockUntilFinished {
+ public:
+ explicit BlockUntilFinished(int num_total_jobs);
+
+ // Increment the number of jobs that have been processed by the number of
+ // jobs processed by caller and signal the blocking thread if all jobs
+ // have finished.
+ void Finished(int num_jobs_finished);
+
+ // Block until receiving confirmation of all jobs being finished.
+ void Block();
+
+ private:
+ std::mutex mutex_;
+ std::condition_variable condition_;
+ int num_total_jobs_finished_;
+ const int num_total_jobs_;
+};
+
+// Shared state between the parallel tasks. Each thread will use this
+// information to get the next block of work to be performed.
+struct ParallelInvokeState {
+ // The entire range [start, end) is split into num_work_blocks contiguous
+ // disjoint intervals (blocks), which are as equal as possible given
+ // total index count and requested number of blocks.
+ //
+ // Those num_work_blocks blocks are then processed in parallel.
+ //
+ // Total number of integer indices in interval [start, end) is
+ // end - start, and when splitting them into num_work_blocks blocks
+ // we can either
+ // - Split into equal blocks when (end - start) is divisible by
+ // num_work_blocks
+ // - Split into blocks with size difference at most 1:
+ //   - Size of the smallest block(s) is (end - start) / num_work_blocks
+ //   - (end - start) % num_work_blocks of the blocks are 1 index larger
+ //
+ // Note that this splitting minimizes the maximal difference between
+ // block sizes, since splitting into equal blocks is possible if and
+ // only if the number of indices is divisible by the number of blocks.
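+ //
+ // Illustrative example (not part of the original comment): with start = 0,
+ // end = 10 and num_work_blocks = 4, base_block_size = 2 and
+ // num_base_p1_sized_blocks = 2, so the blocks are
+ // [0, 3), [3, 6), [6, 8), [8, 10).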
+ ParallelInvokeState(int start, int end, int num_work_blocks);
+
+ // The start and end index of the for loop.
+ const int start;
+ const int end;
+ // The number of blocks that need to be processed.
+ const int num_work_blocks;
+ // Size of the smallest block
+ const int base_block_size;
+ // Number of blocks of size base_block_size + 1
+ const int num_base_p1_sized_blocks;
+
+ // The next block of work to be assigned to a worker. The parallel for loop
+ // range is split into num_work_blocks blocks of work, with a single block of
+ // work being of size
+ // - base_block_size + 1 for the first num_base_p1_sized_blocks blocks
+ // - base_block_size for the rest of the blocks
+ // Blocks of indices are contiguous and disjoint.
+ std::atomic<int> block_id;
+
+ // Provides a unique thread ID among all active threads.
+ // We do not schedule more than num_threads tasks via the thread pool, and
+ // the calling thread might claim one ID as well.
+ std::atomic<int> thread_id;
+
+ // Used to signal when all the work has been completed. Thread safe.
+ BlockUntilFinished block_until_finished;
+};
+
+// This implementation uses a worker pool with a fixed maximal size and a shared task
+// queue. The problem of executing the function for the interval of [start, end)
+// is broken up into at most num_threads * kWorkBlocksPerThread blocks (each of
+// size at least min_block_size) and added to the thread pool. To avoid
+// deadlocks, the calling thread is allowed to steal work from the worker pool.
+// This is implemented via a shared state between the tasks. In order for
+// the calling thread or thread pool to get a block of work, it will query the
+// shared state for the next block of work to be done. If there is nothing left,
+// it will return. We will exit the ParallelFor call when all of the work has
+// been done, not when all of the tasks have been popped off the task queue.
+//
+// A unique thread ID among all active tasks will be acquired once for each
+// block of work. This avoids the significant performance penalty for acquiring
+// it on every iteration of the for loop. The thread ID is guaranteed to be in
+// [0, num_threads).
+//
+// A performance analysis has shown this implementation is on par with OpenMP
+// and TBB.
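+//
+// An illustrative call (assumed usage; within Ceres this is normally reached
+// through ParallelFor rather than called directly):
+//   ParallelInvoke(&context, 0, n, /*num_threads=*/4,
+//                  [](std::tuple<int, int> range) { /* process block */ },
+//                  /*min_block_size=*/1);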
+template <typename F>
+void ParallelInvoke(ContextImpl* context,
+ int start,
+ int end,
+ int num_threads,
+ F&& function,
+ int min_block_size) {
+ CHECK(context != nullptr);
+
+ // Maximal number of work blocks scheduled per thread:
+ // - a lower number of work blocks results in larger runtimes when tasks are
+ //   unequal
+ // - a higher number of work blocks results in larger synchronization overhead
+ constexpr int kWorkBlocksPerThread = 4;
+
+ // Interval [start, end) is being split into
+ // num_threads * kWorkBlocksPerThread contiguous disjoint blocks.
+ //
+ // In order to avoid creating empty blocks of work, we need to limit
+ // number of work blocks by a total number of indices.
+ const int num_work_blocks = std::min((end - start) / min_block_size,
+ num_threads * kWorkBlocksPerThread);
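+ // For instance (illustrative numbers only): with end - start = 1000,
+ // min_block_size = 100, num_threads = 4 and kWorkBlocksPerThread = 4,
+ // num_work_blocks = std::min(10, 16) = 10.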
+
+ // We use a std::shared_ptr because the main thread can finish all
+ // the work before the tasks have been popped off the queue. So the
+ // shared state needs to exist for the duration of all the tasks.
+ auto shared_state =
+ std::make_shared<ParallelInvokeState>(start, end, num_work_blocks);
+
+ // A function which tries to schedule another task in the thread pool and
+ // then performs several chunks of work. The function expects itself as the
+ // argument in order to schedule the next task in the thread pool.
+ auto task = [context, shared_state, num_threads, &function](auto& task_copy) {
+ int num_jobs_finished = 0;
+ const int thread_id = shared_state->thread_id.fetch_add(1);
+ // In order to avoid deadlocks in nested parallel for loops, task() will be
+ // invoked num_threads + 1 times:
+ // - num_threads times via enqueueing the task into the thread pool
+ // - one more time in the main thread
+ // Tasks enqueued into the thread pool might take some time before execution,
+ // and the last task to start executing returns early here in order to avoid
+ // having more than num_threads active threads.
+ if (thread_id >= num_threads) return;
+ const int num_work_blocks = shared_state->num_work_blocks;
+ if (thread_id + 1 < num_threads &&
+ shared_state->block_id < num_work_blocks) {
+ // Add another thread to the thread pool.
+ // Note that the task is captured by value, so the shared_state shared
+ // pointer (captured by value when the task lambda was declared) is copied
+ // and its reference count is increased. This prevents it from being
+ // deleted when the main thread finishes all the work and exits before the
+ // worker threads finish.
+ context->thread_pool.AddTask([task_copy]() { task_copy(task_copy); });
+ }
+
+ const int start = shared_state->start;
+ const int base_block_size = shared_state->base_block_size;
+ const int num_base_p1_sized_blocks = shared_state->num_base_p1_sized_blocks;
+
+ while (true) {
+ // Get the next available chunk of work to be performed. If there is no
+ // work, return.
+ int block_id = shared_state->block_id.fetch_add(1);
+ if (block_id >= num_work_blocks) {
+ break;
+ }
+ ++num_jobs_finished;
+
+ // For-loop interval [start, end) was split into num_work_blocks,
+ // with num_base_p1_sized_blocks of size base_block_size + 1 and remaining
+ // num_work_blocks - num_base_p1_sized_blocks of size base_block_size
+ //
+ // Then, the start index of block #block_id is given by the total
+ // length of the preceding blocks:
+ // * Total length of preceding blocks of size base_block_size + 1:
+ //   min(block_id, num_base_p1_sized_blocks) * (base_block_size + 1)
+ //
+ // * Total length of preceding blocks of size base_block_size:
+ //   (block_id - min(block_id, num_base_p1_sized_blocks)) *
+ //     base_block_size
+ //
+ // Simplifying the sum of those quantities yields the following
+ // expression for the start index of block #block_id
+ const int curr_start = start + block_id * base_block_size +
+ std::min(block_id, num_base_p1_sized_blocks);
+ // First num_base_p1_sized_blocks have size base_block_size + 1
+ //
+ // Note that it is guaranteed that all blocks are within
+ // [start, end) interval
+ const int curr_end = curr_start + base_block_size +
+ (block_id < num_base_p1_sized_blocks ? 1 : 0);
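+ // Continuing the illustrative example above (start = 0, end = 10,
+ // num_work_blocks = 4): block_id = 2 gives
+ // curr_start = 0 + 2 * 2 + min(2, 2) = 6 and curr_end = 6 + 2 = 8,
+ // i.e. the third block [6, 8).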
+ // Perform each task in current block
+ const auto range = std::make_tuple(curr_start, curr_end);
+ InvokeOnSegment(thread_id, range, function);
+ }
+ shared_state->block_until_finished.Finished(num_jobs_finished);
+ };
+
+ // Start scheduling threads and doing work. We might end up with fewer threads
+ // scheduled than expected if the scheduling overhead is larger than the amount
+ // of work to be done.
+ task(task);
+
+ // Wait until all tasks have finished.
+ shared_state->block_until_finished.Block();
+}
+
+} // namespace ceres::internal
+
+#endif  // CERES_INTERNAL_PARALLEL_INVOKE_H_
diff --git a/internal/ceres/parallel_utils.cc b/internal/ceres/parallel_utils.cc
index e1cb5f9..2e6ee13 100644
--- a/internal/ceres/parallel_utils.cc
+++ b/internal/ceres/parallel_utils.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,8 +30,7 @@
#include "ceres/parallel_utils.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
void LinearIndexToUpperTriangularIndex(int k, int n, int* i, int* j) {
// This works by unfolding a rectangle into a triangle.
@@ -86,5 +85,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/parallel_utils.h b/internal/ceres/parallel_utils.h
index 89d2110..2a7925f 100644
--- a/internal/ceres/parallel_utils.h
+++ b/internal/ceres/parallel_utils.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,10 +31,9 @@
#ifndef CERES_INTERNAL_PARALLEL_UTILS_H_
#define CERES_INTERNAL_PARALLEL_UTILS_H_
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Converts a linear iteration order into a triangular iteration order.
// Suppose you have nested loops that look like
@@ -61,12 +60,11 @@
// });
// which in each iteration will produce i and j satisfying
// 0 <= i <= j < n
-CERES_EXPORT_INTERNAL void LinearIndexToUpperTriangularIndex(int k,
- int n,
- int* i,
- int* j);
+CERES_NO_EXPORT void LinearIndexToUpperTriangularIndex(int k,
+ int n,
+ int* i,
+ int* j);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_PARALLEL_UTILS_H_
diff --git a/internal/ceres/parallel_utils_test.cc b/internal/ceres/parallel_utils_test.cc
index 53870bb..bea6f0d 100644
--- a/internal/ceres/parallel_utils_test.cc
+++ b/internal/ceres/parallel_utils_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,18 +28,13 @@
//
// Author: wjr@google.com (William Rucklidge)
-// This include must come before any #ifndef check on Ceres compile options.
-// clang-format off
-#include "ceres/internal/port.h"
-// clang-format on
-
#include "ceres/parallel_utils.h"
+#include "ceres/internal/config.h"
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Tests that unfolding linear iterations to triangular iterations produces
// indices that are in-range and unique.
@@ -60,5 +55,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/parallel_vector_operations_benchmark.cc b/internal/ceres/parallel_vector_operations_benchmark.cc
new file mode 100644
index 0000000..8b55def
--- /dev/null
+++ b/internal/ceres/parallel_vector_operations_benchmark.cc
@@ -0,0 +1,326 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+
+#include <algorithm>
+
+#include "benchmark/benchmark.h"
+#include "ceres/eigen_vector_ops.h"
+#include "ceres/parallel_for.h"
+
+namespace ceres::internal {
+// Older versions of the benchmark library (for example, the one shipped with
+// Ubuntu 20.04) do not support range generation and range products.
+#define VECTOR_SIZES(num_threads) \
+ Args({1 << 7, num_threads}) \
+ ->Args({1 << 8, num_threads}) \
+ ->Args({1 << 9, num_threads}) \
+ ->Args({1 << 10, num_threads}) \
+ ->Args({1 << 11, num_threads}) \
+ ->Args({1 << 12, num_threads}) \
+ ->Args({1 << 13, num_threads}) \
+ ->Args({1 << 14, num_threads}) \
+ ->Args({1 << 15, num_threads}) \
+ ->Args({1 << 16, num_threads}) \
+ ->Args({1 << 17, num_threads}) \
+ ->Args({1 << 18, num_threads}) \
+ ->Args({1 << 19, num_threads}) \
+ ->Args({1 << 20, num_threads}) \
+ ->Args({1 << 21, num_threads}) \
+ ->Args({1 << 22, num_threads}) \
+ ->Args({1 << 23, num_threads})
+
+#define VECTOR_SIZE_THREADS \
+ VECTOR_SIZES(1) \
+ ->VECTOR_SIZES(2) \
+ ->VECTOR_SIZES(4) \
+ ->VECTOR_SIZES(8) \
+ ->VECTOR_SIZES(16)
+
+static void SetZero(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ Vector x = Vector::Random(kVectorSize);
+ for (auto _ : state) {
+ x.setZero();
+ }
+ CHECK_EQ(x.squaredNorm(), 0.);
+}
+BENCHMARK(SetZero)->VECTOR_SIZES(1);
+
+static void SetZeroParallel(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const int num_threads = static_cast<int>(state.range(1));
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ Vector x = Vector::Random(kVectorSize);
+ for (auto _ : state) {
+ ParallelSetZero(&context, num_threads, x);
+ }
+ CHECK_EQ(x.squaredNorm(), 0.);
+}
+BENCHMARK(SetZeroParallel)->VECTOR_SIZE_THREADS;
+
+static void Negate(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ Vector x = Vector::Random(kVectorSize).normalized();
+ const Vector x_init = x;
+
+ for (auto _ : state) {
+ x = -x;
+ }
+ CHECK((x - x_init).squaredNorm() == 0. || (x + x_init).squaredNorm() == 0);
+}
+BENCHMARK(Negate)->VECTOR_SIZES(1);
+
+static void NegateParallel(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const int num_threads = static_cast<int>(state.range(1));
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ Vector x = Vector::Random(kVectorSize).normalized();
+ const Vector x_init = x;
+
+ for (auto _ : state) {
+ ParallelAssign(&context, num_threads, x, -x);
+ }
+ CHECK((x - x_init).squaredNorm() == 0. || (x + x_init).squaredNorm() == 0);
+}
+BENCHMARK(NegateParallel)->VECTOR_SIZE_THREADS;
+
+static void Assign(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ Vector x = Vector::Random(kVectorSize);
+ Vector y = Vector(kVectorSize);
+ for (auto _ : state) {
+ y.block(0, 0, kVectorSize, 1) = x.block(0, 0, kVectorSize, 1);
+ }
+ CHECK_EQ((y - x).squaredNorm(), 0.);
+}
+BENCHMARK(Assign)->VECTOR_SIZES(1);
+
+static void AssignParallel(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const int num_threads = static_cast<int>(state.range(1));
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ Vector x = Vector::Random(kVectorSize);
+ Vector y = Vector(kVectorSize);
+
+ for (auto _ : state) {
+ ParallelAssign(&context, num_threads, y, x);
+ }
+ CHECK_EQ((y - x).squaredNorm(), 0.);
+}
+BENCHMARK(AssignParallel)->VECTOR_SIZE_THREADS;
+
+static void D2X(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const Vector x = Vector::Random(kVectorSize);
+ const Vector D = Vector::Random(kVectorSize);
+ Vector y = Vector::Zero(kVectorSize);
+ for (auto _ : state) {
+ y = D.array().square() * x.array();
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+BENCHMARK(D2X)->VECTOR_SIZES(1);
+
+static void D2XParallel(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const int num_threads = static_cast<int>(state.range(1));
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ const Vector x = Vector::Random(kVectorSize);
+ const Vector D = Vector::Random(kVectorSize);
+ Vector y = Vector(kVectorSize);
+
+ for (auto _ : state) {
+ ParallelAssign(&context, num_threads, y, D.array().square() * x.array());
+ }
+ CHECK_GT(y.squaredNorm(), 0.);
+}
+BENCHMARK(D2XParallel)->VECTOR_SIZE_THREADS;
+
+static void DivideSqrt(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ Vector diagonal = Vector::Random(kVectorSize).array().abs();
+ const double radius = 0.5;
+ for (auto _ : state) {
+ diagonal = (diagonal / radius).array().sqrt();
+ }
+ CHECK_GT(diagonal.squaredNorm(), 0.);
+}
+BENCHMARK(DivideSqrt)->VECTOR_SIZES(1);
+
+static void DivideSqrtParallel(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const int num_threads = static_cast<int>(state.range(1));
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ Vector diagonal = Vector::Random(kVectorSize).array().abs();
+ const double radius = 0.5;
+ for (auto _ : state) {
+ ParallelAssign(
+ &context, num_threads, diagonal, (diagonal / radius).cwiseSqrt());
+ }
+ CHECK_GT(diagonal.squaredNorm(), 0.);
+}
+BENCHMARK(DivideSqrtParallel)->VECTOR_SIZE_THREADS;
+
+static void Clamp(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ Vector diagonal = Vector::Random(kVectorSize);
+ const double min = -0.5;
+ const double max = 0.5;
+ for (auto _ : state) {
+ for (int i = 0; i < kVectorSize; ++i) {
+ diagonal[i] = std::min(std::max(diagonal[i], min), max);
+ }
+ }
+ CHECK_LE(diagonal.maxCoeff(), 0.5);
+ CHECK_GE(diagonal.minCoeff(), -0.5);
+}
+BENCHMARK(Clamp)->VECTOR_SIZES(1);
+
+static void ClampParallel(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const int num_threads = static_cast<int>(state.range(1));
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ Vector diagonal = Vector::Random(kVectorSize);
+ const double min = -0.5;
+ const double max = 0.5;
+ for (auto _ : state) {
+ ParallelAssign(
+ &context, num_threads, diagonal, diagonal.array().max(min).min(max));
+ }
+ CHECK_LE(diagonal.maxCoeff(), 0.5);
+ CHECK_GE(diagonal.minCoeff(), -0.5);
+}
+BENCHMARK(ClampParallel)->VECTOR_SIZE_THREADS;
+
+static void Norm(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const Vector x = Vector::Random(kVectorSize);
+
+ double total = 0.;
+ for (auto _ : state) {
+ total += x.norm();
+ }
+ CHECK_GT(total, 0.);
+}
+BENCHMARK(Norm)->VECTOR_SIZES(1);
+
+static void NormParallel(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const int num_threads = static_cast<int>(state.range(1));
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ const Vector x = Vector::Random(kVectorSize);
+
+ double total = 0.;
+ for (auto _ : state) {
+ total += Norm(x, &context, num_threads);
+ }
+ CHECK_GT(total, 0.);
+}
+BENCHMARK(NormParallel)->VECTOR_SIZE_THREADS;
+
+static void Dot(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const Vector x = Vector::Random(kVectorSize);
+ const Vector y = Vector::Random(kVectorSize);
+
+ double total = 0.;
+ for (auto _ : state) {
+ total += x.dot(y);
+ }
+ CHECK_NE(total, 0.);
+}
+BENCHMARK(Dot)->VECTOR_SIZES(1);
+
+static void DotParallel(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const int num_threads = static_cast<int>(state.range(1));
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ const Vector x = Vector::Random(kVectorSize);
+ const Vector y = Vector::Random(kVectorSize);
+
+ double total = 0.;
+ for (auto _ : state) {
+ total += Dot(x, y, &context, num_threads);
+ }
+ CHECK_NE(total, 0.);
+}
+BENCHMARK(DotParallel)->VECTOR_SIZE_THREADS;
+
+static void Axpby(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const Vector x = Vector::Random(kVectorSize);
+ const Vector y = Vector::Random(kVectorSize);
+ Vector z = Vector::Zero(kVectorSize);
+ const double a = 3.1415;
+ const double b = 1.2345;
+
+ for (auto _ : state) {
+ z = a * x + b * y;
+ }
+ CHECK_GT(z.squaredNorm(), 0.);
+}
+BENCHMARK(Axpby)->VECTOR_SIZES(1);
+
+static void AxpbyParallel(benchmark::State& state) {
+ const int kVectorSize = static_cast<int>(state.range(0));
+ const int num_threads = static_cast<int>(state.range(1));
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ const Vector x = Vector::Random(kVectorSize);
+ const Vector y = Vector::Random(kVectorSize);
+ Vector z = Vector::Zero(kVectorSize);
+ const double a = 3.1415;
+ const double b = 1.2345;
+
+ for (auto _ : state) {
+ Axpby(a, x, b, y, z, &context, num_threads);
+ }
+ CHECK_GT(z.squaredNorm(), 0.);
+}
+BENCHMARK(AxpbyParallel)->VECTOR_SIZE_THREADS;
+
+} // namespace ceres::internal
+
+BENCHMARK_MAIN();
diff --git a/internal/ceres/float_cxsparse.cc b/internal/ceres/parallel_vector_ops.cc
similarity index 70%
copy from internal/ceres/float_cxsparse.cc
copy to internal/ceres/parallel_vector_ops.cc
index 6c68830..9ebce29 100644
--- a/internal/ceres/float_cxsparse.cc
+++ b/internal/ceres/parallel_vector_ops.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -25,23 +25,30 @@
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: sameeragarwal@google.com (Sameer Agarwal)
-#include "ceres/float_cxsparse.h"
+#include "ceres/parallel_vector_ops.h"
-#if !defined(CERES_NO_CXSPARSE)
+#include <algorithm>
+#include <tuple>
-namespace ceres {
-namespace internal {
+#include "ceres/context_impl.h"
+#include "ceres/parallel_for.h"
-std::unique_ptr<SparseCholesky> FloatCXSparseCholesky::Create(
- OrderingType ordering_type) {
- LOG(FATAL) << "FloatCXSparseCholesky is not available.";
- return std::unique_ptr<SparseCholesky>();
+namespace ceres::internal {
+void ParallelSetZero(ContextImpl* context,
+ int num_threads,
+ double* values,
+ int num_values) {
+ ParallelFor(
+ context,
+ 0,
+ num_values,
+ num_threads,
+ [values](std::tuple<int, int> range) {
+ auto [start, end] = range;
+ std::fill(values + start, values + end, 0.);
+ },
+ kMinBlockSizeParallelVectorOps);
}
-} // namespace internal
-} // namespace ceres
-
-#endif // !defined(CERES_NO_CXSPARSE)
+} // namespace ceres::internal
diff --git a/internal/ceres/parallel_vector_ops.h b/internal/ceres/parallel_vector_ops.h
new file mode 100644
index 0000000..812950a
--- /dev/null
+++ b/internal/ceres/parallel_vector_ops.h
@@ -0,0 +1,90 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: vitus@google.com (Michael Vitus),
+// dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#ifndef CERES_INTERNAL_PARALLEL_VECTOR_OPS_H_
+#define CERES_INTERNAL_PARALLEL_VECTOR_OPS_H_
+
+#include <mutex>
+#include <vector>
+
+#include "ceres/context_impl.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
+#include "ceres/parallel_for.h"
+
+namespace ceres::internal {
+
+// Lower bound on block size for parallel vector operations.
+// Operations on vectors with fewer than kMinBlockSizeParallelVectorOps
+// elements will be executed in a single thread.
+constexpr int kMinBlockSizeParallelVectorOps = 1 << 16;
+
+// Evaluate a vector expression in parallel.
+// Assuming LhsExpression and RhsExpression are some sort of column-vector
+// expression, the assignment lhs = rhs is evaluated over a set of contiguous
+// blocks in parallel. This is expected to work well for vector-based
+// expressions (since they typically do not result in temporaries). This
+// method expects lhs to be size-compatible with rhs.
+template <typename LhsExpression, typename RhsExpression>
+void ParallelAssign(ContextImpl* context,
+ int num_threads,
+ LhsExpression& lhs,
+ const RhsExpression& rhs) {
+ static_assert(LhsExpression::ColsAtCompileTime == 1);
+ static_assert(RhsExpression::ColsAtCompileTime == 1);
+ CHECK_EQ(lhs.rows(), rhs.rows());
+ const int num_rows = lhs.rows();
+ ParallelFor(
+ context,
+ 0,
+ num_rows,
+ num_threads,
+ [&lhs, &rhs](const std::tuple<int, int>& range) {
+ auto [start, end] = range;
+ lhs.segment(start, end - start) = rhs.segment(start, end - start);
+ },
+ kMinBlockSizeParallelVectorOps);
+}
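+
+// Example usage (illustrative only; assumes a ContextImpl named context and a
+// vector length n):
+//   Vector x = Vector::Random(n);
+//   Vector y(n);
+//   ParallelAssign(&context, /*num_threads=*/4, y, 2.0 * x);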
+
+// Set the vector to zero using num_threads threads.
+template <typename VectorType>
+void ParallelSetZero(ContextImpl* context,
+ int num_threads,
+ VectorType& vector) {
+ ParallelSetZero(context, num_threads, vector.data(), vector.rows());
+}
+void ParallelSetZero(ContextImpl* context,
+ int num_threads,
+ double* values,
+ int num_values);
+
+} // namespace ceres::internal
+
+#endif  // CERES_INTERNAL_PARALLEL_VECTOR_OPS_H_
diff --git a/internal/ceres/parameter_block.h b/internal/ceres/parameter_block.h
index 88943df..925d1c4 100644
--- a/internal/ceres/parameter_block.h
+++ b/internal/ceres/parameter_block.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,14 +40,14 @@
#include <unordered_set>
#include "ceres/array_utils.h"
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/internal/export.h"
+#include "ceres/manifold.h"
#include "ceres/stringprintf.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class ProblemImpl;
class ResidualBlock;
@@ -58,12 +58,12 @@
// methods are performance sensitive.
//
// The class is not thread-safe, unless only const methods are called. The
-// parameter block may also hold a pointer to a local parameterization; the
-// parameter block does not take ownership of this pointer, so the user is
-// responsible for the proper disposal of the local parameterization.
-class ParameterBlock {
+// parameter block may also hold a pointer to a manifold; the parameter block
+// does not take ownership of this pointer, so the user is responsible for the
+// proper disposal of the manifold.
+class CERES_NO_EXPORT ParameterBlock {
public:
- typedef std::unordered_set<ResidualBlock*> ResidualBlockSet;
+ using ResidualBlockSet = std::unordered_set<ResidualBlock*>;
// Create a parameter block with the user state, size, and index specified.
// The size is the size of the parameter block and the index is the position
@@ -74,16 +74,13 @@
state_(user_state),
index_(index) {}
- ParameterBlock(double* user_state,
- int size,
- int index,
- LocalParameterization* local_parameterization)
+ ParameterBlock(double* user_state, int size, int index, Manifold* manifold)
: user_state_(user_state),
size_(size),
state_(user_state),
index_(index) {
- if (local_parameterization != nullptr) {
- SetParameterization(local_parameterization);
+ if (manifold != nullptr) {
+ SetManifold(manifold);
}
}
@@ -98,7 +95,7 @@
<< "with user location " << user_state_;
state_ = x;
- return UpdateLocalParameterizationJacobian();
+ return UpdatePlusJacobian();
}
// Copy the current parameter state out to x. This is "GetState()" rather than
@@ -114,17 +111,13 @@
const double* state() const { return state_; }
const double* user_state() const { return user_state_; }
double* mutable_user_state() { return user_state_; }
- const LocalParameterization* local_parameterization() const {
- return local_parameterization_;
- }
- LocalParameterization* mutable_local_parameterization() {
- return local_parameterization_;
- }
+ const Manifold* manifold() const { return manifold_; }
+ Manifold* mutable_manifold() { return manifold_; }
// Set this parameter block to vary or not.
void SetConstant() { is_set_constant_ = true; }
void SetVarying() { is_set_constant_ = false; }
- bool IsConstant() const { return (is_set_constant_ || LocalSize() == 0); }
+ bool IsConstant() const { return (is_set_constant_ || TangentSize() == 0); }
double UpperBound(int index) const {
return (upper_bounds_ ? upper_bounds_[index]
@@ -151,51 +144,46 @@
int delta_offset() const { return delta_offset_; }
void set_delta_offset(int delta_offset) { delta_offset_ = delta_offset; }
- // Methods relating to the parameter block's parameterization.
+ // Methods relating to the parameter block's manifold.
- // The local to global jacobian. Returns nullptr if there is no local
- // parameterization for this parameter block. The returned matrix is row-major
- // and has Size() rows and LocalSize() columns.
- const double* LocalParameterizationJacobian() const {
- return local_parameterization_jacobian_.get();
+ // The local to global jacobian. Returns nullptr if there is no manifold for
+ // this parameter block. The returned matrix is row-major and has Size() rows
+ // and TangentSize() columns.
+ const double* PlusJacobian() const { return plus_jacobian_.get(); }
+
+ int TangentSize() const {
+ return (manifold_ == nullptr) ? size_ : manifold_->TangentSize();
}
- int LocalSize() const {
- return (local_parameterization_ == nullptr)
- ? size_
- : local_parameterization_->LocalSize();
- }
-
- // Set the parameterization. The parameter block does not take
- // ownership of the parameterization.
- void SetParameterization(LocalParameterization* new_parameterization) {
- // Nothing to do if the new parameterization is the same as the
- // old parameterization.
- if (new_parameterization == local_parameterization_) {
+ // Set the manifold. The parameter block does not take ownership of
+ // the manifold.
+ void SetManifold(Manifold* new_manifold) {
+ // Nothing to do if the new manifold is the same as the old
+ // manifold.
+ if (new_manifold == manifold_) {
return;
}
- if (new_parameterization == nullptr) {
- local_parameterization_ = nullptr;
+ if (new_manifold == nullptr) {
+ manifold_ = nullptr;
+ plus_jacobian_ = nullptr;
return;
}
- CHECK(new_parameterization->GlobalSize() == size_)
- << "Invalid parameterization for parameter block. The parameter block "
- << "has size " << size_ << " while the parameterization has a global "
- << "size of " << new_parameterization->GlobalSize() << ". Did you "
- << "accidentally use the wrong parameter block or parameterization?";
+ CHECK_EQ(new_manifold->AmbientSize(), size_)
+ << "The parameter block has size = " << size_
+ << " while the manifold has ambient size = "
+ << new_manifold->AmbientSize();
- CHECK_GE(new_parameterization->LocalSize(), 0)
- << "Invalid parameterization. Parameterizations must have a "
+ CHECK_GE(new_manifold->TangentSize(), 0)
+ << "Invalid Manifold. Manifolds must have a "
<< "non-negative dimensional tangent space.";
- local_parameterization_ = new_parameterization;
- local_parameterization_jacobian_.reset(
- new double[local_parameterization_->GlobalSize() *
- local_parameterization_->LocalSize()]);
- CHECK(UpdateLocalParameterizationJacobian())
- << "Local parameterization Jacobian computation failed for x: "
+ manifold_ = new_manifold;
+ plus_jacobian_ = std::make_unique<double[]>(manifold_->AmbientSize() *
+ manifold_->TangentSize());
+ CHECK(UpdatePlusJacobian())
+ << "Manifold::PlusJacobian computation failed for x: "
<< ConstVectorRef(state_, Size()).transpose();
}
@@ -207,7 +195,7 @@
}
if (!upper_bounds_) {
- upper_bounds_.reset(new double[size_]);
+ upper_bounds_ = std::make_unique<double[]>(size_);
std::fill(upper_bounds_.get(),
upper_bounds_.get() + size_,
std::numeric_limits<double>::max());
@@ -224,7 +212,7 @@
}
if (!lower_bounds_) {
- lower_bounds_.reset(new double[size_]);
+ lower_bounds_ = std::make_unique<double[]>(size_);
std::fill(lower_bounds_.get(),
lower_bounds_.get() + size_,
-std::numeric_limits<double>::max());
@@ -234,11 +222,11 @@
}
// Generalization of the addition operation. This is the same as
- // LocalParameterization::Plus() followed by projection onto the
+ // Manifold::Plus() followed by projection onto the
// hyper cube implied by the bounds constraints.
bool Plus(const double* x, const double* delta, double* x_plus_delta) {
- if (local_parameterization_ != nullptr) {
- if (!local_parameterization_->Plus(x, delta, x_plus_delta)) {
+ if (manifold_ != nullptr) {
+ if (!manifold_->Plus(x, delta, x_plus_delta)) {
return false;
}
} else {
@@ -281,7 +269,7 @@
CHECK(residual_blocks_.get() == nullptr)
<< "Ceres bug: There is already a residual block collection "
<< "for parameter block: " << ToString();
- residual_blocks_.reset(new ResidualBlockSet);
+ residual_blocks_ = std::make_unique<ResidualBlockSet>();
}
void AddResidualBlock(ResidualBlock* residual_block) {
@@ -321,33 +309,30 @@
}
private:
- bool UpdateLocalParameterizationJacobian() {
- if (local_parameterization_ == nullptr) {
+ bool UpdatePlusJacobian() {
+ if (manifold_ == nullptr) {
return true;
}
- // Update the local to global Jacobian. In some cases this is
+ // Update the Plus Jacobian. In some cases this is
// wasted effort; if this is a bottleneck, we will find a solution
// at that time.
-
- const int jacobian_size = Size() * LocalSize();
- InvalidateArray(jacobian_size, local_parameterization_jacobian_.get());
- if (!local_parameterization_->ComputeJacobian(
- state_, local_parameterization_jacobian_.get())) {
- LOG(WARNING) << "Local parameterization Jacobian computation failed"
+ const int jacobian_size = Size() * TangentSize();
+ InvalidateArray(jacobian_size, plus_jacobian_.get());
+ if (!manifold_->PlusJacobian(state_, plus_jacobian_.get())) {
+ LOG(WARNING) << "Manifold::PlusJacobian computation failed"
"for x: "
<< ConstVectorRef(state_, Size()).transpose();
return false;
}
- if (!IsArrayValid(jacobian_size, local_parameterization_jacobian_.get())) {
- LOG(WARNING) << "Local parameterization Jacobian computation returned"
+ if (!IsArrayValid(jacobian_size, plus_jacobian_.get())) {
+ LOG(WARNING) << "Manifold::PlusJacobian computation returned "
<< "an invalid matrix for x: "
<< ConstVectorRef(state_, Size()).transpose()
<< "\n Jacobian matrix : "
- << ConstMatrixRef(local_parameterization_jacobian_.get(),
- Size(),
- LocalSize());
+ << ConstMatrixRef(
+ plus_jacobian_.get(), Size(), TangentSize());
return false;
}
return true;
@@ -356,14 +341,14 @@
double* user_state_ = nullptr;
int size_ = -1;
bool is_set_constant_ = false;
- LocalParameterization* local_parameterization_ = nullptr;
+ Manifold* manifold_ = nullptr;
// The "state" of the parameter. These fields are only needed while the
// solver is running. While at first glance using mutable is a bad idea, this
// ends up simplifying the internals of Ceres enough to justify the potential
// pitfalls of using "mutable."
mutable const double* state_ = nullptr;
- mutable std::unique_ptr<double[]> local_parameterization_jacobian_;
+ mutable std::unique_ptr<double[]> plus_jacobian_;
// The index of the parameter. This is used by various other parts of Ceres to
// permit switching from a ParameterBlock* to an index in another array.
@@ -392,11 +377,12 @@
std::unique_ptr<double[]> upper_bounds_;
std::unique_ptr<double[]> lower_bounds_;
- // Necessary so ProblemImpl can clean up the parameterizations.
+ // Necessary so ProblemImpl can clean up the manifolds.
friend class ProblemImpl;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_PARAMETER_BLOCK_H_
diff --git a/internal/ceres/parameter_block_ordering.cc b/internal/ceres/parameter_block_ordering.cc
index 9899c24..2b8bf6e 100644
--- a/internal/ceres/parameter_block_ordering.cc
+++ b/internal/ceres/parameter_block_ordering.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,8 +30,11 @@
#include "ceres/parameter_block_ordering.h"
+#include <map>
#include <memory>
+#include <set>
#include <unordered_set>
+#include <vector>
#include "ceres/graph.h"
#include "ceres/graph_algorithms.h"
@@ -42,26 +45,22 @@
#include "ceres/wall_time.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::map;
-using std::set;
-using std::vector;
+namespace ceres::internal {
int ComputeStableSchurOrdering(const Program& program,
- vector<ParameterBlock*>* ordering) {
+ std::vector<ParameterBlock*>* ordering) {
CHECK(ordering != nullptr);
ordering->clear();
EventLogger event_logger("ComputeStableSchurOrdering");
- std::unique_ptr<Graph<ParameterBlock*>> graph(CreateHessianGraph(program));
+ auto graph = CreateHessianGraph(program);
event_logger.AddEvent("CreateHessianGraph");
- const vector<ParameterBlock*>& parameter_blocks = program.parameter_blocks();
+ const std::vector<ParameterBlock*>& parameter_blocks =
+ program.parameter_blocks();
const std::unordered_set<ParameterBlock*>& vertices = graph->vertices();
- for (int i = 0; i < parameter_blocks.size(); ++i) {
- if (vertices.count(parameter_blocks[i]) > 0) {
- ordering->push_back(parameter_blocks[i]);
+ for (auto* parameter_block : parameter_blocks) {
+ if (vertices.count(parameter_block) > 0) {
+ ordering->push_back(parameter_block);
}
}
event_logger.AddEvent("Preordering");
@@ -70,8 +69,7 @@
event_logger.AddEvent("StableIndependentSet");
// Add the excluded blocks to back of the ordering vector.
- for (int i = 0; i < parameter_blocks.size(); ++i) {
- ParameterBlock* parameter_block = parameter_blocks[i];
+ for (auto* parameter_block : parameter_blocks) {
if (parameter_block->IsConstant()) {
ordering->push_back(parameter_block);
}
@@ -82,17 +80,17 @@
}
int ComputeSchurOrdering(const Program& program,
- vector<ParameterBlock*>* ordering) {
+ std::vector<ParameterBlock*>* ordering) {
CHECK(ordering != nullptr);
ordering->clear();
- std::unique_ptr<Graph<ParameterBlock*>> graph(CreateHessianGraph(program));
+ auto graph = CreateHessianGraph(program);
int independent_set_size = IndependentSetOrdering(*graph, ordering);
- const vector<ParameterBlock*>& parameter_blocks = program.parameter_blocks();
+ const std::vector<ParameterBlock*>& parameter_blocks =
+ program.parameter_blocks();
// Add the excluded blocks to back of the ordering vector.
- for (int i = 0; i < parameter_blocks.size(); ++i) {
- ParameterBlock* parameter_block = parameter_blocks[i];
+ for (auto* parameter_block : parameter_blocks) {
if (parameter_block->IsConstant()) {
ordering->push_back(parameter_block);
}
@@ -105,13 +103,14 @@
ParameterBlockOrdering* ordering) {
CHECK(ordering != nullptr);
ordering->Clear();
- const vector<ParameterBlock*> parameter_blocks = program.parameter_blocks();
- std::unique_ptr<Graph<ParameterBlock*>> graph(CreateHessianGraph(program));
+ const std::vector<ParameterBlock*> parameter_blocks =
+ program.parameter_blocks();
+ auto graph = CreateHessianGraph(program);
int num_covered = 0;
int round = 0;
while (num_covered < parameter_blocks.size()) {
- vector<ParameterBlock*> independent_set_ordering;
+ std::vector<ParameterBlock*> independent_set_ordering;
const int independent_set_size =
IndependentSetOrdering(*graph, &independent_set_ordering);
for (int i = 0; i < independent_set_size; ++i) {
@@ -124,20 +123,21 @@
}
}
-Graph<ParameterBlock*>* CreateHessianGraph(const Program& program) {
- Graph<ParameterBlock*>* graph = new Graph<ParameterBlock*>;
+std::unique_ptr<Graph<ParameterBlock*>> CreateHessianGraph(
+ const Program& program) {
+ auto graph = std::make_unique<Graph<ParameterBlock*>>();
CHECK(graph != nullptr);
- const vector<ParameterBlock*>& parameter_blocks = program.parameter_blocks();
- for (int i = 0; i < parameter_blocks.size(); ++i) {
- ParameterBlock* parameter_block = parameter_blocks[i];
+ const std::vector<ParameterBlock*>& parameter_blocks =
+ program.parameter_blocks();
+ for (auto* parameter_block : parameter_blocks) {
if (!parameter_block->IsConstant()) {
graph->AddVertex(parameter_block);
}
}
- const vector<ResidualBlock*>& residual_blocks = program.residual_blocks();
- for (int i = 0; i < residual_blocks.size(); ++i) {
- const ResidualBlock* residual_block = residual_blocks[i];
+ const std::vector<ResidualBlock*>& residual_blocks =
+ program.residual_blocks();
+ for (auto* residual_block : residual_blocks) {
const int num_parameter_blocks = residual_block->NumParameterBlocks();
ParameterBlock* const* parameter_blocks =
residual_block->parameter_blocks();
@@ -160,19 +160,20 @@
}
void OrderingToGroupSizes(const ParameterBlockOrdering* ordering,
- vector<int>* group_sizes) {
+ std::vector<int>* group_sizes) {
CHECK(group_sizes != nullptr);
group_sizes->clear();
- if (ordering == NULL) {
+ if (ordering == nullptr) {
return;
}
- const map<int, set<double*>>& group_to_elements =
+ // TODO(sameeragarwal): Investigate if this should be a set or an
+ // unordered_set.
+ const std::map<int, std::set<double*>>& group_to_elements =
ordering->group_to_elements();
for (const auto& g_t_e : group_to_elements) {
group_sizes->push_back(g_t_e.second.size());
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/parameter_block_ordering.h b/internal/ceres/parameter_block_ordering.h
index 82ab75d..2ec3db7 100644
--- a/internal/ceres/parameter_block_ordering.h
+++ b/internal/ceres/parameter_block_ordering.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,22 +31,23 @@
#ifndef CERES_INTERNAL_PARAMETER_BLOCK_ORDERING_H_
#define CERES_INTERNAL_PARAMETER_BLOCK_ORDERING_H_
+#include <memory>
#include <vector>
#include "ceres/graph.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/ordered_groups.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class Program;
class ParameterBlock;
// Uses an approximate independent set ordering to order the parameter
-// blocks of a problem so that it is suitable for use with Schur
-// complement based solvers. The output variable ordering contains an
+// blocks of a problem so that it is suitable for use with Schur-
+// complement-based solvers. The output variable ordering contains an
// ordering of the parameter blocks and the return value is size of
// the independent set or the number of e_blocks (see
// schur_complement_solver.h for an explanation). Constant parameters
@@ -57,20 +58,20 @@
// ordering = [independent set,
// complement of the independent set,
// fixed blocks]
-CERES_EXPORT_INTERNAL int ComputeSchurOrdering(
+CERES_NO_EXPORT int ComputeSchurOrdering(
const Program& program, std::vector<ParameterBlock*>* ordering);
// Same as above, except that ties while computing the independent set
// ordering are resolved in favour of the order in which the parameter
// blocks occur in the program.
-CERES_EXPORT_INTERNAL int ComputeStableSchurOrdering(
+CERES_NO_EXPORT int ComputeStableSchurOrdering(
const Program& program, std::vector<ParameterBlock*>* ordering);
// Use an approximate independent set ordering to decompose the
// parameter blocks of a problem in a sequence of independent
// sets. The ordering covers all the non-constant parameter blocks in
// the program.
-CERES_EXPORT_INTERNAL void ComputeRecursiveIndependentSetOrdering(
+CERES_NO_EXPORT void ComputeRecursiveIndependentSetOrdering(
const Program& program, ParameterBlockOrdering* ordering);
// Builds a graph on the parameter blocks of a Problem, whose
@@ -78,15 +79,16 @@
// vertex corresponds to a parameter block in the Problem except for
// parameter blocks that are marked constant. An edge connects two
// parameter blocks, if they co-occur in a residual block.
-CERES_EXPORT_INTERNAL Graph<ParameterBlock*>* CreateHessianGraph(
+CERES_NO_EXPORT std::unique_ptr<Graph<ParameterBlock*>> CreateHessianGraph(
const Program& program);
// Iterate over each of the groups in order of their priority and fill
// summary with their sizes.
-CERES_EXPORT_INTERNAL void OrderingToGroupSizes(
+CERES_NO_EXPORT void OrderingToGroupSizes(
const ParameterBlockOrdering* ordering, std::vector<int>* group_sizes);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_PARAMETER_BLOCK_ORDERING_H_
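
As context for the declarations above: ComputeSchurOrdering greedily builds an approximate maximum independent set over the Hessian graph and places it first in the ordering, followed by its complement. The following self-contained sketch illustrates the greedy idea on a toy adjacency list; it is not the internal Ceres implementation, only the concept.

// Conceptual sketch of a greedy (approximate) independent set ordering on a
// small undirected graph. Two parameter blocks are adjacent if they co-occur
// in a residual block.
#include <iostream>
#include <set>
#include <vector>

int main() {
  const std::vector<std::vector<int>> graph = {{1, 2}, {0}, {0, 3}, {2}};
  std::set<int> independent;
  std::vector<int> ordering;

  // Greedily add a vertex if none of its neighbours has already been chosen.
  for (int v = 0; v < static_cast<int>(graph.size()); ++v) {
    bool ok = true;
    for (int n : graph[v]) {
      if (independent.count(n)) { ok = false; break; }
    }
    if (ok) {
      independent.insert(v);
      ordering.push_back(v);
    }
  }
  // The complement of the independent set follows it in the ordering.
  for (int v = 0; v < static_cast<int>(graph.size()); ++v) {
    if (!independent.count(v)) ordering.push_back(v);
  }
  std::cout << "independent set size: " << independent.size() << "\n";
  for (int v : ordering) std::cout << v << " ";
  std::cout << "\n";
  return 0;
}
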
diff --git a/internal/ceres/parameter_block_ordering_test.cc b/internal/ceres/parameter_block_ordering_test.cc
index 1078893..459a055 100644
--- a/internal/ceres/parameter_block_ordering_test.cc
+++ b/internal/ceres/parameter_block_ordering_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -43,13 +43,9 @@
#include "ceres/stl_util.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::vector;
-
-typedef Graph<ParameterBlock*> HessianGraph;
-typedef std::unordered_set<ParameterBlock*> VertexSet;
+using VertexSet = std::unordered_set<ParameterBlock*>;
template <int M, int... Ns>
class DummyCostFunction : public SizedCostFunction<M, Ns...> {
@@ -71,12 +67,12 @@
problem_.AddParameterBlock(z_, 5);
problem_.AddParameterBlock(w_, 6);
- problem_.AddResidualBlock(new DummyCostFunction<2, 3>, NULL, x_);
- problem_.AddResidualBlock(new DummyCostFunction<6, 5, 4>, NULL, z_, y_);
- problem_.AddResidualBlock(new DummyCostFunction<3, 3, 5>, NULL, x_, z_);
- problem_.AddResidualBlock(new DummyCostFunction<7, 5, 3>, NULL, z_, x_);
+ problem_.AddResidualBlock(new DummyCostFunction<2, 3>, nullptr, x_);
+ problem_.AddResidualBlock(new DummyCostFunction<6, 5, 4>, nullptr, z_, y_);
+ problem_.AddResidualBlock(new DummyCostFunction<3, 3, 5>, nullptr, x_, z_);
+ problem_.AddResidualBlock(new DummyCostFunction<7, 5, 3>, nullptr, z_, x_);
problem_.AddResidualBlock(
- new DummyCostFunction<1, 5, 3, 6>, NULL, z_, x_, w_);
+ new DummyCostFunction<1, 5, 3, 6>, nullptr, z_, x_, w_);
}
ProblemImpl problem_;
@@ -85,8 +81,9 @@
TEST_F(SchurOrderingTest, NoFixed) {
const Program& program = problem_.program();
- const vector<ParameterBlock*>& parameter_blocks = program.parameter_blocks();
- std::unique_ptr<HessianGraph> graph(CreateHessianGraph(program));
+ const std::vector<ParameterBlock*>& parameter_blocks =
+ program.parameter_blocks();
+ auto graph = CreateHessianGraph(program);
const VertexSet& vertices = graph->vertices();
EXPECT_EQ(vertices.size(), 4);
@@ -131,7 +128,7 @@
problem_.SetParameterBlockConstant(w_);
const Program& program = problem_.program();
- std::unique_ptr<HessianGraph> graph(CreateHessianGraph(program));
+ auto graph = CreateHessianGraph(program);
EXPECT_EQ(graph->vertices().size(), 0);
}
@@ -139,8 +136,9 @@
problem_.SetParameterBlockConstant(x_);
const Program& program = problem_.program();
- const vector<ParameterBlock*>& parameter_blocks = program.parameter_blocks();
- std::unique_ptr<HessianGraph> graph(CreateHessianGraph(program));
+ const std::vector<ParameterBlock*>& parameter_blocks =
+ program.parameter_blocks();
+ auto graph = CreateHessianGraph(program);
const VertexSet& vertices = graph->vertices();
@@ -171,10 +169,9 @@
}
// The constant parameter block is at the end.
- vector<ParameterBlock*> ordering;
+ std::vector<ParameterBlock*> ordering;
ComputeSchurOrdering(program, &ordering);
EXPECT_EQ(ordering.back(), parameter_blocks[0]);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/parameter_block_test.cc b/internal/ceres/parameter_block_test.cc
index a5a4230..0bb9b40 100644
--- a/internal/ceres/parameter_block_test.cc
+++ b/internal/ceres/parameter_block_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,70 +36,69 @@
namespace ceres {
namespace internal {
-TEST(ParameterBlock, SetParameterizationDiesOnSizeMismatch) {
+TEST(ParameterBlock, SetManifoldDiesOnSizeMismatch) {
double x[3] = {1.0, 2.0, 3.0};
ParameterBlock parameter_block(x, 3, -1);
std::vector<int> indices;
indices.push_back(1);
- SubsetParameterization subset_wrong_size(4, indices);
- EXPECT_DEATH_IF_SUPPORTED(
- parameter_block.SetParameterization(&subset_wrong_size), "global");
+ SubsetManifold subset_wrong_size(4, indices);
+ EXPECT_DEATH_IF_SUPPORTED(parameter_block.SetManifold(&subset_wrong_size),
+ "ambient");
}
-TEST(ParameterBlock, SetParameterizationWithSameExistingParameterization) {
+TEST(ParameterBlock, SetManifoldWithSameExistingManifold) {
double x[3] = {1.0, 2.0, 3.0};
ParameterBlock parameter_block(x, 3, -1);
std::vector<int> indices;
indices.push_back(1);
- SubsetParameterization subset(3, indices);
- parameter_block.SetParameterization(&subset);
- parameter_block.SetParameterization(&subset);
+ SubsetManifold subset(3, indices);
+ parameter_block.SetManifold(&subset);
+ parameter_block.SetManifold(&subset);
}
-TEST(ParameterBlock, SetParameterizationAllowsResettingToNull) {
+TEST(ParameterBlock, SetManifoldAllowsResettingToNull) {
double x[3] = {1.0, 2.0, 3.0};
ParameterBlock parameter_block(x, 3, -1);
std::vector<int> indices;
indices.push_back(1);
- SubsetParameterization subset(3, indices);
- parameter_block.SetParameterization(&subset);
- EXPECT_EQ(parameter_block.local_parameterization(), &subset);
- parameter_block.SetParameterization(nullptr);
- EXPECT_EQ(parameter_block.local_parameterization(), nullptr);
+ SubsetManifold subset(3, indices);
+ parameter_block.SetManifold(&subset);
+ EXPECT_EQ(parameter_block.manifold(), &subset);
+ parameter_block.SetManifold(nullptr);
+ EXPECT_EQ(parameter_block.manifold(), nullptr);
+ EXPECT_EQ(parameter_block.PlusJacobian(), nullptr);
}
-TEST(ParameterBlock,
- SetParameterizationAllowsResettingToDifferentParameterization) {
+TEST(ParameterBlock, SetManifoldAllowsResettingToDifferentManifold) {
double x[3] = {1.0, 2.0, 3.0};
ParameterBlock parameter_block(x, 3, -1);
std::vector<int> indices;
indices.push_back(1);
- SubsetParameterization subset(3, indices);
- parameter_block.SetParameterization(&subset);
- EXPECT_EQ(parameter_block.local_parameterization(), &subset);
+ SubsetManifold subset(3, indices);
+ parameter_block.SetManifold(&subset);
+ EXPECT_EQ(parameter_block.manifold(), &subset);
- SubsetParameterization subset_different(3, indices);
- parameter_block.SetParameterization(&subset_different);
- EXPECT_EQ(parameter_block.local_parameterization(), &subset_different);
+ SubsetManifold subset_different(3, indices);
+ parameter_block.SetManifold(&subset_different);
+ EXPECT_EQ(parameter_block.manifold(), &subset_different);
}
-TEST(ParameterBlock, SetParameterizationAndNormalOperation) {
+TEST(ParameterBlock, SetManifoldAndNormalOperation) {
double x[3] = {1.0, 2.0, 3.0};
ParameterBlock parameter_block(x, 3, -1);
std::vector<int> indices;
indices.push_back(1);
- SubsetParameterization subset(3, indices);
- parameter_block.SetParameterization(&subset);
+ SubsetManifold subset(3, indices);
+ parameter_block.SetManifold(&subset);
- // Ensure the local parameterization jacobian result is correctly computed.
- ConstMatrixRef local_parameterization_jacobian(
- parameter_block.LocalParameterizationJacobian(), 3, 2);
- ASSERT_EQ(1.0, local_parameterization_jacobian(0, 0));
- ASSERT_EQ(0.0, local_parameterization_jacobian(0, 1));
- ASSERT_EQ(0.0, local_parameterization_jacobian(1, 0));
- ASSERT_EQ(0.0, local_parameterization_jacobian(1, 1));
- ASSERT_EQ(0.0, local_parameterization_jacobian(2, 0));
- ASSERT_EQ(1.0, local_parameterization_jacobian(2, 1));
+ // Ensure the manifold plus jacobian result is correctly computed.
+ ConstMatrixRef manifold_jacobian(parameter_block.PlusJacobian(), 3, 2);
+ ASSERT_EQ(1.0, manifold_jacobian(0, 0));
+ ASSERT_EQ(0.0, manifold_jacobian(0, 1));
+ ASSERT_EQ(0.0, manifold_jacobian(1, 0));
+ ASSERT_EQ(0.0, manifold_jacobian(1, 1));
+ ASSERT_EQ(0.0, manifold_jacobian(2, 0));
+ ASSERT_EQ(1.0, manifold_jacobian(2, 1));
// Check that updating works as expected.
double x_plus_delta[3];
@@ -110,37 +109,47 @@
ASSERT_EQ(3.3, x_plus_delta[2]);
}
-struct TestParameterization : public LocalParameterization {
+struct TestManifold : public Manifold {
public:
- virtual ~TestParameterization() {}
bool Plus(const double* x,
const double* delta,
double* x_plus_delta) const final {
LOG(FATAL) << "Shouldn't get called.";
return true;
}
- bool ComputeJacobian(const double* x, double* jacobian) const final {
+
+ bool PlusJacobian(const double* x, double* jacobian) const final {
jacobian[0] = *x * 2;
return true;
}
- int GlobalSize() const final { return 1; }
- int LocalSize() const final { return 1; }
+ bool Minus(const double* y, const double* x, double* y_minus_x) const final {
+ LOG(FATAL) << "Shouldn't get called";
+ return true;
+ }
+
+ bool MinusJacobian(const double* x, double* jacobian) const final {
+ jacobian[0] = *x * 2;
+ return true;
+ }
+
+ int AmbientSize() const final { return 1; }
+ int TangentSize() const final { return 1; }
};
-TEST(ParameterBlock, SetStateUpdatesLocalParameterizationJacobian) {
- TestParameterization test_parameterization;
+TEST(ParameterBlock, SetStateUpdatesPlusJacobian) {
+ TestManifold test_manifold;
double x[1] = {1.0};
- ParameterBlock parameter_block(x, 1, -1, &test_parameterization);
+ ParameterBlock parameter_block(x, 1, -1, &test_manifold);
- EXPECT_EQ(2.0, *parameter_block.LocalParameterizationJacobian());
+ EXPECT_EQ(2.0, *parameter_block.PlusJacobian());
x[0] = 5.5;
parameter_block.SetState(x);
- EXPECT_EQ(11.0, *parameter_block.LocalParameterizationJacobian());
+ EXPECT_EQ(11.0, *parameter_block.PlusJacobian());
}
-TEST(ParameterBlock, PlusWithNoLocalParameterization) {
+TEST(ParameterBlock, PlusWithNoManifold) {
double x[2] = {1.0, 2.0};
ParameterBlock parameter_block(x, 2, -1);
@@ -151,12 +160,11 @@
EXPECT_EQ(2.3, x_plus_delta[1]);
}
-// Stops computing the jacobian after the first time.
-class BadLocalParameterization : public LocalParameterization {
+// Stops computing the plus_jacobian after the first time.
+class BadManifold : public Manifold {
public:
- BadLocalParameterization() : calls_(0) {}
+ BadManifold() = default;
- virtual ~BadLocalParameterization() {}
bool Plus(const double* x,
const double* delta,
double* x_plus_delta) const final {
@@ -164,7 +172,7 @@
return true;
}
- bool ComputeJacobian(const double* x, double* jacobian) const final {
+ bool PlusJacobian(const double* x, double* jacobian) const final {
if (calls_ == 0) {
jacobian[0] = 0;
}
@@ -172,17 +180,27 @@
return true;
}
- int GlobalSize() const final { return 1; }
- int LocalSize() const final { return 1; }
+ bool Minus(const double* y, const double* x, double* y_minus_x) const final {
+ LOG(FATAL) << "Shouldn't get called";
+ return true;
+ }
+
+ bool MinusJacobian(const double* x, double* jacobian) const final {
+ jacobian[0] = *x * 2;
+ return true;
+ }
+
+ int AmbientSize() const final { return 1; }
+ int TangentSize() const final { return 1; }
private:
- mutable int calls_;
+ mutable int calls_{0};
};
-TEST(ParameterBlock, DetectBadLocalParameterization) {
+TEST(ParameterBlock, DetectBadManifold) {
double x = 1;
- BadLocalParameterization bad_parameterization;
- ParameterBlock parameter_block(&x, 1, -1, &bad_parameterization);
+ BadManifold bad_manifold;
+ ParameterBlock parameter_block(&x, 1, -1, &bad_manifold);
double y = 2;
EXPECT_FALSE(parameter_block.SetState(&y));
}
@@ -227,39 +245,39 @@
EXPECT_EQ(x_plus_delta[1], -1.0);
}
-TEST(ParameterBlock, ResetLocalParameterizationToNull) {
+TEST(ParameterBlock, ResetManifoldToNull) {
double x[3] = {1.0, 2.0, 3.0};
ParameterBlock parameter_block(x, 3, -1);
std::vector<int> indices;
indices.push_back(1);
- SubsetParameterization subset(3, indices);
- parameter_block.SetParameterization(&subset);
- EXPECT_EQ(parameter_block.local_parameterization(), &subset);
- parameter_block.SetParameterization(nullptr);
- EXPECT_EQ(parameter_block.local_parameterization(), nullptr);
+ SubsetManifold subset(3, indices);
+ parameter_block.SetManifold(&subset);
+ EXPECT_EQ(parameter_block.manifold(), &subset);
+ parameter_block.SetManifold(nullptr);
+ EXPECT_EQ(parameter_block.manifold(), nullptr);
}
-TEST(ParameterBlock, ResetLocalParameterizationToNotNull) {
+TEST(ParameterBlock, ResetManifoldToNotNull) {
double x[3] = {1.0, 2.0, 3.0};
ParameterBlock parameter_block(x, 3, -1);
std::vector<int> indices;
indices.push_back(1);
- SubsetParameterization subset(3, indices);
- parameter_block.SetParameterization(&subset);
- EXPECT_EQ(parameter_block.local_parameterization(), &subset);
+ SubsetManifold subset(3, indices);
+ parameter_block.SetManifold(&subset);
+ EXPECT_EQ(parameter_block.manifold(), &subset);
- SubsetParameterization subset_different(3, indices);
- parameter_block.SetParameterization(&subset_different);
- EXPECT_EQ(parameter_block.local_parameterization(), &subset_different);
+ SubsetManifold subset_different(3, indices);
+ parameter_block.SetManifold(&subset_different);
+ EXPECT_EQ(parameter_block.manifold(), &subset_different);
}
-TEST(ParameterBlock, SetNullLocalParameterization) {
+TEST(ParameterBlock, SetNullManifold) {
double x[3] = {1.0, 2.0, 3.0};
ParameterBlock parameter_block(x, 3, -1);
- EXPECT_EQ(parameter_block.local_parameterization(), nullptr);
+ EXPECT_EQ(parameter_block.manifold(), nullptr);
- parameter_block.SetParameterization(nullptr);
- EXPECT_EQ(parameter_block.local_parameterization(), nullptr);
+ parameter_block.SetManifold(nullptr);
+ EXPECT_EQ(parameter_block.manifold(), nullptr);
}
} // namespace internal
diff --git a/internal/ceres/parameter_dims_test.cc b/internal/ceres/parameter_dims_test.cc
index ee3be8f..58d2500 100644
--- a/internal/ceres/parameter_dims_test.cc
+++ b/internal/ceres/parameter_dims_test.cc
@@ -32,20 +32,6 @@
namespace ceres {
namespace internal {
-// Is valid parameter dims unit test
-static_assert(IsValidParameterDimensionSequence(std::integer_sequence<int>()) ==
- true,
- "Unit test of is valid parameter dimension sequence failed.");
-static_assert(IsValidParameterDimensionSequence(
- std::integer_sequence<int, 2, 1>()) == true,
- "Unit test of is valid parameter dimension sequence failed.");
-static_assert(IsValidParameterDimensionSequence(
- std::integer_sequence<int, 0, 1>()) == false,
- "Unit test of is valid parameter dimension sequence failed.");
-static_assert(IsValidParameterDimensionSequence(
- std::integer_sequence<int, 3, 0>()) == false,
- "Unit test of is valid parameter dimension sequence failed.");
-
// Static parameter dims unit test
static_assert(
std::is_same<StaticParameterDims<4, 2, 1>::Parameters,
diff --git a/internal/ceres/partition_range_for_parallel_for.h b/internal/ceres/partition_range_for_parallel_for.h
new file mode 100644
index 0000000..309d7a8
--- /dev/null
+++ b/internal/ceres/partition_range_for_parallel_for.h
@@ -0,0 +1,150 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: vitus@google.com (Michael Vitus),
+// dmitriy.korchemkin@gmail.com (Dmitriy Korchemkin)
+
+#ifndef CERES_INTERNAL_PARTITION_RANGE_FOR_PARALLEL_FOR_H_
+#define CERES_INTERNAL_PARTITION_RANGE_FOR_PARALLEL_FOR_H_
+
+#include <algorithm>
+#include <vector>
+
+namespace ceres::internal {
+// Check if it is possible to split the range [start; end) into at most
+// max_num_partitions contiguous partitions of cost not greater than
+// max_partition_cost. Inclusive integer cumulative costs are provided by the
+// cumulative_cost_data objects, with cumulative_cost_offset being the total
+// cost of all indices (starting from zero) preceding the start element.
+// Cumulative costs are returned by cumulative_cost_fun called with a reference
+// to a cumulative_cost_data element with an index from the range [start; end),
+// and should be non-decreasing. The partition of the range is returned via the
+// partition argument.
+template <typename CumulativeCostData, typename CumulativeCostFun>
+bool MaxPartitionCostIsFeasible(int start,
+ int end,
+ int max_num_partitions,
+ int max_partition_cost,
+ int cumulative_cost_offset,
+ const CumulativeCostData* cumulative_cost_data,
+ CumulativeCostFun&& cumulative_cost_fun,
+ std::vector<int>* partition) {
+ partition->clear();
+ partition->push_back(start);
+ int partition_start = start;
+ int cost_offset = cumulative_cost_offset;
+
+ while (partition_start < end) {
+ // Already have max_num_partitions
+ if (partition->size() > max_num_partitions) {
+ return false;
+ }
+ const int target = max_partition_cost + cost_offset;
+ const int partition_end =
+ std::partition_point(
+ cumulative_cost_data + partition_start,
+ cumulative_cost_data + end,
+ [&cumulative_cost_fun, target](const CumulativeCostData& item) {
+ return cumulative_cost_fun(item) <= target;
+ }) -
+ cumulative_cost_data;
+ // Unable to make a partition from a single element
+ if (partition_end == partition_start) {
+ return false;
+ }
+
+ const int cost_last =
+ cumulative_cost_fun(cumulative_cost_data[partition_end - 1]);
+ partition->push_back(partition_end);
+ partition_start = partition_end;
+ cost_offset = cost_last;
+ }
+ return true;
+}
+
+// Split the integer interval [start, end) into at most max_num_partitions
+// contiguous intervals, minimizing the maximal total cost of a single
+// interval. Inclusive integer cumulative costs for each (zero-based) index are
+// provided by the cumulative_cost_data objects and are returned by calling
+// cumulative_cost_fun with a reference to one of the objects from the range
+// [start, end).
+template <typename CumulativeCostData, typename CumulativeCostFun>
+std::vector<int> PartitionRangeForParallelFor(
+ int start,
+ int end,
+ int max_num_partitions,
+ const CumulativeCostData* cumulative_cost_data,
+ CumulativeCostFun&& cumulative_cost_fun) {
+  // Given a maximal partition cost, it is possible to check whether it is
+  // admissible and to obtain the corresponding partition using the
+  // MaxPartitionCostIsFeasible function. In order to find the lowest
+  // admissible value, a binary search over all potentially optimal cost
+  // values is performed.
+ const int cumulative_cost_last =
+ cumulative_cost_fun(cumulative_cost_data[end - 1]);
+ const int cumulative_cost_offset =
+ start ? cumulative_cost_fun(cumulative_cost_data[start - 1]) : 0;
+ const int total_cost = cumulative_cost_last - cumulative_cost_offset;
+
+  // The minimal maximal partition cost is not smaller than the average cost;
+  // we use an exclusive (non-inclusive) lower bound.
+  int partition_cost_lower_bound = total_cost / max_num_partitions - 1;
+  // The minimal maximal partition cost is not larger than the total cost; the
+  // upper bound is inclusive.
+ int partition_cost_upper_bound = total_cost;
+
+ std::vector<int> partition;
+ // Range partition corresponding to the latest evaluated upper bound.
+ // A single segment covering the whole input interval [start, end) corresponds
+ // to minimal maximal partition cost of total_cost.
+ std::vector<int> partition_upper_bound = {start, end};
+ // Binary search over partition cost, returning the lowest admissible cost
+ while (partition_cost_upper_bound - partition_cost_lower_bound > 1) {
+ partition.reserve(max_num_partitions + 1);
+ const int partition_cost =
+ partition_cost_lower_bound +
+ (partition_cost_upper_bound - partition_cost_lower_bound) / 2;
+ bool admissible = MaxPartitionCostIsFeasible(
+ start,
+ end,
+ max_num_partitions,
+ partition_cost,
+ cumulative_cost_offset,
+ cumulative_cost_data,
+ std::forward<CumulativeCostFun>(cumulative_cost_fun),
+ &partition);
+ if (admissible) {
+ partition_cost_upper_bound = partition_cost;
+ std::swap(partition, partition_upper_bound);
+ } else {
+ partition_cost_lower_bound = partition_cost;
+ }
+ }
+
+ return partition_upper_bound;
+}
+} // namespace ceres::internal
+
+#endif
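
A usage sketch of the new helper, assuming this header is on the include path: split five items with inclusive cumulative costs {2, 5, 6, 10, 13} into at most two contiguous partitions while minimizing the maximal partition cost. Under these assumptions the binary search above should settle on the split points {0, 3, 5}, i.e. [0, 3) with cost 6 and [3, 5) with cost 7.

// Hedged usage sketch of PartitionRangeForParallelFor on toy data.
#include <iostream>
#include <vector>
#include "ceres/partition_range_for_parallel_for.h"

int main() {
  const std::vector<int> cumulative_cost = {2, 5, 6, 10, 13};
  const std::vector<int> partition =
      ceres::internal::PartitionRangeForParallelFor(
          0,
          static_cast<int>(cumulative_cost.size()),
          /*max_num_partitions=*/2,
          cumulative_cost.data(),
          [](const int& c) { return c; });
  // Expected split points: 0 3 5.
  for (int p : partition) std::cout << p << " ";
  std::cout << "\n";
  return 0;
}
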
diff --git a/internal/ceres/partitioned_matrix_view.cc b/internal/ceres/partitioned_matrix_view.cc
index b67bc90..cffdbc5 100644
--- a/internal/ceres/partitioned_matrix_view.cc
+++ b/internal/ceres/partitioned_matrix_view.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,145 +39,147 @@
//
// This file is generated using generate_template_specializations.py.
+#include <memory>
+
#include "ceres/linear_solver.h"
#include "ceres/partitioned_matrix_view.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-PartitionedMatrixViewBase* PartitionedMatrixViewBase::Create(
+PartitionedMatrixViewBase::~PartitionedMatrixViewBase() = default;
+
+std::unique_ptr<PartitionedMatrixViewBase> PartitionedMatrixViewBase::Create(
const LinearSolver::Options& options, const BlockSparseMatrix& matrix) {
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
if ((options.row_block_size == 2) &&
(options.e_block_size == 2) &&
(options.f_block_size == 2)) {
- return new PartitionedMatrixView<2, 2, 2>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 2, 2>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 2) &&
(options.f_block_size == 3)) {
- return new PartitionedMatrixView<2, 2, 3>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 2, 3>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 2) &&
(options.f_block_size == 4)) {
- return new PartitionedMatrixView<2, 2, 4>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 2, 4>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 2)) {
- return new PartitionedMatrixView<2, 2, Eigen::Dynamic>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 2, Eigen::Dynamic>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 3) &&
(options.f_block_size == 3)) {
- return new PartitionedMatrixView<2, 3, 3>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 3, 3>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 3) &&
(options.f_block_size == 4)) {
- return new PartitionedMatrixView<2, 3, 4>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 3, 4>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 3) &&
(options.f_block_size == 6)) {
- return new PartitionedMatrixView<2, 3, 6>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 3, 6>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 3) &&
(options.f_block_size == 9)) {
- return new PartitionedMatrixView<2, 3, 9>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 3, 9>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 3)) {
- return new PartitionedMatrixView<2, 3, Eigen::Dynamic>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 3, Eigen::Dynamic>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4) &&
(options.f_block_size == 3)) {
- return new PartitionedMatrixView<2, 4, 3>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 4, 3>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4) &&
(options.f_block_size == 4)) {
- return new PartitionedMatrixView<2, 4, 4>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 4, 4>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4) &&
(options.f_block_size == 6)) {
- return new PartitionedMatrixView<2, 4, 6>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 4, 6>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4) &&
(options.f_block_size == 8)) {
- return new PartitionedMatrixView<2, 4, 8>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 4, 8>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4) &&
(options.f_block_size == 9)) {
- return new PartitionedMatrixView<2, 4, 9>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 4, 9>>(
+ options, matrix);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4)) {
- return new PartitionedMatrixView<2, 4, Eigen::Dynamic>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, 4, Eigen::Dynamic>>(
+ options, matrix);
}
if (options.row_block_size == 2) {
- return new PartitionedMatrixView<2, Eigen::Dynamic, Eigen::Dynamic>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<2, Eigen::Dynamic, Eigen::Dynamic>>(
+ options, matrix);
}
if ((options.row_block_size == 3) &&
(options.e_block_size == 3) &&
(options.f_block_size == 3)) {
- return new PartitionedMatrixView<3, 3, 3>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<3, 3, 3>>(
+ options, matrix);
}
if ((options.row_block_size == 4) &&
(options.e_block_size == 4) &&
(options.f_block_size == 2)) {
- return new PartitionedMatrixView<4, 4, 2>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<4, 4, 2>>(
+ options, matrix);
}
if ((options.row_block_size == 4) &&
(options.e_block_size == 4) &&
(options.f_block_size == 3)) {
- return new PartitionedMatrixView<4, 4, 3>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<4, 4, 3>>(
+ options, matrix);
}
if ((options.row_block_size == 4) &&
(options.e_block_size == 4) &&
(options.f_block_size == 4)) {
- return new PartitionedMatrixView<4, 4, 4>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<4, 4, 4>>(
+ options, matrix);
}
if ((options.row_block_size == 4) &&
(options.e_block_size == 4)) {
- return new PartitionedMatrixView<4, 4, Eigen::Dynamic>(matrix,
- options.elimination_groups[0]);
+    return std::make_unique<PartitionedMatrixView<4, 4, Eigen::Dynamic>>(
+ options, matrix);
}
#endif
VLOG(1) << "Template specializations not found for <"
<< options.row_block_size << "," << options.e_block_size << ","
<< options.f_block_size << ">";
- return new PartitionedMatrixView<Eigen::Dynamic,
- Eigen::Dynamic,
- Eigen::Dynamic>(
- matrix, options.elimination_groups[0]);
+ return std::make_unique<PartitionedMatrixView<Eigen::Dynamic,
+ Eigen::Dynamic,
+ Eigen::Dynamic>>(
+ options, matrix);
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
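
The factory above maps runtime (row, e, f) block sizes onto compile-time template specializations, falling back to a fully dynamic instantiation when no specialization matches. A minimal standalone sketch of that dispatch pattern follows; the Multiplier types are hypothetical and stand in for the Ceres classes.

// Sketch of runtime-size to compile-time-specialization dispatch.
#include <iostream>
#include <memory>

struct MultiplierBase {
  virtual ~MultiplierBase() = default;
  virtual double Apply(double x) const = 0;
};

template <int kBlockSize>
struct Multiplier final : public MultiplierBase {
  // A statically sized specialization lets the compiler unroll loops of
  // length kBlockSize.
  double Apply(double x) const override { return kBlockSize * x; }
};

struct DynamicMultiplier final : public MultiplierBase {
  explicit DynamicMultiplier(int block_size) : block_size_(block_size) {}
  double Apply(double x) const override { return block_size_ * x; }
  int block_size_;
};

std::unique_ptr<MultiplierBase> Create(int block_size) {
  // Known sizes get a compile-time specialization; everything else falls back
  // to the runtime-sized version, analogous to <Eigen::Dynamic, ...>.
  if (block_size == 2) return std::make_unique<Multiplier<2>>();
  if (block_size == 3) return std::make_unique<Multiplier<3>>();
  return std::make_unique<DynamicMultiplier>(block_size);
}

int main() {
  std::cout << Create(3)->Apply(2.0) << "\n";  // prints 6
  return 0;
}
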
diff --git a/internal/ceres/partitioned_matrix_view.h b/internal/ceres/partitioned_matrix_view.h
index 9f204ee..8589a3b 100644
--- a/internal/ceres/partitioned_matrix_view.h
+++ b/internal/ceres/partitioned_matrix_view.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -38,21 +38,25 @@
#include <algorithm>
#include <cstring>
+#include <memory>
#include <vector>
#include "ceres/block_structure.h"
+#include "ceres/internal/config.h"
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
#include "ceres/small_blas.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
+
+class ContextImpl;
// Given generalized bi-partite matrix A = [E F], with the same block
// structure as required by the Schur complement based solver, found
-// in explicit_schur_complement_solver.h, provide access to the
+// in schur_complement_solver.h, provide access to the
// matrices E and F and their outer products E'E and F'F with
// themselves.
//
@@ -60,28 +64,38 @@
// block structure of the matrix does not satisfy the requirements of
// the Schur complement solver it will result in unpredictable and
// wrong output.
-class CERES_EXPORT_INTERNAL PartitionedMatrixViewBase {
+class CERES_NO_EXPORT PartitionedMatrixViewBase {
public:
- virtual ~PartitionedMatrixViewBase() {}
+ virtual ~PartitionedMatrixViewBase();
// y += E'x
- virtual void LeftMultiplyE(const double* x, double* y) const = 0;
+ virtual void LeftMultiplyAndAccumulateE(const double* x, double* y) const = 0;
+ virtual void LeftMultiplyAndAccumulateESingleThreaded(const double* x,
+ double* y) const = 0;
+ virtual void LeftMultiplyAndAccumulateEMultiThreaded(const double* x,
+ double* y) const = 0;
// y += F'x
- virtual void LeftMultiplyF(const double* x, double* y) const = 0;
+ virtual void LeftMultiplyAndAccumulateF(const double* x, double* y) const = 0;
+ virtual void LeftMultiplyAndAccumulateFSingleThreaded(const double* x,
+ double* y) const = 0;
+ virtual void LeftMultiplyAndAccumulateFMultiThreaded(const double* x,
+ double* y) const = 0;
// y += Ex
- virtual void RightMultiplyE(const double* x, double* y) const = 0;
+ virtual void RightMultiplyAndAccumulateE(const double* x,
+ double* y) const = 0;
// y += Fx
- virtual void RightMultiplyF(const double* x, double* y) const = 0;
+ virtual void RightMultiplyAndAccumulateF(const double* x,
+ double* y) const = 0;
// Create and return the block diagonal of the matrix E'E.
- virtual BlockSparseMatrix* CreateBlockDiagonalEtE() const = 0;
+ virtual std::unique_ptr<BlockSparseMatrix> CreateBlockDiagonalEtE() const = 0;
// Create and return the block diagonal of the matrix F'F. Caller
// owns the result.
- virtual BlockSparseMatrix* CreateBlockDiagonalFtF() const = 0;
+ virtual std::unique_ptr<BlockSparseMatrix> CreateBlockDiagonalFtF() const = 0;
// Compute the block diagonal of the matrix E'E and store it in
// block_diagonal. The matrix block_diagonal is expected to have a
@@ -106,30 +120,61 @@
virtual int num_cols_f() const = 0;
virtual int num_rows() const = 0;
virtual int num_cols() const = 0;
+ virtual const std::vector<int>& e_cols_partition() const = 0;
+ virtual const std::vector<int>& f_cols_partition() const = 0;
// clang-format on
- static PartitionedMatrixViewBase* Create(const LinearSolver::Options& options,
- const BlockSparseMatrix& matrix);
+ static std::unique_ptr<PartitionedMatrixViewBase> Create(
+ const LinearSolver::Options& options, const BlockSparseMatrix& matrix);
};
template <int kRowBlockSize = Eigen::Dynamic,
int kEBlockSize = Eigen::Dynamic,
int kFBlockSize = Eigen::Dynamic>
-class PartitionedMatrixView : public PartitionedMatrixViewBase {
+class CERES_NO_EXPORT PartitionedMatrixView final
+ : public PartitionedMatrixViewBase {
public:
// matrix = [E F], where the matrix E contains the first
- // num_col_blocks_a column blocks.
- PartitionedMatrixView(const BlockSparseMatrix& matrix, int num_col_blocks_e);
+ // options.elimination_groups[0] column blocks.
+ PartitionedMatrixView(const LinearSolver::Options& options,
+ const BlockSparseMatrix& matrix);
- virtual ~PartitionedMatrixView();
- void LeftMultiplyE(const double* x, double* y) const final;
- void LeftMultiplyF(const double* x, double* y) const final;
- void RightMultiplyE(const double* x, double* y) const final;
- void RightMultiplyF(const double* x, double* y) const final;
- BlockSparseMatrix* CreateBlockDiagonalEtE() const final;
- BlockSparseMatrix* CreateBlockDiagonalFtF() const final;
+ // y += E'x
+ virtual void LeftMultiplyAndAccumulateE(const double* x,
+ double* y) const final;
+ virtual void LeftMultiplyAndAccumulateESingleThreaded(const double* x,
+ double* y) const final;
+ virtual void LeftMultiplyAndAccumulateEMultiThreaded(const double* x,
+ double* y) const final;
+
+ // y += F'x
+ virtual void LeftMultiplyAndAccumulateF(const double* x,
+ double* y) const final;
+ virtual void LeftMultiplyAndAccumulateFSingleThreaded(const double* x,
+ double* y) const final;
+ virtual void LeftMultiplyAndAccumulateFMultiThreaded(const double* x,
+ double* y) const final;
+
+ // y += Ex
+ virtual void RightMultiplyAndAccumulateE(const double* x,
+ double* y) const final;
+
+ // y += Fx
+ virtual void RightMultiplyAndAccumulateF(const double* x,
+ double* y) const final;
+
+ std::unique_ptr<BlockSparseMatrix> CreateBlockDiagonalEtE() const final;
+ std::unique_ptr<BlockSparseMatrix> CreateBlockDiagonalFtF() const final;
void UpdateBlockDiagonalEtE(BlockSparseMatrix* block_diagonal) const final;
+ void UpdateBlockDiagonalEtESingleThreaded(
+ BlockSparseMatrix* block_diagonal) const;
+ void UpdateBlockDiagonalEtEMultiThreaded(
+ BlockSparseMatrix* block_diagonal) const;
void UpdateBlockDiagonalFtF(BlockSparseMatrix* block_diagonal) const final;
+ void UpdateBlockDiagonalFtFSingleThreaded(
+ BlockSparseMatrix* block_diagonal) const;
+ void UpdateBlockDiagonalFtFMultiThreaded(
+ BlockSparseMatrix* block_diagonal) const;
// clang-format off
int num_col_blocks_e() const final { return num_col_blocks_e_; }
int num_col_blocks_f() const final { return num_col_blocks_f_; }
@@ -138,20 +183,30 @@
int num_rows() const final { return matrix_.num_rows(); }
int num_cols() const final { return matrix_.num_cols(); }
// clang-format on
+ const std::vector<int>& e_cols_partition() const final {
+ return e_cols_partition_;
+ }
+ const std::vector<int>& f_cols_partition() const final {
+ return f_cols_partition_;
+ }
private:
- BlockSparseMatrix* CreateBlockDiagonalMatrixLayout(int start_col_block,
- int end_col_block) const;
+ std::unique_ptr<BlockSparseMatrix> CreateBlockDiagonalMatrixLayout(
+ int start_col_block, int end_col_block) const;
+ const LinearSolver::Options options_;
const BlockSparseMatrix& matrix_;
int num_row_blocks_e_;
int num_col_blocks_e_;
int num_col_blocks_f_;
int num_cols_e_;
int num_cols_f_;
+ std::vector<int> e_cols_partition_;
+ std::vector<int> f_cols_partition_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_PARTITIONED_MATRIX_VIEW_H_
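
Conceptually, for A = [E F] the view provides the accumulating products y += E'x, y += Ex, y += F'x, y += Fx, plus the block diagonals of E'E and F'F. A dense Eigen sketch of those quantities follows; it is an illustration only, since Ceres stores A as a BlockSparseMatrix and materializes only the block diagonals of the Gram matrices.

// Dense analogue of the quantities exposed by PartitionedMatrixView.
#include <iostream>
#include <Eigen/Dense>

int main() {
  // A is 4x5, partitioned into E (first 2 columns) and F (last 3 columns).
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 5);
  const Eigen::MatrixXd E = A.leftCols(2);
  const Eigen::MatrixXd F = A.rightCols(3);

  Eigen::VectorXd x = Eigen::VectorXd::Random(4);
  Eigen::VectorXd y = Eigen::VectorXd::Zero(2);
  y += E.transpose() * x;  // LeftMultiplyAndAccumulateE

  Eigen::VectorXd z = Eigen::VectorXd::Random(3);
  Eigen::VectorXd w = Eigen::VectorXd::Zero(4);
  w += F * z;              // RightMultiplyAndAccumulateF

  // Dense analogues of the Gram matrices; Ceres keeps only their block
  // diagonals (CreateBlockDiagonalEtE / CreateBlockDiagonalFtF).
  const Eigen::MatrixXd EtE = E.transpose() * E;
  const Eigen::MatrixXd FtF = F.transpose() * F;
  std::cout << EtE.rows() << "x" << EtE.cols() << ", "
            << FtF.rows() << "x" << FtF.cols() << "\n";
  return 0;
}
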
diff --git a/internal/ceres/partitioned_matrix_view_impl.h b/internal/ceres/partitioned_matrix_view_impl.h
index 0b6a57f..bd02439 100644
--- a/internal/ceres/partitioned_matrix_view_impl.h
+++ b/internal/ceres/partitioned_matrix_view_impl.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,35 +30,40 @@
#include <algorithm>
#include <cstring>
+#include <memory>
#include <vector>
#include "ceres/block_sparse_matrix.h"
#include "ceres/block_structure.h"
#include "ceres/internal/eigen.h"
+#include "ceres/parallel_for.h"
+#include "ceres/partition_range_for_parallel_for.h"
#include "ceres/partitioned_matrix_view.h"
#include "ceres/small_blas.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
- PartitionedMatrixView(const BlockSparseMatrix& matrix, int num_col_blocks_e)
- : matrix_(matrix), num_col_blocks_e_(num_col_blocks_e) {
+ PartitionedMatrixView(const LinearSolver::Options& options,
+ const BlockSparseMatrix& matrix)
+
+ : options_(options), matrix_(matrix) {
const CompressedRowBlockStructure* bs = matrix_.block_structure();
CHECK(bs != nullptr);
+ num_col_blocks_e_ = options_.elimination_groups[0];
num_col_blocks_f_ = bs->cols.size() - num_col_blocks_e_;
// Compute the number of row blocks in E. The number of row blocks
   // in E may be less than the number of row blocks in the input matrix
// as some of the row blocks at the bottom may not have any
// e_blocks. For a definition of what an e_block is, please see
- // explicit_schur_complement_solver.h
+ // schur_complement_solver.h
num_row_blocks_e_ = 0;
- for (int r = 0; r < bs->rows.size(); ++r) {
- const std::vector<Cell>& cells = bs->rows[r].cells;
+ for (const auto& row : bs->rows) {
+ const std::vector<Cell>& cells = row.cells;
if (cells[0].block_id < num_col_blocks_e_) {
++num_row_blocks_e_;
}
@@ -78,11 +83,26 @@
}
CHECK_EQ(num_cols_e_ + num_cols_f_, matrix_.num_cols());
-}
-template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
-PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
- ~PartitionedMatrixView() {}
+ auto transpose_bs = matrix_.transpose_block_structure();
+ const int num_threads = options_.num_threads;
+ if (transpose_bs != nullptr && num_threads > 1) {
+ int kMaxPartitions = num_threads * 4;
+ e_cols_partition_ = PartitionRangeForParallelFor(
+ 0,
+ num_col_blocks_e_,
+ kMaxPartitions,
+ transpose_bs->rows.data(),
+ [](const CompressedRow& row) { return row.cumulative_nnz; });
+
+ f_cols_partition_ = PartitionRangeForParallelFor(
+ num_col_blocks_e_,
+ num_col_blocks_e_ + num_col_blocks_f_,
+ kMaxPartitions,
+ transpose_bs->rows.data(),
+ [](const CompressedRow& row) { return row.cumulative_nnz; });
+ }
+}
// The next four methods don't seem to be particularly cache
// friendly. This is an artifact of how the BlockStructure of the
@@ -91,77 +111,101 @@
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
- RightMultiplyE(const double* x, double* y) const {
- const CompressedRowBlockStructure* bs = matrix_.block_structure();
-
+ RightMultiplyAndAccumulateE(const double* x, double* y) const {
// Iterate over the first num_row_blocks_e_ row blocks, and multiply
// by the first cell in each row block.
+ auto bs = matrix_.block_structure();
const double* values = matrix_.values();
- for (int r = 0; r < num_row_blocks_e_; ++r) {
- const Cell& cell = bs->rows[r].cells[0];
- const int row_block_pos = bs->rows[r].block.position;
- const int row_block_size = bs->rows[r].block.size;
- const int col_block_id = cell.block_id;
- const int col_block_pos = bs->cols[col_block_id].position;
- const int col_block_size = bs->cols[col_block_id].size;
- // clang-format off
- MatrixVectorMultiply<kRowBlockSize, kEBlockSize, 1>(
- values + cell.position, row_block_size, col_block_size,
- x + col_block_pos,
- y + row_block_pos);
- // clang-format on
- }
+ ParallelFor(options_.context,
+ 0,
+ num_row_blocks_e_,
+ options_.num_threads,
+ [values, bs, x, y](int row_block_id) {
+ const Cell& cell = bs->rows[row_block_id].cells[0];
+ const int row_block_pos = bs->rows[row_block_id].block.position;
+ const int row_block_size = bs->rows[row_block_id].block.size;
+ const int col_block_id = cell.block_id;
+ const int col_block_pos = bs->cols[col_block_id].position;
+ const int col_block_size = bs->cols[col_block_id].size;
+ // clang-format off
+ MatrixVectorMultiply<kRowBlockSize, kEBlockSize, 1>(
+ values + cell.position, row_block_size, col_block_size,
+ x + col_block_pos,
+ y + row_block_pos);
+ // clang-format on
+ });
}
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
- RightMultiplyF(const double* x, double* y) const {
- const CompressedRowBlockStructure* bs = matrix_.block_structure();
-
+ RightMultiplyAndAccumulateF(const double* x, double* y) const {
// Iterate over row blocks, and if the row block is in E, then
// multiply by all the cells except the first one which is of type
   // E. If the row block is not in E (i.e. it is in the bottom
// num_row_blocks - num_row_blocks_e row blocks), then all the cells
// are of type F and multiply by them all.
+ const CompressedRowBlockStructure* bs = matrix_.block_structure();
+ const int num_row_blocks = bs->rows.size();
+ const int num_cols_e = num_cols_e_;
const double* values = matrix_.values();
- for (int r = 0; r < num_row_blocks_e_; ++r) {
- const int row_block_pos = bs->rows[r].block.position;
- const int row_block_size = bs->rows[r].block.size;
- const std::vector<Cell>& cells = bs->rows[r].cells;
- for (int c = 1; c < cells.size(); ++c) {
- const int col_block_id = cells[c].block_id;
- const int col_block_pos = bs->cols[col_block_id].position;
- const int col_block_size = bs->cols[col_block_id].size;
- // clang-format off
- MatrixVectorMultiply<kRowBlockSize, kFBlockSize, 1>(
- values + cells[c].position, row_block_size, col_block_size,
- x + col_block_pos - num_cols_e_,
- y + row_block_pos);
- // clang-format on
- }
- }
+ ParallelFor(options_.context,
+ 0,
+ num_row_blocks_e_,
+ options_.num_threads,
+ [values, bs, num_cols_e, x, y](int row_block_id) {
+ const int row_block_pos = bs->rows[row_block_id].block.position;
+ const int row_block_size = bs->rows[row_block_id].block.size;
+ const auto& cells = bs->rows[row_block_id].cells;
+ for (int c = 1; c < cells.size(); ++c) {
+ const int col_block_id = cells[c].block_id;
+ const int col_block_pos = bs->cols[col_block_id].position;
+ const int col_block_size = bs->cols[col_block_id].size;
+ // clang-format off
+ MatrixVectorMultiply<kRowBlockSize, kFBlockSize, 1>(
+ values + cells[c].position, row_block_size, col_block_size,
+ x + col_block_pos - num_cols_e,
+ y + row_block_pos);
+ // clang-format on
+ }
+ });
+ ParallelFor(options_.context,
+ num_row_blocks_e_,
+ num_row_blocks,
+ options_.num_threads,
+ [values, bs, num_cols_e, x, y](int row_block_id) {
+ const int row_block_pos = bs->rows[row_block_id].block.position;
+ const int row_block_size = bs->rows[row_block_id].block.size;
+ const auto& cells = bs->rows[row_block_id].cells;
+ for (const auto& cell : cells) {
+ const int col_block_id = cell.block_id;
+ const int col_block_pos = bs->cols[col_block_id].position;
+ const int col_block_size = bs->cols[col_block_id].size;
+ // clang-format off
+ MatrixVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
+ values + cell.position, row_block_size, col_block_size,
+ x + col_block_pos - num_cols_e,
+ y + row_block_pos);
+ // clang-format on
+ }
+ });
+}
- for (int r = num_row_blocks_e_; r < bs->rows.size(); ++r) {
- const int row_block_pos = bs->rows[r].block.position;
- const int row_block_size = bs->rows[r].block.size;
- const std::vector<Cell>& cells = bs->rows[r].cells;
- for (int c = 0; c < cells.size(); ++c) {
- const int col_block_id = cells[c].block_id;
- const int col_block_pos = bs->cols[col_block_id].position;
- const int col_block_size = bs->cols[col_block_id].size;
- // clang-format off
- MatrixVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
- values + cells[c].position, row_block_size, col_block_size,
- x + col_block_pos - num_cols_e_,
- y + row_block_pos);
- // clang-format on
- }
+template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
+void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
+ LeftMultiplyAndAccumulateE(const double* x, double* y) const {
+ if (!num_col_blocks_e_) return;
+ if (!num_row_blocks_e_) return;
+ if (options_.num_threads == 1) {
+ LeftMultiplyAndAccumulateESingleThreaded(x, y);
+ } else {
+ CHECK(options_.context != nullptr);
+ LeftMultiplyAndAccumulateEMultiThreaded(x, y);
}
}
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
- LeftMultiplyE(const double* x, double* y) const {
+ LeftMultiplyAndAccumulateESingleThreaded(const double* x, double* y) const {
const CompressedRowBlockStructure* bs = matrix_.block_structure();
// Iterate over the first num_row_blocks_e_ row blocks, and multiply
@@ -185,7 +229,55 @@
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
- LeftMultiplyF(const double* x, double* y) const {
+ LeftMultiplyAndAccumulateEMultiThreaded(const double* x, double* y) const {
+ auto transpose_bs = matrix_.transpose_block_structure();
+ CHECK(transpose_bs != nullptr);
+
+  // Local copies of class members in order to avoid capturing a pointer to
+  // the whole object in the lambda function.
+ auto values = matrix_.values();
+ const int num_row_blocks_e = num_row_blocks_e_;
+ ParallelFor(
+ options_.context,
+ 0,
+ num_col_blocks_e_,
+ options_.num_threads,
+ [values, transpose_bs, num_row_blocks_e, x, y](int row_block_id) {
+ int row_block_pos = transpose_bs->rows[row_block_id].block.position;
+ int row_block_size = transpose_bs->rows[row_block_id].block.size;
+ auto& cells = transpose_bs->rows[row_block_id].cells;
+
+ for (auto& cell : cells) {
+ const int col_block_id = cell.block_id;
+ const int col_block_size = transpose_bs->cols[col_block_id].size;
+ const int col_block_pos = transpose_bs->cols[col_block_id].position;
+ if (col_block_id >= num_row_blocks_e) break;
+ MatrixTransposeVectorMultiply<kRowBlockSize, kEBlockSize, 1>(
+ values + cell.position,
+ col_block_size,
+ row_block_size,
+ x + col_block_pos,
+ y + row_block_pos);
+ }
+ },
+ e_cols_partition());
+}
+
+template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
+void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
+ LeftMultiplyAndAccumulateF(const double* x, double* y) const {
+ if (!num_col_blocks_f_) return;
+ if (options_.num_threads == 1) {
+ LeftMultiplyAndAccumulateFSingleThreaded(x, y);
+ } else {
+ CHECK(options_.context != nullptr);
+ LeftMultiplyAndAccumulateFMultiThreaded(x, y);
+ }
+}
+
+template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
+void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
+ LeftMultiplyAndAccumulateFSingleThreaded(const double* x, double* y) const {
const CompressedRowBlockStructure* bs = matrix_.block_structure();
// Iterate over row blocks, and if the row block is in E, then
@@ -215,13 +307,13 @@
const int row_block_pos = bs->rows[r].block.position;
const int row_block_size = bs->rows[r].block.size;
const std::vector<Cell>& cells = bs->rows[r].cells;
- for (int c = 0; c < cells.size(); ++c) {
- const int col_block_id = cells[c].block_id;
+ for (const auto& cell : cells) {
+ const int col_block_id = cell.block_id;
const int col_block_pos = bs->cols[col_block_id].position;
const int col_block_size = bs->cols[col_block_id].size;
// clang-format off
MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
- values + cells[c].position, row_block_size, col_block_size,
+ values + cell.position, row_block_size, col_block_size,
x + row_block_pos,
y + col_block_pos - num_cols_e_);
// clang-format on
@@ -229,19 +321,71 @@
}
}
+template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
+void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
+ LeftMultiplyAndAccumulateFMultiThreaded(const double* x, double* y) const {
+ auto transpose_bs = matrix_.transpose_block_structure();
+ CHECK(transpose_bs != nullptr);
+  // Local copies of class members in order to avoid capturing a pointer to
+  // the whole object in the lambda function.
+ auto values = matrix_.values();
+ const int num_row_blocks_e = num_row_blocks_e_;
+ const int num_cols_e = num_cols_e_;
+ ParallelFor(
+ options_.context,
+ num_col_blocks_e_,
+ num_col_blocks_e_ + num_col_blocks_f_,
+ options_.num_threads,
+ [values, transpose_bs, num_row_blocks_e, num_cols_e, x, y](
+ int row_block_id) {
+ int row_block_pos = transpose_bs->rows[row_block_id].block.position;
+ int row_block_size = transpose_bs->rows[row_block_id].block.size;
+ auto& cells = transpose_bs->rows[row_block_id].cells;
+
+ const int num_cells = cells.size();
+ int cell_idx = 0;
+ for (; cell_idx < num_cells; ++cell_idx) {
+ auto& cell = cells[cell_idx];
+ const int col_block_id = cell.block_id;
+ const int col_block_size = transpose_bs->cols[col_block_id].size;
+ const int col_block_pos = transpose_bs->cols[col_block_id].position;
+ if (col_block_id >= num_row_blocks_e) break;
+
+ MatrixTransposeVectorMultiply<kRowBlockSize, kFBlockSize, 1>(
+ values + cell.position,
+ col_block_size,
+ row_block_size,
+ x + col_block_pos,
+ y + row_block_pos - num_cols_e);
+ }
+ for (; cell_idx < num_cells; ++cell_idx) {
+ auto& cell = cells[cell_idx];
+ const int col_block_id = cell.block_id;
+ const int col_block_size = transpose_bs->cols[col_block_id].size;
+ const int col_block_pos = transpose_bs->cols[col_block_id].position;
+ MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
+ values + cell.position,
+ col_block_size,
+ row_block_size,
+ x + col_block_pos,
+ y + row_block_pos - num_cols_e);
+ }
+ },
+ f_cols_partition());
+}
+
// Given a range of columns blocks of a matrix m, compute the block
// structure of the block diagonal of the matrix m(:,
// start_col_block:end_col_block)'m(:, start_col_block:end_col_block)
-// and return a BlockSparseMatrix with the this block structure. The
+// and return a BlockSparseMatrix with this block structure. The
// caller owns the result.
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
-BlockSparseMatrix*
+std::unique_ptr<BlockSparseMatrix>
PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
CreateBlockDiagonalMatrixLayout(int start_col_block,
int end_col_block) const {
const CompressedRowBlockStructure* bs = matrix_.block_structure();
- CompressedRowBlockStructure* block_diagonal_structure =
- new CompressedRowBlockStructure;
+ auto* block_diagonal_structure = new CompressedRowBlockStructure;
int block_position = 0;
int diagonal_cell_position = 0;
@@ -250,16 +394,16 @@
// each column block.
for (int c = start_col_block; c < end_col_block; ++c) {
const Block& block = bs->cols[c];
- block_diagonal_structure->cols.push_back(Block());
+ block_diagonal_structure->cols.emplace_back();
Block& diagonal_block = block_diagonal_structure->cols.back();
diagonal_block.size = block.size;
diagonal_block.position = block_position;
- block_diagonal_structure->rows.push_back(CompressedRow());
+ block_diagonal_structure->rows.emplace_back();
CompressedRow& row = block_diagonal_structure->rows.back();
row.block = diagonal_block;
- row.cells.push_back(Cell());
+ row.cells.emplace_back();
Cell& cell = row.cells.back();
cell.block_id = c - start_col_block;
cell.position = diagonal_cell_position;
@@ -270,42 +414,41 @@
// Build a BlockSparseMatrix with the just computed block
// structure.
- return new BlockSparseMatrix(block_diagonal_structure);
+ return std::make_unique<BlockSparseMatrix>(block_diagonal_structure);
}
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
-BlockSparseMatrix* PartitionedMatrixView<kRowBlockSize,
- kEBlockSize,
- kFBlockSize>::CreateBlockDiagonalEtE()
- const {
- BlockSparseMatrix* block_diagonal =
+std::unique_ptr<BlockSparseMatrix>
+PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
+ CreateBlockDiagonalEtE() const {
+ std::unique_ptr<BlockSparseMatrix> block_diagonal =
CreateBlockDiagonalMatrixLayout(0, num_col_blocks_e_);
- UpdateBlockDiagonalEtE(block_diagonal);
+ UpdateBlockDiagonalEtE(block_diagonal.get());
return block_diagonal;
}
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
-BlockSparseMatrix* PartitionedMatrixView<kRowBlockSize,
- kEBlockSize,
- kFBlockSize>::CreateBlockDiagonalFtF()
- const {
- BlockSparseMatrix* block_diagonal = CreateBlockDiagonalMatrixLayout(
- num_col_blocks_e_, num_col_blocks_e_ + num_col_blocks_f_);
- UpdateBlockDiagonalFtF(block_diagonal);
+std::unique_ptr<BlockSparseMatrix>
+PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
+ CreateBlockDiagonalFtF() const {
+ std::unique_ptr<BlockSparseMatrix> block_diagonal =
+ CreateBlockDiagonalMatrixLayout(num_col_blocks_e_,
+ num_col_blocks_e_ + num_col_blocks_f_);
+ UpdateBlockDiagonalFtF(block_diagonal.get());
return block_diagonal;
}
-// Similar to the code in RightMultiplyE, except instead of the matrix
-// vector multiply its an outer product.
+// Similar to the code in RightMultiplyAndAccumulateE, except that instead of a
+// matrix-vector multiply it is an outer product.
//
// block_diagonal = block_diagonal(E'E)
//
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
- UpdateBlockDiagonalEtE(BlockSparseMatrix* block_diagonal) const {
- const CompressedRowBlockStructure* bs = matrix_.block_structure();
- const CompressedRowBlockStructure* block_diagonal_structure =
- block_diagonal->block_structure();
+ UpdateBlockDiagonalEtESingleThreaded(
+ BlockSparseMatrix* block_diagonal) const {
+ auto bs = matrix_.block_structure();
+ auto block_diagonal_structure = block_diagonal->block_structure();
block_diagonal->SetZero();
const double* values = matrix_.values();
@@ -328,17 +471,68 @@
}
}
-// Similar to the code in RightMultiplyF, except instead of the matrix
-// vector multiply its an outer product.
+template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
+void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
+ UpdateBlockDiagonalEtEMultiThreaded(
+ BlockSparseMatrix* block_diagonal) const {
+ auto transpose_block_structure = matrix_.transpose_block_structure();
+ CHECK(transpose_block_structure != nullptr);
+ auto block_diagonal_structure = block_diagonal->block_structure();
+
+ const double* values = matrix_.values();
+ double* values_diagonal = block_diagonal->mutable_values();
+ ParallelFor(
+ options_.context,
+ 0,
+ num_col_blocks_e_,
+ options_.num_threads,
+ [values,
+ transpose_block_structure,
+ values_diagonal,
+ block_diagonal_structure](int col_block_id) {
+ int cell_position =
+ block_diagonal_structure->rows[col_block_id].cells[0].position;
+ double* cell_values = values_diagonal + cell_position;
+ int col_block_size =
+ transpose_block_structure->rows[col_block_id].block.size;
+ auto& cells = transpose_block_structure->rows[col_block_id].cells;
+ MatrixRef(cell_values, col_block_size, col_block_size).setZero();
+
+ for (auto& c : cells) {
+ int row_block_size = transpose_block_structure->cols[c.block_id].size;
+ // clang-format off
+ MatrixTransposeMatrixMultiply<kRowBlockSize, kEBlockSize, kRowBlockSize, kEBlockSize, 1>(
+ values + c.position, row_block_size, col_block_size,
+ values + c.position, row_block_size, col_block_size,
+ cell_values, 0, 0, col_block_size, col_block_size);
+ // clang-format on
+ }
+ },
+ e_cols_partition_);
+}
+
+template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
+void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
+ UpdateBlockDiagonalEtE(BlockSparseMatrix* block_diagonal) const {
+ if (options_.num_threads == 1) {
+ UpdateBlockDiagonalEtESingleThreaded(block_diagonal);
+ } else {
+ CHECK(options_.context != nullptr);
+ UpdateBlockDiagonalEtEMultiThreaded(block_diagonal);
+ }
+}
+
+// Similar to the code in RightMultiplyAndAccumulateF, except instead of the
+// matrix-vector multiply it's an outer product.
//
// block_diagonal = block_diagonal(F'F)
//
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
- UpdateBlockDiagonalFtF(BlockSparseMatrix* block_diagonal) const {
- const CompressedRowBlockStructure* bs = matrix_.block_structure();
- const CompressedRowBlockStructure* block_diagonal_structure =
- block_diagonal->block_structure();
+ UpdateBlockDiagonalFtFSingleThreaded(
+ BlockSparseMatrix* block_diagonal) const {
+ auto bs = matrix_.block_structure();
+ auto block_diagonal_structure = block_diagonal->block_structure();
block_diagonal->SetZero();
const double* values = matrix_.values();
@@ -366,8 +560,8 @@
for (int r = num_row_blocks_e_; r < bs->rows.size(); ++r) {
const int row_block_size = bs->rows[r].block.size;
const std::vector<Cell>& cells = bs->rows[r].cells;
- for (int c = 0; c < cells.size(); ++c) {
- const int col_block_id = cells[c].block_id;
+ for (const auto& cell : cells) {
+ const int col_block_id = cell.block_id;
const int col_block_size = bs->cols[col_block_id].size;
const int diagonal_block_id = col_block_id - num_col_blocks_e_;
const int cell_position =
@@ -376,8 +570,8 @@
// clang-format off
MatrixTransposeMatrixMultiply
<Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic, 1>(
- values + cells[c].position, row_block_size, col_block_size,
- values + cells[c].position, row_block_size, col_block_size,
+ values + cell.position, row_block_size, col_block_size,
+ values + cell.position, row_block_size, col_block_size,
block_diagonal->mutable_values() + cell_position,
0, 0, col_block_size, col_block_size);
// clang-format on
@@ -385,5 +579,82 @@
}
}
-} // namespace internal
-} // namespace ceres
+template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
+void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
+ UpdateBlockDiagonalFtFMultiThreaded(
+ BlockSparseMatrix* block_diagonal) const {
+ auto transpose_block_structure = matrix_.transpose_block_structure();
+ CHECK(transpose_block_structure != nullptr);
+ auto block_diagonal_structure = block_diagonal->block_structure();
+
+ const double* values = matrix_.values();
+ double* values_diagonal = block_diagonal->mutable_values();
+
+ const int num_col_blocks_e = num_col_blocks_e_;
+ const int num_row_blocks_e = num_row_blocks_e_;
+ ParallelFor(
+ options_.context,
+ num_col_blocks_e_,
+ num_col_blocks_e + num_col_blocks_f_,
+ options_.num_threads,
+ [transpose_block_structure,
+ block_diagonal_structure,
+ num_col_blocks_e,
+ num_row_blocks_e,
+ values,
+ values_diagonal](int col_block_id) {
+ const int col_block_size =
+ transpose_block_structure->rows[col_block_id].block.size;
+ const int diagonal_block_id = col_block_id - num_col_blocks_e;
+ const int cell_position =
+ block_diagonal_structure->rows[diagonal_block_id].cells[0].position;
+ double* cell_values = values_diagonal + cell_position;
+
+ MatrixRef(cell_values, col_block_size, col_block_size).setZero();
+
+ auto& cells = transpose_block_structure->rows[col_block_id].cells;
+ const int num_cells = cells.size();
+ int i = 0;
+ for (; i < num_cells; ++i) {
+ auto& cell = cells[i];
+ const int row_block_id = cell.block_id;
+ if (row_block_id >= num_row_blocks_e) break;
+ const int row_block_size =
+ transpose_block_structure->cols[row_block_id].size;
+ // clang-format off
+ MatrixTransposeMatrixMultiply
+ <kRowBlockSize, kFBlockSize, kRowBlockSize, kFBlockSize, 1>(
+ values + cell.position, row_block_size, col_block_size,
+ values + cell.position, row_block_size, col_block_size,
+ cell_values, 0, 0, col_block_size, col_block_size);
+ // clang-format on
+ }
+ for (; i < num_cells; ++i) {
+ auto& cell = cells[i];
+ const int row_block_id = cell.block_id;
+ const int row_block_size =
+ transpose_block_structure->cols[row_block_id].size;
+ // clang-format off
+ MatrixTransposeMatrixMultiply
+ <Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic, 1>(
+ values + cell.position, row_block_size, col_block_size,
+ values + cell.position, row_block_size, col_block_size,
+ cell_values, 0, 0, col_block_size, col_block_size);
+ // clang-format on
+ }
+ },
+ f_cols_partition_);
+}
+
+template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
+void PartitionedMatrixView<kRowBlockSize, kEBlockSize, kFBlockSize>::
+ UpdateBlockDiagonalFtF(BlockSparseMatrix* block_diagonal) const {
+ if (options_.num_threads == 1) {
+ UpdateBlockDiagonalFtFSingleThreaded(block_diagonal);
+ } else {
+ CHECK(options_.context != nullptr);
+ UpdateBlockDiagonalFtFMultiThreaded(block_diagonal);
+ }
+}
+
+} // namespace ceres::internal
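For orientation on the diff above: UpdateBlockDiagonalEtE and UpdateBlockDiagonalFtF now dispatch between a single-threaded loop and a ParallelFor path, and in both cases each diagonal block is accumulated as an outer product over the cells of one column block. Below is a minimal Eigen-based sketch of that accumulation; ColumnBlock and BlockDiagonalJtJ are hypothetical names, not the Ceres block-structure types, and a threaded variant would simply hand each (independent) column block to a worker.

#include <vector>

#include "Eigen/Dense"

// One column block of the Jacobian, stored as the dense cells that touch it.
struct ColumnBlock {
  std::vector<Eigen::MatrixXd> cells;  // every cell has this block's column count
};

// D_b = sum over cells of J_cell^T * J_cell for every column block b; this is
// the per-block outer-product update that UpdateBlockDiagonal{EtE,FtF} perform
// in place on the pre-allocated block-diagonal layout.
std::vector<Eigen::MatrixXd> BlockDiagonalJtJ(
    const std::vector<ColumnBlock>& column_blocks) {
  std::vector<Eigen::MatrixXd> diagonal;
  diagonal.reserve(column_blocks.size());
  for (const auto& block : column_blocks) {
    const Eigen::Index size = block.cells.empty() ? 0 : block.cells[0].cols();
    Eigen::MatrixXd d = Eigen::MatrixXd::Zero(size, size);
    for (const auto& cell : block.cells) {
      d.noalias() += cell.transpose() * cell;  // outer-product accumulation
    }
    diagonal.push_back(std::move(d));
  }
  return diagonal;
}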
diff --git a/internal/ceres/partitioned_matrix_view_template.py b/internal/ceres/partitioned_matrix_view_template.py
index 05a25bf..9af4c0e 100644
--- a/internal/ceres/partitioned_matrix_view_template.py
+++ b/internal/ceres/partitioned_matrix_view_template.py
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -47,7 +47,7 @@
# specializations that is generated.
HEADER = """// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -91,61 +91,59 @@
DYNAMIC_FILE = """
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<%s,
%s,
%s>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
"""
SPECIALIZATION_FILE = """
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/partitioned_matrix_view_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class PartitionedMatrixView<%s, %s, %s>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
"""
FACTORY_FILE_HEADER = """
+#include <memory>
+
#include "ceres/linear_solver.h"
#include "ceres/partitioned_matrix_view.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-PartitionedMatrixViewBase* PartitionedMatrixViewBase::Create(
+PartitionedMatrixViewBase::~PartitionedMatrixViewBase() = default;
+
+std::unique_ptr<PartitionedMatrixViewBase> PartitionedMatrixViewBase::Create(
const LinearSolver::Options& options, const BlockSparseMatrix& matrix) {
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
"""
-FACTORY = """ return new PartitionedMatrixView<%s, %s, %s>(matrix,
- options.elimination_groups[0]);"""
+FACTORY = """  return std::make_unique<PartitionedMatrixView<%s, %s, %s>>(
+ options, matrix);"""
FACTORY_FOOTER = """
#endif
VLOG(1) << "Template specializations not found for <"
<< options.row_block_size << "," << options.e_block_size << ","
<< options.f_block_size << ">";
- return new PartitionedMatrixView<Eigen::Dynamic,
- Eigen::Dynamic,
- Eigen::Dynamic>(
- matrix, options.elimination_groups[0]);
+ return std::make_unique<PartitionedMatrixView<Eigen::Dynamic,
+ Eigen::Dynamic,
+ Eigen::Dynamic>>(
+ options, matrix);
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
"""
diff --git a/internal/ceres/partitioned_matrix_view_test.cc b/internal/ceres/partitioned_matrix_view_test.cc
index b66d0b8..3addba6 100644
--- a/internal/ceres/partitioned_matrix_view_test.cc
+++ b/internal/ceres/partitioned_matrix_view_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,13 +31,15 @@
#include "ceres/partitioned_matrix_view.h"
#include <memory>
+#include <random>
+#include <sstream>
+#include <string>
#include <vector>
#include "ceres/block_structure.h"
#include "ceres/casts.h"
#include "ceres/internal/eigen.h"
#include "ceres/linear_least_squares_problems.h"
-#include "ceres/random.h"
#include "ceres/sparse_matrix.h"
#include "glog/logging.h"
#include "gtest/gtest.h"
@@ -47,41 +49,58 @@
const double kEpsilon = 1e-14;
-class PartitionedMatrixViewTest : public ::testing::Test {
- protected:
- void SetUp() final {
- srand(5);
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(2));
- CHECK(problem != nullptr);
- A_.reset(problem->A.release());
+// Param = <problem_id, num_threads>
+using Param = ::testing::tuple<int, int>;
- num_cols_ = A_->num_cols();
- num_rows_ = A_->num_rows();
- num_eliminate_blocks_ = problem->num_eliminate_blocks;
- LinearSolver::Options options;
- options.elimination_groups.push_back(num_eliminate_blocks_);
- pmv_.reset(PartitionedMatrixViewBase::Create(
- options, *down_cast<BlockSparseMatrix*>(A_.get())));
- }
-
- int num_rows_;
- int num_cols_;
- int num_eliminate_blocks_;
- std::unique_ptr<SparseMatrix> A_;
- std::unique_ptr<PartitionedMatrixViewBase> pmv_;
-};
-
-TEST_F(PartitionedMatrixViewTest, DimensionsTest) {
- EXPECT_EQ(pmv_->num_col_blocks_e(), num_eliminate_blocks_);
- EXPECT_EQ(pmv_->num_col_blocks_f(), num_cols_ - num_eliminate_blocks_);
- EXPECT_EQ(pmv_->num_cols_e(), num_eliminate_blocks_);
- EXPECT_EQ(pmv_->num_cols_f(), num_cols_ - num_eliminate_blocks_);
- EXPECT_EQ(pmv_->num_cols(), A_->num_cols());
- EXPECT_EQ(pmv_->num_rows(), A_->num_rows());
+static std::string ParamInfoToString(testing::TestParamInfo<Param> info) {
+ Param param = info.param;
+ std::stringstream ss;
+ ss << ::testing::get<0>(param) << "_" << ::testing::get<1>(param);
+ return ss.str();
}
-TEST_F(PartitionedMatrixViewTest, RightMultiplyE) {
+class PartitionedMatrixViewTest : public ::testing::TestWithParam<Param> {
+ protected:
+ void SetUp() final {
+ const int problem_id = ::testing::get<0>(GetParam());
+ const int num_threads = ::testing::get<1>(GetParam());
+ auto problem = CreateLinearLeastSquaresProblemFromId(problem_id);
+ CHECK(problem != nullptr);
+ A_ = std::move(problem->A);
+ auto block_sparse = down_cast<BlockSparseMatrix*>(A_.get());
+
+ options_.num_threads = num_threads;
+ options_.context = &context_;
+ options_.elimination_groups.push_back(problem->num_eliminate_blocks);
+ pmv_ = PartitionedMatrixViewBase::Create(options_, *block_sparse);
+
+ LinearSolver::Options options_single_threaded = options_;
+ options_single_threaded.num_threads = 1;
+ pmv_single_threaded_ =
+        PartitionedMatrixViewBase::Create(options_single_threaded, *block_sparse);
+
+ EXPECT_EQ(pmv_->num_col_blocks_e(), problem->num_eliminate_blocks);
+ EXPECT_EQ(pmv_->num_col_blocks_f(),
+ block_sparse->block_structure()->cols.size() -
+ problem->num_eliminate_blocks);
+ EXPECT_EQ(pmv_->num_cols(), A_->num_cols());
+ EXPECT_EQ(pmv_->num_rows(), A_->num_rows());
+ }
+
+ double RandDouble() { return distribution_(prng_); }
+
+ LinearSolver::Options options_;
+ ContextImpl context_;
+ std::unique_ptr<LinearLeastSquaresProblem> problem_;
+ std::unique_ptr<SparseMatrix> A_;
+ std::unique_ptr<PartitionedMatrixViewBase> pmv_;
+ std::unique_ptr<PartitionedMatrixViewBase> pmv_single_threaded_;
+ std::mt19937 prng_;
+ std::uniform_real_distribution<double> distribution_ =
+ std::uniform_real_distribution<double>(0.0, 1.0);
+};
+
+TEST_P(PartitionedMatrixViewTest, RightMultiplyAndAccumulateE) {
Vector x1(pmv_->num_cols_e());
Vector x2(pmv_->num_cols());
x2.setZero();
@@ -90,85 +109,164 @@
x1(i) = x2(i) = RandDouble();
}
- Vector y1 = Vector::Zero(pmv_->num_rows());
- pmv_->RightMultiplyE(x1.data(), y1.data());
+ Vector expected = Vector::Zero(pmv_->num_rows());
+ A_->RightMultiplyAndAccumulate(x2.data(), expected.data());
- Vector y2 = Vector::Zero(pmv_->num_rows());
- A_->RightMultiply(x2.data(), y2.data());
+ Vector actual = Vector::Zero(pmv_->num_rows());
+ pmv_->RightMultiplyAndAccumulateE(x1.data(), actual.data());
for (int i = 0; i < pmv_->num_rows(); ++i) {
- EXPECT_NEAR(y1(i), y2(i), kEpsilon);
+ EXPECT_NEAR(actual(i), expected(i), kEpsilon);
}
}
-TEST_F(PartitionedMatrixViewTest, RightMultiplyF) {
+TEST_P(PartitionedMatrixViewTest, RightMultiplyAndAccumulateF) {
Vector x1(pmv_->num_cols_f());
- Vector x2 = Vector::Zero(pmv_->num_cols());
+ Vector x2(pmv_->num_cols());
+ x2.setZero();
for (int i = 0; i < pmv_->num_cols_f(); ++i) {
- x1(i) = RandDouble();
- x2(i + pmv_->num_cols_e()) = x1(i);
+ x1(i) = x2(i + pmv_->num_cols_e()) = RandDouble();
}
- Vector y1 = Vector::Zero(pmv_->num_rows());
- pmv_->RightMultiplyF(x1.data(), y1.data());
+ Vector actual = Vector::Zero(pmv_->num_rows());
+ pmv_->RightMultiplyAndAccumulateF(x1.data(), actual.data());
- Vector y2 = Vector::Zero(pmv_->num_rows());
- A_->RightMultiply(x2.data(), y2.data());
+ Vector expected = Vector::Zero(pmv_->num_rows());
+ A_->RightMultiplyAndAccumulate(x2.data(), expected.data());
for (int i = 0; i < pmv_->num_rows(); ++i) {
- EXPECT_NEAR(y1(i), y2(i), kEpsilon);
+ EXPECT_NEAR(actual(i), expected(i), kEpsilon);
}
}
-TEST_F(PartitionedMatrixViewTest, LeftMultiply) {
+TEST_P(PartitionedMatrixViewTest, LeftMultiplyAndAccumulate) {
Vector x = Vector::Zero(pmv_->num_rows());
for (int i = 0; i < pmv_->num_rows(); ++i) {
x(i) = RandDouble();
}
+ Vector x_pre = x;
- Vector y = Vector::Zero(pmv_->num_cols());
- Vector y1 = Vector::Zero(pmv_->num_cols_e());
- Vector y2 = Vector::Zero(pmv_->num_cols_f());
+ Vector expected = Vector::Zero(pmv_->num_cols());
+ Vector e_actual = Vector::Zero(pmv_->num_cols_e());
+ Vector f_actual = Vector::Zero(pmv_->num_cols_f());
- A_->LeftMultiply(x.data(), y.data());
- pmv_->LeftMultiplyE(x.data(), y1.data());
- pmv_->LeftMultiplyF(x.data(), y2.data());
+ A_->LeftMultiplyAndAccumulate(x.data(), expected.data());
+ pmv_->LeftMultiplyAndAccumulateE(x.data(), e_actual.data());
+ pmv_->LeftMultiplyAndAccumulateF(x.data(), f_actual.data());
for (int i = 0; i < pmv_->num_cols(); ++i) {
- EXPECT_NEAR(y(i),
- (i < pmv_->num_cols_e()) ? y1(i) : y2(i - pmv_->num_cols_e()),
+ EXPECT_NEAR(expected(i),
+ (i < pmv_->num_cols_e()) ? e_actual(i)
+ : f_actual(i - pmv_->num_cols_e()),
kEpsilon);
}
}
-TEST_F(PartitionedMatrixViewTest, BlockDiagonalEtE) {
+TEST_P(PartitionedMatrixViewTest, BlockDiagonalFtF) {
+ std::unique_ptr<BlockSparseMatrix> block_diagonal_ff(
+ pmv_->CreateBlockDiagonalFtF());
+ const auto bs_diagonal = block_diagonal_ff->block_structure();
+ const int num_rows = pmv_->num_rows();
+ const int num_cols_f = pmv_->num_cols_f();
+ const int num_cols_e = pmv_->num_cols_e();
+ const int num_col_blocks_f = pmv_->num_col_blocks_f();
+ const int num_col_blocks_e = pmv_->num_col_blocks_e();
+
+ CHECK_EQ(block_diagonal_ff->num_rows(), num_cols_f);
+ CHECK_EQ(block_diagonal_ff->num_cols(), num_cols_f);
+
+ EXPECT_EQ(bs_diagonal->cols.size(), num_col_blocks_f);
+ EXPECT_EQ(bs_diagonal->rows.size(), num_col_blocks_f);
+
+ Matrix EF;
+ A_->ToDenseMatrix(&EF);
+ const auto F = EF.topRightCorner(num_rows, num_cols_f);
+
+ Matrix expected_FtF = F.transpose() * F;
+ Matrix actual_FtF;
+ block_diagonal_ff->ToDenseMatrix(&actual_FtF);
+
+  // FtF might not be block-diagonal
+ auto bs = down_cast<BlockSparseMatrix*>(A_.get())->block_structure();
+ for (int i = 0; i < num_col_blocks_f; ++i) {
+ const auto col_block_f = bs->cols[num_col_blocks_e + i];
+ const int block_size = col_block_f.size;
+ const int block_pos = col_block_f.position - num_cols_e;
+ const auto cell_expected =
+ expected_FtF.block(block_pos, block_pos, block_size, block_size);
+ auto cell_actual =
+ actual_FtF.block(block_pos, block_pos, block_size, block_size);
+ cell_actual -= cell_expected;
+ EXPECT_NEAR(cell_actual.norm(), 0., kEpsilon);
+ }
+  // Nothing should remain outside the block diagonal
+ EXPECT_NEAR(actual_FtF.norm(), 0., kEpsilon);
+}
+
+TEST_P(PartitionedMatrixViewTest, BlockDiagonalEtE) {
std::unique_ptr<BlockSparseMatrix> block_diagonal_ee(
pmv_->CreateBlockDiagonalEtE());
const CompressedRowBlockStructure* bs = block_diagonal_ee->block_structure();
+ const int num_rows = pmv_->num_rows();
+ const int num_cols_e = pmv_->num_cols_e();
+ const int num_col_blocks_e = pmv_->num_col_blocks_e();
- EXPECT_EQ(block_diagonal_ee->num_rows(), 2);
- EXPECT_EQ(block_diagonal_ee->num_cols(), 2);
- EXPECT_EQ(bs->cols.size(), 2);
- EXPECT_EQ(bs->rows.size(), 2);
+ CHECK_EQ(block_diagonal_ee->num_rows(), num_cols_e);
+ CHECK_EQ(block_diagonal_ee->num_cols(), num_cols_e);
- EXPECT_NEAR(block_diagonal_ee->values()[0], 10.0, kEpsilon);
- EXPECT_NEAR(block_diagonal_ee->values()[1], 155.0, kEpsilon);
+ EXPECT_EQ(bs->cols.size(), num_col_blocks_e);
+ EXPECT_EQ(bs->rows.size(), num_col_blocks_e);
+
+ Matrix EF;
+ A_->ToDenseMatrix(&EF);
+ const auto E = EF.topLeftCorner(num_rows, num_cols_e);
+
+ Matrix expected_EtE = E.transpose() * E;
+ Matrix actual_EtE;
+ block_diagonal_ee->ToDenseMatrix(&actual_EtE);
+
+ EXPECT_NEAR((expected_EtE - actual_EtE).norm(), 0., kEpsilon);
}
-TEST_F(PartitionedMatrixViewTest, BlockDiagonalFtF) {
- std::unique_ptr<BlockSparseMatrix> block_diagonal_ff(
+TEST_P(PartitionedMatrixViewTest, UpdateBlockDiagonalEtE) {
+ std::unique_ptr<BlockSparseMatrix> block_diagonal_ete(
+ pmv_->CreateBlockDiagonalEtE());
+ const int num_cols = pmv_->num_cols_e();
+
+ Matrix multi_threaded(num_cols, num_cols);
+ pmv_->UpdateBlockDiagonalEtE(block_diagonal_ete.get());
+ block_diagonal_ete->ToDenseMatrix(&multi_threaded);
+
+ Matrix single_threaded(num_cols, num_cols);
+ pmv_single_threaded_->UpdateBlockDiagonalEtE(block_diagonal_ete.get());
+ block_diagonal_ete->ToDenseMatrix(&single_threaded);
+
+ EXPECT_NEAR((multi_threaded - single_threaded).norm(), 0., kEpsilon);
+}
+
+TEST_P(PartitionedMatrixViewTest, UpdateBlockDiagonalFtF) {
+ std::unique_ptr<BlockSparseMatrix> block_diagonal_ftf(
pmv_->CreateBlockDiagonalFtF());
- const CompressedRowBlockStructure* bs = block_diagonal_ff->block_structure();
+ const int num_cols = pmv_->num_cols_f();
- EXPECT_EQ(block_diagonal_ff->num_rows(), 3);
- EXPECT_EQ(block_diagonal_ff->num_cols(), 3);
- EXPECT_EQ(bs->cols.size(), 3);
- EXPECT_EQ(bs->rows.size(), 3);
- EXPECT_NEAR(block_diagonal_ff->values()[0], 70.0, kEpsilon);
- EXPECT_NEAR(block_diagonal_ff->values()[1], 17.0, kEpsilon);
- EXPECT_NEAR(block_diagonal_ff->values()[2], 37.0, kEpsilon);
+ Matrix multi_threaded(num_cols, num_cols);
+ pmv_->UpdateBlockDiagonalFtF(block_diagonal_ftf.get());
+ block_diagonal_ftf->ToDenseMatrix(&multi_threaded);
+
+ Matrix single_threaded(num_cols, num_cols);
+ pmv_single_threaded_->UpdateBlockDiagonalFtF(block_diagonal_ftf.get());
+ block_diagonal_ftf->ToDenseMatrix(&single_threaded);
+
+ EXPECT_NEAR((multi_threaded - single_threaded).norm(), 0., kEpsilon);
}
+INSTANTIATE_TEST_SUITE_P(
+ ParallelProducts,
+ PartitionedMatrixViewTest,
+ ::testing::Combine(::testing::Values(2, 4, 6),
+ ::testing::Values(1, 2, 3, 4, 5, 6, 7, 8)),
+ ParamInfoToString);
+
} // namespace internal
} // namespace ceres
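The test conversion above replaces the TEST_F fixture with a value-parameterized suite over (problem_id, num_threads). In case the moving parts are unfamiliar, here is a stripped-down sketch of the same gtest pattern with hypothetical fixture and test names; Combine builds the cartesian product and the last argument to INSTANTIATE_TEST_SUITE_P names each instantiation.

#include <string>
#include <tuple>

#include "gtest/gtest.h"

using Param = std::tuple<int, int>;  // <problem_id, num_threads>

// Turns a parameter pair into a readable test-name suffix such as "2_4".
static std::string ParamToString(const testing::TestParamInfo<Param>& info) {
  return std::to_string(std::get<0>(info.param)) + "_" +
         std::to_string(std::get<1>(info.param));
}

class ExampleParamTest : public ::testing::TestWithParam<Param> {};

TEST_P(ExampleParamTest, ThreadCountIsPositive) {
  EXPECT_GT(std::get<1>(GetParam()), 0);
}

// One instantiation per element of Values(2, 4, 6) x Values(1, 2, 4, 8).
INSTANTIATE_TEST_SUITE_P(AllCombinations,
                         ExampleParamTest,
                         ::testing::Combine(::testing::Values(2, 4, 6),
                                            ::testing::Values(1, 2, 4, 8)),
                         ParamToString);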
diff --git a/internal/ceres/polynomial.cc b/internal/ceres/polynomial.cc
index 20812f4..8e99e34 100644
--- a/internal/ceres/polynomial.cc
+++ b/internal/ceres/polynomial.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,13 +37,10 @@
#include "Eigen/Dense"
#include "ceres/function_sample.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
namespace {
@@ -128,12 +125,12 @@
Vector* real,
Vector* imaginary) {
CHECK_EQ(polynomial.size(), 2);
- if (real != NULL) {
+ if (real != nullptr) {
real->resize(1);
(*real)(0) = -polynomial(1) / polynomial(0);
}
- if (imaginary != NULL) {
+ if (imaginary != nullptr) {
imaginary->setZero(1);
}
}
@@ -147,16 +144,16 @@
const double c = polynomial(2);
const double D = b * b - 4 * a * c;
const double sqrt_D = sqrt(fabs(D));
- if (real != NULL) {
+ if (real != nullptr) {
real->setZero(2);
}
- if (imaginary != NULL) {
+ if (imaginary != nullptr) {
imaginary->setZero(2);
}
// Real roots.
if (D >= 0) {
- if (real != NULL) {
+ if (real != nullptr) {
// Stable quadratic roots according to BKP Horn.
// http://people.csail.mit.edu/bkph/articles/Quadratics.pdf
if (b >= 0) {
@@ -171,11 +168,11 @@
}
// Use the normal quadratic formula for the complex case.
- if (real != NULL) {
+ if (real != nullptr) {
(*real)(0) = -b / (2.0 * a);
(*real)(1) = -b / (2.0 * a);
}
- if (imaginary != NULL) {
+ if (imaginary != nullptr) {
(*imaginary)(0) = sqrt_D / (2.0 * a);
(*imaginary)(1) = -sqrt_D / (2.0 * a);
}
@@ -240,14 +237,14 @@
}
// Output roots
- if (real != NULL) {
+ if (real != nullptr) {
*real = solver.eigenvalues().real();
} else {
- LOG(WARNING) << "NULL pointer passed as real argument to "
+    LOG(WARNING) << "nullptr passed as real argument to "
<< "FindPolynomialRoots. Real parts of the roots will not "
<< "be returned.";
}
- if (imaginary != NULL) {
+ if (imaginary != nullptr) {
*imaginary = solver.eigenvalues().imag();
}
return true;
@@ -304,7 +301,7 @@
const Vector derivative = DifferentiatePolynomial(polynomial);
Vector roots_real;
- if (!FindPolynomialRoots(derivative, &roots_real, NULL)) {
+ if (!FindPolynomialRoots(derivative, &roots_real, nullptr)) {
LOG(WARNING) << "Unable to find the critical points of "
<< "the interpolating polynomial.";
return;
@@ -326,7 +323,7 @@
}
}
-Vector FindInterpolatingPolynomial(const vector<FunctionSample>& samples) {
+Vector FindInterpolatingPolynomial(const std::vector<FunctionSample>& samples) {
const int num_samples = samples.size();
int num_constraints = 0;
for (int i = 0; i < num_samples; ++i) {
@@ -369,15 +366,14 @@
return lu.setThreshold(0.0).solve(rhs);
}
-void MinimizeInterpolatingPolynomial(const vector<FunctionSample>& samples,
+void MinimizeInterpolatingPolynomial(const std::vector<FunctionSample>& samples,
double x_min,
double x_max,
double* optimal_x,
double* optimal_value) {
const Vector polynomial = FindInterpolatingPolynomial(samples);
MinimizePolynomial(polynomial, x_min, x_max, optimal_x, optimal_value);
- for (int i = 0; i < samples.size(); ++i) {
- const FunctionSample& sample = samples[i];
+ for (const auto& sample : samples) {
if ((sample.x < x_min) || (sample.x > x_max)) {
continue;
}
@@ -390,5 +386,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
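The quadratic branch above cites BKP Horn's note on numerically stable quadratic roots. For reference, a common cancellation-free formulation is sketched below; it is not a copy of the Ceres code and assumes a != 0, a non-negative discriminant, and c != 0 for the second root.

#include <cmath>
#include <utility>

// Roots of a*x^2 + b*x + c. Because copysign(sqrt_D, b) has the same sign as
// b, the sum inside q never suffers catastrophic cancellation.
std::pair<double, double> StableQuadraticRoots(double a, double b, double c) {
  const double sqrt_D = std::sqrt(b * b - 4.0 * a * c);
  const double q = -0.5 * (b + std::copysign(sqrt_D, b));
  return {q / a, c / q};  // the degenerate case q == 0 (b == 0, c == 0) is not handled
}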
diff --git a/internal/ceres/polynomial.h b/internal/ceres/polynomial.h
index 20071f2..8c40628 100644
--- a/internal/ceres/polynomial.h
+++ b/internal/ceres/polynomial.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,11 +34,11 @@
#include <vector>
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
struct FunctionSample;
@@ -49,6 +49,7 @@
// and are given by a vector of coefficients of size N + 1.
// Evaluate the polynomial at x using the Horner scheme.
+CERES_NO_EXPORT
inline double EvaluatePolynomial(const Vector& polynomial, double x) {
double v = 0.0;
for (int i = 0; i < polynomial.size(); ++i) {
@@ -64,15 +65,16 @@
// Failure indicates that the polynomial is invalid (of size 0) or
// that the eigenvalues of the companion matrix could not be computed.
// On failure, a more detailed message will be written to LOG(ERROR).
-// If real is not NULL, the real parts of the roots will be returned in it.
-// Likewise, if imaginary is not NULL, imaginary parts will be returned in it.
-CERES_EXPORT_INTERNAL bool FindPolynomialRoots(const Vector& polynomial,
- Vector* real,
- Vector* imaginary);
+// If real is not nullptr, the real parts of the roots will be returned in it.
+// Likewise, if imaginary is not nullptr, imaginary parts will be returned in
+// it.
+CERES_NO_EXPORT bool FindPolynomialRoots(const Vector& polynomial,
+ Vector* real,
+ Vector* imaginary);
// Return the derivative of the given polynomial. It is assumed that
// the input polynomial is at least of degree zero.
-CERES_EXPORT_INTERNAL Vector DifferentiatePolynomial(const Vector& polynomial);
+CERES_NO_EXPORT Vector DifferentiatePolynomial(const Vector& polynomial);
// Find the minimum value of the polynomial in the interval [x_min,
// x_max]. The minimum is obtained by computing all the roots of the
@@ -80,11 +82,11 @@
// interval [x_min, x_max] are considered as well as the end points
// x_min and x_max. Since polynomials are differentiable functions,
// this ensures that the true minimum is found.
-CERES_EXPORT_INTERNAL void MinimizePolynomial(const Vector& polynomial,
- double x_min,
- double x_max,
- double* optimal_x,
- double* optimal_value);
+CERES_NO_EXPORT void MinimizePolynomial(const Vector& polynomial,
+ double x_min,
+ double x_max,
+ double* optimal_x,
+ double* optimal_value);
// Given a set of function value and/or gradient samples, find a
// polynomial whose value and gradients are exactly equal to the ones
@@ -97,7 +99,7 @@
// Of course it's possible to sample a polynomial any number of times,
// in which case, generally speaking the spurious higher order
// coefficients will be zero.
-CERES_EXPORT_INTERNAL Vector
+CERES_NO_EXPORT Vector
FindInterpolatingPolynomial(const std::vector<FunctionSample>& samples);
// Interpolate the function described by samples with a polynomial,
@@ -106,14 +108,15 @@
// finding algorithms may fail due to numerical difficulties. But the
// function is guaranteed to return its best guess of an answer, by
// considering the samples and the end points as possible solutions.
-CERES_EXPORT_INTERNAL void MinimizeInterpolatingPolynomial(
+CERES_NO_EXPORT void MinimizeInterpolatingPolynomial(
const std::vector<FunctionSample>& samples,
double x_min,
double x_max,
double* optimal_x,
double* optimal_value);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_POLYNOMIAL_SOLVER_H_
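As a usage note for the nullptr semantics documented above, here is a hedged sketch (polynomial.h is an internal header, so this is illustrative rather than public API); coefficients are stored leading coefficient first, matching the way the linear and quadratic branches in polynomial.cc read polynomial(0).

#include "ceres/internal/eigen.h"
#include "ceres/polynomial.h"

void FindRealRootsExample() {
  // x^2 - 3x + 2, leading coefficient first; the roots are 1 and 2.
  ceres::internal::Vector poly(3);
  poly << 1.0, -3.0, 2.0;
  ceres::internal::Vector real;
  // Passing nullptr for imaginary means the imaginary parts are simply not returned.
  if (ceres::internal::FindPolynomialRoots(poly, &real, nullptr)) {
    // real now holds {1, 2} in some order.
  }
}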
diff --git a/internal/ceres/polynomial_test.cc b/internal/ceres/polynomial_test.cc
index 0ff73ea..a87ea46 100644
--- a/internal/ceres/polynomial_test.cc
+++ b/internal/ceres/polynomial_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,15 +35,13 @@
#include <cmath>
#include <cstddef>
#include <limits>
+#include <vector>
#include "ceres/function_sample.h"
#include "ceres/test_util.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
namespace {
@@ -88,8 +86,8 @@
}
// Run a test with the polynomial defined by the N real roots in roots_real.
-// If use_real is false, NULL is passed as the real argument to
-// FindPolynomialRoots. If use_imaginary is false, NULL is passed as the
+// If use_real is false, nullptr is passed as the real argument to
+// FindPolynomialRoots. If use_imaginary is false, nullptr is passed as the
// imaginary argument to FindPolynomialRoots.
template <int N>
void RunPolynomialTestRealRoots(const double (&real_roots)[N],
@@ -102,8 +100,8 @@
for (int i = 0; i < N; ++i) {
poly = AddRealRoot(poly, real_roots[i]);
}
- Vector* const real_ptr = use_real ? &real : NULL;
- Vector* const imaginary_ptr = use_imaginary ? &imaginary : NULL;
+ Vector* const real_ptr = use_real ? &real : nullptr;
+ Vector* const imaginary_ptr = use_imaginary ? &imaginary : nullptr;
bool success = FindPolynomialRoots(poly, real_ptr, imaginary_ptr);
EXPECT_EQ(success, true);
@@ -315,7 +313,7 @@
Vector true_polynomial(1);
true_polynomial << 1.0;
- vector<FunctionSample> samples;
+ std::vector<FunctionSample> samples;
FunctionSample sample;
sample.x = 1.0;
sample.value = 1.0;
@@ -331,7 +329,7 @@
Vector true_polynomial(2);
true_polynomial << 2.0, -1.0;
- vector<FunctionSample> samples;
+ std::vector<FunctionSample> samples;
FunctionSample sample;
sample.x = 1.0;
sample.value = 1.0;
@@ -349,7 +347,7 @@
Vector true_polynomial(3);
true_polynomial << 2.0, 3.0, 2.0;
- vector<FunctionSample> samples;
+ std::vector<FunctionSample> samples;
{
FunctionSample sample;
sample.x = 1.0;
@@ -377,7 +375,7 @@
Vector true_polynomial(4);
true_polynomial << 0.0, 2.0, 3.0, 2.0;
- vector<FunctionSample> samples;
+ std::vector<FunctionSample> samples;
{
FunctionSample sample;
sample.x = 1.0;
@@ -407,7 +405,7 @@
Vector true_polynomial(4);
true_polynomial << 1.0, 2.0, 3.0, 2.0;
- vector<FunctionSample> samples;
+ std::vector<FunctionSample> samples;
{
FunctionSample sample;
sample.x = 1.0;
@@ -450,7 +448,7 @@
true_polynomial << 1.0, 2.0, 3.0, 2.0;
Vector true_gradient_polynomial = DifferentiatePolynomial(true_polynomial);
- vector<FunctionSample> samples;
+ std::vector<FunctionSample> samples;
{
FunctionSample sample;
sample.x = 1.0;
@@ -487,7 +485,7 @@
true_polynomial << 1.0, 2.0, 3.0, 2.0;
Vector true_gradient_polynomial = DifferentiatePolynomial(true_polynomial);
- vector<FunctionSample> samples;
+ std::vector<FunctionSample> samples;
{
FunctionSample sample;
sample.x = -3.0;
@@ -512,5 +510,4 @@
EXPECT_NEAR((true_polynomial - polynomial).norm(), 0.0, 1e-14);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/power_series_expansion_preconditioner.cc b/internal/ceres/power_series_expansion_preconditioner.cc
new file mode 100644
index 0000000..af98646
--- /dev/null
+++ b/internal/ceres/power_series_expansion_preconditioner.cc
@@ -0,0 +1,88 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: markshachkov@gmail.com (Mark Shachkov)
+
+#include "ceres/power_series_expansion_preconditioner.h"
+
+#include "ceres/eigen_vector_ops.h"
+#include "ceres/parallel_vector_ops.h"
+#include "ceres/preconditioner.h"
+
+namespace ceres::internal {
+
+PowerSeriesExpansionPreconditioner::PowerSeriesExpansionPreconditioner(
+ const ImplicitSchurComplement* isc,
+ const int max_num_spse_iterations,
+ const double spse_tolerance,
+ const Preconditioner::Options& options)
+ : isc_(isc),
+ max_num_spse_iterations_(max_num_spse_iterations),
+ spse_tolerance_(spse_tolerance),
+ options_(options) {}
+
+PowerSeriesExpansionPreconditioner::~PowerSeriesExpansionPreconditioner() =
+ default;
+
+bool PowerSeriesExpansionPreconditioner::Update(const LinearOperator& /*A*/,
+ const double* /*D*/) {
+ return true;
+}
+
+void PowerSeriesExpansionPreconditioner::RightMultiplyAndAccumulate(
+ const double* x, double* y) const {
+ VectorRef yref(y, num_rows());
+ Vector series_term(num_rows());
+ Vector previous_series_term(num_rows());
+ ParallelSetZero(options_.context, options_.num_threads, yref);
+ isc_->block_diagonal_FtF_inverse()->RightMultiplyAndAccumulate(
+ x, y, options_.context, options_.num_threads);
+ ParallelAssign(
+ options_.context, options_.num_threads, previous_series_term, yref);
+
+ const double norm_threshold =
+ spse_tolerance_ * Norm(yref, options_.context, options_.num_threads);
+
+ for (int i = 1;; i++) {
+ ParallelSetZero(options_.context, options_.num_threads, series_term);
+ isc_->InversePowerSeriesOperatorRightMultiplyAccumulate(
+ previous_series_term.data(), series_term.data());
+ ParallelAssign(
+ options_.context, options_.num_threads, yref, yref + series_term);
+ if (i >= max_num_spse_iterations_ || series_term.norm() < norm_threshold) {
+ break;
+ }
+ std::swap(previous_series_term, series_term);
+ }
+}
+
+int PowerSeriesExpansionPreconditioner::num_rows() const {
+ return isc_->num_rows();
+}
+
+} // namespace ceres::internal
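For readers following the loop in RightMultiplyAndAccumulate above: it accumulates a truncated power (Neumann) series. Assuming InversePowerSeriesOperatorRightMultiplyAccumulate applies P^{-1}(P - S), where S is the Schur complement and P its block-diagonal F'F part (this reading is an assumption, not stated in the diff), the vector y built up by the loop is

    y = \sum_{k=0}^{n} \left( P^{-1} (P - S) \right)^{k} P^{-1} x \approx S^{-1} x,

with the series stopped once either n reaches max_num_spse_iterations or the norm of the latest term drops below spse_tolerance times the norm of the zeroth term P^{-1} x.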
diff --git a/internal/ceres/power_series_expansion_preconditioner.h b/internal/ceres/power_series_expansion_preconditioner.h
new file mode 100644
index 0000000..9a993cf
--- /dev/null
+++ b/internal/ceres/power_series_expansion_preconditioner.h
@@ -0,0 +1,71 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: markshachkov@gmail.com (Mark Shachkov)
+
+#ifndef CERES_INTERNAL_POWER_SERIES_EXPANSION_PRECONDITIONER_H_
+#define CERES_INTERNAL_POWER_SERIES_EXPANSION_PRECONDITIONER_H_
+
+#include "ceres/implicit_schur_complement.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/internal/export.h"
+#include "ceres/preconditioner.h"
+
+namespace ceres::internal {
+
+// A preconditioner based on a power series expansion of the Schur complement
+// inverse, following "Weber et al., Power Bundle Adjustment for Large-Scale
+// 3D Reconstruction".
+class CERES_NO_EXPORT PowerSeriesExpansionPreconditioner
+ : public Preconditioner {
+ public:
+ // TODO: Consider moving max_num_spse_iterations and spse_tolerance to
+ // Preconditioner::Options
+ PowerSeriesExpansionPreconditioner(const ImplicitSchurComplement* isc,
+ const int max_num_spse_iterations,
+ const double spse_tolerance,
+ const Preconditioner::Options& options);
+ PowerSeriesExpansionPreconditioner(
+ const PowerSeriesExpansionPreconditioner&) = delete;
+ void operator=(const PowerSeriesExpansionPreconditioner&) = delete;
+ ~PowerSeriesExpansionPreconditioner() override;
+
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final;
+ bool Update(const LinearOperator& A, const double* D) final;
+ int num_rows() const final;
+
+ private:
+ const ImplicitSchurComplement* isc_;
+ const int max_num_spse_iterations_;
+ const double spse_tolerance_;
+ const Preconditioner::Options options_;
+};
+
+} // namespace ceres::internal
+
+#endif // CERES_INTERNAL_POWER_SERIES_EXPANSION_PRECONDITIONER_H_
diff --git a/internal/ceres/power_series_expansion_preconditioner_test.cc b/internal/ceres/power_series_expansion_preconditioner_test.cc
new file mode 100644
index 0000000..1c04162
--- /dev/null
+++ b/internal/ceres/power_series_expansion_preconditioner_test.cc
@@ -0,0 +1,175 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: markshachkov@gmail.com (Mark Shachkov)
+
+#include "ceres/power_series_expansion_preconditioner.h"
+
+#include <memory>
+
+#include "Eigen/Dense"
+#include "ceres/linear_least_squares_problems.h"
+#include "gtest/gtest.h"
+
+namespace ceres::internal {
+
+const double kEpsilon = 1e-14;
+
+class PowerSeriesExpansionPreconditionerTest : public ::testing::Test {
+ protected:
+ void SetUp() final {
+ problem_ = CreateLinearLeastSquaresProblemFromId(5);
+ const auto A = down_cast<BlockSparseMatrix*>(problem_->A.get());
+ const auto D = problem_->D.get();
+
+ options_.elimination_groups.push_back(problem_->num_eliminate_blocks);
+ options_.preconditioner_type = SCHUR_POWER_SERIES_EXPANSION;
+ preconditioner_options_ = Preconditioner::Options(options_);
+ isc_ = std::make_unique<ImplicitSchurComplement>(options_);
+ isc_->Init(*A, D, problem_->b.get());
+ num_f_cols_ = isc_->rhs().rows();
+ const int num_rows = A->num_rows(), num_cols = A->num_cols(),
+ num_e_cols = num_cols - num_f_cols_;
+
+    // Use a predefined linear operator with Schur structure and block-diagonal
+    // F'F to explicitly construct the Schur complement and compute its inverse,
+    // which serves as the reference.
+ Matrix A_dense, E, F, DE, DF;
+ problem_->A->ToDenseMatrix(&A_dense);
+ E = A_dense.leftCols(num_e_cols);
+ F = A_dense.rightCols(num_f_cols_);
+ DE = VectorRef(D, num_e_cols).asDiagonal();
+ DF = VectorRef(D + num_e_cols, num_f_cols_).asDiagonal();
+
+ sc_inverse_expected_ =
+ (F.transpose() *
+ (Matrix::Identity(num_rows, num_rows) -
+ E * (E.transpose() * E + DE).inverse() * E.transpose()) *
+ F +
+ DF)
+ .inverse();
+ }
+ std::unique_ptr<LinearLeastSquaresProblem> problem_;
+ std::unique_ptr<ImplicitSchurComplement> isc_;
+ int num_f_cols_;
+ Matrix sc_inverse_expected_;
+ LinearSolver::Options options_;
+ Preconditioner::Options preconditioner_options_;
+};
+
+TEST_F(PowerSeriesExpansionPreconditionerTest,
+ InverseValidPreconditionerToleranceReached) {
+ const double spse_tolerance = kEpsilon;
+ const int max_num_iterations = 50;
+ PowerSeriesExpansionPreconditioner preconditioner(
+ isc_.get(), max_num_iterations, spse_tolerance, preconditioner_options_);
+
+ Vector x(num_f_cols_), y(num_f_cols_);
+ for (int i = 0; i < num_f_cols_; i++) {
+ x.setZero();
+ x(i) = 1.0;
+
+ y.setZero();
+ preconditioner.RightMultiplyAndAccumulate(x.data(), y.data());
+ EXPECT_LT((y - sc_inverse_expected_.col(i)).norm(), kEpsilon)
+ << "Reference Schur complement inverse and its estimate via "
+           "PowerSeriesExpansionPreconditioner differ in column "
+        << i
+        << ".\nreference : " << sc_inverse_expected_.col(i).transpose()
+ << "\nestimated: " << y.transpose();
+ }
+}
+
+TEST_F(PowerSeriesExpansionPreconditionerTest,
+ InverseValidPreconditionerMaxIterations) {
+ const double spse_tolerance = 0;
+ const int max_num_iterations = 50;
+ PowerSeriesExpansionPreconditioner preconditioner_fixed_n_iterations(
+ isc_.get(), max_num_iterations, spse_tolerance, preconditioner_options_);
+
+ Vector x(num_f_cols_), y(num_f_cols_);
+ for (int i = 0; i < num_f_cols_; i++) {
+ x.setZero();
+ x(i) = 1.0;
+
+ y.setZero();
+ preconditioner_fixed_n_iterations.RightMultiplyAndAccumulate(x.data(),
+ y.data());
+ EXPECT_LT((y - sc_inverse_expected_.col(i)).norm(), kEpsilon)
+ << "Reference Schur complement inverse and its estimate via "
+           "PowerSeriesExpansionPreconditioner differ in column "
+        << i
+        << ".\nreference : " << sc_inverse_expected_.col(i).transpose()
+ << "\nestimated: " << y.transpose();
+ }
+}
+
+TEST_F(PowerSeriesExpansionPreconditionerTest,
+ InverseInvalidBadPreconditionerTolerance) {
+ const double spse_tolerance = 1 / kEpsilon;
+ const int max_num_iterations = 50;
+ PowerSeriesExpansionPreconditioner preconditioner_bad_tolerance(
+ isc_.get(), max_num_iterations, spse_tolerance, preconditioner_options_);
+
+ Vector x(num_f_cols_), y(num_f_cols_);
+ for (int i = 0; i < num_f_cols_; i++) {
+ x.setZero();
+ x(i) = 1.0;
+
+ y.setZero();
+ preconditioner_bad_tolerance.RightMultiplyAndAccumulate(x.data(), y.data());
+ EXPECT_GT((y - sc_inverse_expected_.col(i)).norm(), kEpsilon)
+ << "Reference Schur complement inverse and its estimate via "
+           "PowerSeriesExpansionPreconditioner are too similar; the tolerance "
+           "stopping criterion failed.";
+ }
+}
+
+TEST_F(PowerSeriesExpansionPreconditionerTest,
+ InverseInvalidBadPreconditionerMaxIterations) {
+ const double spse_tolerance = kEpsilon;
+ const int max_num_iterations = 1;
+ PowerSeriesExpansionPreconditioner preconditioner_bad_iterations_limit(
+ isc_.get(), max_num_iterations, spse_tolerance, preconditioner_options_);
+
+ Vector x(num_f_cols_), y(num_f_cols_);
+ for (int i = 0; i < num_f_cols_; i++) {
+ x.setZero();
+ x(i) = 1.0;
+
+ y.setZero();
+ preconditioner_bad_iterations_limit.RightMultiplyAndAccumulate(x.data(),
+ y.data());
+ EXPECT_GT((y - sc_inverse_expected_.col(i)).norm(), kEpsilon)
+ << "Reference Schur complement inverse and its estimate via "
+           "PowerSeriesExpansionPreconditioner are too similar; the maximum "
+           "iterations stopping criterion failed.";
+ }
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/preconditioner.cc b/internal/ceres/preconditioner.cc
index 69ba04d..0b9ce96 100644
--- a/internal/ceres/preconditioner.cc
+++ b/internal/ceres/preconditioner.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,10 +32,9 @@
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-Preconditioner::~Preconditioner() {}
+Preconditioner::~Preconditioner() = default;
PreconditionerType Preconditioner::PreconditionerForZeroEBlocks(
PreconditionerType preconditioner_type) {
@@ -48,26 +47,27 @@
}
SparseMatrixPreconditionerWrapper::SparseMatrixPreconditionerWrapper(
- const SparseMatrix* matrix)
- : matrix_(matrix) {
+ const SparseMatrix* matrix, const Preconditioner::Options& options)
+ : matrix_(matrix), options_(options) {
CHECK(matrix != nullptr);
}
-SparseMatrixPreconditionerWrapper::~SparseMatrixPreconditionerWrapper() {}
+SparseMatrixPreconditionerWrapper::~SparseMatrixPreconditionerWrapper() =
+ default;
-bool SparseMatrixPreconditionerWrapper::UpdateImpl(const SparseMatrix& A,
- const double* D) {
+bool SparseMatrixPreconditionerWrapper::UpdateImpl(const SparseMatrix& /*A*/,
+ const double* /*D*/) {
return true;
}
-void SparseMatrixPreconditionerWrapper::RightMultiply(const double* x,
- double* y) const {
- matrix_->RightMultiply(x, y);
+void SparseMatrixPreconditionerWrapper::RightMultiplyAndAccumulate(
+ const double* x, double* y) const {
+ matrix_->RightMultiplyAndAccumulate(
+ x, y, options_.context, options_.num_threads);
}
int SparseMatrixPreconditionerWrapper::num_rows() const {
return matrix_->num_rows();
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/preconditioner.h b/internal/ceres/preconditioner.h
index dd843b0..42dc6cc 100644
--- a/internal/ceres/preconditioner.h
+++ b/internal/ceres/preconditioner.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,24 +36,40 @@
#include "ceres/casts.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/context_impl.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_operator.h"
+#include "ceres/linear_solver.h"
#include "ceres/sparse_matrix.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class BlockSparseMatrix;
class SparseMatrix;
-class CERES_EXPORT_INTERNAL Preconditioner : public LinearOperator {
+class CERES_NO_EXPORT Preconditioner : public LinearOperator {
public:
struct Options {
+ Options() = default;
+ Options(const LinearSolver::Options& linear_solver_options)
+ : type(linear_solver_options.preconditioner_type),
+ visibility_clustering_type(
+ linear_solver_options.visibility_clustering_type),
+ sparse_linear_algebra_library_type(
+ linear_solver_options.sparse_linear_algebra_library_type),
+ num_threads(linear_solver_options.num_threads),
+ elimination_groups(linear_solver_options.elimination_groups),
+ row_block_size(linear_solver_options.row_block_size),
+ e_block_size(linear_solver_options.e_block_size),
+ f_block_size(linear_solver_options.f_block_size),
+ context(linear_solver_options.context) {}
+
PreconditionerType type = JACOBI;
VisibilityClusteringType visibility_clustering_type = CANONICAL_VIEWS;
SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type =
SUITE_SPARSE;
+ OrderingType ordering_type = OrderingType::NATURAL;
// When using the subset preconditioner, all row blocks starting
// from this row block are used to construct the preconditioner.
@@ -67,9 +83,6 @@
// and the preconditioner is the inverse of the matrix Q'Q.
int subset_preconditioner_start_row_block = -1;
- // See solver.h for information about these flags.
- bool use_postordering = false;
-
// If possible, how many threads the preconditioner can use.
int num_threads = 1;
@@ -115,7 +128,7 @@
static PreconditionerType PreconditionerForZeroEBlocks(
PreconditionerType preconditioner_type);
- virtual ~Preconditioner();
+ ~Preconditioner() override;
// Update the numerical value of the preconditioner for the linear
// system:
@@ -126,30 +139,48 @@
// for some vector b. It is important that the matrix A have the
// same block structure as the one used to construct this object.
//
- // D can be NULL, in which case its interpreted as a diagonal matrix
+  // D can be nullptr, in which case it is interpreted as a diagonal matrix
// of size zero.
virtual bool Update(const LinearOperator& A, const double* D) = 0;
// LinearOperator interface. Since the operator is symmetric,
- // LeftMultiply and num_cols are just calls to RightMultiply and
- // num_rows respectively. Update() must be called before
- // RightMultiply can be called.
- void RightMultiply(const double* x, double* y) const override = 0;
- void LeftMultiply(const double* x, double* y) const override {
- return RightMultiply(x, y);
+ // LeftMultiplyAndAccumulate and num_cols are just calls to
+ // RightMultiplyAndAccumulate and num_rows respectively. Update() must be
+ // called before RightMultiplyAndAccumulate can be called.
+ void RightMultiplyAndAccumulate(const double* x,
+ double* y) const override = 0;
+ void LeftMultiplyAndAccumulate(const double* x, double* y) const override {
+ return RightMultiplyAndAccumulate(x, y);
}
int num_rows() const override = 0;
int num_cols() const override { return num_rows(); }
};
+class CERES_NO_EXPORT IdentityPreconditioner : public Preconditioner {
+ public:
+ IdentityPreconditioner(int num_rows) : num_rows_(num_rows) {}
+
+ bool Update(const LinearOperator& /*A*/, const double* /*D*/) final {
+ return true;
+ }
+
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final {
+ VectorRef(y, num_rows_) += ConstVectorRef(x, num_rows_);
+ }
+
+ int num_rows() const final { return num_rows_; }
+
+ private:
+ int num_rows_ = -1;
+};
+
// This templated subclass of Preconditioner serves as a base class for
// other preconditioners that depend on the particular matrix layout of
// the underlying linear operator.
template <typename MatrixType>
-class TypedPreconditioner : public Preconditioner {
+class CERES_NO_EXPORT TypedPreconditioner : public Preconditioner {
public:
- virtual ~TypedPreconditioner() {}
bool Update(const LinearOperator& A, const double* D) final {
return UpdateImpl(*down_cast<const MatrixType*>(&A), D);
}
@@ -161,28 +192,32 @@
// Preconditioners that depend on access to the low level structure
// of a SparseMatrix.
// clang-format off
-typedef TypedPreconditioner<SparseMatrix> SparseMatrixPreconditioner;
-typedef TypedPreconditioner<BlockSparseMatrix> BlockSparseMatrixPreconditioner;
-typedef TypedPreconditioner<CompressedRowSparseMatrix> CompressedRowSparseMatrixPreconditioner;
+using SparseMatrixPreconditioner = TypedPreconditioner<SparseMatrix>;
+using BlockSparseMatrixPreconditioner = TypedPreconditioner<BlockSparseMatrix>;
+using CompressedRowSparseMatrixPreconditioner = TypedPreconditioner<CompressedRowSparseMatrix>;
// clang-format on
// Wrap a SparseMatrix object as a preconditioner.
-class SparseMatrixPreconditionerWrapper : public SparseMatrixPreconditioner {
+class CERES_NO_EXPORT SparseMatrixPreconditionerWrapper final
+ : public SparseMatrixPreconditioner {
public:
// Wrapper does NOT take ownership of the matrix pointer.
- explicit SparseMatrixPreconditionerWrapper(const SparseMatrix* matrix);
- virtual ~SparseMatrixPreconditionerWrapper();
+ explicit SparseMatrixPreconditionerWrapper(
+ const SparseMatrix* matrix, const Preconditioner::Options& options);
+ ~SparseMatrixPreconditionerWrapper() override;
// Preconditioner interface
- virtual void RightMultiply(const double* x, double* y) const;
- virtual int num_rows() const;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const override;
+ int num_rows() const override;
private:
- virtual bool UpdateImpl(const SparseMatrix& A, const double* D);
+ bool UpdateImpl(const SparseMatrix& A, const double* D) override;
const SparseMatrix* matrix_;
+ const Preconditioner::Options options_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_PRECONDITIONER_H_
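One point worth flagging in the interface above: the multiply entry points accumulate into y rather than overwrite it, so callers must zero the output first. A tiny sketch using the IdentityPreconditioner introduced in this change (an internal class, so illustrative only; its Update() is a no-op, which is why it is skipped here):

#include <vector>

#include "ceres/preconditioner.h"

void ApplyIdentityPreconditionerExample() {
  ceres::internal::IdentityPreconditioner M(3);
  const std::vector<double> r = {1.0, 2.0, 3.0};
  std::vector<double> z(M.num_rows(), 0.0);  // must start zeroed: the call accumulates
  M.RightMultiplyAndAccumulate(r.data(), z.data());  // z += I * r, so z == r
}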
diff --git a/internal/ceres/preprocessor.cc b/internal/ceres/preprocessor.cc
index 6a67d38..83c05d4 100644
--- a/internal/ceres/preprocessor.cc
+++ b/internal/ceres/preprocessor.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,34 +30,39 @@
#include "ceres/preprocessor.h"
+#include <memory>
+
#include "ceres/callbacks.h"
#include "ceres/gradient_checking_cost_function.h"
#include "ceres/line_search_preprocessor.h"
-#include "ceres/parallel_for.h"
#include "ceres/problem_impl.h"
#include "ceres/solver.h"
+#include "ceres/thread_pool.h"
#include "ceres/trust_region_preprocessor.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-Preprocessor* Preprocessor::Create(MinimizerType minimizer_type) {
+std::unique_ptr<Preprocessor> Preprocessor::Create(
+ MinimizerType minimizer_type) {
if (minimizer_type == TRUST_REGION) {
- return new TrustRegionPreprocessor;
+ return std::make_unique<TrustRegionPreprocessor>();
}
if (minimizer_type == LINE_SEARCH) {
- return new LineSearchPreprocessor;
+ return std::make_unique<LineSearchPreprocessor>();
}
LOG(FATAL) << "Unknown minimizer_type: " << minimizer_type;
- return NULL;
+ return nullptr;
}
-Preprocessor::~Preprocessor() {}
+Preprocessor::~Preprocessor() = default;
void ChangeNumThreadsIfNeeded(Solver::Options* options) {
- const int num_threads_available = MaxNumThreadsAvailable();
+ if (options->num_threads == 1) {
+ return;
+ }
+ const int num_threads_available = ThreadPool::MaxNumThreadsAvailable();
if (options->num_threads > num_threads_available) {
LOG(WARNING) << "Specified options.num_threads: " << options->num_threads
<< " exceeds maximum available from the threading model Ceres "
@@ -77,20 +82,22 @@
double* reduced_parameters = pp->reduced_parameters.data();
program->ParameterBlocksToStateVector(reduced_parameters);
+ auto context = pp->problem->context();
Minimizer::Options& minimizer_options = pp->minimizer_options;
minimizer_options = Minimizer::Options(options);
minimizer_options.evaluator = pp->evaluator;
+ minimizer_options.context = context;
if (options.logging_type != SILENT) {
- pp->logging_callback.reset(new LoggingCallback(
- options.minimizer_type, options.minimizer_progress_to_stdout));
+ pp->logging_callback = std::make_unique<LoggingCallback>(
+ options.minimizer_type, options.minimizer_progress_to_stdout);
minimizer_options.callbacks.insert(minimizer_options.callbacks.begin(),
pp->logging_callback.get());
}
if (options.update_state_every_iteration) {
- pp->state_updating_callback.reset(
- new StateUpdatingCallback(program, reduced_parameters));
+ pp->state_updating_callback =
+ std::make_unique<StateUpdatingCallback>(program, reduced_parameters);
// This must get pushed to the front of the callbacks so that it
// is run before any of the user callbacks.
minimizer_options.callbacks.insert(minimizer_options.callbacks.begin(),
@@ -98,5 +105,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
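The behavioral change in ChangeNumThreadsIfNeeded above is easy to miss: when num_threads is 1 the thread pool is never queried at all, otherwise the request is capped at what ThreadPool::MaxNumThreadsAvailable reports. A sketch of just that decision (EffectiveNumThreads is a hypothetical helper, not part of Ceres):

#include <algorithm>

int EffectiveNumThreads(int requested, int max_available_from_pool) {
  if (requested == 1) {
    return 1;  // single-threaded: skip the thread-pool query entirely
  }
  return std::min(requested, max_available_from_pool);
}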
diff --git a/internal/ceres/preprocessor.h b/internal/ceres/preprocessor.h
index ec56c6e..ed031f6 100644
--- a/internal/ceres/preprocessor.h
+++ b/internal/ceres/preprocessor.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,8 +37,9 @@
#include "ceres/coordinate_descent_minimizer.h"
#include "ceres/evaluator.h"
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/iteration_callback.h"
#include "ceres/linear_solver.h"
#include "ceres/minimizer.h"
@@ -46,8 +47,7 @@
#include "ceres/program.h"
#include "ceres/solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
struct PreprocessedProblem;
@@ -67,10 +67,10 @@
//
// The output of the Preprocessor is stored in a PreprocessedProblem
// object.
-class CERES_EXPORT_INTERNAL Preprocessor {
+class CERES_NO_EXPORT Preprocessor {
public:
// Factory.
- static Preprocessor* Create(MinimizerType minimizer_type);
+ static std::unique_ptr<Preprocessor> Create(MinimizerType minimizer_type);
virtual ~Preprocessor();
virtual bool Preprocess(const Solver::Options& options,
ProblemImpl* problem,
@@ -79,8 +79,8 @@
// A PreprocessedProblem is the result of running the Preprocessor on
// a Problem and Solver::Options object.
-struct PreprocessedProblem {
- PreprocessedProblem() : fixed_cost(0.0) {}
+struct CERES_NO_EXPORT PreprocessedProblem {
+ PreprocessedProblem() = default;
std::string error;
Solver::Options options;
@@ -100,7 +100,7 @@
std::vector<double*> removed_parameter_blocks;
Vector reduced_parameters;
- double fixed_cost;
+ double fixed_cost{0.0};
};
// Common functions used by various preprocessors.
@@ -108,14 +108,17 @@
// If the user has specified a num_threads > the maximum number of threads
// available from the compiled threading model, bound the number of threads
// to the maximum.
+CERES_NO_EXPORT
void ChangeNumThreadsIfNeeded(Solver::Options* options);
// Extract the effective parameter vector from the preprocessed
// problem and setup bits of the Minimizer::Options object that are
// common to all Preprocessors.
+CERES_NO_EXPORT
void SetupCommonMinimizerOptions(PreprocessedProblem* pp);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_PREPROCESSOR_H_
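
Preprocessor::Create above now transfers ownership to the caller through std::unique_ptr instead of returning a raw pointer. The following self-contained sketch shows the same factory pattern with illustrative placeholder types, not the real Ceres classes.

    #include <memory>

    enum class MinimizerTypeSketch { kTrustRegion, kLineSearch };

    struct PreprocessorSketch {
      virtual ~PreprocessorSketch() = default;
    };
    struct TrustRegionSketch : PreprocessorSketch {};
    struct LineSearchSketch : PreprocessorSketch {};

    // Returning std::unique_ptr documents the ownership transfer in the type:
    // the caller owns the object and no manual delete is needed.
    std::unique_ptr<PreprocessorSketch> Create(MinimizerTypeSketch type) {
      if (type == MinimizerTypeSketch::kTrustRegion) {
        return std::make_unique<TrustRegionSketch>();
      }
      return std::make_unique<LineSearchSketch>();
    }

    int main() {
      auto preprocessor = Create(MinimizerTypeSketch::kTrustRegion);
      return preprocessor != nullptr ? 0 : 1;
    }
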
diff --git a/internal/ceres/problem.cc b/internal/ceres/problem.cc
index f3ffd54..00c1786 100644
--- a/internal/ceres/problem.cc
+++ b/internal/ceres/problem.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,6 +31,7 @@
#include "ceres/problem.h"
+#include <memory>
#include <vector>
#include "ceres/crs_matrix.h"
@@ -38,20 +39,18 @@
namespace ceres {
-using std::vector;
-
Problem::Problem() : impl_(new internal::ProblemImpl) {}
Problem::Problem(const Problem::Options& options)
: impl_(new internal::ProblemImpl(options)) {}
// Not inline defaulted in declaration due to use of std::unique_ptr.
Problem::Problem(Problem&&) = default;
Problem& Problem::operator=(Problem&&) = default;
-Problem::~Problem() {}
+Problem::~Problem() = default;
ResidualBlockId Problem::AddResidualBlock(
CostFunction* cost_function,
LossFunction* loss_function,
- const vector<double*>& parameter_blocks) {
+ const std::vector<double*>& parameter_blocks) {
return impl_->AddResidualBlock(cost_function,
loss_function,
parameter_blocks.data(),
@@ -70,10 +69,8 @@
impl_->AddParameterBlock(values, size);
}
-void Problem::AddParameterBlock(double* values,
- int size,
- LocalParameterization* local_parameterization) {
- impl_->AddParameterBlock(values, size, local_parameterization);
+void Problem::AddParameterBlock(double* values, int size, Manifold* manifold) {
+ impl_->AddParameterBlock(values, size, manifold);
}
void Problem::RemoveResidualBlock(ResidualBlockId residual_block) {
@@ -96,14 +93,16 @@
return impl_->IsParameterBlockConstant(values);
}
-void Problem::SetParameterization(
- double* values, LocalParameterization* local_parameterization) {
- impl_->SetParameterization(values, local_parameterization);
+void Problem::SetManifold(double* values, Manifold* manifold) {
+ impl_->SetManifold(values, manifold);
}
-const LocalParameterization* Problem::GetParameterization(
- const double* values) const {
- return impl_->GetParameterization(values);
+const Manifold* Problem::GetManifold(const double* values) const {
+ return impl_->GetManifold(values);
+}
+
+bool Problem::HasManifold(const double* values) const {
+ return impl_->HasManifold(values);
}
void Problem::SetParameterLowerBound(double* values,
@@ -128,8 +127,8 @@
bool Problem::Evaluate(const EvaluateOptions& evaluate_options,
double* cost,
- vector<double>* residuals,
- vector<double>* gradient,
+ std::vector<double>* residuals,
+ std::vector<double>* gradient,
CRSMatrix* jacobian) {
return impl_->Evaluate(evaluate_options, cost, residuals, gradient, jacobian);
}
@@ -169,30 +168,30 @@
int Problem::NumResiduals() const { return impl_->NumResiduals(); }
-int Problem::ParameterBlockSize(const double* parameter_block) const {
- return impl_->ParameterBlockSize(parameter_block);
+int Problem::ParameterBlockSize(const double* values) const {
+ return impl_->ParameterBlockSize(values);
}
-int Problem::ParameterBlockLocalSize(const double* parameter_block) const {
- return impl_->ParameterBlockLocalSize(parameter_block);
+int Problem::ParameterBlockTangentSize(const double* values) const {
+ return impl_->ParameterBlockTangentSize(values);
}
bool Problem::HasParameterBlock(const double* values) const {
return impl_->HasParameterBlock(values);
}
-void Problem::GetParameterBlocks(vector<double*>* parameter_blocks) const {
+void Problem::GetParameterBlocks(std::vector<double*>* parameter_blocks) const {
impl_->GetParameterBlocks(parameter_blocks);
}
void Problem::GetResidualBlocks(
- vector<ResidualBlockId>* residual_blocks) const {
+ std::vector<ResidualBlockId>* residual_blocks) const {
impl_->GetResidualBlocks(residual_blocks);
}
void Problem::GetParameterBlocksForResidualBlock(
const ResidualBlockId residual_block,
- vector<double*>* parameter_blocks) const {
+ std::vector<double*>* parameter_blocks) const {
impl_->GetParameterBlocksForResidualBlock(residual_block, parameter_blocks);
}
@@ -207,8 +206,12 @@
}
void Problem::GetResidualBlocksForParameterBlock(
- const double* values, vector<ResidualBlockId>* residual_blocks) const {
+ const double* values, std::vector<ResidualBlockId>* residual_blocks) const {
impl_->GetResidualBlocksForParameterBlock(values, residual_blocks);
}
+const Problem::Options& Problem::options() const { return impl_->options(); }
+
+internal::ProblemImpl* Problem::mutable_impl() { return impl_.get(); }
+
} // namespace ceres
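
The Problem changes above replace the LocalParameterization calls with SetManifold, GetManifold, HasManifold and the tangent-size accessors. A hedged usage sketch against the public headers follows, using SubsetManifold exactly as the tests later in this patch do; the include paths assume a standard Ceres 2.2 installation.

    #include <vector>

    #include "ceres/manifold.h"
    #include "ceres/problem.h"

    int main() {
      double x[3] = {1.0, 2.0, 3.0};
      ceres::Problem problem;
      problem.AddParameterBlock(x, 3);

      // Hold x[0] constant. By default the problem takes ownership of the
      // manifold, so the raw new is deleted when the problem is destroyed.
      problem.SetManifold(x, new ceres::SubsetManifold(3, {0}));

      // The ambient size stays 3, but the tangent (free) size drops to 2.
      const bool ok = problem.HasManifold(x) &&
                      problem.ParameterBlockSize(x) == 3 &&
                      problem.ParameterBlockTangentSize(x) == 2;
      return ok ? 0 : 1;
    }
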
diff --git a/internal/ceres/problem_impl.cc b/internal/ceres/problem_impl.cc
index 3155bc3..52575ee 100644
--- a/internal/ceres/problem_impl.cc
+++ b/internal/ceres/problem_impl.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -49,9 +49,10 @@
#include "ceres/crs_matrix.h"
#include "ceres/evaluation_callback.h"
#include "ceres/evaluator.h"
+#include "ceres/internal/export.h"
#include "ceres/internal/fixed_array.h"
-#include "ceres/internal/port.h"
#include "ceres/loss_function.h"
+#include "ceres/manifold.h"
#include "ceres/map_util.h"
#include "ceres/parameter_block.h"
#include "ceres/program.h"
@@ -62,13 +63,7 @@
#include "ceres/stringprintf.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::map;
-using std::string;
-using std::vector;
-
+namespace ceres::internal {
namespace {
// Returns true if two regions of memory, a and b, with sizes size_a and size_b
// respectively, overlap.
@@ -130,7 +125,7 @@
<< "for a parameter with size " << size;
// Ignore the request if there is a block for the given pointer already.
- ParameterMap::iterator it = parameter_block_map_.find(values);
+ auto it = parameter_block_map_.find(values);
if (it != parameter_block_map_.end()) {
if (!options_.disable_all_safety_checks) {
int existing_size = it->second->Size();
@@ -146,11 +141,11 @@
// Before adding the parameter block, also check that it doesn't alias any
// other parameter blocks.
if (!parameter_block_map_.empty()) {
- ParameterMap::iterator lb = parameter_block_map_.lower_bound(values);
+ auto lb = parameter_block_map_.lower_bound(values);
// If lb is not the first block, check the previous block for aliasing.
if (lb != parameter_block_map_.begin()) {
- ParameterMap::iterator previous = lb;
+ auto previous = lb;
--previous;
CheckForNoAliasing(
previous->first, previous->second->Size(), values, size);
@@ -165,7 +160,7 @@
// Pass the index of the new parameter block as well to keep the index in
// sync with the position of the parameter in the program's parameter vector.
- ParameterBlock* new_parameter_block =
+ auto* new_parameter_block =
new ParameterBlock(values, size, program_->parameter_blocks_.size());
// For dynamic problems, add the list of dependent residual blocks, which is
@@ -192,7 +187,7 @@
residual_block);
}
- ResidualBlockSet::iterator it = residual_block_set_.find(residual_block);
+ auto it = residual_block_set_.find(residual_block);
residual_block_set_.erase(it);
}
DeleteBlockInVector(program_->mutable_residual_blocks(), residual_block);
@@ -207,13 +202,13 @@
// The const casts here are legit, since ResidualBlock holds these
// pointers as const pointers but we have ownership of them and
// have the right to destroy them when the destructor is called.
- CostFunction* cost_function =
+ auto* cost_function =
const_cast<CostFunction*>(residual_block->cost_function());
if (options_.cost_function_ownership == TAKE_OWNERSHIP) {
DecrementValueOrDeleteKey(cost_function, &cost_function_ref_count_);
}
- LossFunction* loss_function =
+ auto* loss_function =
const_cast<LossFunction*>(residual_block->loss_function());
if (options_.loss_function_ownership == TAKE_OWNERSHIP &&
loss_function != nullptr) {
@@ -225,15 +220,7 @@
// Deletes the parameter block in question, assuming there are no other
// references to it inside the problem (e.g. by any residual blocks).
-// Referenced parameterizations are tucked away for future deletion, since it
-// is not possible to know whether other parts of the problem depend on them
-// without doing a full scan.
void ProblemImpl::DeleteBlock(ParameterBlock* parameter_block) {
- if (options_.local_parameterization_ownership == TAKE_OWNERSHIP &&
- parameter_block->local_parameterization() != nullptr) {
- local_parameterizations_to_delete_.push_back(
- parameter_block->mutable_local_parameterization());
- }
parameter_block_map_.erase(parameter_block->mutable_user_state());
delete parameter_block;
}
@@ -264,13 +251,13 @@
}
// Collect the unique parameterizations and delete the parameters.
- for (int i = 0; i < program_->parameter_blocks_.size(); ++i) {
- DeleteBlock(program_->parameter_blocks_[i]);
+ for (auto* parameter_block : program_->parameter_blocks_) {
+ DeleteBlock(parameter_block);
}
- // Delete the owned parameterizations.
- STLDeleteUniqueContainerPointers(local_parameterizations_to_delete_.begin(),
- local_parameterizations_to_delete_.end());
+ // Delete the owned manifolds.
+ STLDeleteUniqueContainerPointers(manifolds_to_delete_.begin(),
+ manifolds_to_delete_.end());
if (context_impl_owned_) {
delete context_impl_;
@@ -286,7 +273,7 @@
CHECK_EQ(num_parameter_blocks, cost_function->parameter_block_sizes().size());
// Check the sizes match.
- const vector<int32_t>& parameter_block_sizes =
+ const std::vector<int32_t>& parameter_block_sizes =
cost_function->parameter_block_sizes();
if (!options_.disable_all_safety_checks) {
@@ -295,15 +282,15 @@
<< "that the cost function expects.";
// Check for duplicate parameter blocks.
- vector<double*> sorted_parameter_blocks(
+ std::vector<double*> sorted_parameter_blocks(
parameter_blocks, parameter_blocks + num_parameter_blocks);
- sort(sorted_parameter_blocks.begin(), sorted_parameter_blocks.end());
+ std::sort(sorted_parameter_blocks.begin(), sorted_parameter_blocks.end());
const bool has_duplicate_items =
(std::adjacent_find(sorted_parameter_blocks.begin(),
sorted_parameter_blocks.end()) !=
sorted_parameter_blocks.end());
if (has_duplicate_items) {
- string blocks;
+ std::string blocks;
for (int i = 0; i < num_parameter_blocks; ++i) {
blocks += StringPrintf(" %p ", parameter_blocks[i]);
}
@@ -315,7 +302,7 @@
}
// Add parameter blocks and convert the double*'s to parameter blocks.
- vector<ParameterBlock*> parameter_block_ptrs(num_parameter_blocks);
+ std::vector<ParameterBlock*> parameter_block_ptrs(num_parameter_blocks);
for (int i = 0; i < num_parameter_blocks; ++i) {
parameter_block_ptrs[i] = InternalAddParameterBlock(
parameter_blocks[i], parameter_block_sizes[i]);
@@ -334,7 +321,7 @@
}
}
- ResidualBlock* new_residual_block =
+ auto* new_residual_block =
new ResidualBlock(cost_function,
loss_function,
parameter_block_ptrs,
@@ -372,12 +359,20 @@
InternalAddParameterBlock(values, size);
}
-void ProblemImpl::AddParameterBlock(
- double* values, int size, LocalParameterization* local_parameterization) {
- ParameterBlock* parameter_block = InternalAddParameterBlock(values, size);
- if (local_parameterization != nullptr) {
- parameter_block->SetParameterization(local_parameterization);
+void ProblemImpl::InternalSetManifold(double* /*values*/,
+ ParameterBlock* parameter_block,
+ Manifold* manifold) {
+ if (manifold != nullptr && options_.manifold_ownership == TAKE_OWNERSHIP) {
+ manifolds_to_delete_.push_back(manifold);
}
+ parameter_block->SetManifold(manifold);
+}
+
+void ProblemImpl::AddParameterBlock(double* values,
+ int size,
+ Manifold* manifold) {
+ ParameterBlock* parameter_block = InternalAddParameterBlock(values, size);
+ InternalSetManifold(values, parameter_block, manifold);
}
// Delete a block from a vector of blocks, maintaining the indexing invariant.
@@ -385,7 +380,7 @@
// vector over the element to remove, then popping the last element. It
// destroys the ordering in the interest of speed.
template <typename Block>
-void ProblemImpl::DeleteBlockInVector(vector<Block*>* mutable_blocks,
+void ProblemImpl::DeleteBlockInVector(std::vector<Block*>* mutable_blocks,
Block* block_to_remove) {
CHECK_EQ((*mutable_blocks)[block_to_remove->index()], block_to_remove)
<< "You found a Ceres bug! \n"
@@ -411,7 +406,7 @@
CHECK(residual_block != nullptr);
// Verify that residual_block identifies a residual in the current problem.
- const string residual_not_found_message = StringPrintf(
+ const std::string residual_not_found_message = StringPrintf(
"Residual block to remove: %p not found. This usually means "
"one of three things have happened:\n"
" 1) residual_block is uninitialised and points to a random "
@@ -449,11 +444,11 @@
if (options_.enable_fast_removal) {
// Copy the dependent residuals from the parameter block because the set of
// dependents will change after each call to RemoveResidualBlock().
- vector<ResidualBlock*> residual_blocks_to_remove(
+ std::vector<ResidualBlock*> residual_blocks_to_remove(
parameter_block->mutable_residual_blocks()->begin(),
parameter_block->mutable_residual_blocks()->end());
- for (int i = 0; i < residual_blocks_to_remove.size(); ++i) {
- InternalRemoveResidualBlock(residual_blocks_to_remove[i]);
+ for (auto* residual_block : residual_blocks_to_remove) {
+ InternalRemoveResidualBlock(residual_block);
}
} else {
// Scan all the residual blocks to remove ones that depend on the parameter
@@ -508,39 +503,32 @@
parameter_block->SetVarying();
}
-void ProblemImpl::SetParameterization(
- double* values, LocalParameterization* local_parameterization) {
+void ProblemImpl::SetManifold(double* values, Manifold* manifold) {
ParameterBlock* parameter_block =
FindWithDefault(parameter_block_map_, values, nullptr);
if (parameter_block == nullptr) {
LOG(FATAL) << "Parameter block not found: " << values
<< ". You must add the parameter block to the problem before "
- << "you can set its local parameterization.";
+ << "you can set its manifold.";
}
- // If the parameter block already has a local parameterization and
- // we are to take ownership of local parameterizations, then add it
- // to local_parameterizations_to_delete_ for eventual deletion.
- if (parameter_block->local_parameterization_ &&
- options_.local_parameterization_ownership == TAKE_OWNERSHIP) {
- local_parameterizations_to_delete_.push_back(
- parameter_block->local_parameterization_);
- }
-
- parameter_block->SetParameterization(local_parameterization);
+ InternalSetManifold(values, parameter_block, manifold);
}
-const LocalParameterization* ProblemImpl::GetParameterization(
- const double* values) const {
+const Manifold* ProblemImpl::GetManifold(const double* values) const {
ParameterBlock* parameter_block = FindWithDefault(
parameter_block_map_, const_cast<double*>(values), nullptr);
if (parameter_block == nullptr) {
LOG(FATAL) << "Parameter block not found: " << values
<< ". You must add the parameter block to the problem before "
- << "you can get its local parameterization.";
+ << "you can get its manifold.";
}
- return parameter_block->local_parameterization();
+ return parameter_block->manifold();
+}
+
+bool ProblemImpl::HasManifold(const double* values) const {
+ return GetManifold(values) != nullptr;
}
void ProblemImpl::SetParameterLowerBound(double* values,
@@ -596,8 +584,8 @@
bool ProblemImpl::Evaluate(const Problem::EvaluateOptions& evaluate_options,
double* cost,
- vector<double>* residuals,
- vector<double>* gradient,
+ std::vector<double>* residuals,
+ std::vector<double>* gradient,
CRSMatrix* jacobian) {
if (cost == nullptr && residuals == nullptr && gradient == nullptr &&
jacobian == nullptr) {
@@ -612,11 +600,11 @@
? evaluate_options.residual_blocks
: program_->residual_blocks());
- const vector<double*>& parameter_block_ptrs =
+ const std::vector<double*>& parameter_block_ptrs =
evaluate_options.parameter_blocks;
- vector<ParameterBlock*> variable_parameter_blocks;
- vector<ParameterBlock*>& parameter_blocks =
+ std::vector<ParameterBlock*> variable_parameter_blocks;
+ std::vector<ParameterBlock*>& parameter_blocks =
*program.mutable_parameter_blocks();
if (parameter_block_ptrs.size() == 0) {
@@ -649,13 +637,15 @@
// columns of the jacobian, we need to make sure that they are
// constant during evaluation and then make them variable again
// after we are done.
- vector<ParameterBlock*> all_parameter_blocks(program_->parameter_blocks());
- vector<ParameterBlock*> included_parameter_blocks(
+ std::vector<ParameterBlock*> all_parameter_blocks(
+ program_->parameter_blocks());
+ std::vector<ParameterBlock*> included_parameter_blocks(
program.parameter_blocks());
- vector<ParameterBlock*> excluded_parameter_blocks;
- sort(all_parameter_blocks.begin(), all_parameter_blocks.end());
- sort(included_parameter_blocks.begin(), included_parameter_blocks.end());
+ std::vector<ParameterBlock*> excluded_parameter_blocks;
+ std::sort(all_parameter_blocks.begin(), all_parameter_blocks.end());
+ std::sort(included_parameter_blocks.begin(),
+ included_parameter_blocks.end());
set_difference(all_parameter_blocks.begin(),
all_parameter_blocks.end(),
included_parameter_blocks.begin(),
@@ -663,8 +653,7 @@
back_inserter(excluded_parameter_blocks));
variable_parameter_blocks.reserve(excluded_parameter_blocks.size());
- for (int i = 0; i < excluded_parameter_blocks.size(); ++i) {
- ParameterBlock* parameter_block = excluded_parameter_blocks[i];
+ for (auto* parameter_block : excluded_parameter_blocks) {
if (!parameter_block->IsConstant()) {
variable_parameter_blocks.push_back(parameter_block);
parameter_block->SetConstant();
@@ -678,23 +667,13 @@
Evaluator::Options evaluator_options;
- // Even though using SPARSE_NORMAL_CHOLESKY requires SuiteSparse or
- // CXSparse, here it just being used for telling the evaluator to
- // use a SparseRowCompressedMatrix for the jacobian. This is because
- // the Evaluator decides the storage for the Jacobian based on the
- // type of linear solver being used.
+  // Even though using SPARSE_NORMAL_CHOLESKY requires a sparse linear algebra
+  // library, here it is just being used to tell the evaluator to use a
+  // CompressedRowSparseMatrix for the Jacobian. This is because the Evaluator
+  // decides the storage for the Jacobian based on the type of linear solver
+  // being used.
evaluator_options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
-#ifdef CERES_NO_THREADS
- if (evaluate_options.num_threads > 1) {
- LOG(WARNING)
- << "No threading support is compiled into this binary; "
- << "only evaluate_options.num_threads = 1 is supported. Switching "
- << "to single threaded mode.";
- }
- evaluator_options.num_threads = 1;
-#else
evaluator_options.num_threads = evaluate_options.num_threads;
-#endif // CERES_NO_THREADS
// The main thread also does work so we only need to launch num_threads - 1.
context_impl_->EnsureMinimumThreads(evaluator_options.num_threads - 1);
@@ -716,8 +695,8 @@
std::unique_ptr<CompressedRowSparseMatrix> tmp_jacobian;
if (jacobian != nullptr) {
- tmp_jacobian.reset(
- down_cast<CompressedRowSparseMatrix*>(evaluator->CreateJacobian()));
+ tmp_jacobian.reset(down_cast<CompressedRowSparseMatrix*>(
+ evaluator->CreateJacobian().release()));
}
// Point the state pointers to the user state pointers. This is
@@ -749,8 +728,8 @@
// Make the parameter blocks that were temporarily marked constant,
// variable again.
- for (int i = 0; i < variable_parameter_blocks.size(); ++i) {
- variable_parameter_blocks[i]->SetVarying();
+ for (auto* parameter_block : variable_parameter_blocks) {
+ parameter_block->SetVarying();
}
if (status) {
@@ -829,24 +808,25 @@
return parameter_block->Size();
}
-int ProblemImpl::ParameterBlockLocalSize(const double* values) const {
+int ProblemImpl::ParameterBlockTangentSize(const double* values) const {
ParameterBlock* parameter_block = FindWithDefault(
parameter_block_map_, const_cast<double*>(values), nullptr);
if (parameter_block == nullptr) {
LOG(FATAL) << "Parameter block not found: " << values
<< ". You must add the parameter block to the problem before "
- << "you can get its local size.";
+ << "you can get its tangent size.";
}
- return parameter_block->LocalSize();
+ return parameter_block->TangentSize();
}
-bool ProblemImpl::HasParameterBlock(const double* parameter_block) const {
- return (parameter_block_map_.find(const_cast<double*>(parameter_block)) !=
+bool ProblemImpl::HasParameterBlock(const double* values) const {
+ return (parameter_block_map_.find(const_cast<double*>(values)) !=
parameter_block_map_.end());
}
-void ProblemImpl::GetParameterBlocks(vector<double*>* parameter_blocks) const {
+void ProblemImpl::GetParameterBlocks(
+ std::vector<double*>* parameter_blocks) const {
CHECK(parameter_blocks != nullptr);
parameter_blocks->resize(0);
parameter_blocks->reserve(parameter_block_map_.size());
@@ -856,14 +836,14 @@
}
void ProblemImpl::GetResidualBlocks(
- vector<ResidualBlockId>* residual_blocks) const {
+ std::vector<ResidualBlockId>* residual_blocks) const {
CHECK(residual_blocks != nullptr);
*residual_blocks = program().residual_blocks();
}
void ProblemImpl::GetParameterBlocksForResidualBlock(
const ResidualBlockId residual_block,
- vector<double*>* parameter_blocks) const {
+ std::vector<double*>* parameter_blocks) const {
int num_parameter_blocks = residual_block->NumParameterBlocks();
CHECK(parameter_blocks != nullptr);
parameter_blocks->resize(num_parameter_blocks);
@@ -884,7 +864,7 @@
}
void ProblemImpl::GetResidualBlocksForParameterBlock(
- const double* values, vector<ResidualBlockId>* residual_blocks) const {
+ const double* values, std::vector<ResidualBlockId>* residual_blocks) const {
ParameterBlock* parameter_block = FindWithDefault(
parameter_block_map_, const_cast<double*>(values), nullptr);
if (parameter_block == nullptr) {
@@ -921,5 +901,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
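
DeleteBlockInVector above erases in O(1) by copying the last element over the removed one and popping the back, destroying ordering but preserving the invariant that each block's stored index matches its position. A generic sketch of that swap-and-pop idiom, with an illustrative Block type rather than the Ceres one:

    #include <cassert>
    #include <vector>

    struct Block {
      int index = -1;
    };

    // Erase block_to_remove from *blocks in O(1) while keeping the invariant
    // (*blocks)[b->index] == b for every remaining block. Ordering is lost.
    void SwapAndPopErase(std::vector<Block*>* blocks, Block* block_to_remove) {
      const int pos = block_to_remove->index;
      assert((*blocks)[pos] == block_to_remove);

      // Move the last element into the vacated slot and patch its index.
      Block* last = blocks->back();
      (*blocks)[pos] = last;
      last->index = pos;
      blocks->pop_back();
      delete block_to_remove;
    }

    int main() {
      std::vector<Block*> blocks;
      for (int i = 0; i < 4; ++i) {
        auto* block = new Block;
        block->index = i;
        blocks.push_back(block);
      }
      SwapAndPopErase(&blocks, blocks[1]);
      // The former last block now lives at position 1 with index 1.
      const bool ok = blocks.size() == 3 && blocks[1]->index == 1;
      for (Block* block : blocks) delete block;
      return ok ? 0 : 1;
    }
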
diff --git a/internal/ceres/problem_impl.h b/internal/ceres/problem_impl.h
index 9abff3f..733f26e 100644
--- a/internal/ceres/problem_impl.h
+++ b/internal/ceres/problem_impl.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -42,11 +42,15 @@
#include <array>
#include <map>
#include <memory>
+#include <unordered_map>
#include <unordered_set>
#include <vector>
#include "ceres/context_impl.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/internal/port.h"
+#include "ceres/manifold.h"
#include "ceres/problem.h"
#include "ceres/types.h"
@@ -55,7 +59,6 @@
class CostFunction;
class EvaluationCallback;
class LossFunction;
-class LocalParameterization;
struct CRSMatrix;
namespace internal {
@@ -63,12 +66,12 @@
class Program;
class ResidualBlock;
-class CERES_EXPORT_INTERNAL ProblemImpl {
+class CERES_NO_EXPORT ProblemImpl {
public:
- typedef std::map<double*, ParameterBlock*> ParameterMap;
- typedef std::unordered_set<ResidualBlock*> ResidualBlockSet;
- typedef std::map<CostFunction*, int> CostFunctionRefCount;
- typedef std::map<LossFunction*, int> LossFunctionRefCount;
+ using ParameterMap = std::map<double*, ParameterBlock*>;
+ using ResidualBlockSet = std::unordered_set<ResidualBlock*>;
+ using CostFunctionRefCount = std::map<CostFunction*, int>;
+ using LossFunctionRefCount = std::map<LossFunction*, int>;
ProblemImpl();
explicit ProblemImpl(const Problem::Options& options);
@@ -96,9 +99,7 @@
}
void AddParameterBlock(double* values, int size);
- void AddParameterBlock(double* values,
- int size,
- LocalParameterization* local_parameterization);
+ void AddParameterBlock(double* values, int size, Manifold* manifold);
void RemoveResidualBlock(ResidualBlock* residual_block);
void RemoveParameterBlock(const double* values);
@@ -107,9 +108,9 @@
void SetParameterBlockVariable(double* values);
bool IsParameterBlockConstant(const double* values) const;
- void SetParameterization(double* values,
- LocalParameterization* local_parameterization);
- const LocalParameterization* GetParameterization(const double* values) const;
+ void SetManifold(double* values, Manifold* manifold);
+ const Manifold* GetManifold(const double* values) const;
+ bool HasManifold(const double* values) const;
void SetParameterLowerBound(double* values, int index, double lower_bound);
void SetParameterUpperBound(double* values, int index, double upper_bound);
@@ -134,10 +135,10 @@
int NumResidualBlocks() const;
int NumResiduals() const;
- int ParameterBlockSize(const double* parameter_block) const;
- int ParameterBlockLocalSize(const double* parameter_block) const;
+ int ParameterBlockSize(const double* values) const;
+ int ParameterBlockTangentSize(const double* values) const;
- bool HasParameterBlock(const double* parameter_block) const;
+ bool HasParameterBlock(const double* values) const;
void GetParameterBlocks(std::vector<double*>* parameter_blocks) const;
void GetResidualBlocks(std::vector<ResidualBlockId>* residual_blocks) const;
@@ -165,10 +166,16 @@
return residual_block_set_;
}
+ const Problem::Options& options() const { return options_; }
+
ContextImpl* context() { return context_impl_; }
private:
ParameterBlock* InternalAddParameterBlock(double* values, int size);
+ void InternalSetManifold(double* values,
+ ParameterBlock* parameter_block,
+ Manifold* manifold);
+
void InternalRemoveResidualBlock(ResidualBlock* residual_block);
// Delete the arguments in question. These differ from the Remove* functions
@@ -194,13 +201,15 @@
// The actual parameter and residual blocks.
std::unique_ptr<internal::Program> program_;
- // When removing parameter blocks, parameterizations have ambiguous
+ // TODO(sameeragarwal): Unify the shared object handling across object types.
+ // Right now we are using vectors for Manifold objects and reference counting
+ // for CostFunctions and LossFunctions. Ideally this should be done uniformly.
+
+ // When removing parameter blocks, manifolds have ambiguous
// ownership. Instead of scanning the entire problem to see if the
- // parameterization is shared with other parameter blocks, buffer
+ // manifold is shared with other parameter blocks, buffer
// them until destruction.
- //
- // TODO(keir): See if it makes sense to use sets instead.
- std::vector<LocalParameterization*> local_parameterizations_to_delete_;
+ std::vector<Manifold*> manifolds_to_delete_;
// For each cost function and loss function in the problem, a count
// of the number of residual blocks that refer to them. When the
@@ -213,4 +222,6 @@
} // namespace internal
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_PUBLIC_PROBLEM_IMPL_H_
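
The manifolds_to_delete_ member above implements the deferred-deletion idiom the comment describes: a manifold may be shared by several parameter blocks, so candidates are buffered and each distinct pointer is deleted once, at problem destruction. A self-contained sketch of that pattern with a placeholder type; std::sort plus std::unique stand in for the STLDeleteUniqueContainerPointers helper.

    #include <algorithm>
    #include <vector>

    struct ManifoldSketch {
      virtual ~ManifoldSketch() = default;
    };

    class DeferredOwner {
     public:
      // Record a pointer whose ownership is ambiguous; the same pointer may be
      // buffered several times if multiple parameter blocks share it.
      void BufferForDeletion(ManifoldSketch* manifold) {
        if (manifold != nullptr) {
          to_delete_.push_back(manifold);
        }
      }

      ~DeferredOwner() {
        // De-duplicate before deleting so each pointer is freed exactly once.
        std::sort(to_delete_.begin(), to_delete_.end());
        to_delete_.erase(std::unique(to_delete_.begin(), to_delete_.end()),
                         to_delete_.end());
        for (ManifoldSketch* manifold : to_delete_) {
          delete manifold;
        }
      }

     private:
      std::vector<ManifoldSketch*> to_delete_;
    };

    int main() {
      DeferredOwner owner;
      auto* shared = new ManifoldSketch;
      owner.BufferForDeletion(shared);
      owner.BufferForDeletion(shared);  // Shared by two blocks; freed only once.
      return 0;
    }
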
diff --git a/internal/ceres/problem_test.cc b/internal/ceres/problem_test.cc
index 5129b9a..ebf15cd 100644
--- a/internal/ceres/problem_test.cc
+++ b/internal/ceres/problem_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,6 +32,8 @@
#include "ceres/problem.h"
#include <memory>
+#include <string>
+#include <vector>
#include "ceres/autodiff_cost_function.h"
#include "ceres/casts.h"
@@ -39,7 +41,6 @@
#include "ceres/crs_matrix.h"
#include "ceres/evaluator_test_utils.h"
#include "ceres/internal/eigen.h"
-#include "ceres/local_parameterization.h"
#include "ceres/loss_function.h"
#include "ceres/map_util.h"
#include "ceres/parameter_block.h"
@@ -51,10 +52,7 @@
#include "gmock/gmock.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
// The following three classes are for the purposes of defining
// function signatures. They have dummy Evaluate functions.
@@ -67,8 +65,6 @@
mutable_parameter_block_sizes()->push_back(parameter_block_size);
}
- virtual ~UnaryCostFunction() {}
-
bool Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const final {
@@ -148,7 +144,7 @@
problem.AddParameterBlock(y, 4);
problem.AddParameterBlock(z, 5);
- EXPECT_DEATH_IF_SUPPORTED(problem.AddResidualBlock(NULL, NULL, x),
+ EXPECT_DEATH_IF_SUPPORTED(problem.AddResidualBlock(nullptr, nullptr, x),
"cost_function != nullptr");
}
@@ -162,7 +158,7 @@
// UnaryCostFunction takes only one parameter, but two are passed.
EXPECT_DEATH_IF_SUPPORTED(
- problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x, y),
+ problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x, y),
"num_parameter_blocks");
}
@@ -170,10 +166,10 @@
double x[3];
Problem problem;
- problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
+ problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
EXPECT_DEATH_IF_SUPPORTED(
problem.AddResidualBlock(
- new UnaryCostFunction(2, 4 /* 4 != 3 */), NULL, x),
+ new UnaryCostFunction(2, 4 /* 4 != 3 */), nullptr, x),
"different block sizes");
}
@@ -182,11 +178,11 @@
Problem problem;
EXPECT_DEATH_IF_SUPPORTED(
- problem.AddResidualBlock(new BinaryCostFunction(2, 3, 3), NULL, x, x),
+ problem.AddResidualBlock(new BinaryCostFunction(2, 3, 3), nullptr, x, x),
"Duplicate parameter blocks");
EXPECT_DEATH_IF_SUPPORTED(
problem.AddResidualBlock(
- new TernaryCostFunction(1, 5, 3, 5), NULL, z, x, z),
+ new TernaryCostFunction(1, 5, 3, 5), nullptr, z, x, z),
"Duplicate parameter blocks");
}
@@ -201,7 +197,7 @@
// The cost function expects the size of the second parameter, z, to be 4
// instead of 5 as declared above. This is fatal.
EXPECT_DEATH_IF_SUPPORTED(
- problem.AddResidualBlock(new BinaryCostFunction(2, 3, 4), NULL, x, z),
+ problem.AddResidualBlock(new BinaryCostFunction(2, 3, 4), nullptr, x, z),
"different block sizes");
}
@@ -209,10 +205,10 @@
double x[3], y[4], z[5];
Problem problem;
- problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
- problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
- problem.AddResidualBlock(new UnaryCostFunction(2, 4), NULL, y);
- problem.AddResidualBlock(new UnaryCostFunction(2, 5), NULL, z);
+ problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
+ problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
+ problem.AddResidualBlock(new UnaryCostFunction(2, 4), nullptr, y);
+ problem.AddResidualBlock(new UnaryCostFunction(2, 5), nullptr, z);
EXPECT_EQ(3, problem.NumParameterBlocks());
EXPECT_EQ(12, problem.NumParameters());
@@ -278,62 +274,26 @@
// Creating parameter blocks multiple times is ignored.
problem.AddParameterBlock(x, 3);
- problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
+ problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
// ... even repeatedly.
problem.AddParameterBlock(x, 3);
- problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
+ problem.AddResidualBlock(new UnaryCostFunction(2, 3), nullptr, x);
// More parameters are fine.
problem.AddParameterBlock(y, 4);
- problem.AddResidualBlock(new UnaryCostFunction(2, 4), NULL, y);
+ problem.AddResidualBlock(new UnaryCostFunction(2, 4), nullptr, y);
EXPECT_EQ(2, problem.NumParameterBlocks());
EXPECT_EQ(7, problem.NumParameters());
}
-TEST(Problem, AddingParametersAndResidualsResultsInExpectedProblem) {
- double x[3], y[4], z[5], w[4];
-
- Problem problem;
- problem.AddParameterBlock(x, 3);
- EXPECT_EQ(1, problem.NumParameterBlocks());
- EXPECT_EQ(3, problem.NumParameters());
-
- problem.AddParameterBlock(y, 4);
- EXPECT_EQ(2, problem.NumParameterBlocks());
- EXPECT_EQ(7, problem.NumParameters());
-
- problem.AddParameterBlock(z, 5);
- EXPECT_EQ(3, problem.NumParameterBlocks());
- EXPECT_EQ(12, problem.NumParameters());
-
- // Add a parameter that has a local parameterization.
- w[0] = 1.0;
- w[1] = 0.0;
- w[2] = 0.0;
- w[3] = 0.0;
- problem.AddParameterBlock(w, 4, new QuaternionParameterization);
- EXPECT_EQ(4, problem.NumParameterBlocks());
- EXPECT_EQ(16, problem.NumParameters());
-
- problem.AddResidualBlock(new UnaryCostFunction(2, 3), NULL, x);
- problem.AddResidualBlock(new BinaryCostFunction(6, 5, 4), NULL, z, y);
- problem.AddResidualBlock(new BinaryCostFunction(3, 3, 5), NULL, x, z);
- problem.AddResidualBlock(new BinaryCostFunction(7, 5, 3), NULL, z, x);
- problem.AddResidualBlock(new TernaryCostFunction(1, 5, 3, 4), NULL, z, x, y);
-
- const int total_residuals = 2 + 6 + 3 + 7 + 1;
- EXPECT_EQ(problem.NumResidualBlocks(), 5);
- EXPECT_EQ(problem.NumResiduals(), total_residuals);
-}
-
class DestructorCountingCostFunction : public SizedCostFunction<3, 4, 5> {
public:
explicit DestructorCountingCostFunction(int* num_destructions)
: num_destructions_(num_destructions) {}
- virtual ~DestructorCountingCostFunction() { *num_destructions_ += 1; }
+ ~DestructorCountingCostFunction() override { *num_destructions_ += 1; }
bool Evaluate(double const* const* parameters,
double* residuals,
@@ -357,9 +317,9 @@
problem.AddParameterBlock(z, 5);
CostFunction* cost = new DestructorCountingCostFunction(&num_destructions);
- problem.AddResidualBlock(cost, NULL, y, z);
- problem.AddResidualBlock(cost, NULL, y, z);
- problem.AddResidualBlock(cost, NULL, y, z);
+ problem.AddResidualBlock(cost, nullptr, y, z);
+ problem.AddResidualBlock(cost, nullptr, y, z);
+ problem.AddResidualBlock(cost, nullptr, y, z);
EXPECT_EQ(3, problem.NumResidualBlocks());
}
@@ -372,10 +332,11 @@
Problem problem;
CostFunction* cost_function = new UnaryCostFunction(2, 3);
const ResidualBlockId residual_block =
- problem.AddResidualBlock(cost_function, NULL, x);
+ problem.AddResidualBlock(cost_function, nullptr, x);
EXPECT_EQ(problem.GetCostFunctionForResidualBlock(residual_block),
cost_function);
- EXPECT_TRUE(problem.GetLossFunctionForResidualBlock(residual_block) == NULL);
+ EXPECT_TRUE(problem.GetLossFunctionForResidualBlock(residual_block) ==
+ nullptr);
}
TEST(Problem, GetLossFunctionForResidualBlock) {
@@ -403,8 +364,8 @@
new DestructorCountingCostFunction(&num_destructions);
CostFunction* cost_wz =
new DestructorCountingCostFunction(&num_destructions);
- ResidualBlock* r_yz = problem.AddResidualBlock(cost_yz, NULL, y, z);
- ResidualBlock* r_wz = problem.AddResidualBlock(cost_wz, NULL, w, z);
+ ResidualBlock* r_yz = problem.AddResidualBlock(cost_yz, nullptr, y, z);
+ ResidualBlock* r_wz = problem.AddResidualBlock(cost_wz, nullptr, w, z);
EXPECT_EQ(2, problem.NumResidualBlocks());
problem.RemoveResidualBlock(r_yz);
@@ -427,7 +388,7 @@
DynamicProblem() {
Problem::Options options;
options.enable_fast_removal = GetParam();
- problem.reset(new ProblemImpl(options));
+ problem = std::make_unique<ProblemImpl>(options);
}
ParameterBlock* GetParameterBlock(int block) {
@@ -575,16 +536,15 @@
"Parameter block not found:");
}
-TEST(Problem, SetLocalParameterizationWithUnknownPtrDies) {
+TEST(Problem, SetManifoldWithUnknownPtrDies) {
double x[3];
double y[2];
Problem problem;
problem.AddParameterBlock(x, 3);
- EXPECT_DEATH_IF_SUPPORTED(
- problem.SetParameterization(y, new IdentityParameterization(3)),
- "Parameter block not found:");
+ EXPECT_DEATH_IF_SUPPORTED(problem.SetManifold(y, new EuclideanManifold<3>),
+ "Parameter block not found:");
}
TEST(Problem, RemoveParameterBlockWithUnknownPtrDies) {
@@ -598,7 +558,7 @@
"Parameter block not found:");
}
-TEST(Problem, GetParameterization) {
+TEST(Problem, GetManifold) {
double x[3];
double y[2];
@@ -606,10 +566,84 @@
problem.AddParameterBlock(x, 3);
problem.AddParameterBlock(y, 2);
- LocalParameterization* parameterization = new IdentityParameterization(3);
- problem.SetParameterization(x, parameterization);
- EXPECT_EQ(problem.GetParameterization(x), parameterization);
- EXPECT_TRUE(problem.GetParameterization(y) == NULL);
+ Manifold* manifold = new EuclideanManifold<3>;
+ problem.SetManifold(x, manifold);
+ EXPECT_EQ(problem.GetManifold(x), manifold);
+ EXPECT_TRUE(problem.GetManifold(y) == nullptr);
+}
+
+TEST(Problem, HasManifold) {
+ double x[3];
+ double y[2];
+
+ Problem problem;
+ problem.AddParameterBlock(x, 3);
+ problem.AddParameterBlock(y, 2);
+
+ Manifold* manifold = new EuclideanManifold<3>;
+ problem.SetManifold(x, manifold);
+ EXPECT_TRUE(problem.HasManifold(x));
+ EXPECT_FALSE(problem.HasManifold(y));
+}
+
+TEST(Problem, RepeatedAddParameterBlockResetsManifold) {
+ double x[4];
+ double y[2];
+
+ Problem problem;
+ problem.AddParameterBlock(x, 4, new SubsetManifold(4, {0, 1}));
+ problem.AddParameterBlock(y, 2);
+
+ EXPECT_FALSE(problem.HasManifold(y));
+
+ EXPECT_TRUE(problem.HasManifold(x));
+ EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+ EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+ EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+ EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
+
+ problem.AddParameterBlock(x, 4, static_cast<Manifold*>(nullptr));
+ EXPECT_FALSE(problem.HasManifold(x));
+ EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+ EXPECT_EQ(problem.ParameterBlockTangentSize(x), 4);
+ EXPECT_EQ(problem.GetManifold(x), nullptr);
+
+ problem.AddParameterBlock(x, 4, new SubsetManifold(4, {0, 1, 2}));
+ problem.AddParameterBlock(y, 2);
+ EXPECT_TRUE(problem.HasManifold(x));
+ EXPECT_EQ(problem.ParameterBlockSize(x), 4);
+ EXPECT_EQ(problem.ParameterBlockTangentSize(x), 1);
+ EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 4);
+ EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 1);
+}
+
+TEST(Problem, ParameterBlockQueryTestUsingManifold) {
+ double x[3];
+ double y[4];
+ Problem problem;
+ problem.AddParameterBlock(x, 3);
+ problem.AddParameterBlock(y, 4);
+
+ std::vector<int> constant_parameters;
+ constant_parameters.push_back(0);
+ problem.SetManifold(x, new SubsetManifold(3, constant_parameters));
+ EXPECT_EQ(problem.ParameterBlockSize(x), 3);
+ EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+ EXPECT_EQ(problem.ParameterBlockTangentSize(y), 4);
+
+ std::vector<double*> parameter_blocks;
+  problem.GetParameterBlocks(&parameter_blocks);
+ EXPECT_EQ(parameter_blocks.size(), 2);
+ EXPECT_NE(parameter_blocks[0], parameter_blocks[1]);
+ EXPECT_TRUE(parameter_blocks[0] == x || parameter_blocks[0] == y);
+ EXPECT_TRUE(parameter_blocks[1] == x || parameter_blocks[1] == y);
+
+ EXPECT_TRUE(problem.HasParameterBlock(x));
+ problem.RemoveParameterBlock(x);
+ EXPECT_FALSE(problem.HasParameterBlock(x));
+  problem.GetParameterBlocks(&parameter_blocks);
+ EXPECT_EQ(parameter_blocks.size(), 1);
+ EXPECT_TRUE(parameter_blocks[0] == y);
}
TEST(Problem, ParameterBlockQueryTest) {
@@ -619,15 +653,14 @@
problem.AddParameterBlock(x, 3);
problem.AddParameterBlock(y, 4);
- vector<int> constant_parameters;
+ std::vector<int> constant_parameters;
constant_parameters.push_back(0);
- problem.SetParameterization(
- x, new SubsetParameterization(3, constant_parameters));
+ problem.SetManifold(x, new SubsetManifold(3, constant_parameters));
EXPECT_EQ(problem.ParameterBlockSize(x), 3);
- EXPECT_EQ(problem.ParameterBlockLocalSize(x), 2);
- EXPECT_EQ(problem.ParameterBlockLocalSize(y), 4);
+ EXPECT_EQ(problem.ParameterBlockTangentSize(x), 2);
+ EXPECT_EQ(problem.ParameterBlockTangentSize(y), 4);
- vector<double*> parameter_blocks;
+ std::vector<double*> parameter_blocks;
  problem.GetParameterBlocks(&parameter_blocks);
EXPECT_EQ(parameter_blocks.size(), 2);
EXPECT_NE(parameter_blocks[0], parameter_blocks[1]);
@@ -720,13 +753,13 @@
CostFunction* cost_z = new UnaryCostFunction (1, 5);
CostFunction* cost_w = new UnaryCostFunction (1, 3);
- ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, NULL, y, z, w);
- ResidualBlock* r_yz = problem->AddResidualBlock(cost_yz, NULL, y, z);
- ResidualBlock* r_yw = problem->AddResidualBlock(cost_yw, NULL, y, w);
- ResidualBlock* r_zw = problem->AddResidualBlock(cost_zw, NULL, z, w);
- ResidualBlock* r_y = problem->AddResidualBlock(cost_y, NULL, y);
- ResidualBlock* r_z = problem->AddResidualBlock(cost_z, NULL, z);
- ResidualBlock* r_w = problem->AddResidualBlock(cost_w, NULL, w);
+ ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, nullptr, y, z, w);
+ ResidualBlock* r_yz = problem->AddResidualBlock(cost_yz, nullptr, y, z);
+ ResidualBlock* r_yw = problem->AddResidualBlock(cost_yw, nullptr, y, w);
+ ResidualBlock* r_zw = problem->AddResidualBlock(cost_zw, nullptr, z, w);
+ ResidualBlock* r_y = problem->AddResidualBlock(cost_y, nullptr, y);
+ ResidualBlock* r_z = problem->AddResidualBlock(cost_z, nullptr, z);
+ ResidualBlock* r_w = problem->AddResidualBlock(cost_w, nullptr, w);
EXPECT_EQ(3, problem->NumParameterBlocks());
EXPECT_EQ(7, NumResidualBlocks());
@@ -781,13 +814,13 @@
CostFunction* cost_z = new UnaryCostFunction (1, 5);
CostFunction* cost_w = new UnaryCostFunction (1, 3);
- ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, NULL, y, z, w);
- ResidualBlock* r_yz = problem->AddResidualBlock(cost_yz, NULL, y, z);
- ResidualBlock* r_yw = problem->AddResidualBlock(cost_yw, NULL, y, w);
- ResidualBlock* r_zw = problem->AddResidualBlock(cost_zw, NULL, z, w);
- ResidualBlock* r_y = problem->AddResidualBlock(cost_y, NULL, y);
- ResidualBlock* r_z = problem->AddResidualBlock(cost_z, NULL, z);
- ResidualBlock* r_w = problem->AddResidualBlock(cost_w, NULL, w);
+ ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, nullptr, y, z, w);
+ ResidualBlock* r_yz = problem->AddResidualBlock(cost_yz, nullptr, y, z);
+ ResidualBlock* r_yw = problem->AddResidualBlock(cost_yw, nullptr, y, w);
+ ResidualBlock* r_zw = problem->AddResidualBlock(cost_zw, nullptr, z, w);
+ ResidualBlock* r_y = problem->AddResidualBlock(cost_y, nullptr, y);
+ ResidualBlock* r_z = problem->AddResidualBlock(cost_z, nullptr, z);
+ ResidualBlock* r_w = problem->AddResidualBlock(cost_w, nullptr, w);
if (GetParam()) {
// In this test parameterization, there should be back-pointers from the
@@ -797,9 +830,9 @@
ExpectParameterBlockContains(w, r_yzw, r_yw, r_zw, r_w);
} else {
// Otherwise, nothing.
- EXPECT_TRUE(GetParameterBlock(0)->mutable_residual_blocks() == NULL);
- EXPECT_TRUE(GetParameterBlock(1)->mutable_residual_blocks() == NULL);
- EXPECT_TRUE(GetParameterBlock(2)->mutable_residual_blocks() == NULL);
+ EXPECT_TRUE(GetParameterBlock(0)->mutable_residual_blocks() == nullptr);
+ EXPECT_TRUE(GetParameterBlock(1)->mutable_residual_blocks() == nullptr);
+ EXPECT_TRUE(GetParameterBlock(2)->mutable_residual_blocks() == nullptr);
}
EXPECT_EQ(3, problem->NumParameterBlocks());
EXPECT_EQ(7, NumResidualBlocks());
@@ -906,13 +939,13 @@
CostFunction* cost_z = new UnaryCostFunction (1, 5);
CostFunction* cost_w = new UnaryCostFunction (1, 3);
- ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, NULL, y, z, w);
- ResidualBlock* r_yz = problem->AddResidualBlock(cost_yz, NULL, y, z);
- ResidualBlock* r_yw = problem->AddResidualBlock(cost_yw, NULL, y, w);
- ResidualBlock* r_zw = problem->AddResidualBlock(cost_zw, NULL, z, w);
- ResidualBlock* r_y = problem->AddResidualBlock(cost_y, NULL, y);
- ResidualBlock* r_z = problem->AddResidualBlock(cost_z, NULL, z);
- ResidualBlock* r_w = problem->AddResidualBlock(cost_w, NULL, w);
+ ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, nullptr, y, z, w);
+ ResidualBlock* r_yz = problem->AddResidualBlock(cost_yz, nullptr, y, z);
+ ResidualBlock* r_yw = problem->AddResidualBlock(cost_yw, nullptr, y, w);
+ ResidualBlock* r_zw = problem->AddResidualBlock(cost_zw, nullptr, z, w);
+ ResidualBlock* r_y = problem->AddResidualBlock(cost_y, nullptr, y);
+ ResidualBlock* r_z = problem->AddResidualBlock(cost_z, nullptr, z);
+ ResidualBlock* r_w = problem->AddResidualBlock(cost_w, nullptr, w);
// clang-format on
@@ -925,8 +958,7 @@
// Attempt to remove a cast pointer never added as a residual.
int trash_memory = 1234;
- ResidualBlock* invalid_residual =
- reinterpret_cast<ResidualBlock*>(&trash_memory);
+ auto* invalid_residual = reinterpret_cast<ResidualBlock*>(&trash_memory);
EXPECT_DEATH_IF_SUPPORTED(problem->RemoveResidualBlock(invalid_residual),
"not found");
@@ -946,7 +978,7 @@
// Check that a null-terminated array, a, has the same elements as b.
template <typename T>
-void ExpectVectorContainsUnordered(const T* a, const vector<T>& b) {
+void ExpectVectorContainsUnordered(const T* a, const std::vector<T>& b) {
// Compute the size of a.
int size = 0;
while (a[size]) {
@@ -955,12 +987,12 @@
ASSERT_EQ(size, b.size());
// Sort a.
- vector<T> a_sorted(size);
+ std::vector<T> a_sorted(size);
copy(a, a + size, a_sorted.begin());
sort(a_sorted.begin(), a_sorted.end());
// Sort b.
- vector<T> b_sorted(b);
+ std::vector<T> b_sorted(b);
sort(b_sorted.begin(), b_sorted.end());
// Compare.
@@ -972,7 +1004,7 @@
static void ExpectProblemHasResidualBlocks(
const ProblemImpl& problem,
const ResidualBlockId* expected_residual_blocks) {
- vector<ResidualBlockId> residual_blocks;
+ std::vector<ResidualBlockId> residual_blocks;
problem.GetResidualBlocks(&residual_blocks);
ExpectVectorContainsUnordered(expected_residual_blocks, residual_blocks);
}
@@ -993,48 +1025,48 @@
CostFunction* cost_z = new UnaryCostFunction (1, 5);
CostFunction* cost_w = new UnaryCostFunction (1, 3);
- ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, NULL, y, z, w);
+ ResidualBlock* r_yzw = problem->AddResidualBlock(cost_yzw, nullptr, y, z, w);
{
- ResidualBlockId expected_residuals[] = {r_yzw, 0};
+ ResidualBlockId expected_residuals[] = {r_yzw, nullptr};
ExpectProblemHasResidualBlocks(*problem, expected_residuals);
}
- ResidualBlock* r_yz = problem->AddResidualBlock(cost_yz, NULL, y, z);
+ ResidualBlock* r_yz = problem->AddResidualBlock(cost_yz, nullptr, y, z);
{
- ResidualBlockId expected_residuals[] = {r_yzw, r_yz, 0};
+ ResidualBlockId expected_residuals[] = {r_yzw, r_yz, nullptr};
ExpectProblemHasResidualBlocks(*problem, expected_residuals);
}
- ResidualBlock* r_yw = problem->AddResidualBlock(cost_yw, NULL, y, w);
+ ResidualBlock* r_yw = problem->AddResidualBlock(cost_yw, nullptr, y, w);
{
- ResidualBlock *expected_residuals[] = {r_yzw, r_yz, r_yw, 0};
+ ResidualBlock *expected_residuals[] = {r_yzw, r_yz, r_yw, nullptr};
ExpectProblemHasResidualBlocks(*problem, expected_residuals);
}
- ResidualBlock* r_zw = problem->AddResidualBlock(cost_zw, NULL, z, w);
+ ResidualBlock* r_zw = problem->AddResidualBlock(cost_zw, nullptr, z, w);
{
- ResidualBlock *expected_residuals[] = {r_yzw, r_yz, r_yw, r_zw, 0};
+ ResidualBlock *expected_residuals[] = {r_yzw, r_yz, r_yw, r_zw, nullptr};
ExpectProblemHasResidualBlocks(*problem, expected_residuals);
}
- ResidualBlock* r_y = problem->AddResidualBlock(cost_y, NULL, y);
+ ResidualBlock* r_y = problem->AddResidualBlock(cost_y, nullptr, y);
{
- ResidualBlock *expected_residuals[] = {r_yzw, r_yz, r_yw, r_zw, r_y, 0};
+ ResidualBlock *expected_residuals[] = {r_yzw, r_yz, r_yw, r_zw, r_y, nullptr};
ExpectProblemHasResidualBlocks(*problem, expected_residuals);
}
- ResidualBlock* r_z = problem->AddResidualBlock(cost_z, NULL, z);
+ ResidualBlock* r_z = problem->AddResidualBlock(cost_z, nullptr, z);
{
ResidualBlock *expected_residuals[] = {
- r_yzw, r_yz, r_yw, r_zw, r_y, r_z, 0
+ r_yzw, r_yz, r_yw, r_zw, r_y, r_z, nullptr
};
ExpectProblemHasResidualBlocks(*problem, expected_residuals);
}
- ResidualBlock* r_w = problem->AddResidualBlock(cost_w, NULL, w);
+ ResidualBlock* r_w = problem->AddResidualBlock(cost_w, nullptr, w);
{
ResidualBlock *expected_residuals[] = {
- r_yzw, r_yz, r_yw, r_zw, r_y, r_z, r_w, 0
+ r_yzw, r_yz, r_yw, r_zw, r_y, r_z, r_w, nullptr
};
ExpectProblemHasResidualBlocks(*problem, expected_residuals);
}
- vector<double*> parameter_blocks;
- vector<ResidualBlockId> residual_blocks;
+ std::vector<double*> parameter_blocks;
+ std::vector<ResidualBlockId> residual_blocks;
// Check GetResidualBlocksForParameterBlock() for all parameter blocks.
struct GetResidualBlocksForParameterBlockTestCase {
@@ -1042,10 +1074,10 @@
ResidualBlockId expected_residual_blocks[10];
};
GetResidualBlocksForParameterBlockTestCase get_residual_blocks_cases[] = {
- { y, { r_yzw, r_yz, r_yw, r_y, NULL} },
- { z, { r_yzw, r_yz, r_zw, r_z, NULL} },
- { w, { r_yzw, r_yw, r_zw, r_w, NULL} },
- { NULL }
+ { y, { r_yzw, r_yz, r_yw, r_y, nullptr} },
+ { z, { r_yzw, r_yz, r_zw, r_z, nullptr} },
+ { w, { r_yzw, r_yw, r_zw, r_w, nullptr} },
+ { nullptr, { nullptr } }
};
for (int i = 0; get_residual_blocks_cases[i].parameter_block; ++i) {
problem->GetResidualBlocksForParameterBlock(
@@ -1062,14 +1094,14 @@
double* expected_parameter_blocks[10];
};
GetParameterBlocksForResidualBlockTestCase get_parameter_blocks_cases[] = {
- { r_yzw, { y, z, w, NULL } },
- { r_yz , { y, z, NULL } },
- { r_yw , { y, w, NULL } },
- { r_zw , { z, w, NULL } },
- { r_y , { y, NULL } },
- { r_z , { z, NULL } },
- { r_w , { w, NULL } },
- { NULL }
+ { r_yzw, { y, z, w, nullptr } },
+ { r_yz , { y, z, nullptr } },
+ { r_yw , { y, w, nullptr } },
+ { r_zw , { z, w, nullptr } },
+ { r_y , { y, nullptr } },
+ { r_z , { z, nullptr } },
+ { r_w , { w, nullptr } },
+ { nullptr, { nullptr } }
};
for (int i = 0; get_parameter_blocks_cases[i].residual_block; ++i) {
problem->GetParameterBlocksForResidualBlock(
@@ -1112,12 +1144,12 @@
}
}
- if (jacobians == NULL) {
+ if (jacobians == nullptr) {
return true;
}
for (int j = 0; j < kNumParameterBlocks; ++j) {
- if (jacobians[j] != NULL) {
+ if (jacobians[j] != nullptr) {
MatrixRef(jacobians[j], kNumResiduals, kNumResiduals) =
(-2.0 * (j + 1.0) * ConstVectorRef(parameters[j], kNumResiduals))
.asDiagonal();
@@ -1144,7 +1176,7 @@
class ProblemEvaluateTest : public ::testing::Test {
protected:
- void SetUp() {
+ void SetUp() override {
for (int i = 0; i < 6; ++i) {
parameters_[i] = static_cast<double>(i + 1);
}
@@ -1157,16 +1189,16 @@
// f(x, y)
residual_blocks_.push_back(problem_.AddResidualBlock(
- cost_function, NULL, parameters_, parameters_ + 2));
+ cost_function, nullptr, parameters_, parameters_ + 2));
// g(y, z)
residual_blocks_.push_back(problem_.AddResidualBlock(
- cost_function, NULL, parameters_ + 2, parameters_ + 4));
+ cost_function, nullptr, parameters_ + 2, parameters_ + 4));
// h(z, x)
residual_blocks_.push_back(problem_.AddResidualBlock(
- cost_function, NULL, parameters_ + 4, parameters_));
+ cost_function, nullptr, parameters_ + 4, parameters_));
}
- void TearDown() { EXPECT_TRUE(problem_.program().IsValid()); }
+ void TearDown() override { EXPECT_TRUE(problem_.program().IsValid()); }
void EvaluateAndCompare(const Problem::EvaluateOptions& options,
const int expected_num_rows,
@@ -1176,32 +1208,32 @@
const double* expected_gradient,
const double* expected_jacobian) {
double cost;
- vector<double> residuals;
- vector<double> gradient;
+ std::vector<double> residuals;
+ std::vector<double> gradient;
CRSMatrix jacobian;
EXPECT_TRUE(
problem_.Evaluate(options,
&cost,
- expected_residuals != NULL ? &residuals : NULL,
- expected_gradient != NULL ? &gradient : NULL,
- expected_jacobian != NULL ? &jacobian : NULL));
+ expected_residuals != nullptr ? &residuals : nullptr,
+ expected_gradient != nullptr ? &gradient : nullptr,
+ expected_jacobian != nullptr ? &jacobian : nullptr));
- if (expected_residuals != NULL) {
+ if (expected_residuals != nullptr) {
EXPECT_EQ(residuals.size(), expected_num_rows);
}
- if (expected_gradient != NULL) {
+ if (expected_gradient != nullptr) {
EXPECT_EQ(gradient.size(), expected_num_cols);
}
- if (expected_jacobian != NULL) {
+ if (expected_jacobian != nullptr) {
EXPECT_EQ(jacobian.num_rows, expected_num_rows);
EXPECT_EQ(jacobian.num_cols, expected_num_cols);
}
Matrix dense_jacobian;
- if (expected_jacobian != NULL) {
+ if (expected_jacobian != nullptr) {
CRSToDenseMatrix(jacobian, &dense_jacobian);
}
@@ -1212,8 +1244,8 @@
expected_gradient,
expected_jacobian,
cost,
- residuals.size() > 0 ? &residuals[0] : NULL,
- gradient.size() > 0 ? &gradient[0] : NULL,
+ !residuals.empty() ? &residuals[0] : nullptr,
+ !gradient.empty() ? &gradient[0] : nullptr,
dense_jacobian.data());
}
@@ -1224,16 +1256,16 @@
expected.num_rows,
expected.num_cols,
expected.cost,
- (i & 1) ? expected.residuals : NULL,
- (i & 2) ? expected.gradient : NULL,
- (i & 4) ? expected.jacobian : NULL);
+ (i & 1) ? expected.residuals : nullptr,
+ (i & 2) ? expected.gradient : nullptr,
+ (i & 4) ? expected.jacobian : nullptr);
}
}
ProblemImpl problem_;
double parameters_[6];
- vector<double*> parameter_blocks_;
- vector<ResidualBlockId> residual_blocks_;
+ std::vector<double*> parameter_blocks_;
+ std::vector<ResidualBlockId> residual_blocks_;
};
TEST_F(ProblemEvaluateTest, MultipleParameterAndResidualBlocks) {
@@ -1530,7 +1562,7 @@
CheckAllEvaluationCombinations(evaluate_options, expected);
}
-TEST_F(ProblemEvaluateTest, LocalParameterization) {
+TEST_F(ProblemEvaluateTest, Manifold) {
// clang-format off
ExpectedEvaluation expected = {
// Rows/columns
@@ -1544,7 +1576,7 @@
},
// Gradient
{ 146.0, 484.0, // x
- 1256.0, // y with SubsetParameterization
+ 1256.0, // y with SubsetManifold
1450.0, 2604.0, // z
},
// Jacobian
@@ -1559,10 +1591,10 @@
};
// clang-format on
- vector<int> constant_parameters;
+ std::vector<int> constant_parameters;
constant_parameters.push_back(0);
- problem_.SetParameterization(
- parameters_ + 2, new SubsetParameterization(2, constant_parameters));
+ problem_.SetManifold(parameters_ + 2,
+ new SubsetManifold(2, constant_parameters));
CheckAllEvaluationCombinations(Problem::EvaluateOptions(), expected);
}
@@ -1849,11 +1881,10 @@
<< actual_dfdy;
}
-TEST_F(ProblemEvaluateResidualBlockTest,
- OneResidualBlockWithOneLocalParameterization) {
+TEST_F(ProblemEvaluateResidualBlockTest, OneResidualBlockWithOneManifold) {
ResidualBlockId residual_block_id =
problem_.AddResidualBlock(IdentityFunctor::Create(), nullptr, x_, y_);
- problem_.SetParameterization(x_, new SubsetParameterization(2, {1}));
+ problem_.SetManifold(x_, new SubsetManifold(2, {1}));
Vector expected_f(5);
expected_f << 1, 2, 1, 2, 3;
@@ -1893,12 +1924,11 @@
<< actual_dfdy;
}
-TEST_F(ProblemEvaluateResidualBlockTest,
- OneResidualBlockWithTwoLocalParameterizations) {
+TEST_F(ProblemEvaluateResidualBlockTest, OneResidualBlockWithTwoManifolds) {
ResidualBlockId residual_block_id =
problem_.AddResidualBlock(IdentityFunctor::Create(), nullptr, x_, y_);
- problem_.SetParameterization(x_, new SubsetParameterization(2, {1}));
- problem_.SetParameterization(y_, new SubsetParameterization(3, {2}));
+ problem_.SetManifold(x_, new SubsetManifold(2, {1}));
+ problem_.SetManifold(y_, new SubsetManifold(3, {2}));
Vector expected_f(5);
expected_f << 1, 2, 1, 2, 3;
@@ -2139,39 +2169,39 @@
std::numeric_limits<double>::max());
}
-TEST(Problem, SetParameterizationTwice) {
+TEST(Problem, SetManifoldTwice) {
Problem problem;
double x[] = {1.0, 2.0, 3.0};
problem.AddParameterBlock(x, 3);
- problem.SetParameterization(x, new SubsetParameterization(3, {1}));
- EXPECT_EQ(problem.GetParameterization(x)->GlobalSize(), 3);
- EXPECT_EQ(problem.GetParameterization(x)->LocalSize(), 2);
+ problem.SetManifold(x, new SubsetManifold(3, {1}));
+ EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 3);
+ EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
- problem.SetParameterization(x, new SubsetParameterization(3, {0, 1}));
- EXPECT_EQ(problem.GetParameterization(x)->GlobalSize(), 3);
- EXPECT_EQ(problem.GetParameterization(x)->LocalSize(), 1);
+ problem.SetManifold(x, new SubsetManifold(3, {0, 1}));
+ EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 3);
+ EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 1);
}
-TEST(Problem, SetParameterizationAndThenClearItWithNull) {
+TEST(Problem, SetManifoldAndThenClearItWithNull) {
Problem problem;
double x[] = {1.0, 2.0, 3.0};
problem.AddParameterBlock(x, 3);
- problem.SetParameterization(x, new SubsetParameterization(3, {1}));
- EXPECT_EQ(problem.GetParameterization(x)->GlobalSize(), 3);
- EXPECT_EQ(problem.GetParameterization(x)->LocalSize(), 2);
+ problem.SetManifold(x, new SubsetManifold(3, {1}));
+ EXPECT_EQ(problem.GetManifold(x)->AmbientSize(), 3);
+ EXPECT_EQ(problem.GetManifold(x)->TangentSize(), 2);
- problem.SetParameterization(x, nullptr);
- EXPECT_EQ(problem.GetParameterization(x), nullptr);
- EXPECT_EQ(problem.ParameterBlockLocalSize(x), 3);
+ problem.SetManifold(x, nullptr);
+ EXPECT_EQ(problem.GetManifold(x), nullptr);
+ EXPECT_EQ(problem.ParameterBlockTangentSize(x), 3);
EXPECT_EQ(problem.ParameterBlockSize(x), 3);
}
-TEST(Solver, ZeroSizedLocalParameterizationMeansParameterBlockIsConstant) {
+TEST(Solver, ZeroTangentSizedManifoldMeansParameterBlockIsConstant) {
double x = 0.0;
double y = 1.0;
Problem problem;
problem.AddResidualBlock(new BinaryCostFunction(1, 1, 1), nullptr, &x, &y);
- problem.SetParameterization(&y, new SubsetParameterization(1, {0}));
+ problem.SetManifold(&y, new SubsetManifold(1, {0}));
EXPECT_TRUE(problem.IsParameterBlockConstant(&y));
}
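
For reference, a minimal sketch of the Manifold API these tests exercise (the function name and setup are illustrative, not part of the patch):

#include "ceres/manifold.h"
#include "ceres/problem.h"

// Freeze coordinate 0 of a 3-vector, leaving a 2D tangent space.
void HoldFirstCoordinateConstant(ceres::Problem& problem, double* x) {
  problem.AddParameterBlock(x, 3);
  problem.SetManifold(x, new ceres::SubsetManifold(3, {0}));
  // GetManifold(x)->AmbientSize() == 3, GetManifold(x)->TangentSize() == 2.
}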
@@ -2279,5 +2309,4 @@
jacobians));
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/program.cc b/internal/ceres/program.cc
index f1ded2e..a5a243d 100644
--- a/internal/ceres/program.cc
+++ b/internal/ceres/program.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,6 +33,7 @@
#include <algorithm>
#include <map>
#include <memory>
+#include <string>
#include <vector>
#include "ceres/array_utils.h"
@@ -40,44 +41,32 @@
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/cost_function.h"
#include "ceres/evaluator.h"
-#include "ceres/internal/port.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/internal/export.h"
#include "ceres/loss_function.h"
+#include "ceres/manifold.h"
#include "ceres/map_util.h"
+#include "ceres/parallel_for.h"
#include "ceres/parameter_block.h"
#include "ceres/problem.h"
#include "ceres/residual_block.h"
#include "ceres/stl_util.h"
#include "ceres/triplet_sparse_matrix.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::max;
-using std::set;
-using std::string;
-using std::vector;
-
-Program::Program() {}
-
-Program::Program(const Program& program)
- : parameter_blocks_(program.parameter_blocks_),
- residual_blocks_(program.residual_blocks_),
- evaluation_callback_(program.evaluation_callback_) {}
-
-const vector<ParameterBlock*>& Program::parameter_blocks() const {
+const std::vector<ParameterBlock*>& Program::parameter_blocks() const {
return parameter_blocks_;
}
-const vector<ResidualBlock*>& Program::residual_blocks() const {
+const std::vector<ResidualBlock*>& Program::residual_blocks() const {
return residual_blocks_;
}
-vector<ParameterBlock*>* Program::mutable_parameter_blocks() {
+std::vector<ParameterBlock*>* Program::mutable_parameter_blocks() {
   return &parameter_blocks_;
}
-vector<ResidualBlock*>* Program::mutable_residual_blocks() {
+std::vector<ResidualBlock*>* Program::mutable_residual_blocks() {
return &residual_blocks_;
}
@@ -86,33 +75,32 @@
}
bool Program::StateVectorToParameterBlocks(const double* state) {
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- if (!parameter_blocks_[i]->IsConstant() &&
- !parameter_blocks_[i]->SetState(state)) {
+ for (auto* parameter_block : parameter_blocks_) {
+ if (!parameter_block->IsConstant() && !parameter_block->SetState(state)) {
return false;
}
- state += parameter_blocks_[i]->Size();
+ state += parameter_block->Size();
}
return true;
}
void Program::ParameterBlocksToStateVector(double* state) const {
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- parameter_blocks_[i]->GetState(state);
- state += parameter_blocks_[i]->Size();
+ for (auto* parameter_block : parameter_blocks_) {
+ parameter_block->GetState(state);
+ state += parameter_block->Size();
}
}
void Program::CopyParameterBlockStateToUserState() {
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- parameter_blocks_[i]->GetState(parameter_blocks_[i]->mutable_user_state());
+ for (auto* parameter_block : parameter_blocks_) {
+ parameter_block->GetState(parameter_block->mutable_user_state());
}
}
bool Program::SetParameterBlockStatePtrsToUserStatePtrs() {
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- if (!parameter_blocks_[i]->IsConstant() &&
- !parameter_blocks_[i]->SetState(parameter_blocks_[i]->user_state())) {
+ for (auto* parameter_block : parameter_blocks_) {
+ if (!parameter_block->IsConstant() &&
+ !parameter_block->SetState(parameter_block->user_state())) {
return false;
}
}
@@ -121,23 +109,38 @@
bool Program::Plus(const double* state,
const double* delta,
- double* state_plus_delta) const {
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- if (!parameter_blocks_[i]->Plus(state, delta, state_plus_delta)) {
- return false;
- }
- state += parameter_blocks_[i]->Size();
- delta += parameter_blocks_[i]->LocalSize();
- state_plus_delta += parameter_blocks_[i]->Size();
- }
- return true;
+ double* state_plus_delta,
+ ContextImpl* context,
+ int num_threads) const {
+ std::atomic<bool> abort(false);
+ auto* parameter_blocks = parameter_blocks_.data();
+ ParallelFor(
+ context,
+ 0,
+ parameter_blocks_.size(),
+ num_threads,
+ [&abort, state, delta, state_plus_delta, parameter_blocks](int block_id) {
+ if (abort) {
+ return;
+ }
+ auto parameter_block = parameter_blocks[block_id];
+
+ auto block_state = state + parameter_block->state_offset();
+ auto block_delta = delta + parameter_block->delta_offset();
+ auto block_state_plus_delta =
+ state_plus_delta + parameter_block->state_offset();
+ if (!parameter_block->Plus(
+ block_state, block_delta, block_state_plus_delta)) {
+ abort = true;
+ }
+ });
+ return abort == false;
}
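
The parallel Plus above uses Ceres' internal ParallelFor together with an atomic abort flag so that a failing per-block Plus stops the remaining work. A free-standing sketch of the same pattern with std::thread, purely for illustration:

#include <atomic>
#include <thread>
#include <vector>

bool ProcessAllBlocks(const std::vector<int>& blocks) {
  std::atomic<bool> abort(false);
  auto work = [&](int i) {
    if (abort) {
      return;  // Once any block fails, remaining tasks bail out early.
    }
    if (blocks[i] < 0) {  // Placeholder for parameter_block->Plus() failing.
      abort = true;
    }
  };
  std::vector<std::thread> threads;
  for (int i = 0; i < static_cast<int>(blocks.size()); ++i) {
    threads.emplace_back(work, i);
  }
  for (auto& t : threads) {
    t.join();
  }
  return !abort;
}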
void Program::SetParameterOffsetsAndIndex() {
// Set positions for all parameters appearing as arguments to residuals to one
// past the end of the parameter block array.
- for (int i = 0; i < residual_blocks_.size(); ++i) {
- ResidualBlock* residual_block = residual_blocks_[i];
+ for (auto* residual_block : residual_blocks_) {
for (int j = 0; j < residual_block->NumParameterBlocks(); ++j) {
residual_block->parameter_blocks()[j]->set_index(-1);
}
@@ -150,7 +153,7 @@
parameter_blocks_[i]->set_state_offset(state_offset);
parameter_blocks_[i]->set_delta_offset(delta_offset);
state_offset += parameter_blocks_[i]->Size();
- delta_offset += parameter_blocks_[i]->LocalSize();
+ delta_offset += parameter_blocks_[i]->TangentSize();
}
}
@@ -178,16 +181,15 @@
}
state_offset += parameter_blocks_[i]->Size();
- delta_offset += parameter_blocks_[i]->LocalSize();
+ delta_offset += parameter_blocks_[i]->TangentSize();
}
return true;
}
-bool Program::ParameterBlocksAreFinite(string* message) const {
+bool Program::ParameterBlocksAreFinite(std::string* message) const {
CHECK(message != nullptr);
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- const ParameterBlock* parameter_block = parameter_blocks_[i];
+ for (auto* parameter_block : parameter_blocks_) {
const double* array = parameter_block->user_state();
const int size = parameter_block->Size();
const int invalid_index = FindInvalidValue(size, array);
@@ -207,8 +209,7 @@
}
bool Program::IsBoundsConstrained() const {
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- const ParameterBlock* parameter_block = parameter_blocks_[i];
+ for (auto* parameter_block : parameter_blocks_) {
if (parameter_block->IsConstant()) {
continue;
}
@@ -225,10 +226,9 @@
return false;
}
-bool Program::IsFeasible(string* message) const {
+bool Program::IsFeasible(std::string* message) const {
CHECK(message != nullptr);
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- const ParameterBlock* parameter_block = parameter_blocks_[i];
+ for (auto* parameter_block : parameter_blocks_) {
const double* parameters = parameter_block->user_state();
const int size = parameter_block->Size();
if (parameter_block->IsConstant()) {
@@ -284,42 +284,42 @@
return true;
}
-Program* Program::CreateReducedProgram(
- vector<double*>* removed_parameter_blocks,
+std::unique_ptr<Program> Program::CreateReducedProgram(
+ std::vector<double*>* removed_parameter_blocks,
double* fixed_cost,
- string* error) const {
+ std::string* error) const {
CHECK(removed_parameter_blocks != nullptr);
CHECK(fixed_cost != nullptr);
CHECK(error != nullptr);
- std::unique_ptr<Program> reduced_program(new Program(*this));
+ std::unique_ptr<Program> reduced_program = std::make_unique<Program>(*this);
if (!reduced_program->RemoveFixedBlocks(
removed_parameter_blocks, fixed_cost, error)) {
return nullptr;
}
reduced_program->SetParameterOffsetsAndIndex();
- return reduced_program.release();
+ return reduced_program;
}
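
CreateReducedProgram now returns std::unique_ptr<Program> instead of a raw pointer released to the caller. A hypothetical call site (names other than CreateReducedProgram are illustrative):

#include <memory>
#include <string>
#include <vector>

#include "ceres/program.h"

std::unique_ptr<ceres::internal::Program> Reduce(
    const ceres::internal::Program& program, std::string* error) {
  std::vector<double*> removed_parameter_blocks;
  double fixed_cost = 0.0;
  // Ownership is now explicit; previously the caller had to wrap or delete
  // the raw pointer obtained via release().
  return program.CreateReducedProgram(
      &removed_parameter_blocks, &fixed_cost, error);
}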
-bool Program::RemoveFixedBlocks(vector<double*>* removed_parameter_blocks,
+bool Program::RemoveFixedBlocks(std::vector<double*>* removed_parameter_blocks,
double* fixed_cost,
- string* error) {
+ std::string* error) {
CHECK(removed_parameter_blocks != nullptr);
CHECK(fixed_cost != nullptr);
CHECK(error != nullptr);
std::unique_ptr<double[]> residual_block_evaluate_scratch;
- residual_block_evaluate_scratch.reset(
- new double[MaxScratchDoublesNeededForEvaluate()]);
+ residual_block_evaluate_scratch =
+ std::make_unique<double[]>(MaxScratchDoublesNeededForEvaluate());
*fixed_cost = 0.0;
bool need_to_call_prepare_for_evaluation = evaluation_callback_ != nullptr;
// Mark all the parameters as unused. Abuse the index member of the
// parameter blocks for the marking.
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- parameter_blocks_[i]->set_index(-1);
+ for (auto* parameter_block : parameter_blocks_) {
+ parameter_block->set_index(-1);
}
   // Filter out residual blocks that have all-constant parameters, and mark
@@ -391,8 +391,7 @@
// Filter out unused or fixed parameter blocks.
int num_active_parameter_blocks = 0;
removed_parameter_blocks->clear();
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- ParameterBlock* parameter_block = parameter_blocks_[i];
+ for (auto* parameter_block : parameter_blocks_) {
if (parameter_block->index() == -1) {
removed_parameter_blocks->push_back(
parameter_block->mutable_user_state());
@@ -412,7 +411,7 @@
}
bool Program::IsParameterBlockSetIndependent(
- const set<double*>& independent_set) const {
+ const std::set<double*>& independent_set) const {
// Loop over each residual block and ensure that no two parameter
// blocks in the same residual block are part of
// parameter_block_ptrs as that would violate the assumption that it
@@ -483,24 +482,24 @@
int Program::NumResiduals() const {
int num_residuals = 0;
- for (int i = 0; i < residual_blocks_.size(); ++i) {
- num_residuals += residual_blocks_[i]->NumResiduals();
+ for (auto* residual_block : residual_blocks_) {
+ num_residuals += residual_block->NumResiduals();
}
return num_residuals;
}
int Program::NumParameters() const {
int num_parameters = 0;
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- num_parameters += parameter_blocks_[i]->Size();
+ for (auto* parameter_block : parameter_blocks_) {
+ num_parameters += parameter_block->Size();
}
return num_parameters;
}
int Program::NumEffectiveParameters() const {
int num_parameters = 0;
- for (int i = 0; i < parameter_blocks_.size(); ++i) {
- num_parameters += parameter_blocks_[i]->LocalSize();
+ for (auto* parameter_block : parameter_blocks_) {
+ num_parameters += parameter_block->TangentSize();
}
return num_parameters;
}
@@ -511,48 +510,47 @@
int Program::MaxScratchDoublesNeededForEvaluate() const {
// Compute the scratch space needed for evaluate.
int max_scratch_bytes_for_evaluate = 0;
- for (int i = 0; i < residual_blocks_.size(); ++i) {
+ for (auto* residual_block : residual_blocks_) {
max_scratch_bytes_for_evaluate =
- max(max_scratch_bytes_for_evaluate,
- residual_blocks_[i]->NumScratchDoublesForEvaluate());
+ std::max(max_scratch_bytes_for_evaluate,
+ residual_block->NumScratchDoublesForEvaluate());
}
return max_scratch_bytes_for_evaluate;
}
int Program::MaxDerivativesPerResidualBlock() const {
int max_derivatives = 0;
- for (int i = 0; i < residual_blocks_.size(); ++i) {
+ for (auto* residual_block : residual_blocks_) {
int derivatives = 0;
- ResidualBlock* residual_block = residual_blocks_[i];
int num_parameters = residual_block->NumParameterBlocks();
for (int j = 0; j < num_parameters; ++j) {
derivatives += residual_block->NumResiduals() *
- residual_block->parameter_blocks()[j]->LocalSize();
+ residual_block->parameter_blocks()[j]->TangentSize();
}
- max_derivatives = max(max_derivatives, derivatives);
+ max_derivatives = std::max(max_derivatives, derivatives);
}
return max_derivatives;
}
int Program::MaxParametersPerResidualBlock() const {
int max_parameters = 0;
- for (int i = 0; i < residual_blocks_.size(); ++i) {
+ for (auto* residual_block : residual_blocks_) {
max_parameters =
- max(max_parameters, residual_blocks_[i]->NumParameterBlocks());
+ std::max(max_parameters, residual_block->NumParameterBlocks());
}
return max_parameters;
}
int Program::MaxResidualsPerResidualBlock() const {
int max_residuals = 0;
- for (int i = 0; i < residual_blocks_.size(); ++i) {
- max_residuals = max(max_residuals, residual_blocks_[i]->NumResiduals());
+ for (auto* residual_block : residual_blocks_) {
+ max_residuals = std::max(max_residuals, residual_block->NumResiduals());
}
return max_residuals;
}
-string Program::ToString() const {
- string ret = "Program dump\n";
+std::string Program::ToString() const {
+ std::string ret = "Program dump\n";
ret += StringPrintf("Number of parameter blocks: %d\n", NumParameterBlocks());
ret += StringPrintf("Number of parameters: %d\n", NumParameters());
ret += "Parameters:\n";
@@ -563,5 +561,4 @@
return ret;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/program.h b/internal/ceres/program.h
index ca29d31..e2b9bd7 100644
--- a/internal/ceres/program.h
+++ b/internal/ceres/program.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,15 +37,16 @@
#include <vector>
#include "ceres/evaluation_callback.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class ParameterBlock;
class ProblemImpl;
class ResidualBlock;
class TripletSparseMatrix;
+class ContextImpl;
// A nonlinear least squares optimization problem. This is different from the
// similarly-named "Problem" object, which offers a mutation interface for
@@ -57,11 +58,8 @@
// another; for example, the first stage of solving involves stripping all
// constant parameters and residuals. This is in contrast with Problem, which is
// not built for transformation.
-class CERES_EXPORT_INTERNAL Program {
+class CERES_NO_EXPORT Program {
public:
- Program();
- explicit Program(const Program& program);
-
// The ordered parameter and residual blocks for the program.
const std::vector<ParameterBlock*>& parameter_blocks() const;
const std::vector<ResidualBlock*>& residual_blocks() const;
@@ -72,9 +70,9 @@
// Serialize to/from the program and update states.
//
// NOTE: Setting the state of a parameter block can trigger the
- // computation of the Jacobian of its local parameterization. If
- // this computation fails for some reason, then this method returns
- // false and the state of the parameter blocks cannot be trusted.
+ // computation of the Jacobian of its manifold. If this computation fails for
+ // some reason, then this method returns false and the state of the parameter
+ // blocks cannot be trusted.
bool StateVectorToParameterBlocks(const double* state);
void ParameterBlocksToStateVector(double* state) const;
@@ -82,14 +80,16 @@
void CopyParameterBlockStateToUserState();
// Set the parameter block pointers to the user pointers. Since this
- // runs parameter block set state internally, which may call local
- // parameterizations, this can fail. False is returned on failure.
+  // runs parameter block set state internally, which may call the manifold, this
+ // can fail. False is returned on failure.
bool SetParameterBlockStatePtrsToUserStatePtrs();
// Update a state vector for the program given a delta.
bool Plus(const double* state,
const double* delta,
- double* state_plus_delta) const;
+ double* state_plus_delta,
+ ContextImpl* context,
+ int num_threads) const;
// Set the parameter indices and offsets. This permits mapping backward
// from a ParameterBlock* to an index in the parameter_blocks() vector. For
@@ -146,12 +146,13 @@
// fixed_cost will be equal to the sum of the costs of the residual
// blocks that were removed.
//
- // If there was a problem, then the function will return a NULL
+  // If there was a problem, then the function will return a null
// pointer and error will contain a human readable description of
// the problem.
- Program* CreateReducedProgram(std::vector<double*>* removed_parameter_blocks,
- double* fixed_cost,
- std::string* error) const;
+ std::unique_ptr<Program> CreateReducedProgram(
+ std::vector<double*>* removed_parameter_blocks,
+ double* fixed_cost,
+ std::string* error) const;
// See problem.h for what these do.
int NumParameterBlocks() const;
@@ -193,7 +194,8 @@
friend class ProblemImpl;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_PROGRAM_H_
diff --git a/internal/ceres/program_evaluator.h b/internal/ceres/program_evaluator.h
index 36c9c64..5d549a7 100644
--- a/internal/ceres/program_evaluator.h
+++ b/internal/ceres/program_evaluator.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -43,7 +43,7 @@
// residual jacobians are written directly into their final position in the
// block sparse matrix by the user's CostFunction; there is no copying.
//
-// The evaluation is threaded with OpenMP or C++ threads.
+// The evaluation is threaded with C++ threads.
//
// The EvaluatePreparer and JacobianWriter interfaces are as follows:
//
@@ -59,11 +59,13 @@
// class JacobianWriter {
// // Create a jacobian that this writer can write. Same as
// // Evaluator::CreateJacobian.
-// SparseMatrix* CreateJacobian() const;
+// std::unique_ptr<SparseMatrix> CreateJacobian() const;
//
-// // Create num_threads evaluate preparers. Caller owns result which must
-// // be freed with delete[]. Resulting preparers are valid while *this is.
-// EvaluatePreparer* CreateEvaluatePreparers(int num_threads);
+//   // Create num_threads evaluate preparers. Resulting preparers are valid
+// // while *this is.
+//
+// std::unique_ptr<EvaluatePreparer[]> CreateEvaluatePreparers(
+// int num_threads);
//
// // Write the block jacobians from a residual block evaluation to the
// // larger sparse jacobian.
@@ -81,7 +83,7 @@
// This include must come before any #ifndef check on Ceres compile options.
// clang-format off
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
// clang-format on
#include <atomic>
@@ -94,6 +96,7 @@
#include "ceres/execution_summary.h"
#include "ceres/internal/eigen.h"
#include "ceres/parallel_for.h"
+#include "ceres/parallel_vector_ops.h"
#include "ceres/parameter_block.h"
#include "ceres/program.h"
#include "ceres/residual_block.h"
@@ -103,36 +106,28 @@
namespace internal {
struct NullJacobianFinalizer {
- void operator()(SparseMatrix* jacobian, int num_parameters) {}
+ void operator()(SparseMatrix* /*jacobian*/, int /*num_parameters*/) {}
};
template <typename EvaluatePreparer,
typename JacobianWriter,
typename JacobianFinalizer = NullJacobianFinalizer>
-class ProgramEvaluator : public Evaluator {
+class ProgramEvaluator final : public Evaluator {
public:
ProgramEvaluator(const Evaluator::Options& options, Program* program)
: options_(options),
program_(program),
jacobian_writer_(options, program),
- evaluate_preparers_(
- jacobian_writer_.CreateEvaluatePreparers(options.num_threads)) {
-#ifdef CERES_NO_THREADS
- if (options_.num_threads > 1) {
- LOG(WARNING) << "No threading support is compiled into this binary; "
- << "only options.num_threads = 1 is supported. Switching "
- << "to single threaded mode.";
- options_.num_threads = 1;
- }
-#endif // CERES_NO_THREADS
-
+ evaluate_preparers_(std::move(
+ jacobian_writer_.CreateEvaluatePreparers(options.num_threads))),
+ num_parameters_(program->NumEffectiveParameters()) {
BuildResidualLayout(*program, &residual_layout_);
- evaluate_scratch_.reset(
- CreateEvaluatorScratch(*program, options.num_threads));
+ evaluate_scratch_ = std::move(CreateEvaluatorScratch(
+ *program, static_cast<unsigned>(options.num_threads)));
}
// Implementation of Evaluator interface.
- SparseMatrix* CreateJacobian() const final {
+ std::unique_ptr<SparseMatrix> CreateJacobian() const final {
return jacobian_writer_.CreateJacobian();
}
@@ -162,20 +157,24 @@
}
if (residuals != nullptr) {
- VectorRef(residuals, program_->NumResiduals()).setZero();
+ ParallelSetZero(options_.context,
+ options_.num_threads,
+ residuals,
+ program_->NumResiduals());
}
if (jacobian != nullptr) {
- jacobian->SetZero();
+ jacobian->SetZero(options_.context, options_.num_threads);
}
   // Each thread gets its own cost and evaluate scratch space.
for (int i = 0; i < options_.num_threads; ++i) {
evaluate_scratch_[i].cost = 0.0;
if (gradient != nullptr) {
- VectorRef(evaluate_scratch_[i].gradient.get(),
- program_->NumEffectiveParameters())
- .setZero();
+ ParallelSetZero(options_.context,
+ options_.num_threads,
+ evaluate_scratch_[i].gradient.get(),
+ num_parameters_);
}
}
@@ -250,45 +249,62 @@
MatrixTransposeVectorMultiply<Eigen::Dynamic, Eigen::Dynamic, 1>(
block_jacobians[j],
num_residuals,
- parameter_block->LocalSize(),
+ parameter_block->TangentSize(),
block_residuals,
scratch->gradient.get() + parameter_block->delta_offset());
}
}
});
- if (!abort) {
- const int num_parameters = program_->NumEffectiveParameters();
+ if (abort) {
+ return false;
+ }
- // Sum the cost and gradient (if requested) from each thread.
- (*cost) = 0.0;
+ // Sum the cost and gradient (if requested) from each thread.
+ (*cost) = 0.0;
+ if (gradient != nullptr) {
+ auto gradient_vector = VectorRef(gradient, num_parameters_);
+ ParallelSetZero(options_.context, options_.num_threads, gradient_vector);
+ }
+
+ for (int i = 0; i < options_.num_threads; ++i) {
+ (*cost) += evaluate_scratch_[i].cost;
if (gradient != nullptr) {
- VectorRef(gradient, num_parameters).setZero();
- }
- for (int i = 0; i < options_.num_threads; ++i) {
- (*cost) += evaluate_scratch_[i].cost;
- if (gradient != nullptr) {
- VectorRef(gradient, num_parameters) +=
- VectorRef(evaluate_scratch_[i].gradient.get(), num_parameters);
- }
- }
-
- // Finalize the Jacobian if it is available.
- // `num_parameters` is passed to the finalizer so that additional
- // storage can be reserved for additional diagonal elements if
- // necessary.
- if (jacobian != nullptr) {
- JacobianFinalizer f;
- f(jacobian, num_parameters);
+ auto gradient_vector = VectorRef(gradient, num_parameters_);
+ ParallelAssign(
+ options_.context,
+ options_.num_threads,
+ gradient_vector,
+ gradient_vector + VectorRef(evaluate_scratch_[i].gradient.get(),
+ num_parameters_));
}
}
- return !abort;
+
+  // It is possible that after accumulation the cost has become infinite or a
+  // NaN.
+ if (!std::isfinite(*cost)) {
+ LOG(ERROR) << "Accumulated cost = " << *cost
+ << " is not a finite number. Evaluation failed.";
+ return false;
+ }
+
+ // Finalize the Jacobian if it is available.
+ // `num_parameters` is passed to the finalizer so that additional
+ // storage can be reserved for additional diagonal elements if
+ // necessary.
+ if (jacobian != nullptr) {
+ JacobianFinalizer f;
+ f(jacobian, num_parameters_);
+ }
+
+ return true;
}
bool Plus(const double* state,
const double* delta,
double* state_plus_delta) const final {
- return program_->Plus(state, delta, state_plus_delta);
+ return program_->Plus(
+ state, delta, state_plus_delta, options_.context, options_.num_threads);
}
int NumParameters() const final { return program_->NumParameters(); }
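
Evaluate() now zeroes the residuals and gradient in parallel and reduces per-thread scratch gradients after the parallel loop. A self-contained sketch of that scratch-then-reduce pattern (illustrative only; Ceres uses its internal ParallelSetZero/ParallelAssign helpers):

#include <thread>
#include <vector>

void AccumulateGradient(int num_threads, int num_parameters,
                        std::vector<double>* gradient) {
  // Each thread owns one scratch buffer, so no locking is needed while
  // individual residual blocks are evaluated.
  std::vector<std::vector<double>> scratch(
      num_threads, std::vector<double>(num_parameters, 0.0));
  std::vector<std::thread> workers;
  for (int t = 0; t < num_threads; ++t) {
    workers.emplace_back([t, num_parameters, &scratch]() {
      for (int j = 0; j < num_parameters; ++j) {
        scratch[t][j] += 1.0;  // Placeholder for real per-block contributions.
      }
    });
  }
  for (auto& w : workers) {
    w.join();
  }
  // Reduce the scratch buffers into the final gradient.
  gradient->assign(num_parameters, 0.0);
  for (int t = 0; t < num_threads; ++t) {
    for (int j = 0; j < num_parameters; ++j) {
      (*gradient)[j] += scratch[t][j];
    }
  }
}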
@@ -309,18 +325,19 @@
int max_scratch_doubles_needed_for_evaluate,
int max_residuals_per_residual_block,
int num_parameters) {
- residual_block_evaluate_scratch.reset(
- new double[max_scratch_doubles_needed_for_evaluate]);
- gradient.reset(new double[num_parameters]);
+ residual_block_evaluate_scratch =
+ std::make_unique<double[]>(max_scratch_doubles_needed_for_evaluate);
+ gradient = std::make_unique<double[]>(num_parameters);
VectorRef(gradient.get(), num_parameters).setZero();
- residual_block_residuals.reset(
- new double[max_residuals_per_residual_block]);
- jacobian_block_ptrs.reset(new double*[max_parameters_per_residual_block]);
+ residual_block_residuals =
+ std::make_unique<double[]>(max_residuals_per_residual_block);
+ jacobian_block_ptrs =
+ std::make_unique<double*[]>(max_parameters_per_residual_block);
}
double cost;
std::unique_ptr<double[]> residual_block_evaluate_scratch;
- // The gradient in the local parameterization.
+ // The gradient on the manifold.
std::unique_ptr<double[]> gradient;
// Enough space to store the residual for the largest residual block.
std::unique_ptr<double[]> residual_block_residuals;
@@ -341,8 +358,8 @@
}
// Create scratch space for each thread evaluating the program.
- static EvaluateScratch* CreateEvaluatorScratch(const Program& program,
- int num_threads) {
+ static std::unique_ptr<EvaluateScratch[]> CreateEvaluatorScratch(
+ const Program& program, unsigned num_threads) {
int max_parameters_per_residual_block =
program.MaxParametersPerResidualBlock();
int max_scratch_doubles_needed_for_evaluate =
@@ -351,7 +368,7 @@
program.MaxResidualsPerResidualBlock();
int num_parameters = program.NumEffectiveParameters();
- EvaluateScratch* evaluate_scratch = new EvaluateScratch[num_threads];
+ auto evaluate_scratch = std::make_unique<EvaluateScratch[]>(num_threads);
for (int i = 0; i < num_threads; i++) {
evaluate_scratch[i].Init(max_parameters_per_residual_block,
max_scratch_doubles_needed_for_evaluate,
@@ -367,6 +384,7 @@
std::unique_ptr<EvaluatePreparer[]> evaluate_preparers_;
std::unique_ptr<EvaluateScratch[]> evaluate_scratch_;
std::vector<int> residual_layout_;
+ int num_parameters_;
::ceres::internal::ExecutionSummary execution_summary_;
};
diff --git a/internal/ceres/program_test.cc b/internal/ceres/program_test.cc
index 1d9f49c..9c51ff9 100644
--- a/internal/ceres/program_test.cc
+++ b/internal/ceres/program_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,6 +33,7 @@
#include <cmath>
#include <limits>
#include <memory>
+#include <string>
#include <utility>
#include <vector>
@@ -46,9 +47,6 @@
namespace ceres {
namespace internal {
-using std::string;
-using std::vector;
-
// A cost function that simply returns its argument.
class UnaryIdentityCostFunction : public SizedCostFunction<1, 1> {
public:
@@ -70,7 +68,7 @@
bool Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const final {
- const int kNumParameters = Sum<std::integer_sequence<int, Ns...>>::Value;
+ constexpr int kNumParameters = (Ns + ... + 0);
for (int i = 0; i < kNumResiduals; ++i) {
residuals[i] = kNumResiduals + kNumParameters;
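
The Sum<std::integer_sequence<...>> metafunction is replaced by a C++17 fold expression. A free-standing equivalent, for reference:

template <int... Ns>
constexpr int SumOf() {
  return (Ns + ... + 0);  // Right fold; the trailing 0 handles an empty pack.
}
static_assert(SumOf<1, 2, 3>() == 6, "fold expression sums the pack");
static_assert(SumOf<>() == 0, "empty pack yields the init value");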
@@ -96,9 +94,9 @@
problem.AddResidualBlock(new BinaryCostFunction(), nullptr, &x, &y);
problem.AddResidualBlock(new TernaryCostFunction(), nullptr, &x, &y, &z);
- vector<double*> removed_parameter_blocks;
+ std::vector<double*> removed_parameter_blocks;
double fixed_cost = 0.0;
- string message;
+ std::string message;
std::unique_ptr<Program> reduced_program(
problem.program().CreateReducedProgram(
&removed_parameter_blocks, &fixed_cost, &message));
@@ -117,9 +115,9 @@
problem.AddResidualBlock(new UnaryCostFunction(), nullptr, &x);
problem.SetParameterBlockConstant(&x);
- vector<double*> removed_parameter_blocks;
+ std::vector<double*> removed_parameter_blocks;
double fixed_cost = 0.0;
- string message;
+ std::string message;
std::unique_ptr<Program> reduced_program(
problem.program().CreateReducedProgram(
&removed_parameter_blocks, &fixed_cost, &message));
@@ -141,9 +139,9 @@
problem.AddParameterBlock(&y, 1);
problem.AddParameterBlock(&z, 1);
- vector<double*> removed_parameter_blocks;
+ std::vector<double*> removed_parameter_blocks;
double fixed_cost = 0.0;
- string message;
+ std::string message;
std::unique_ptr<Program> reduced_program(
problem.program().CreateReducedProgram(
&removed_parameter_blocks, &fixed_cost, &message));
@@ -167,9 +165,9 @@
problem.AddResidualBlock(new BinaryCostFunction(), nullptr, &x, &y);
problem.SetParameterBlockConstant(&x);
- vector<double*> removed_parameter_blocks;
+ std::vector<double*> removed_parameter_blocks;
double fixed_cost = 0.0;
- string message;
+ std::string message;
std::unique_ptr<Program> reduced_program(
problem.program().CreateReducedProgram(
&removed_parameter_blocks, &fixed_cost, &message));
@@ -191,9 +189,9 @@
problem.AddResidualBlock(new BinaryCostFunction(), nullptr, &x, &y);
problem.SetParameterBlockConstant(&x);
- vector<double*> removed_parameter_blocks;
+ std::vector<double*> removed_parameter_blocks;
double fixed_cost = 0.0;
- string message;
+ std::string message;
std::unique_ptr<Program> reduced_program(
problem.program().CreateReducedProgram(
&removed_parameter_blocks, &fixed_cost, &message));
@@ -223,9 +221,9 @@
expected_removed_block->Evaluate(
true, &expected_fixed_cost, nullptr, nullptr, scratch.get());
- vector<double*> removed_parameter_blocks;
+ std::vector<double*> removed_parameter_blocks;
double fixed_cost = 0.0;
- string message;
+ std::string message;
std::unique_ptr<Program> reduced_program(
problem.program().CreateReducedProgram(
&removed_parameter_blocks, &fixed_cost, &message));
@@ -331,8 +329,6 @@
}
}
- virtual ~NumParameterBlocksCostFunction() {}
-
bool Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const final {
@@ -349,7 +345,7 @@
ProblemImpl problem;
double x[20];
- vector<double*> parameter_blocks;
+ std::vector<double*> parameter_blocks;
for (int i = 0; i < 20; ++i) {
problem.AddParameterBlock(x + i, 1);
parameter_blocks.push_back(x + i);
@@ -394,9 +390,9 @@
x[0] = 1.0;
x[1] = std::numeric_limits<double>::quiet_NaN();
problem.AddResidualBlock(new MockCostFunctionBase<1, 2>(), nullptr, x);
- string error;
+ std::string error;
EXPECT_FALSE(problem.program().ParameterBlocksAreFinite(&error));
- EXPECT_NE(error.find("has at least one invalid value"), string::npos)
+ EXPECT_NE(error.find("has at least one invalid value"), std::string::npos)
<< error;
}
@@ -406,9 +402,9 @@
problem.AddResidualBlock(new MockCostFunctionBase<1, 2>(), nullptr, x);
problem.SetParameterLowerBound(x, 0, 2.0);
problem.SetParameterUpperBound(x, 0, 1.0);
- string error;
+ std::string error;
EXPECT_FALSE(problem.program().IsFeasible(&error));
- EXPECT_NE(error.find("infeasible bound"), string::npos) << error;
+ EXPECT_NE(error.find("infeasible bound"), std::string::npos) << error;
}
TEST(Program, InfeasibleConstantParameterBlock) {
@@ -418,9 +414,9 @@
problem.SetParameterLowerBound(x, 0, 1.0);
problem.SetParameterUpperBound(x, 0, 2.0);
problem.SetParameterBlockConstant(x);
- string error;
+ std::string error;
EXPECT_FALSE(problem.program().IsFeasible(&error));
- EXPECT_NE(error.find("infeasible value"), string::npos) << error;
+ EXPECT_NE(error.find("infeasible value"), std::string::npos) << error;
}
} // namespace internal
diff --git a/internal/ceres/random.h b/internal/ceres/random.h
deleted file mode 100644
index 6b280f9..0000000
--- a/internal/ceres/random.h
+++ /dev/null
@@ -1,73 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: keir@google.com (Keir Mierle)
-// sameeragarwal@google.com (Sameer Agarwal)
-
-#ifndef CERES_INTERNAL_RANDOM_H_
-#define CERES_INTERNAL_RANDOM_H_
-
-#include <cmath>
-#include <cstdlib>
-
-#include "ceres/internal/port.h"
-
-namespace ceres {
-
-inline void SetRandomState(int state) { srand(state); }
-
-inline int Uniform(int n) {
- if (n) {
- return rand() % n;
- } else {
- return 0;
- }
-}
-
-inline double RandDouble() {
- double r = static_cast<double>(rand());
- return r / RAND_MAX;
-}
-
-// Box-Muller algorithm for normal random number generation.
-// http://en.wikipedia.org/wiki/Box-Muller_transform
-inline double RandNormal() {
- double x1, x2, w;
- do {
- x1 = 2.0 * RandDouble() - 1.0;
- x2 = 2.0 * RandDouble() - 1.0;
- w = x1 * x1 + x2 * x2;
- } while (w >= 1.0 || w == 0.0);
-
- w = sqrt((-2.0 * log(w)) / w);
- return x1 * w;
-}
-
-} // namespace ceres
-
-#endif // CERES_INTERNAL_RANDOM_H_
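
The deleted header wrapped rand() with a Box-Muller transform. The standard library provides the same functionality; a sketch of the <random>-based equivalent (the exact helpers the rest of the tree now uses may differ):

#include <random>

double SampleStandardNormal(std::mt19937& prng) {
  std::normal_distribution<double> normal(0.0, 1.0);
  return normal(prng);  // Replaces RandNormal().
}

double SampleUniform01(std::mt19937& prng) {
  std::uniform_real_distribution<double> uniform(0.0, 1.0);
  return uniform(prng);  // Replaces RandDouble().
}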
diff --git a/internal/ceres/reorder_program.cc b/internal/ceres/reorder_program.cc
index 5d80236..44c4e46 100644
--- a/internal/ceres/reorder_program.cc
+++ b/internal/ceres/reorder_program.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,13 +31,16 @@
#include "ceres/reorder_program.h"
#include <algorithm>
+#include <map>
#include <memory>
#include <numeric>
+#include <set>
+#include <string>
#include <vector>
#include "Eigen/SparseCore"
-#include "ceres/cxsparse.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
+#include "ceres/internal/export.h"
#include "ceres/ordered_groups.h"
#include "ceres/parameter_block.h"
#include "ceres/parameter_block_ordering.h"
@@ -50,18 +53,19 @@
#include "ceres/types.h"
#ifdef CERES_USE_EIGEN_SPARSE
+
+#ifndef CERES_NO_EIGEN_METIS
+#include <iostream> // Need this because MetisSupport refers to std::cerr.
+
+#include "Eigen/MetisSupport"
+#endif
+
#include "Eigen/OrderingMethods"
#endif
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::map;
-using std::set;
-using std::string;
-using std::vector;
+namespace ceres::internal {
namespace {
@@ -85,19 +89,18 @@
return min_parameter_block_position;
}
-#if defined(CERES_USE_EIGEN_SPARSE)
Eigen::SparseMatrix<int> CreateBlockJacobian(
const TripletSparseMatrix& block_jacobian_transpose) {
- typedef Eigen::SparseMatrix<int> SparseMatrix;
- typedef Eigen::Triplet<int> Triplet;
+ using SparseMatrix = Eigen::SparseMatrix<int>;
+ using Triplet = Eigen::Triplet<int>;
const int* rows = block_jacobian_transpose.rows();
const int* cols = block_jacobian_transpose.cols();
int num_nonzeros = block_jacobian_transpose.num_nonzeros();
- vector<Triplet> triplets;
+ std::vector<Triplet> triplets;
triplets.reserve(num_nonzeros);
for (int i = 0; i < num_nonzeros; ++i) {
- triplets.push_back(Triplet(cols[i], rows[i], 1));
+ triplets.emplace_back(cols[i], rows[i], 1);
}
SparseMatrix block_jacobian(block_jacobian_transpose.num_cols(),
@@ -105,14 +108,20 @@
block_jacobian.setFromTriplets(triplets.begin(), triplets.end());
return block_jacobian;
}
-#endif
void OrderingForSparseNormalCholeskyUsingSuiteSparse(
+ const LinearSolverOrderingType linear_solver_ordering_type,
const TripletSparseMatrix& tsm_block_jacobian_transpose,
- const vector<ParameterBlock*>& parameter_blocks,
+ const std::vector<ParameterBlock*>& parameter_blocks,
const ParameterBlockOrdering& parameter_block_ordering,
int* ordering) {
#ifdef CERES_NO_SUITESPARSE
+ // "Void"ing values to avoid compiler warnings about unused parameters
+ (void)linear_solver_ordering_type;
+ (void)tsm_block_jacobian_transpose;
+ (void)parameter_blocks;
+ (void)parameter_block_ordering;
+ (void)ordering;
LOG(FATAL) << "Congratulations, you found a Ceres bug! "
<< "Please report this error to the developers.";
#else
@@ -120,61 +129,47 @@
cholmod_sparse* block_jacobian_transpose = ss.CreateSparseMatrix(
const_cast<TripletSparseMatrix*>(&tsm_block_jacobian_transpose));
- // No CAMD or the user did not supply a useful ordering, then just
- // use regular AMD.
- if (parameter_block_ordering.NumGroups() <= 1 ||
- !SuiteSparse::IsConstrainedApproximateMinimumDegreeOrderingAvailable()) {
- ss.ApproximateMinimumDegreeOrdering(block_jacobian_transpose, &ordering[0]);
- } else {
- vector<int> constraints;
- for (int i = 0; i < parameter_blocks.size(); ++i) {
- constraints.push_back(parameter_block_ordering.GroupId(
- parameter_blocks[i]->mutable_user_state()));
+ if (linear_solver_ordering_type == ceres::AMD) {
+ if (parameter_block_ordering.NumGroups() <= 1) {
+ // The user did not supply a useful ordering so just go ahead
+ // and use AMD.
+ ss.Ordering(block_jacobian_transpose, OrderingType::AMD, ordering);
+ } else {
+ // The user supplied an ordering, so use CAMD.
+ std::vector<int> constraints;
+ constraints.reserve(parameter_blocks.size());
+ for (auto* parameter_block : parameter_blocks) {
+ constraints.push_back(parameter_block_ordering.GroupId(
+ parameter_block->mutable_user_state()));
+ }
+
+ // Renumber the entries of constraints to be contiguous integers
+ // as CAMD requires that the group ids be in the range [0,
+ // parameter_blocks.size() - 1].
+ MapValuesToContiguousRange(constraints.size(), constraints.data());
+ ss.ConstrainedApproximateMinimumDegreeOrdering(
+ block_jacobian_transpose, constraints.data(), ordering);
}
-
- // Renumber the entries of constraints to be contiguous integers
- // as CAMD requires that the group ids be in the range [0,
- // parameter_blocks.size() - 1].
- MapValuesToContiguousRange(constraints.size(), &constraints[0]);
- ss.ConstrainedApproximateMinimumDegreeOrdering(
- block_jacobian_transpose, &constraints[0], ordering);
+ } else if (linear_solver_ordering_type == ceres::NESDIS) {
+ // If nested dissection is chosen as an ordering algorithm, then
+ // ignore any user provided linear_solver_ordering.
+ CHECK(SuiteSparse::IsNestedDissectionAvailable())
+ << "Congratulations, you found a Ceres bug! "
+ << "Please report this error to the developers.";
+ ss.Ordering(block_jacobian_transpose, OrderingType::NESDIS, ordering);
+ } else {
+ LOG(FATAL) << "Congratulations, you found a Ceres bug! "
+ << "Please report this error to the developers.";
}
- VLOG(2) << "Block ordering stats: "
- << " flops: " << ss.mutable_cc()->fl
- << " lnz : " << ss.mutable_cc()->lnz
- << " anz : " << ss.mutable_cc()->anz;
-
ss.Free(block_jacobian_transpose);
#endif // CERES_NO_SUITESPARSE
}
-void OrderingForSparseNormalCholeskyUsingCXSparse(
- const TripletSparseMatrix& tsm_block_jacobian_transpose, int* ordering) {
-#ifdef CERES_NO_CXSPARSE
- LOG(FATAL) << "Congratulations, you found a Ceres bug! "
- << "Please report this error to the developers.";
-#else
- // CXSparse works with J'J instead of J'. So compute the block
- // sparsity for J'J and compute an approximate minimum degree
- // ordering.
- CXSparse cxsparse;
- cs_di* block_jacobian_transpose;
- block_jacobian_transpose = cxsparse.CreateSparseMatrix(
- const_cast<TripletSparseMatrix*>(&tsm_block_jacobian_transpose));
- cs_di* block_jacobian = cxsparse.TransposeMatrix(block_jacobian_transpose);
- cs_di* block_hessian =
- cxsparse.MatrixMatrixMultiply(block_jacobian_transpose, block_jacobian);
- cxsparse.Free(block_jacobian);
- cxsparse.Free(block_jacobian_transpose);
-
- cxsparse.ApproximateMinimumDegreeOrdering(block_hessian, ordering);
- cxsparse.Free(block_hessian);
-#endif // CERES_NO_CXSPARSE
-}
-
void OrderingForSparseNormalCholeskyUsingEigenSparse(
- const TripletSparseMatrix& tsm_block_jacobian_transpose, int* ordering) {
+ const LinearSolverOrderingType linear_solver_ordering_type,
+ const TripletSparseMatrix& tsm_block_jacobian_transpose,
+ int* ordering) {
#ifndef CERES_USE_EIGEN_SPARSE
LOG(FATAL) << "SPARSE_NORMAL_CHOLESKY cannot be used with EIGEN_SPARSE "
"because Ceres was not built with support for "
@@ -182,22 +177,32 @@
"This requires enabling building with -DEIGENSPARSE=ON.";
#else
- // This conversion from a TripletSparseMatrix to a Eigen::Triplet
- // matrix is unfortunate, but unavoidable for now. It is not a
- // significant performance penalty in the grand scheme of
- // things. The right thing to do here would be to get a compressed
- // row sparse matrix representation of the jacobian and go from
- // there. But that is a project for another day.
- typedef Eigen::SparseMatrix<int> SparseMatrix;
+ // TODO(sameeragarwal): This conversion from a TripletSparseMatrix
+  // to an Eigen::Triplet matrix is unfortunate, but unavoidable for
+ // now. It is not a significant performance penalty in the grand
+ // scheme of things. The right thing to do here would be to get a
+ // compressed row sparse matrix representation of the jacobian and
+ // go from there. But that is a project for another day.
+ using SparseMatrix = Eigen::SparseMatrix<int>;
const SparseMatrix block_jacobian =
CreateBlockJacobian(tsm_block_jacobian_transpose);
const SparseMatrix block_hessian =
block_jacobian.transpose() * block_jacobian;
- Eigen::AMDOrdering<int> amd_ordering;
Eigen::PermutationMatrix<Eigen::Dynamic, Eigen::Dynamic, int> perm;
- amd_ordering(block_hessian, perm);
+ if (linear_solver_ordering_type == ceres::AMD) {
+ Eigen::AMDOrdering<int> amd_ordering;
+ amd_ordering(block_hessian, perm);
+ } else {
+#ifndef CERES_NO_EIGEN_METIS
+ Eigen::MetisOrdering<int> metis_ordering;
+ metis_ordering(block_hessian, perm);
+#else
+ perm.setIdentity(block_hessian.rows());
+#endif
+ }
+
for (int i = 0; i < block_hessian.rows(); ++i) {
ordering[i] = perm.indices()[i];
}
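
A self-contained sketch of the Eigen AMD ordering call used above (METIS is analogous via Eigen::MetisOrdering when Eigen/MetisSupport and METIS are available):

#include <vector>

#include "Eigen/OrderingMethods"
#include "Eigen/SparseCore"

std::vector<int> AmdPermutation(const Eigen::SparseMatrix<int>& pattern) {
  Eigen::PermutationMatrix<Eigen::Dynamic, Eigen::Dynamic, int> perm;
  Eigen::AMDOrdering<int> amd_ordering;
  amd_ordering(pattern, perm);  // Fill-reducing permutation of the pattern.
  std::vector<int> ordering(pattern.rows());
  for (int i = 0; i < pattern.rows(); ++i) {
    ordering[i] = perm.indices()[i];
  }
  return ordering;
}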
@@ -209,7 +214,7 @@
bool ApplyOrdering(const ProblemImpl::ParameterMap& parameter_map,
const ParameterBlockOrdering& ordering,
Program* program,
- string* error) {
+ std::string* error) {
const int num_parameter_blocks = program->NumParameterBlocks();
if (ordering.NumElements() != num_parameter_blocks) {
*error = StringPrintf(
@@ -221,13 +226,15 @@
return false;
}
- vector<ParameterBlock*>* parameter_blocks =
+ std::vector<ParameterBlock*>* parameter_blocks =
program->mutable_parameter_blocks();
parameter_blocks->clear();
- const map<int, set<double*>>& groups = ordering.group_to_elements();
+ // TODO(sameeragarwal): Investigate whether this should be a set or an
+ // unordered_set.
+ const std::map<int, std::set<double*>>& groups = ordering.group_to_elements();
for (const auto& p : groups) {
- const set<double*>& group = p.second;
+ const std::set<double*>& group = p.second;
for (double* parameter_block_ptr : group) {
auto it = parameter_map.find(parameter_block_ptr);
if (it == parameter_map.end()) {
@@ -247,16 +254,18 @@
bool LexicographicallyOrderResidualBlocks(
const int size_of_first_elimination_group,
Program* program,
- string* error) {
+ std::string* /*error*/) {
CHECK_GE(size_of_first_elimination_group, 1)
<< "Congratulations, you found a Ceres bug! Please report this error "
<< "to the developers.";
// Create a histogram of the number of residuals for each E block. There is an
// extra bucket at the end to catch all non-eliminated F blocks.
- vector<int> residual_blocks_per_e_block(size_of_first_elimination_group + 1);
- vector<ResidualBlock*>* residual_blocks = program->mutable_residual_blocks();
- vector<int> min_position_per_residual(residual_blocks->size());
+ std::vector<int> residual_blocks_per_e_block(size_of_first_elimination_group +
+ 1);
+ std::vector<ResidualBlock*>* residual_blocks =
+ program->mutable_residual_blocks();
+ std::vector<int> min_position_per_residual(residual_blocks->size());
for (int i = 0; i < residual_blocks->size(); ++i) {
ResidualBlock* residual_block = (*residual_blocks)[i];
int position =
@@ -269,7 +278,7 @@
// Run a cumulative sum on the histogram, to obtain offsets to the start of
// each histogram bucket (where each bucket is for the residuals for that
// E-block).
- vector<int> offsets(size_of_first_elimination_group + 1);
+ std::vector<int> offsets(size_of_first_elimination_group + 1);
std::partial_sum(residual_blocks_per_e_block.begin(),
residual_blocks_per_e_block.end(),
offsets.begin());
@@ -277,9 +286,9 @@
<< "Congratulations, you found a Ceres bug! Please report this error "
<< "to the developers.";
- CHECK(find(residual_blocks_per_e_block.begin(),
- residual_blocks_per_e_block.end() - 1,
- 0) != residual_blocks_per_e_block.end())
+ CHECK(std::find(residual_blocks_per_e_block.begin(),
+ residual_blocks_per_e_block.end() - 1,
+ 0) == residual_blocks_per_e_block.end() - 1)
<< "Congratulations, you found a Ceres bug! Please report this error "
<< "to the developers.";
@@ -288,10 +297,10 @@
// of the bucket. The filling order among the buckets is dictated by the
// residual blocks. This loop uses the offsets as counters; subtracting one
// from each offset as a residual block is placed in the bucket. When the
- // filling is finished, the offset pointerts should have shifted down one
+ // filling is finished, the offset pointers should have shifted down one
// entry (this is verified below).
- vector<ResidualBlock*> reordered_residual_blocks(
- (*residual_blocks).size(), static_cast<ResidualBlock*>(NULL));
+ std::vector<ResidualBlock*> reordered_residual_blocks(
+ (*residual_blocks).size(), static_cast<ResidualBlock*>(nullptr));
for (int i = 0; i < residual_blocks->size(); ++i) {
int bucket = min_position_per_residual[i];
@@ -299,7 +308,7 @@
offsets[bucket]--;
// Sanity.
- CHECK(reordered_residual_blocks[offsets[bucket]] == NULL)
+ CHECK(reordered_residual_blocks[offsets[bucket]] == nullptr)
<< "Congratulations, you found a Ceres bug! Please report this error "
<< "to the developers.";
@@ -313,9 +322,9 @@
<< "Congratulations, you found a Ceres bug! Please report this error "
<< "to the developers.";
}
- // Sanity check #2: No NULL's left behind.
- for (int i = 0; i < reordered_residual_blocks.size(); ++i) {
- CHECK(reordered_residual_blocks[i] != NULL)
+ // Sanity check #2: No nullptr's left behind.
+ for (auto* residual_block : reordered_residual_blocks) {
+ CHECK(residual_block != nullptr)
<< "Congratulations, you found a Ceres bug! Please report this error "
<< "to the developers.";
}
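
The reordering above is a counting sort: a histogram per E-block, a prefix sum to obtain bucket end offsets, then each residual block is dropped into its bucket while the offsets count down. A free-standing sketch of the same scheme (illustrative, not the Ceres code):

#include <numeric>
#include <vector>

std::vector<int> BucketOrder(const std::vector<int>& bucket_of_item,
                             int num_buckets) {
  std::vector<int> counts(num_buckets, 0);
  for (int b : bucket_of_item) {
    counts[b]++;
  }
  std::vector<int> offsets(num_buckets, 0);
  std::partial_sum(counts.begin(), counts.end(), offsets.begin());
  std::vector<int> order(bucket_of_item.size(), -1);
  for (int i = 0; i < static_cast<int>(bucket_of_item.size()); ++i) {
    const int b = bucket_of_item[i];
    offsets[b]--;            // Fill each bucket from its end, as above.
    order[offsets[b]] = i;
  }
  return order;  // Original item indices in bucket order.
}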
@@ -325,29 +334,29 @@
return true;
}
-// Pre-order the columns corresponding to the schur complement if
+// Pre-order the columns corresponding to the Schur complement if
// possible.
-static void MaybeReorderSchurComplementColumnsUsingSuiteSparse(
+static void ReorderSchurComplementColumnsUsingSuiteSparse(
const ParameterBlockOrdering& parameter_block_ordering, Program* program) {
-#ifndef CERES_NO_SUITESPARSE
+#ifdef CERES_NO_SUITESPARSE
+ // "Void"ing values to avoid compiler warnings about unused parameters
+ (void)parameter_block_ordering;
+ (void)program;
+#else
SuiteSparse ss;
- if (!SuiteSparse::IsConstrainedApproximateMinimumDegreeOrderingAvailable()) {
- return;
- }
-
- vector<int> constraints;
- vector<ParameterBlock*>& parameter_blocks =
+ std::vector<int> constraints;
+ std::vector<ParameterBlock*>& parameter_blocks =
*(program->mutable_parameter_blocks());
- for (int i = 0; i < parameter_blocks.size(); ++i) {
+ for (auto* parameter_block : parameter_blocks) {
constraints.push_back(parameter_block_ordering.GroupId(
- parameter_blocks[i]->mutable_user_state()));
+ parameter_block->mutable_user_state()));
}
// Renumber the entries of constraints to be contiguous integers as
// CAMD requires that the group ids be in the range [0,
// parameter_blocks.size() - 1].
- MapValuesToContiguousRange(constraints.size(), &constraints[0]);
+ MapValuesToContiguousRange(constraints.size(), constraints.data());
// Compute a block sparse presentation of J'.
std::unique_ptr<TripletSparseMatrix> tsm_block_jacobian_transpose(
@@ -356,12 +365,12 @@
cholmod_sparse* block_jacobian_transpose =
ss.CreateSparseMatrix(tsm_block_jacobian_transpose.get());
- vector<int> ordering(parameter_blocks.size(), 0);
+ std::vector<int> ordering(parameter_blocks.size(), 0);
ss.ConstrainedApproximateMinimumDegreeOrdering(
- block_jacobian_transpose, &constraints[0], &ordering[0]);
+ block_jacobian_transpose, constraints.data(), ordering.data());
ss.Free(block_jacobian_transpose);
- const vector<ParameterBlock*> parameter_blocks_copy(parameter_blocks);
+ const std::vector<ParameterBlock*> parameter_blocks_copy(parameter_blocks);
for (int i = 0; i < program->NumParameterBlocks(); ++i) {
parameter_blocks[i] = parameter_blocks_copy[ordering[i]];
}
@@ -370,15 +379,15 @@
#endif
}
-static void MaybeReorderSchurComplementColumnsUsingEigen(
+static void ReorderSchurComplementColumnsUsingEigen(
+ LinearSolverOrderingType ordering_type,
const int size_of_first_elimination_group,
- const ProblemImpl::ParameterMap& parameter_map,
+ const ProblemImpl::ParameterMap& /*parameter_map*/,
Program* program) {
#if defined(CERES_USE_EIGEN_SPARSE)
std::unique_ptr<TripletSparseMatrix> tsm_block_jacobian_transpose(
program->CreateJacobianBlockSparsityTranspose());
-
- typedef Eigen::SparseMatrix<int> SparseMatrix;
+ using SparseMatrix = Eigen::SparseMatrix<int>;
const SparseMatrix block_jacobian =
CreateBlockJacobian(*tsm_block_jacobian_transpose);
const int num_rows = block_jacobian.rows();
@@ -398,12 +407,22 @@
const SparseMatrix block_schur_complement =
F.transpose() * F - F.transpose() * E * E.transpose() * F;
- Eigen::AMDOrdering<int> amd_ordering;
Eigen::PermutationMatrix<Eigen::Dynamic, Eigen::Dynamic, int> perm;
- amd_ordering(block_schur_complement, perm);
+ if (ordering_type == ceres::AMD) {
+ Eigen::AMDOrdering<int> amd_ordering;
+ amd_ordering(block_schur_complement, perm);
+ } else {
+#ifndef CERES_NO_EIGEN_METIS
+ Eigen::MetisOrdering<int> metis_ordering;
+ metis_ordering(block_schur_complement, perm);
+#else
+ perm.setIdentity(block_schur_complement.rows());
+#endif
+ }
- const vector<ParameterBlock*>& parameter_blocks = program->parameter_blocks();
- vector<ParameterBlock*> ordering(num_cols);
+ const std::vector<ParameterBlock*>& parameter_blocks =
+ program->parameter_blocks();
+ std::vector<ParameterBlock*> ordering(num_cols);
// The ordering of the first size_of_first_elimination_group does
// not matter, so we preserve the existing ordering.
@@ -425,10 +444,11 @@
bool ReorderProgramForSchurTypeLinearSolver(
const LinearSolverType linear_solver_type,
const SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type,
+ const LinearSolverOrderingType linear_solver_ordering_type,
const ProblemImpl::ParameterMap& parameter_map,
ParameterBlockOrdering* parameter_block_ordering,
Program* program,
- string* error) {
+ std::string* error) {
if (parameter_block_ordering->NumElements() !=
program->NumParameterBlocks()) {
*error = StringPrintf(
@@ -441,12 +461,12 @@
if (parameter_block_ordering->NumGroups() == 1) {
     // If the user supplied a parameter_block_ordering with just one
- // group, it is equivalent to the user supplying NULL as an
+    // group, it is equivalent to the user supplying nullptr as a
// parameter_block_ordering. Ceres is completely free to choose the
// parameter block ordering as it sees fit. For Schur type solvers,
// this means that the user wishes for Ceres to identify the
// e_blocks, which we do by computing a maximal independent set.
- vector<ParameterBlock*> schur_ordering;
+ std::vector<ParameterBlock*> schur_ordering;
const int size_of_first_elimination_group =
ComputeStableSchurOrdering(*program, &schur_ordering);
@@ -469,7 +489,10 @@
// group.
// Verify that the first elimination group is an independent set.
- const set<double*>& first_elimination_group =
+
+ // TODO(sameeragarwal): Investigate if this should be a set or an
+ // unordered_set.
+ const std::set<double*>& first_elimination_group =
parameter_block_ordering->group_to_elements().begin()->second;
if (!program->IsParameterBlockSetIndependent(first_elimination_group)) {
*error = StringPrintf(
@@ -491,12 +514,20 @@
parameter_block_ordering->group_to_elements().begin()->second.size();
if (linear_solver_type == SPARSE_SCHUR) {
- if (sparse_linear_algebra_library_type == SUITE_SPARSE) {
- MaybeReorderSchurComplementColumnsUsingSuiteSparse(
- *parameter_block_ordering, program);
+ if (sparse_linear_algebra_library_type == SUITE_SPARSE &&
+ linear_solver_ordering_type == ceres::AMD) {
+      // Preordering support for the Schur complement only works with AMD
+ // for now, since we are using CAMD.
+ //
+      // TODO(sameeragarwal): It may be worth adding pre-ordering support for
+ // nested dissection too.
+ ReorderSchurComplementColumnsUsingSuiteSparse(*parameter_block_ordering,
+ program);
} else if (sparse_linear_algebra_library_type == EIGEN_SPARSE) {
- MaybeReorderSchurComplementColumnsUsingEigen(
- size_of_first_elimination_group, parameter_map, program);
+ ReorderSchurComplementColumnsUsingEigen(linear_solver_ordering_type,
+ size_of_first_elimination_group,
+ parameter_map,
+ program);
}
}
@@ -508,10 +539,11 @@
bool ReorderProgramForSparseCholesky(
const SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type,
+ const LinearSolverOrderingType linear_solver_ordering_type,
const ParameterBlockOrdering& parameter_block_ordering,
int start_row_block,
Program* program,
- string* error) {
+ std::string* error) {
if (parameter_block_ordering.NumElements() != program->NumParameterBlocks()) {
*error = StringPrintf(
"The program has %d parameter blocks, but the parameter block "
@@ -525,19 +557,17 @@
std::unique_ptr<TripletSparseMatrix> tsm_block_jacobian_transpose(
program->CreateJacobianBlockSparsityTranspose(start_row_block));
- vector<int> ordering(program->NumParameterBlocks(), 0);
- vector<ParameterBlock*>& parameter_blocks =
+ std::vector<int> ordering(program->NumParameterBlocks(), 0);
+ std::vector<ParameterBlock*>& parameter_blocks =
*(program->mutable_parameter_blocks());
if (sparse_linear_algebra_library_type == SUITE_SPARSE) {
OrderingForSparseNormalCholeskyUsingSuiteSparse(
+ linear_solver_ordering_type,
*tsm_block_jacobian_transpose,
parameter_blocks,
parameter_block_ordering,
- &ordering[0]);
- } else if (sparse_linear_algebra_library_type == CX_SPARSE) {
- OrderingForSparseNormalCholeskyUsingCXSparse(*tsm_block_jacobian_transpose,
- &ordering[0]);
+ ordering.data());
} else if (sparse_linear_algebra_library_type == ACCELERATE_SPARSE) {
// Accelerate does not provide a function to perform reordering without
// performing a full symbolic factorisation. As such, we have nothing
@@ -549,11 +579,13 @@
} else if (sparse_linear_algebra_library_type == EIGEN_SPARSE) {
OrderingForSparseNormalCholeskyUsingEigenSparse(
- *tsm_block_jacobian_transpose, &ordering[0]);
+ linear_solver_ordering_type,
+ *tsm_block_jacobian_transpose,
+ ordering.data());
}
// Apply ordering.
- const vector<ParameterBlock*> parameter_blocks_copy(parameter_blocks);
+ const std::vector<ParameterBlock*> parameter_blocks_copy(parameter_blocks);
for (int i = 0; i < program->NumParameterBlocks(); ++i) {
parameter_blocks[i] = parameter_blocks_copy[ordering[i]];
}
@@ -574,5 +606,39 @@
return it - residual_blocks->begin();
}
-} // namespace internal
-} // namespace ceres
+bool AreJacobianColumnsOrdered(
+ const LinearSolverType linear_solver_type,
+ const PreconditionerType preconditioner_type,
+ const SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type,
+ const LinearSolverOrderingType linear_solver_ordering_type) {
+ if (sparse_linear_algebra_library_type == SUITE_SPARSE) {
+ if (linear_solver_type == SPARSE_NORMAL_CHOLESKY ||
+ (linear_solver_type == CGNR && preconditioner_type == SUBSET)) {
+ return true;
+ }
+ if (linear_solver_type == SPARSE_SCHUR &&
+ linear_solver_ordering_type == ceres::AMD) {
+ return true;
+ }
+ return false;
+ }
+
+ if (sparse_linear_algebra_library_type == ceres::EIGEN_SPARSE) {
+ if (linear_solver_type == SPARSE_NORMAL_CHOLESKY ||
+ linear_solver_type == SPARSE_SCHUR ||
+ (linear_solver_type == CGNR && preconditioner_type == SUBSET)) {
+ return true;
+ }
+ return false;
+ }
+
+ if (sparse_linear_algebra_library_type == ceres::ACCELERATE_SPARSE) {
+    // Apple's Accelerate framework does not allow direct access to
+ // ordering algorithms, so jacobian columns are never pre-ordered.
+ return false;
+ }
+
+ return false;
+}
+
+} // namespace ceres::internal
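
The new AreJacobianColumnsOrdered predicate above centralizes the decision of whether a fill-reducing ordering has already been applied to the Jacobian columns. A minimal sketch (not part of this patch) of how a downstream factorization step might consume it; the helper name NeedsPostOrdering and its use are assumptions for illustration only:

  #include "ceres/reorder_program.h"

  // Hypothetical helper: if the columns were already pre-ordered by the
  // ReorderProgramFor* routines above, the sparse Cholesky backend can skip
  // computing its own fill-reducing ordering.
  bool NeedsPostOrdering(ceres::LinearSolverType linear_solver_type,
                         ceres::PreconditionerType preconditioner_type,
                         ceres::SparseLinearAlgebraLibraryType sparse_library,
                         ceres::LinearSolverOrderingType ordering_type) {
    return !ceres::internal::AreJacobianColumnsOrdered(
        linear_solver_type, preconditioner_type, sparse_library, ordering_type);
  }
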
diff --git a/internal/ceres/reorder_program.h b/internal/ceres/reorder_program.h
index 2e0c326..368a6ed 100644
--- a/internal/ceres/reorder_program.h
+++ b/internal/ceres/reorder_program.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,18 +33,19 @@
#include <string>
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
+#include "ceres/linear_solver.h"
#include "ceres/parameter_block_ordering.h"
#include "ceres/problem_impl.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class Program;
// Reorder the parameter blocks in program using the ordering
-CERES_EXPORT_INTERNAL bool ApplyOrdering(
+CERES_NO_EXPORT bool ApplyOrdering(
const ProblemImpl::ParameterMap& parameter_map,
const ParameterBlockOrdering& ordering,
Program* program,
@@ -53,7 +54,7 @@
// Reorder the residuals for program, if necessary, so that the residuals
// involving each E block occur together. This is a necessary condition for the
// Schur eliminator, which works on these "row blocks" in the jacobian.
-CERES_EXPORT_INTERNAL bool LexicographicallyOrderResidualBlocks(
+CERES_NO_EXPORT bool LexicographicallyOrderResidualBlocks(
int size_of_first_elimination_group, Program* program, std::string* error);
// Schur type solvers require that all parameter blocks eliminated
@@ -72,9 +73,10 @@
//
// Upon return, ordering contains the parameter block ordering that
// was used to order the program.
-CERES_EXPORT_INTERNAL bool ReorderProgramForSchurTypeLinearSolver(
+CERES_NO_EXPORT bool ReorderProgramForSchurTypeLinearSolver(
LinearSolverType linear_solver_type,
SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type,
+ LinearSolverOrderingType linear_solver_ordering_type,
const ProblemImpl::ParameterMap& parameter_map,
ParameterBlockOrdering* parameter_block_ordering,
Program* program,
@@ -90,8 +92,9 @@
// fill-reducing ordering is available in the sparse linear algebra
// library (SuiteSparse version >= 4.2.0) then the fill reducing
// ordering will take it into account, otherwise it will be ignored.
-CERES_EXPORT_INTERNAL bool ReorderProgramForSparseCholesky(
+CERES_NO_EXPORT bool ReorderProgramForSparseCholesky(
SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type,
+ LinearSolverOrderingType linear_solver_ordering_type,
const ParameterBlockOrdering& parameter_block_ordering,
int start_row_block,
Program* program,
@@ -107,11 +110,20 @@
// bottom_residual_blocks.size() because we allow
// bottom_residual_blocks to contain residual blocks not present in
// the Program.
-CERES_EXPORT_INTERNAL int ReorderResidualBlocksByPartition(
+CERES_NO_EXPORT int ReorderResidualBlocksByPartition(
const std::unordered_set<ResidualBlockId>& bottom_residual_blocks,
Program* program);
-} // namespace internal
-} // namespace ceres
+// The return value of this function indicates whether the columns of
+// the Jacobian can be reordered using a fill reducing ordering.
+CERES_NO_EXPORT bool AreJacobianColumnsOrdered(
+ LinearSolverType linear_solver_type,
+ PreconditionerType preconditioner_type,
+ SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type,
+ LinearSolverOrderingType linear_solver_ordering_type);
-#endif // CERES_INTERNAL_REORDER_PROGRAM_
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
+
+#endif // CERES_INTERNAL_REORDER_PROGRAM_H_
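
The extra LinearSolverOrderingType parameter threaded through these declarations mirrors the user-facing ordering choice on Solver::Options. A short usage sketch, assuming the Solver::Options::linear_solver_ordering_type field and the AMD/NESDIS enumerators (NESDIS does not appear in this hunk and is an assumption):

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::SPARSE_SCHUR;
  options.sparse_linear_algebra_library_type = ceres::SUITE_SPARSE;
  // AMD keeps the CAMD-based Schur complement pre-ordering path enabled;
  // NESDIS (nested dissection) currently bypasses it, per the comment in
  // reorder_program.cc above.
  options.linear_solver_ordering_type = ceres::AMD;
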
diff --git a/internal/ceres/reorder_program_test.cc b/internal/ceres/reorder_program_test.cc
index 83c867a..a8db314 100644
--- a/internal/ceres/reorder_program_test.cc
+++ b/internal/ceres/reorder_program_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,7 +31,9 @@
#include "ceres/reorder_program.h"
#include <random>
+#include <vector>
+#include "ceres/internal/config.h"
#include "ceres/parameter_block.h"
#include "ceres/problem_impl.h"
#include "ceres/program.h"
@@ -43,8 +45,6 @@
namespace ceres {
namespace internal {
-using std::vector;
-
// Templated base class for the CostFunction signatures.
template <int kNumResiduals, int... Ns>
class MockCostFunctionBase : public SizedCostFunction<kNumResiduals, Ns...> {
@@ -78,19 +78,19 @@
problem.AddResidualBlock(new BinaryCostFunction(), nullptr, &x, &y);
problem.AddResidualBlock(new UnaryCostFunction(), nullptr, &y);
- ParameterBlockOrdering* linear_solver_ordering = new ParameterBlockOrdering;
+ auto linear_solver_ordering = std::make_shared<ParameterBlockOrdering>();
linear_solver_ordering->AddElementToGroup(&x, 0);
linear_solver_ordering->AddElementToGroup(&y, 0);
linear_solver_ordering->AddElementToGroup(&z, 1);
Solver::Options options;
options.linear_solver_type = DENSE_SCHUR;
- options.linear_solver_ordering.reset(linear_solver_ordering);
+ options.linear_solver_ordering = linear_solver_ordering;
- const vector<ResidualBlock*>& residual_blocks =
+ const std::vector<ResidualBlock*>& residual_blocks =
problem.program().residual_blocks();
- vector<ResidualBlock*> expected_residual_blocks;
+ std::vector<ResidualBlock*> expected_residual_blocks;
// This is a bit fragile, but it serves the purpose. We know the
// bucketing algorithm that the reordering function uses, so we
@@ -155,7 +155,8 @@
EXPECT_TRUE(ApplyOrdering(
problem.parameter_map(), linear_solver_ordering, program, &message));
- const vector<ParameterBlock*>& parameter_blocks = program->parameter_blocks();
+ const std::vector<ParameterBlock*>& parameter_blocks =
+ program->parameter_blocks();
EXPECT_EQ(parameter_blocks.size(), 3);
EXPECT_EQ(parameter_blocks[0]->user_state(), &x);
@@ -164,10 +165,10 @@
}
#ifndef CERES_NO_SUITESPARSE
-class ReorderProgramFoSparseCholeskyUsingSuiteSparseTest
+class ReorderProgramForSparseCholeskyUsingSuiteSparseTest
: public ::testing::Test {
protected:
- void SetUp() {
+ void SetUp() override {
problem_.AddResidualBlock(new UnaryCostFunction(), nullptr, &x_);
problem_.AddResidualBlock(new BinaryCostFunction(), nullptr, &z_, &x_);
problem_.AddResidualBlock(new BinaryCostFunction(), nullptr, &z_, &y_);
@@ -179,16 +180,17 @@
void ComputeAndValidateOrdering(
const ParameterBlockOrdering& linear_solver_ordering) {
Program* program = problem_.mutable_program();
- vector<ParameterBlock*> unordered_parameter_blocks =
+ std::vector<ParameterBlock*> unordered_parameter_blocks =
program->parameter_blocks();
std::string error;
EXPECT_TRUE(ReorderProgramForSparseCholesky(ceres::SUITE_SPARSE,
+ ceres::AMD,
linear_solver_ordering,
0, /* use all rows */
program,
&error));
- const vector<ParameterBlock*>& ordered_parameter_blocks =
+ const std::vector<ParameterBlock*>& ordered_parameter_blocks =
program->parameter_blocks();
EXPECT_EQ(ordered_parameter_blocks.size(),
unordered_parameter_blocks.size());
@@ -203,7 +205,7 @@
double z_;
};
-TEST_F(ReorderProgramFoSparseCholeskyUsingSuiteSparseTest,
+TEST_F(ReorderProgramForSparseCholeskyUsingSuiteSparseTest,
EverythingInGroupZero) {
ParameterBlockOrdering linear_solver_ordering;
linear_solver_ordering.AddElementToGroup(&x_, 0);
@@ -213,7 +215,7 @@
ComputeAndValidateOrdering(linear_solver_ordering);
}
-TEST_F(ReorderProgramFoSparseCholeskyUsingSuiteSparseTest, ContiguousGroups) {
+TEST_F(ReorderProgramForSparseCholeskyUsingSuiteSparseTest, ContiguousGroups) {
ParameterBlockOrdering linear_solver_ordering;
linear_solver_ordering.AddElementToGroup(&x_, 0);
linear_solver_ordering.AddElementToGroup(&y_, 1);
@@ -222,7 +224,7 @@
ComputeAndValidateOrdering(linear_solver_ordering);
}
-TEST_F(ReorderProgramFoSparseCholeskyUsingSuiteSparseTest, GroupsWithGaps) {
+TEST_F(ReorderProgramForSparseCholeskyUsingSuiteSparseTest, GroupsWithGaps) {
ParameterBlockOrdering linear_solver_ordering;
linear_solver_ordering.AddElementToGroup(&x_, 0);
linear_solver_ordering.AddElementToGroup(&y_, 2);
@@ -231,7 +233,7 @@
ComputeAndValidateOrdering(linear_solver_ordering);
}
-TEST_F(ReorderProgramFoSparseCholeskyUsingSuiteSparseTest,
+TEST_F(ReorderProgramForSparseCholeskyUsingSuiteSparseTest,
NonContiguousStartingAtTwo) {
ParameterBlockOrdering linear_solver_ordering;
linear_solver_ordering.AddElementToGroup(&x_, 2);
@@ -263,7 +265,7 @@
problem.GetResidualBlocks(&residual_block_ids);
std::vector<ResidualBlock*> residual_blocks =
problem.program().residual_blocks();
- auto rng = std::default_random_engine{};
+ auto rng = std::mt19937{};
for (int i = 1; i < 6; ++i) {
std::shuffle(
std::begin(residual_block_ids), std::end(residual_block_ids), rng);
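
The test above now stores the ordering in a std::shared_ptr, matching the type of Solver::Options::linear_solver_ordering. A sketch of the same pattern in user code (point_block and camera_block are placeholder double* parameter blocks, not part of the test):

  auto ordering = std::make_shared<ceres::ParameterBlockOrdering>();
  ordering->AddElementToGroup(point_block, 0);   // eliminated first (e_block)
  ordering->AddElementToGroup(camera_block, 1);  // stays in the Schur complement
  ceres::Solver::Options options;
  options.linear_solver_type = ceres::DENSE_SCHUR;
  options.linear_solver_ordering = ordering;
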
diff --git a/internal/ceres/residual_block.cc b/internal/ceres/residual_block.cc
index 067c9ef..f5ad125 100644
--- a/internal/ceres/residual_block.cc
+++ b/internal/ceres/residual_block.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,16 +39,15 @@
#include "ceres/cost_function.h"
#include "ceres/internal/eigen.h"
#include "ceres/internal/fixed_array.h"
-#include "ceres/local_parameterization.h"
#include "ceres/loss_function.h"
+#include "ceres/manifold.h"
#include "ceres/parameter_block.h"
#include "ceres/residual_block_utils.h"
#include "ceres/small_blas.h"
using Eigen::Dynamic;
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
ResidualBlock::ResidualBlock(
const CostFunction* cost_function,
@@ -87,7 +86,7 @@
for (int i = 0; i < num_parameter_blocks; ++i) {
const ParameterBlock* parameter_block = parameter_blocks_[i];
if (jacobians[i] != nullptr &&
- parameter_block->LocalParameterizationJacobian() != nullptr) {
+ parameter_block->PlusJacobian() != nullptr) {
global_jacobians[i] = scratch;
scratch += num_residuals * parameter_block->Size();
} else {
@@ -114,8 +113,7 @@
return false;
}
- if (!IsEvaluationValid(
- *this, parameters.data(), cost, residuals, eval_jacobians)) {
+ if (!IsEvaluationValid(*this, parameters.data(), residuals, eval_jacobians)) {
// clang-format off
std::string message =
"\n\n"
@@ -132,27 +130,27 @@
double squared_norm = VectorRef(residuals, num_residuals).squaredNorm();
- // Update the jacobians with the local parameterizations.
+ // Update the plus_jacobian for the manifolds.
if (jacobians != nullptr) {
for (int i = 0; i < num_parameter_blocks; ++i) {
if (jacobians[i] != nullptr) {
const ParameterBlock* parameter_block = parameter_blocks_[i];
- // Apply local reparameterization to the jacobians.
- if (parameter_block->LocalParameterizationJacobian() != nullptr) {
+ // Apply the Manifold::PlusJacobian to the ambient jacobians.
+ if (parameter_block->PlusJacobian() != nullptr) {
// jacobians[i] = global_jacobians[i] * global_to_local_jacobian.
MatrixMatrixMultiply<Dynamic, Dynamic, Dynamic, Dynamic, 0>(
global_jacobians[i],
num_residuals,
parameter_block->Size(),
- parameter_block->LocalParameterizationJacobian(),
+ parameter_block->PlusJacobian(),
parameter_block->Size(),
- parameter_block->LocalSize(),
+ parameter_block->TangentSize(),
jacobians[i],
0,
0,
num_residuals,
- parameter_block->LocalSize());
+ parameter_block->TangentSize());
}
}
}
@@ -183,7 +181,7 @@
// Correct the jacobians for the loss function.
correct.CorrectJacobian(num_residuals,
- parameter_block->LocalSize(),
+ parameter_block->TangentSize(),
residuals,
jacobians[i]);
}
@@ -199,16 +197,16 @@
int ResidualBlock::NumScratchDoublesForEvaluate() const {
// Compute the amount of scratch space needed to store the full-sized
- // jacobians. For parameters that have no local parameterization no storage
- // is needed and the passed-in jacobian array is used directly. Also include
- // space to store the residuals, which is needed for cost-only evaluations.
- // This is slightly pessimistic, since both won't be needed all the time, but
- // the amount of excess should not cause problems for the caller.
+ // jacobians. For parameters that have no manifold no storage is needed and
+ // the passed-in jacobian array is used directly. Also include space to store
+ // the residuals, which is needed for cost-only evaluations. This is slightly
+ // pessimistic, since both won't be needed all the time, but the amount of
+ // excess should not cause problems for the caller.
int num_parameters = NumParameterBlocks();
int scratch_doubles = 1;
for (int i = 0; i < num_parameters; ++i) {
const ParameterBlock* parameter_block = parameter_blocks_[i];
- if (parameter_block->LocalParameterizationJacobian() != nullptr) {
+ if (parameter_block->PlusJacobian() != nullptr) {
scratch_doubles += parameter_block->Size();
}
}
@@ -216,5 +214,4 @@
return scratch_doubles;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
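
The manifold hunk above applies the chain rule: the Jacobian handed back to the caller is the ambient-space Jacobian produced by the CostFunction, right-multiplied by Manifold::PlusJacobian, so its shape shrinks from num_residuals x Size() to num_residuals x TangentSize(). A self-contained Eigen sketch of that product (illustrative only; the patch itself uses the small_blas MatrixMatrixMultiply call shown above):

  #include <Eigen/Core>

  // J_ambient:     num_residuals x ambient_size (from CostFunction::Evaluate).
  // plus_jacobian: ambient_size x tangent_size (from Manifold::PlusJacobian).
  // Result:        num_residuals x tangent_size, what the user receives.
  Eigen::MatrixXd TangentJacobian(const Eigen::MatrixXd& J_ambient,
                                  const Eigen::MatrixXd& plus_jacobian) {
    return J_ambient * plus_jacobian;
  }
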
diff --git a/internal/ceres/residual_block.h b/internal/ceres/residual_block.h
index f28fd42..62460c7 100644
--- a/internal/ceres/residual_block.h
+++ b/internal/ceres/residual_block.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,7 +40,8 @@
#include <vector>
#include "ceres/cost_function.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/stringprintf.h"
#include "ceres/types.h"
@@ -65,7 +66,7 @@
//
// The residual block stores pointers to but does not own the cost functions,
// loss functions, and parameter blocks.
-class CERES_EXPORT_INTERNAL ResidualBlock {
+class CERES_NO_EXPORT ResidualBlock {
public:
// Construct the residual block with the given cost/loss functions. Loss may
// be null. The index is the index of the residual block in the Program's
@@ -77,10 +78,10 @@
// Evaluates the residual term, storing the scalar cost in *cost, the residual
// components in *residuals, and the jacobians between the parameters and
- // residuals in jacobians[i], in row-major order. If residuals is NULL, the
- // residuals are not computed. If jacobians is NULL, no jacobians are
- // computed. If jacobians[i] is NULL, then the jacobian for that parameter is
- // not computed.
+  // residuals in jacobians[i], in row-major order. If residuals is nullptr,
+  // the residuals are not computed. If jacobians is nullptr, no jacobians are
+  // computed. If jacobians[i] is nullptr, then the jacobian for that parameter
+  // is not computed.
//
// cost must not be null.
//
@@ -92,10 +93,10 @@
// false, the caller should expect the output memory locations to have
// been modified.
//
- // The returned cost and jacobians have had robustification and local
- // parameterizations applied already; for example, the jacobian for a
- // 4-dimensional quaternion parameter using the "QuaternionParameterization"
- // is num_residuals by 3 instead of num_residuals by 4.
+ // The returned cost and jacobians have had robustification and manifold
+ // projection applied already; for example, the jacobian for a 4-dimensional
+ // quaternion parameter using the "Quaternion" manifold is num_residuals by 3
+ // instead of num_residuals by 4.
//
// apply_loss_function as the name implies allows the user to switch
// the application of the loss function on and off.
@@ -147,4 +148,6 @@
} // namespace internal
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_INTERNAL_RESIDUAL_BLOCK_H_
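
A brief sketch of the nullptr semantics documented in the Evaluate comment above, in the style of the tests that follow; residual_block is assumed to be an already constructed ResidualBlock with three residuals:

  double cost;
  double residuals[3];  // sized to residual_block.NumResiduals()
  std::vector<double> scratch(residual_block.NumScratchDoublesForEvaluate());

  // Cost only: residuals and jacobians are skipped entirely.
  residual_block.Evaluate(
      /*apply_loss_function=*/true, &cost, nullptr, nullptr, scratch.data());

  // Cost and residuals, still without jacobians.
  residual_block.Evaluate(true, &cost, residuals, nullptr, scratch.data());
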
diff --git a/internal/ceres/residual_block_test.cc b/internal/ceres/residual_block_test.cc
index 3c05f48..8040136 100644
--- a/internal/ceres/residual_block_test.cc
+++ b/internal/ceres/residual_block_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,17 +31,16 @@
#include "ceres/residual_block.h"
#include <cstdint>
+#include <string>
+#include <vector>
#include "ceres/internal/eigen.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
#include "ceres/parameter_block.h"
#include "ceres/sized_cost_function.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
// Trivial cost function that accepts three arguments.
class TernaryCostFunction : public CostFunction {
@@ -64,7 +63,7 @@
}
if (jacobians) {
for (int k = 0; k < 3; ++k) {
- if (jacobians[k] != NULL) {
+ if (jacobians[k] != nullptr) {
MatrixRef jacobian(
jacobians[k], num_residuals(), parameter_block_sizes()[k]);
jacobian.setConstant(k);
@@ -75,7 +74,7 @@
}
};
-TEST(ResidualBlock, EvaluteWithNoLossFunctionOrLocalParameterizations) {
+TEST(ResidualBlock, EvaluateWithNoLossFunctionOrManifolds) {
double scratch[64];
// Prepare the parameter blocks.
@@ -88,7 +87,7 @@
double values_z[4];
ParameterBlock z(values_z, 4, -1);
- vector<ParameterBlock*> parameters;
+ std::vector<ParameterBlock*> parameters;
parameters.push_back(&x);
parameters.push_back(&y);
parameters.push_back(&z);
@@ -96,11 +95,11 @@
TernaryCostFunction cost_function(3, 2, 3, 4);
// Create the object under tests.
- ResidualBlock residual_block(&cost_function, NULL, parameters, -1);
+ ResidualBlock residual_block(&cost_function, nullptr, parameters, -1);
// Verify getters.
EXPECT_EQ(&cost_function, residual_block.cost_function());
- EXPECT_EQ(NULL, residual_block.loss_function());
+ EXPECT_EQ(nullptr, residual_block.loss_function());
EXPECT_EQ(parameters[0], residual_block.parameter_blocks()[0]);
EXPECT_EQ(parameters[1], residual_block.parameter_blocks()[1]);
EXPECT_EQ(parameters[2], residual_block.parameter_blocks()[2]);
@@ -108,12 +107,12 @@
// Verify cost-only evaluation.
double cost;
- residual_block.Evaluate(true, &cost, NULL, NULL, scratch);
+ residual_block.Evaluate(true, &cost, nullptr, nullptr, scratch);
EXPECT_EQ(0.5 * (0 * 0 + 1 * 1 + 2 * 2), cost);
// Verify cost and residual evaluation.
double residuals[3];
- residual_block.Evaluate(true, &cost, residuals, NULL, scratch);
+ residual_block.Evaluate(true, &cost, residuals, nullptr, scratch);
EXPECT_EQ(0.5 * (0 * 0 + 1 * 1 + 2 * 2), cost);
EXPECT_EQ(0.0, residuals[0]);
EXPECT_EQ(1.0, residuals[1]);
@@ -151,7 +150,7 @@
jacobian_ry.setConstant(-1.0);
jacobian_rz.setConstant(-1.0);
- jacobian_ptrs[1] = NULL; // Don't compute the jacobian for y.
+ jacobian_ptrs[1] = nullptr; // Don't compute the jacobian for y.
residual_block.Evaluate(true, &cost, residuals, jacobian_ptrs, scratch);
EXPECT_EQ(0.5 * (0 * 0 + 1 * 1 + 2 * 2), cost);
@@ -178,16 +177,16 @@
if (jacobians) {
for (int k = 0; k < 3; ++k) {
// The jacobians here are full sized, but they are transformed in the
- // evaluator into the "local" jacobian. In the tests, the "subset
- // constant" parameterization is used, which should pick out columns
- // from these jacobians. Put values in the jacobian that make this
- // obvious; in particular, make the jacobians like this:
+ // evaluator into the "local" jacobian. In the tests, the
+ // "SubsetManifold" is used, which should pick out columns from these
+ // jacobians. Put values in the jacobian that make this obvious; in
+ // particular, make the jacobians like this:
//
// 0 1 2 3 4 ...
// 0 1 2 3 4 ...
// 0 1 2 3 4 ...
//
- if (jacobians[k] != NULL) {
+ if (jacobians[k] != nullptr) {
MatrixRef jacobian(
jacobians[k], num_residuals(), parameter_block_sizes()[k]);
for (int j = 0; j < k + 2; ++j) {
@@ -200,7 +199,7 @@
}
};
-TEST(ResidualBlock, EvaluteWithLocalParameterizations) {
+TEST(ResidualBlock, EvaluateWithManifolds) {
double scratch[64];
// Prepare the parameter blocks.
@@ -213,31 +212,31 @@
double values_z[4];
ParameterBlock z(values_z, 4, -1);
- vector<ParameterBlock*> parameters;
+ std::vector<ParameterBlock*> parameters;
parameters.push_back(&x);
parameters.push_back(&y);
parameters.push_back(&z);
// Make x have the first component fixed.
- vector<int> x_fixed;
+ std::vector<int> x_fixed;
x_fixed.push_back(0);
- SubsetParameterization x_parameterization(2, x_fixed);
- x.SetParameterization(&x_parameterization);
+ SubsetManifold x_manifold(2, x_fixed);
+ x.SetManifold(&x_manifold);
  // Make z have the third component fixed.
- vector<int> z_fixed;
+ std::vector<int> z_fixed;
z_fixed.push_back(2);
- SubsetParameterization z_parameterization(4, z_fixed);
- z.SetParameterization(&z_parameterization);
+ SubsetManifold z_manifold(4, z_fixed);
+ z.SetManifold(&z_manifold);
LocallyParameterizedCostFunction cost_function;
// Create the object under tests.
- ResidualBlock residual_block(&cost_function, NULL, parameters, -1);
+ ResidualBlock residual_block(&cost_function, nullptr, parameters, -1);
// Verify getters.
EXPECT_EQ(&cost_function, residual_block.cost_function());
- EXPECT_EQ(NULL, residual_block.loss_function());
+ EXPECT_EQ(nullptr, residual_block.loss_function());
EXPECT_EQ(parameters[0], residual_block.parameter_blocks()[0]);
EXPECT_EQ(parameters[1], residual_block.parameter_blocks()[1]);
EXPECT_EQ(parameters[2], residual_block.parameter_blocks()[2]);
@@ -245,12 +244,12 @@
// Verify cost-only evaluation.
double cost;
- residual_block.Evaluate(true, &cost, NULL, NULL, scratch);
+ residual_block.Evaluate(true, &cost, nullptr, nullptr, scratch);
EXPECT_EQ(0.5 * (0 * 0 + 1 * 1 + 2 * 2), cost);
// Verify cost and residual evaluation.
double residuals[3];
- residual_block.Evaluate(true, &cost, residuals, NULL, scratch);
+ residual_block.Evaluate(true, &cost, residuals, nullptr, scratch);
EXPECT_EQ(0.5 * (0 * 0 + 1 * 1 + 2 * 2), cost);
EXPECT_EQ(0.0, residuals[0]);
EXPECT_EQ(1.0, residuals[1]);
@@ -311,7 +310,7 @@
jacobian_ry.setConstant(-1.0);
jacobian_rz.setConstant(-1.0);
- jacobian_ptrs[1] = NULL; // Don't compute the jacobian for y.
+ jacobian_ptrs[1] = nullptr; // Don't compute the jacobian for y.
residual_block.Evaluate(true, &cost, residuals, jacobian_ptrs, scratch);
EXPECT_EQ(0.5 * (0 * 0 + 1 * 1 + 2 * 2), cost);
@@ -324,5 +323,4 @@
EXPECT_EQ(expected_jacobian_rz, jacobian_rz);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
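
The manifold tests above rely on SubsetManifold dropping the columns of the ambient Jacobian that correspond to constant coordinates. A small standalone check of that behaviour for the same x_manifold used above (a sketch, not an added test):

  #include "ceres/manifold.h"

  ceres::SubsetManifold x_manifold(2, {0});  // ambient size 2, coordinate 0 fixed
  double x[2] = {1.0, 2.0};
  double plus_jacobian[2 * 1];  // AmbientSize() x TangentSize(), row-major
  x_manifold.PlusJacobian(x, plus_jacobian);
  // plus_jacobian == {0.0, 1.0}: right-multiplying an ambient jacobian by this
  // keeps column 1 and discards column 0, which is what the expected matrices
  // in EvaluateWithManifolds encode.
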
diff --git a/internal/ceres/residual_block_utils.cc b/internal/ceres/residual_block_utils.cc
index d5b3fa1..91370d8 100644
--- a/internal/ceres/residual_block_utils.cc
+++ b/internal/ceres/residual_block_utils.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,19 +33,17 @@
#include <cmath>
#include <cstddef>
#include <limits>
+#include <string>
#include "ceres/array_utils.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/parameter_block.h"
#include "ceres/residual_block.h"
#include "ceres/stringprintf.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::string;
+namespace ceres::internal {
void InvalidateEvaluation(const ResidualBlock& block,
double* cost,
@@ -56,7 +54,7 @@
InvalidateArray(1, cost);
InvalidateArray(num_residuals, residuals);
- if (jacobians != NULL) {
+ if (jacobians != nullptr) {
for (int i = 0; i < num_parameter_blocks; ++i) {
const int parameter_block_size = block.parameter_blocks()[i]->Size();
InvalidateArray(num_residuals * parameter_block_size, jacobians[i]);
@@ -64,17 +62,17 @@
}
}
-string EvaluationToString(const ResidualBlock& block,
- double const* const* parameters,
- double* cost,
- double* residuals,
- double** jacobians) {
+std::string EvaluationToString(const ResidualBlock& block,
+ double const* const* parameters,
+ double* cost,
+ double* residuals,
+ double** jacobians) {
CHECK(cost != nullptr);
CHECK(residuals != nullptr);
const int num_parameter_blocks = block.NumParameterBlocks();
const int num_residuals = block.NumResiduals();
- string result = "";
+ std::string result = "";
// clang-format off
StringAppendF(&result,
@@ -89,7 +87,7 @@
"to Inf or NaN is also an error. \n\n"; // NOLINT
// clang-format on
- string space = "Residuals: ";
+ std::string space = "Residuals: ";
result += space;
AppendArrayToString(num_residuals, residuals, &result);
StringAppendF(&result, "\n\n");
@@ -104,9 +102,9 @@
StringAppendF(&result, "| ");
for (int k = 0; k < num_residuals; ++k) {
AppendArrayToString(1,
- (jacobians != NULL && jacobians[i] != NULL)
+ (jacobians != nullptr && jacobians[i] != nullptr)
? jacobians[i] + k * parameter_block_size + j
- : NULL,
+ : nullptr,
&result);
}
StringAppendF(&result, "\n");
@@ -117,9 +115,11 @@
return result;
}
+// TODO(sameeragarwal): Check the validity of the cost value here.
+// The cost is part of the evaluation, but it is not checked here because,
+// per residual_block.cc, the cost is not yet valid when this method is called.
bool IsEvaluationValid(const ResidualBlock& block,
- double const* const* parameters,
- double* cost,
+ double const* const* /*parameters*/,
double* residuals,
double** jacobians) {
const int num_parameter_blocks = block.NumParameterBlocks();
@@ -129,7 +129,7 @@
return false;
}
- if (jacobians != NULL) {
+ if (jacobians != nullptr) {
for (int i = 0; i < num_parameter_blocks; ++i) {
const int parameter_block_size = block.parameter_blocks()[i]->Size();
if (!IsArrayValid(num_residuals * parameter_block_size, jacobians[i])) {
@@ -141,5 +141,4 @@
return true;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/residual_block_utils.h b/internal/ceres/residual_block_utils.h
index 41ae81a..1bf1ca1 100644
--- a/internal/ceres/residual_block_utils.h
+++ b/internal/ceres/residual_block_utils.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -45,14 +45,14 @@
#include <string>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class ResidualBlock;
-// Invalidate cost, resdual and jacobian arrays (if not NULL).
+// Invalidate cost, residual and jacobian arrays (if not nullptr).
+CERES_NO_EXPORT
void InvalidateEvaluation(const ResidualBlock& block,
double* cost,
double* residuals,
@@ -60,22 +60,22 @@
// Check if any of the arrays cost, residuals or jacobians contains an
// NaN, return true if it does.
+CERES_NO_EXPORT
bool IsEvaluationValid(const ResidualBlock& block,
double const* const* parameters,
- double* cost,
double* residuals,
double** jacobians);
// Create a string representation of the Residual block containing the
// value of the parameters, residuals and jacobians if present.
// Useful for debugging output.
+CERES_NO_EXPORT
std::string EvaluationToString(const ResidualBlock& block,
double const* const* parameters,
double* cost,
double* residuals,
double** jacobians);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_RESIDUAL_BLOCK_UTILS_H_
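
The three helpers declared here are meant to be used together around a user CostFunction call, as ResidualBlock::Evaluate does in the .cc change earlier in this patch: poison the outputs, evaluate, then verify everything was overwritten with finite values and pretty-print on failure. A condensed sketch of that sequence (variable names are placeholders):

  using ceres::internal::EvaluationToString;
  using ceres::internal::InvalidateEvaluation;
  using ceres::internal::IsEvaluationValid;

  InvalidateEvaluation(block, &cost, residuals, jacobians);
  if (!cost_function->Evaluate(parameters, residuals, jacobians)) {
    return false;
  }
  if (!IsEvaluationValid(block, parameters, residuals, jacobians)) {
    LOG(ERROR) << EvaluationToString(
        block, parameters, &cost, residuals, jacobians);
    return false;
  }
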
diff --git a/internal/ceres/residual_block_utils_test.cc b/internal/ceres/residual_block_utils_test.cc
index 331f5ab..6fc8aa0 100644
--- a/internal/ceres/residual_block_utils_test.cc
+++ b/internal/ceres/residual_block_utils_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -51,7 +51,7 @@
std::vector<ParameterBlock*> parameter_blocks;
parameter_blocks.push_back(¶meter_block);
- ResidualBlock residual_block(&cost_function, NULL, parameter_blocks, -1);
+ ResidualBlock residual_block(&cost_function, nullptr, parameter_blocks, -1);
std::unique_ptr<double[]> scratch(
new double[residual_block.NumScratchDoublesForEvaluate()]);
@@ -66,7 +66,7 @@
is_good);
}
-// A CostFunction that behaves normaly, i.e., it computes numerically
+// A CostFunction that behaves normally, i.e., it computes numerically
// valid residuals and jacobians.
class GoodCostFunction : public SizedCostFunction<1, 1> {
public:
@@ -74,7 +74,7 @@
double* residuals,
double** jacobians) const final {
residuals[0] = 1;
- if (jacobians != NULL && jacobians[0] != NULL) {
+ if (jacobians != nullptr && jacobians[0] != nullptr) {
jacobians[0][0] = 0.0;
}
return true;
@@ -90,7 +90,7 @@
double** jacobians) const final {
// Forget to update the residuals.
// residuals[0] = 1;
- if (jacobians != NULL && jacobians[0] != NULL) {
+ if (jacobians != nullptr && jacobians[0] != nullptr) {
jacobians[0][0] = 0.0;
}
return true;
@@ -103,7 +103,7 @@
double* residuals,
double** jacobians) const final {
residuals[0] = 1;
- if (jacobians != NULL && jacobians[0] != NULL) {
+ if (jacobians != nullptr && jacobians[0] != nullptr) {
// Forget to update the jacobians.
// jacobians[0][0] = 0.0;
}
@@ -117,7 +117,7 @@
double* residuals,
double** jacobians) const final {
residuals[0] = std::numeric_limits<double>::infinity();
- if (jacobians != NULL && jacobians[0] != NULL) {
+ if (jacobians != nullptr && jacobians[0] != nullptr) {
jacobians[0][0] = 0.0;
}
return true;
@@ -130,7 +130,7 @@
double* residuals,
double** jacobians) const final {
residuals[0] = 1.0;
- if (jacobians != NULL && jacobians[0] != NULL) {
+ if (jacobians != nullptr && jacobians[0] != nullptr) {
jacobians[0][0] = std::numeric_limits<double>::quiet_NaN();
}
return true;
diff --git a/internal/ceres/rotation_test.cc b/internal/ceres/rotation_test.cc
index fc39b31..f1ea071 100644
--- a/internal/ceres/rotation_test.cc
+++ b/internal/ceres/rotation_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,12 +30,18 @@
#include "ceres/rotation.h"
+#include <algorithm>
+#include <array>
#include <cmath>
#include <limits>
+#include <random>
#include <string>
+#include <utility>
+#include "ceres/constants.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/euler_angles.h"
+#include "ceres/internal/export.h"
#include "ceres/is_close.h"
#include "ceres/jet.h"
#include "ceres/stringprintf.h"
@@ -47,22 +53,11 @@
namespace ceres {
namespace internal {
-using std::max;
-using std::min;
-using std::numeric_limits;
-using std::string;
-using std::swap;
-
-const double kPi = 3.14159265358979323846;
+inline constexpr double kPi = constants::pi;
const double kHalfSqrt2 = 0.707106781186547524401;
-static double RandDouble() {
- double r = rand();
- return r / RAND_MAX;
-}
-
// A tolerance value for floating-point comparisons.
-static double const kTolerance = numeric_limits<double>::epsilon() * 10;
+static double const kTolerance = std::numeric_limits<double>::epsilon() * 10;
// Looser tolerance used for numerically unstable conversions.
static double const kLooseTolerance = 1e-9;
@@ -71,11 +66,6 @@
// double quaternion[4];
// EXPECT_THAT(quaternion, IsNormalizedQuaternion());
MATCHER(IsNormalizedQuaternion, "") {
- if (arg == NULL) {
- *result_listener << "Null quaternion";
- return false;
- }
-
double norm2 =
arg[0] * arg[0] + arg[1] * arg[1] + arg[2] * arg[2] + arg[3] * arg[3];
if (fabs(norm2 - 1.0) > kTolerance) {
@@ -91,34 +81,31 @@
// double actual_quaternion[4];
// EXPECT_THAT(actual_quaternion, IsNearQuaternion(expected_quaternion));
MATCHER_P(IsNearQuaternion, expected, "") {
- if (arg == NULL) {
- *result_listener << "Null quaternion";
- return false;
- }
-
  // Quaternions are equivalent up to a sign change. So we will compare
// both signs before declaring failure.
- bool near = true;
+ bool is_near = true;
+  // NOTE: near (and far) can be defined as macros on the Windows platform (for
+  // the ancient Pascal calling convention). Do not use these identifiers.
for (int i = 0; i < 4; i++) {
if (fabs(arg[i] - expected[i]) > kTolerance) {
- near = false;
+ is_near = false;
break;
}
}
- if (near) {
+ if (is_near) {
return true;
}
- near = true;
+ is_near = true;
for (int i = 0; i < 4; i++) {
if (fabs(arg[i] + expected[i]) > kTolerance) {
- near = false;
+ is_near = false;
break;
}
}
- if (near) {
+ if (is_near) {
return true;
}
@@ -142,21 +129,16 @@
// double actual_axis_angle[3];
// EXPECT_THAT(actual_axis_angle, IsNearAngleAxis(expected_axis_angle));
MATCHER_P(IsNearAngleAxis, expected, "") {
- if (arg == NULL) {
- *result_listener << "Null axis/angle";
- return false;
- }
-
Eigen::Vector3d a(arg[0], arg[1], arg[2]);
Eigen::Vector3d e(expected[0], expected[1], expected[2]);
const double e_norm = e.norm();
- double delta_norm = numeric_limits<double>::max();
+ double delta_norm = std::numeric_limits<double>::max();
if (e_norm > 0) {
// Deal with the sign ambiguity near PI. Since the sign can flip,
// we take the smaller of the two differences.
if (fabs(e_norm - kPi) < kLooseTolerance) {
- delta_norm = min((a - e).norm(), (a + e).norm()) / e_norm;
+ delta_norm = std::min((a - e).norm(), (a + e).norm()) / e_norm;
} else {
delta_norm = (a - e).norm() / e_norm;
}
@@ -185,11 +167,6 @@
// double matrix[9];
// EXPECT_THAT(matrix, IsOrthonormal());
MATCHER(IsOrthonormal, "") {
- if (arg == NULL) {
- *result_listener << "Null matrix";
- return false;
- }
-
for (int c1 = 0; c1 < 3; c1++) {
for (int c2 = 0; c2 < 3; c2++) {
double v = 0;
@@ -214,11 +191,6 @@
// double matrix2[9];
// EXPECT_THAT(matrix1, IsNear3x3Matrix(matrix2));
MATCHER_P(IsNear3x3Matrix, expected, "") {
- if (arg == NULL) {
- *result_listener << "Null matrix";
- return false;
- }
-
for (int i = 0; i < 9; i++) {
if (fabs(arg[i] - expected[i]) > kTolerance) {
*result_listener << "component " << i << " should be " << expected[i];
@@ -254,7 +226,7 @@
// Test that approximate conversion works for very small angles.
TEST(Rotation, TinyAngleAxisToQuaternion) {
// Very small value that could potentially cause underflow.
- double theta = pow(numeric_limits<double>::min(), 0.75);
+ double theta = pow(std::numeric_limits<double>::min(), 0.75);
double axis_angle[3] = {theta, 0, 0};
double quaternion[4];
double expected[4] = {cos(theta / 2), sin(theta / 2.0), 0, 0};
@@ -315,7 +287,7 @@
// Test that approximate conversion works for very small angles.
TEST(Rotation, TinyQuaternionToAngleAxis) {
// Very small value that could potentially cause underflow.
- double theta = pow(numeric_limits<double>::min(), 0.75);
+ double theta = pow(std::numeric_limits<double>::min(), 0.75);
double quaternion[4] = {cos(theta / 2), sin(theta / 2.0), 0, 0};
double axis_angle[3];
double expected[3] = {theta, 0, 0};
@@ -334,9 +306,7 @@
quaternion[2] = 0.0;
quaternion[3] = 0.0;
QuaternionToAngleAxis(quaternion, angle_axis);
- const double angle =
- sqrt(angle_axis[0] * angle_axis[0] + angle_axis[1] * angle_axis[1] +
- angle_axis[2] * angle_axis[2]);
+ const double angle = std::hypot(angle_axis[0], angle_axis[1], angle_axis[2]);
EXPECT_LE(angle, kPi);
}
@@ -345,22 +315,24 @@
// Takes a bunch of random axis/angle values, converts them to quaternions,
// and back again.
TEST(Rotation, AngleAxisToQuaterionAndBack) {
- srand(5);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution{-1.0, 1.0};
for (int i = 0; i < kNumTrials; i++) {
double axis_angle[3];
// Make an axis by choosing three random numbers in [-1, 1) and
// normalizing.
double norm = 0;
- for (int i = 0; i < 3; i++) {
- axis_angle[i] = RandDouble() * 2 - 1;
- norm += axis_angle[i] * axis_angle[i];
+ for (double& coeff : axis_angle) {
+ coeff = uniform_distribution(prng);
+ norm += coeff * coeff;
}
norm = sqrt(norm);
// Angle in [-pi, pi).
- double theta = kPi * 2 * RandDouble() - kPi;
- for (int i = 0; i < 3; i++) {
- axis_angle[i] = axis_angle[i] * theta / norm;
+ double theta = uniform_distribution(
+ prng, std::uniform_real_distribution<double>::param_type{-kPi, kPi});
+ for (double& coeff : axis_angle) {
+ coeff = coeff * theta / norm;
}
double quaternion[4];
@@ -378,19 +350,20 @@
// Takes a bunch of random quaternions, converts them to axis/angle,
// and back again.
TEST(Rotation, QuaterionToAngleAxisAndBack) {
- srand(5);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution{-1.0, 1.0};
for (int i = 0; i < kNumTrials; i++) {
double quaternion[4];
// Choose four random numbers in [-1, 1) and normalize.
double norm = 0;
- for (int i = 0; i < 4; i++) {
- quaternion[i] = RandDouble() * 2 - 1;
- norm += quaternion[i] * quaternion[i];
+ for (double& coeff : quaternion) {
+ coeff = uniform_distribution(prng);
+ norm += coeff * coeff;
}
norm = sqrt(norm);
- for (int i = 0; i < 4; i++) {
- quaternion[i] = quaternion[i] / norm;
+ for (double& coeff : quaternion) {
+ coeff = coeff / norm;
}
double axis_angle[3];
@@ -455,23 +428,27 @@
double matrix[9];
double out_axis_angle[3];
- srand(5);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution{-1.0, 1.0};
for (int i = 0; i < kNumTrials; i++) {
// Make an axis by choosing three random numbers in [-1, 1) and
// normalizing.
double norm = 0;
- for (int i = 0; i < 3; i++) {
- in_axis_angle[i] = RandDouble() * 2 - 1;
- norm += in_axis_angle[i] * in_axis_angle[i];
+ for (double& coeff : in_axis_angle) {
+ coeff = uniform_distribution(prng);
+ norm += coeff * coeff;
}
norm = sqrt(norm);
// Angle in [pi - kMaxSmallAngle, pi).
- const double kMaxSmallAngle = 1e-8;
- double theta = kPi - kMaxSmallAngle * RandDouble();
+ constexpr double kMaxSmallAngle = 1e-8;
+ double theta =
+ uniform_distribution(prng,
+ std::uniform_real_distribution<double>::param_type{
+ kPi - kMaxSmallAngle, kPi});
- for (int i = 0; i < 3; i++) {
- in_axis_angle[i] *= (theta / norm);
+ for (double& coeff : in_axis_angle) {
+ coeff *= (theta / norm);
}
AngleAxisToRotationMatrix(in_axis_angle, matrix);
RotationMatrixToAngleAxis(matrix, out_axis_angle);
@@ -516,7 +493,7 @@
LOG(INFO) << "Rotation:";
LOG(INFO) << "EXPECTED | ACTUAL";
for (int i = 0; i < 3; ++i) {
- string line;
+ std::string line;
for (int j = 0; j < 3; ++j) {
StringAppendF(&line, "%g ", kMatrix[i][j]);
}
@@ -554,22 +531,24 @@
// Takes a bunch of random axis/angle values, converts them to rotation
// matrices, and back again.
TEST(Rotation, AngleAxisToRotationMatrixAndBack) {
- srand(5);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution{-1.0, 1.0};
for (int i = 0; i < kNumTrials; i++) {
double axis_angle[3];
// Make an axis by choosing three random numbers in [-1, 1) and
// normalizing.
double norm = 0;
- for (int i = 0; i < 3; i++) {
- axis_angle[i] = RandDouble() * 2 - 1;
- norm += axis_angle[i] * axis_angle[i];
+    for (double& coeff : axis_angle) {
+      coeff = uniform_distribution(prng);
+      norm += coeff * coeff;
}
norm = sqrt(norm);
// Angle in [-pi, pi).
- double theta = kPi * 2 * RandDouble() - kPi;
- for (int i = 0; i < 3; i++) {
- axis_angle[i] = axis_angle[i] * theta / norm;
+ double theta = uniform_distribution(
+ prng, std::uniform_real_distribution<double>::param_type{-kPi, kPi});
+    for (double& coeff : axis_angle) {
+      coeff = coeff * theta / norm;
}
double matrix[9];
@@ -587,22 +566,27 @@
// Takes a bunch of random axis/angle values near zero, converts them
// to rotation matrices, and back again.
TEST(Rotation, AngleAxisToRotationMatrixAndBackNearZero) {
- srand(5);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution{-1.0, 1.0};
for (int i = 0; i < kNumTrials; i++) {
double axis_angle[3];
// Make an axis by choosing three random numbers in [-1, 1) and
// normalizing.
double norm = 0;
- for (int i = 0; i < 3; i++) {
- axis_angle[i] = RandDouble() * 2 - 1;
- norm += axis_angle[i] * axis_angle[i];
+    for (double& coeff : axis_angle) {
+      coeff = uniform_distribution(prng);
+      norm += coeff * coeff;
}
norm = sqrt(norm);
// Tiny theta.
- double theta = 1e-16 * (kPi * 2 * RandDouble() - kPi);
- for (int i = 0; i < 3; i++) {
- axis_angle[i] = axis_angle[i] * theta / norm;
+ constexpr double kScale = 1e-16;
+ double theta =
+ uniform_distribution(prng,
+ std::uniform_real_distribution<double>::param_type{
+ -kScale * kPi, kScale * kPi});
+    for (double& coeff : axis_angle) {
+      coeff = coeff * theta / norm;
}
double matrix[9];
@@ -613,16 +597,16 @@
for (int i = 0; i < 3; ++i) {
EXPECT_NEAR(
- round_trip[i], axis_angle[i], numeric_limits<double>::epsilon());
+ round_trip[i], axis_angle[i], std::numeric_limits<double>::epsilon());
}
}
}
// Transposes a 3x3 matrix.
static void Transpose3x3(double m[9]) {
- swap(m[1], m[3]);
- swap(m[2], m[6]);
- swap(m[5], m[7]);
+ std::swap(m[1], m[3]);
+ std::swap(m[2], m[6]);
+ std::swap(m[5], m[7]);
}
// Convert Euler angles from radians to degrees.
@@ -670,11 +654,12 @@
// Test that a random rotation produces an orthonormal rotation
// matrix.
TEST(EulerAnglesToRotationMatrix, IsOrthonormal) {
- srand(5);
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution{-180.0, 180.0};
for (int trial = 0; trial < kNumTrials; ++trial) {
double euler_angles_degrees[3];
- for (int i = 0; i < 3; ++i) {
- euler_angles_degrees[i] = RandDouble() * 360.0 - 180.0;
+ for (double& euler_angles_degree : euler_angles_degrees) {
+ euler_angles_degree = uniform_distribution(prng);
}
double rotation_matrix[9];
EulerAnglesToRotationMatrix(euler_angles_degrees, 3, rotation_matrix);
@@ -682,14 +667,271 @@
}
}
+static double sample_euler[][3] = {{0.5235988, 1.047198, 0.7853982},
+ {0.5235988, 1.047198, 0.5235988},
+ {0.7853982, 0.5235988, 1.047198}};
+
+// ZXY Intrinsic Euler Angle to rotation matrix conversion test from
+// scipy/spatial/transform/test/test_rotation.py
+TEST(EulerAngles, IntrinsicEulerSequence312ToRotationMatrixCanned) {
+ // clang-format off
+ double const expected[][9] =
+ {{0.306186083320088, -0.249999816228639, 0.918558748402491,
+ 0.883883627842492, 0.433012359189203, -0.176776777947208,
+ -0.353553128699351, 0.866025628186053, 0.353553102817459},
+ { 0.533493553519713, -0.249999816228639, 0.808012821828067,
+ 0.808012821828067, 0.433012359189203, -0.399519181705765,
+ -0.249999816228639, 0.866025628186053, 0.433012359189203},
+ { 0.047366781483451, -0.612372449482883, 0.789149143778432,
+ 0.659739427618959, 0.612372404654096, 0.435596057905909,
+ -0.750000183771249, 0.500000021132493, 0.433012359189203}};
+ // clang-format on
+
+ for (int i = 0; i < 3; ++i) {
+ double results[9];
+ EulerAnglesToRotation<IntrinsicZXY>(sample_euler[i], results);
+ ASSERT_THAT(results, IsNear3x3Matrix(expected[i]));
+ }
+}
+
+// ZXY Extrinsic Euler Angle to rotation matrix conversion test from
+// scipy/spatial/transform/test/test_rotation.py
+TEST(EulerAngles, ExtrinsicEulerSequence312ToRotationMatrix) {
+ // clang-format off
+ double const expected[][9] =
+ {{0.918558725988105, 0.176776842651999, 0.353553128699352,
+ 0.249999816228639, 0.433012359189203, -0.866025628186053,
+ -0.306186150563275, 0.883883614901527, 0.353553102817459},
+ { 0.966506404215301, -0.058012606358071, 0.249999816228639,
+ 0.249999816228639, 0.433012359189203, -0.866025628186053,
+ -0.058012606358071, 0.899519223970752, 0.433012359189203},
+ { 0.659739424151467, -0.047366829779744, 0.750000183771249,
+ 0.612372449482883, 0.612372404654096, -0.500000021132493,
+ -0.435596000136163, 0.789149175666285, 0.433012359189203}};
+ // clang-format on
+
+ for (int i = 0; i < 3; ++i) {
+ double results[9];
+ EulerAnglesToRotation<ExtrinsicZXY>(sample_euler[i], results);
+ ASSERT_THAT(results, IsNear3x3Matrix(expected[i]));
+ }
+}
+
+// ZXZ Intrinsic Euler Angle to rotation matrix conversion test from
+// scipy/spatial/transform/test/test_rotation.py
+TEST(EulerAngles, IntrinsicEulerSequence313ToRotationMatrix) {
+ // clang-format off
+ double expected[][9] =
+ {{0.435595832832961, -0.789149008363071, 0.433012832394307,
+ 0.659739379322704, -0.047367454164077, -0.750000183771249,
+ 0.612372616786097, 0.612372571957297, 0.499999611324802},
+ { 0.625000065470068, -0.649518902838302, 0.433012832394307,
+ 0.649518902838302, 0.124999676794869, -0.750000183771249,
+ 0.433012832394307, 0.750000183771249, 0.499999611324802},
+ {-0.176777132429787, -0.918558558684756, 0.353553418477159,
+ 0.883883325123719, -0.306186652473014, -0.353553392595246,
+ 0.433012832394307, 0.249999816228639, 0.866025391583588}};
+ // clang-format on
+ for (int i = 0; i < 3; ++i) {
+ double results[9];
+ EulerAnglesToRotation<IntrinsicZXZ>(sample_euler[i], results);
+ ASSERT_THAT(results, IsNear3x3Matrix(expected[i]));
+ }
+}
+
+// ZXZ Extrinsic Euler Angle to rotation matrix conversion test from
+// scipy/spatial/transform/test/test_rotation.py
+TEST(EulerAngles, ExtrinsicEulerSequence313ToRotationMatrix) {
+ // clang-format off
+ double expected[][9] =
+ {{0.435595832832961, -0.659739379322704, 0.612372616786097,
+ 0.789149008363071, -0.047367454164077, -0.612372571957297,
+ 0.433012832394307, 0.750000183771249, 0.499999611324802},
+ { 0.625000065470068, -0.649518902838302, 0.433012832394307,
+ 0.649518902838302, 0.124999676794869, -0.750000183771249,
+ 0.433012832394307, 0.750000183771249, 0.499999611324802},
+ {-0.176777132429787, -0.883883325123719, 0.433012832394307,
+ 0.918558558684756, -0.306186652473014, -0.249999816228639,
+ 0.353553418477159, 0.353553392595246, 0.866025391583588}};
+ // clang-format on
+ for (int i = 0; i < 3; ++i) {
+ double results[9];
+ EulerAnglesToRotation<ExtrinsicZXZ>(sample_euler[i], results);
+ ASSERT_THAT(results, IsNear3x3Matrix(expected[i]));
+ }
+}
+
+template <typename T>
+struct GeneralEulerAngles : public ::testing::Test {
+ public:
+ static constexpr bool kIsParityOdd = T::kIsParityOdd;
+ static constexpr bool kIsProperEuler = T::kIsProperEuler;
+ static constexpr bool kIsIntrinsic = T::kIsIntrinsic;
+
+ template <typename URBG>
+ static void RandomEulerAngles(double* euler, URBG& prng) {
+ using ParamType = std::uniform_real_distribution<double>::param_type;
+ std::uniform_real_distribution<double> uniform_distribution{-kPi, kPi};
+ // Euler angles should be in
+    // [-pi,pi) x [0,pi) x [-pi,pi)
+ // if the outer axes are repeated and
+    // [-pi,pi) x [-pi/2,pi/2) x [-pi,pi)
+ // otherwise
+ euler[0] = uniform_distribution(prng);
+ euler[2] = uniform_distribution(prng);
+ if constexpr (kIsProperEuler) {
+ euler[1] = uniform_distribution(prng, ParamType{0, kPi});
+ } else {
+ euler[1] = uniform_distribution(prng, ParamType{-kPi / 2, kPi / 2});
+ }
+ }
+
+ static void CheckPrincipalRotationMatrixProduct(double angles[3]) {
+ // Convert Shoemake's Euler angle convention into 'apparent' rotation axes
+ // sequences, i.e. the alphabetic code (ZYX, ZYZ, etc.) indicates in what
+ // sequence rotations about different axes are applied
+ constexpr int i = T::kAxes[0];
+ constexpr int j = (3 + (kIsParityOdd ? (i - 1) % 3 : (i + 1) % 3)) % 3;
+ constexpr int k = kIsProperEuler ? i : 3 ^ i ^ j;
+ constexpr auto kSeq =
+ kIsIntrinsic ? std::array{k, j, i} : std::array{i, j, k};
+
+ double aa_matrix[9];
+ Eigen::Map<Eigen::Matrix3d, 0, Eigen::Stride<1, 3>> aa(aa_matrix);
+ aa.setIdentity();
+ for (int i = 0; i < 3; ++i) {
+ Eigen::Vector3d angle_axis;
+ if constexpr (kIsIntrinsic) {
+ angle_axis = -angles[i] * Eigen::Vector3d::Unit(kSeq[i]);
+ } else {
+ angle_axis = angles[i] * Eigen::Vector3d::Unit(kSeq[i]);
+ }
+ Eigen::Matrix3d m;
+ AngleAxisToRotationMatrix(angle_axis.data(), m.data());
+ aa = m * aa;
+ }
+ if constexpr (kIsIntrinsic) {
+ aa.transposeInPlace();
+ }
+
+ double ea_matrix[9];
+ EulerAnglesToRotation<T>(angles, ea_matrix);
+
+ EXPECT_THAT(aa_matrix, IsOrthonormal());
+ EXPECT_THAT(ea_matrix, IsOrthonormal());
+ EXPECT_THAT(ea_matrix, IsNear3x3Matrix(aa_matrix));
+ }
+};
+
+using EulerSystemList = ::testing::Types<ExtrinsicXYZ,
+ ExtrinsicXYX,
+ ExtrinsicXZY,
+ ExtrinsicXZX,
+ ExtrinsicYZX,
+ ExtrinsicYZY,
+ ExtrinsicYXZ,
+ ExtrinsicYXY,
+ ExtrinsicZXY,
+ ExtrinsicZXZ,
+ ExtrinsicZYX,
+ ExtrinsicZYZ,
+ IntrinsicZYX,
+ IntrinsicXYX,
+ IntrinsicYZX,
+ IntrinsicXZX,
+ IntrinsicXZY,
+ IntrinsicYZY,
+ IntrinsicZXY,
+ IntrinsicYXY,
+ IntrinsicYXZ,
+ IntrinsicZXZ,
+ IntrinsicXYZ,
+ IntrinsicZYZ>;
+TYPED_TEST_SUITE(GeneralEulerAngles, EulerSystemList);
+
+TYPED_TEST(GeneralEulerAngles, EulerAnglesToRotationMatrixAndBack) {
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution{-1.0, 1.0};
+ for (int i = 0; i < kNumTrials; ++i) {
+ double euler[3];
+ TestFixture::RandomEulerAngles(euler, prng);
+
+ double matrix[9];
+ double round_trip[3];
+ EulerAnglesToRotation<TypeParam>(euler, matrix);
+ ASSERT_THAT(matrix, IsOrthonormal());
+ RotationMatrixToEulerAngles<TypeParam>(matrix, round_trip);
+ for (int j = 0; j < 3; ++j)
+ ASSERT_NEAR(euler[j], round_trip[j], 128.0 * kLooseTolerance);
+ }
+}
+
+// Check that the rotation matrix converted from euler angles is equivalent to
+// product of three principal axis rotation matrices
+// R_euler = R_a2(euler_2) * R_a1(euler_1) * R_a0(euler_0)
+TYPED_TEST(GeneralEulerAngles, PrincipalRotationMatrixProduct) {
+ std::mt19937 prng;
+ double euler[3];
+ for (int i = 0; i < kNumTrials; ++i) {
+ TestFixture::RandomEulerAngles(euler, prng);
+ TestFixture::CheckPrincipalRotationMatrixProduct(euler);
+ }
+}
+
+// Gimbal lock handling test (euler[1] == 0 or pi for proper Euler angles,
+// +/-pi/2 for Tait-Bryan angles). If a rotation matrix
+// represents a gimbal-locked configuration, then converting this rotation
+// matrix to euler angles and back must produce the same rotation matrix.
+//
+// From scipy/spatial/transform/test/test_rotation.py, but additionally covers
+// gimbal lock handling for proper euler angles, which scipy appears to fail to
+// do properly.
+TYPED_TEST(GeneralEulerAngles, GimbalLocked) {
+ constexpr auto kBoundaryAngles = TestFixture::kIsProperEuler
+ ? std::array{0.0, kPi}
+ : std::array{-kPi / 2, kPi / 2};
+ constexpr double gimbal_locked_configurations[4][3] = {
+ {0.78539816, kBoundaryAngles[1], 0.61086524},
+ {0.61086524, kBoundaryAngles[0], 0.34906585},
+ {0.61086524, kBoundaryAngles[1], 0.43633231},
+ {0.43633231, kBoundaryAngles[0], 0.26179939}};
+ double angle_estimates[3];
+ double mat_expected[9];
+ double mat_estimated[9];
+ for (const auto& euler_angles : gimbal_locked_configurations) {
+ EulerAnglesToRotation<TypeParam>(euler_angles, mat_expected);
+ RotationMatrixToEulerAngles<TypeParam>(mat_expected, angle_estimates);
+ EulerAnglesToRotation<TypeParam>(angle_estimates, mat_estimated);
+ ASSERT_THAT(mat_expected, IsNear3x3Matrix(mat_estimated));
+ }
+}
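
For Tait-Bryan sequences the gimbal-locked boundary makes the first and third rotations act about the same effective axis, so only one combination of them (their sum, for intrinsic ZXY at +pi/2) is observable; this is why the test above compares reconstructed matrices rather than the angle triples themselves. A small illustrative fragment in the style of these tests (the specific angle values are arbitrary):

  // Two different intrinsic-ZXY triples with the middle angle at +pi/2 and the
  // same value of euler[0] + euler[2] describe the same rotation.
  double euler_a[3] = {0.7, kPi / 2, 0.2};
  double euler_b[3] = {0.5, kPi / 2, 0.4};
  double R_a[9], R_b[9];
  EulerAnglesToRotation<IntrinsicZXY>(euler_a, R_a);
  EulerAnglesToRotation<IntrinsicZXY>(euler_b, R_b);
  EXPECT_THAT(R_a, IsNear3x3Matrix(R_b));
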
+
// Tests using Jets for specific behavior involving auto differentiation
// near singularity points.
-typedef Jet<double, 3> J3;
-typedef Jet<double, 4> J4;
+using J3 = Jet<double, 3>;
+using J4 = Jet<double, 4>;
namespace {
+// Converts an array of N real numbers (doubles) to an array of jets
+template <int N>
+void ArrayToArrayOfJets(double const* const src, Jet<double, N>* dst) {
+ for (int i = 0; i < N; ++i) {
+ dst[i] = Jet<double, N>(src[i], i);
+ }
+}
+
+// Generically initializes a Jet with type T and an N-dimensional dual part.
+// N is explicitly given (instead of being inferred from sizeof...(Ts)) so that
+// the dual part can be initialized from Eigen expressions.
+template <int N, typename T, typename... Ts>
+Jet<T, N> MakeJet(T a, const T& v0, Ts&&... v) {
+ Jet<T, N> j;
+ j.a = a; // Real part
+ ((j.v << v0), ..., std::forward<Ts>(v)); // Fill dual part with N components
+ return j;
+}
+
J3 MakeJ3(double a, double v0, double v1, double v2) {
J3 j;
j.a = a;
@@ -709,52 +951,57 @@
return j;
}
-bool IsClose(double x, double y) {
- EXPECT_FALSE(IsNaN(x));
- EXPECT_FALSE(IsNaN(y));
- return internal::IsClose(x, y, kTolerance, NULL, NULL);
-}
-
} // namespace
-template <int N>
-bool IsClose(const Jet<double, N>& x, const Jet<double, N>& y) {
- if (!IsClose(x.a, y.a)) {
+// Use EXPECT_THAT(x, testing::Pointwise(JetClose(prec), y)); to compare
+// arrays of Jets element-wise.
+MATCHER_P(JetClose, relative_precision, "") {
+ using internal::IsClose;
+ using LHSJetType = std::remove_reference_t<std::tuple_element_t<0, arg_type>>;
+ using RHSJetType = std::remove_reference_t<std::tuple_element_t<1, arg_type>>;
+
+ constexpr int kDualPartDimension = LHSJetType::DIMENSION;
+ static_assert(
+ kDualPartDimension == RHSJetType::DIMENSION,
+ "Can only compare Jets with dual parts having equal dimensions");
+ auto&& [x, y] = arg;
+ double relative_error;
+ double absolute_error;
+ if (!IsClose(
+ x.a, y.a, relative_precision, &relative_error, &absolute_error)) {
+ *result_listener << "Real part mismatch: x.a = " << x.a
+ << " and y.a = " << y.a
+ << " where the relative error between them is "
+ << relative_error
+ << " and the absolute error between them is "
+ << absolute_error;
return false;
}
- for (int i = 0; i < N; i++) {
- if (!IsClose(x.v[i], y.v[i])) {
+ for (int i = 0; i < kDualPartDimension; i++) {
+ if (!IsClose(x.v[i],
+ y.v[i],
+ relative_precision,
+ &relative_error,
+ &absolute_error)) {
+ *result_listener << "Dual part mismatch: x.v[" << i << "] = " << x.v[i]
+ << " and y.v[" << i << "] = " << y.v[i]
+ << " where the relative error between them is "
+ << relative_error
+ << " and the absolute error between them is "
+ << absolute_error;
return false;
}
}
return true;
}
-template <int M, int N>
-void ExpectJetArraysClose(const Jet<double, N>* x, const Jet<double, N>* y) {
- for (int i = 0; i < M; i++) {
- if (!IsClose(x[i], y[i])) {
- LOG(ERROR) << "Jet " << i << "/" << M << " not equal";
- LOG(ERROR) << "x[" << i << "]: " << x[i];
- LOG(ERROR) << "y[" << i << "]: " << y[i];
- Jet<double, N> d, zero;
- d.a = y[i].a - x[i].a;
- for (int j = 0; j < N; j++) {
- d.v[j] = y[i].v[j] - x[i].v[j];
- }
- LOG(ERROR) << "diff: " << d;
- EXPECT_TRUE(IsClose(x[i], y[i]));
- }
- }
-}
-
// Log-10 of a value well below machine precision.
-static const int kSmallTinyCutoff =
- static_cast<int>(2 * log(numeric_limits<double>::epsilon()) / log(10.0));
+static const int kSmallTinyCutoff = static_cast<int>(
+ 2 * log(std::numeric_limits<double>::epsilon()) / log(10.0));
// Log-10 of a value just below values representable by double.
static const int kTinyZeroLimit =
- static_cast<int>(1 + log(numeric_limits<double>::min()) / log(10.0));
+ static_cast<int>(1 + log(std::numeric_limits<double>::min()) / log(10.0));
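+// For IEEE doubles these evaluate to roughly -31 (epsilon ~2.2e-16, so
+// 2 * log10(epsilon) ~ -31.3) and -306 (min ~2.2e-308, so
+// 1 + log10(min) ~ -306.7) respectively.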
// Test that exact conversion works for small angles when jets are used.
TEST(Rotation, SmallAngleAxisToQuaternionForJets) {
@@ -771,7 +1018,7 @@
MakeJ3(0, 0, 0, sin(theta / 2) / theta),
};
AngleAxisToQuaternion(axis_angle, quaternion);
- ExpectJetArraysClose<4, 3>(quaternion, expected);
+ EXPECT_THAT(quaternion, testing::Pointwise(JetClose(kTolerance), expected));
}
}
@@ -793,7 +1040,7 @@
MakeJ3(0, 0, 0, 0.5),
};
AngleAxisToQuaternion(axis_angle, quaternion);
- ExpectJetArraysClose<4, 3>(quaternion, expected);
+ EXPECT_THAT(quaternion, testing::Pointwise(JetClose(kTolerance), expected));
}
}
@@ -808,7 +1055,7 @@
MakeJ3(0, 0, 0, 0.5),
};
AngleAxisToQuaternion(axis_angle, quaternion);
- ExpectJetArraysClose<4, 3>(quaternion, expected);
+ EXPECT_THAT(quaternion, testing::Pointwise(JetClose(kTolerance), expected));
}
// Test that exact conversion works for small angles.
@@ -829,7 +1076,7 @@
};
// clang-format on
QuaternionToAngleAxis(quaternion, axis_angle);
- ExpectJetArraysClose<3, 4>(axis_angle, expected);
+ EXPECT_THAT(axis_angle, testing::Pointwise(JetClose(kTolerance), expected));
}
}
@@ -854,7 +1101,7 @@
};
// clang-format on
QuaternionToAngleAxis(quaternion, axis_angle);
- ExpectJetArraysClose<3, 4>(axis_angle, expected);
+ EXPECT_THAT(axis_angle, testing::Pointwise(JetClose(kTolerance), expected));
}
}
@@ -868,7 +1115,508 @@
MakeJ4(0, 0, 0, 0, 2.0),
};
QuaternionToAngleAxis(quaternion, axis_angle);
- ExpectJetArraysClose<3, 4>(axis_angle, expected);
+ EXPECT_THAT(axis_angle, testing::Pointwise(JetClose(kTolerance), expected));
+}
+
+// The following 4 test cases cover the conversion of Euler Angles to rotation
+// matrices for Jets
+//
+// The dual parts (with dimension 3) of the resultant matrix of Jets contain the
+// derivative of each matrix element w.r.t. the input Euler Angles. In other
+// words, for each element in R = EulerAnglesToRotationMatrix(angles), we have
+// R_ij.v = jacobian(R_ij, angles)
+//
+// The test data (dual parts of the Jets) is generated by analytically
+// differentiating the formulas for Euler Angle to Rotation Matrix conversion
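+//
+// As a small worked check (hypothetical input): with angles = (a1, 0, 0) the
+// rotation reduces to a pure rotation about Z, so R_00 = cos(a1) and its dual
+// part satisfies R_00.v[0] = d(cos(a1))/d(a1) = -sin(a1).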
+
+// Test ZXY/312 Intrinsic Euler Angles to rotation matrix conversion using Jets
+// The two ZXY test cases specifically cover handling of Tait-Bryan angles
+// i.e. last axis of rotation is different from the first
+TEST(EulerAngles, Intrinsic312EulerSequenceToRotationMatrixForJets) {
+ J3 euler_angles[3];
+ J3 rotation_matrix[9];
+
+ ArrayToArrayOfJets(sample_euler[0], euler_angles);
+ EulerAnglesToRotation<IntrinsicZXY>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3( 0.306186083320, -0.883883627842, -0.176776571821, -0.918558748402), // NOLINT
+ MakeJ3(-0.249999816229, -0.433012359189, 0.433012832394, 0.000000000000), // NOLINT
+ MakeJ3( 0.918558748402, 0.176776777947, 0.176776558880, 0.306186083320), // NOLINT
+ MakeJ3( 0.883883627842, 0.306186083320, 0.306185986727, 0.176776777947), // NOLINT
+ MakeJ3( 0.433012359189, -0.249999816229, -0.750000183771, 0.000000000000), // NOLINT
+ MakeJ3(-0.176776777947, 0.918558748402, -0.306185964313, 0.883883627842), // NOLINT
+ MakeJ3(-0.353553128699, 0.000000000000, 0.612372616786, -0.353553102817), // NOLINT
+ MakeJ3( 0.866025628186, 0.000000000000, 0.499999611325, 0.000000000000), // NOLINT
+ MakeJ3( 0.353553102817, 0.000000000000, -0.612372571957, -0.353553128699) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_euler[1], euler_angles);
+ EulerAnglesToRotation<IntrinsicZXY>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3( 0.533493553520, -0.808012821828, -0.124999913397, -0.808012821828), // NOLINT
+ MakeJ3(-0.249999816229, -0.433012359189, 0.433012832394, 0.000000000000), // NOLINT
+ MakeJ3( 0.808012821828, 0.399519181706, 0.216506188745, 0.533493553520), // NOLINT
+ MakeJ3( 0.808012821828, 0.533493553520, 0.216506188745, 0.399519181706), // NOLINT
+ MakeJ3( 0.433012359189, -0.249999816229, -0.750000183771, 0.000000000000), // NOLINT
+ MakeJ3(-0.399519181706, 0.808012821828, -0.374999697927, 0.808012821828), // NOLINT
+ MakeJ3(-0.249999816229, 0.000000000000, 0.433012832394, -0.433012359189), // NOLINT
+ MakeJ3( 0.866025628186, 0.000000000000, 0.499999611325, 0.000000000000), // NOLINT
+ MakeJ3( 0.433012359189, 0.000000000000, -0.750000183771, -0.249999816229) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_euler[2], euler_angles);
+ EulerAnglesToRotation<IntrinsicZXY>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3( 0.047366781483, -0.659739427619, -0.530330235247, -0.789149143778), // NOLINT
+ MakeJ3(-0.612372449483, -0.612372404654, 0.353553418477, 0.000000000000), // NOLINT
+ MakeJ3( 0.789149143778, -0.435596057906, 0.306185986727, 0.047366781483), // NOLINT
+ MakeJ3( 0.659739427619, 0.047366781483, 0.530330196424, -0.435596057906), // NOLINT
+ MakeJ3( 0.612372404654, -0.612372449483, -0.353553392595, 0.000000000000), // NOLINT
+ MakeJ3( 0.435596057906, 0.789149143778, -0.306185964313, 0.659739427619), // NOLINT
+ MakeJ3(-0.750000183771, 0.000000000000, 0.433012832394, -0.433012359189), // NOLINT
+ MakeJ3( 0.500000021132, 0.000000000000, 0.866025391584, 0.000000000000), // NOLINT
+ MakeJ3( 0.433012359189, 0.000000000000, -0.249999816229, -0.750000183771) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+}
+
+// Test ZXY/312 Extrinsic Euler Angles to rotation matrix conversion using Jets
+TEST(EulerAngles, Extrinsic312EulerSequenceToRotationMatrixForJets) {
+ J3 euler_angles[3];
+ J3 rotation_matrix[9];
+
+ ArrayToArrayOfJets(sample_euler[0], euler_angles);
+ EulerAnglesToRotation<ExtrinsicZXY>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3( 0.918558725988, 0.176776842652, 0.176776571821, -0.306186150563), // NOLINT
+ MakeJ3( 0.176776842652, -0.918558725988, 0.306185986727, 0.883883614902), // NOLINT
+ MakeJ3( 0.353553128699, 0.000000000000, -0.612372616786, 0.353553102817), // NOLINT
+ MakeJ3( 0.249999816229, 0.433012359189, -0.433012832394, 0.000000000000), // NOLINT
+ MakeJ3( 0.433012359189, -0.249999816229, -0.750000183771, 0.000000000000), // NOLINT
+ MakeJ3(-0.866025628186, 0.000000000000, -0.499999611325, 0.000000000000), // NOLINT
+ MakeJ3(-0.306186150563, 0.883883614902, 0.176776558880, -0.918558725988), // NOLINT
+ MakeJ3( 0.883883614902, 0.306186150563, 0.306185964313, -0.176776842652), // NOLINT
+ MakeJ3( 0.353553102817, 0.000000000000, -0.612372571957, -0.353553128699) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_euler[1], euler_angles);
+ EulerAnglesToRotation<ExtrinsicZXY>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3( 0.966506404215, -0.058012606358, 0.124999913397, -0.058012606358), // NOLINT
+ MakeJ3(-0.058012606358, -0.966506404215, 0.216506188745, 0.899519223971), // NOLINT
+ MakeJ3( 0.249999816229, 0.000000000000, -0.433012832394, 0.433012359189), // NOLINT
+ MakeJ3( 0.249999816229, 0.433012359189, -0.433012832394, 0.000000000000), // NOLINT
+ MakeJ3( 0.433012359189, -0.249999816229, -0.750000183771, 0.000000000000), // NOLINT
+ MakeJ3(-0.866025628186, 0.000000000000, -0.499999611325, 0.000000000000), // NOLINT
+ MakeJ3(-0.058012606358, 0.899519223971, 0.216506188745, -0.966506404215), // NOLINT
+ MakeJ3( 0.899519223971, 0.058012606358, 0.374999697927, 0.058012606358), // NOLINT
+ MakeJ3( 0.433012359189, 0.000000000000, -0.750000183771, -0.249999816229) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_euler[2], euler_angles);
+ EulerAnglesToRotation<ExtrinsicZXY>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3( 0.659739424151, -0.047366829780, 0.530330235247, -0.435596000136), // NOLINT
+ MakeJ3(-0.047366829780, -0.659739424151, 0.530330196424, 0.789149175666), // NOLINT
+ MakeJ3( 0.750000183771, 0.000000000000, -0.433012832394, 0.433012359189), // NOLINT
+ MakeJ3( 0.612372449483, 0.612372404654, -0.353553418477, 0.000000000000), // NOLINT
+ MakeJ3( 0.612372404654, -0.612372449483, -0.353553392595, 0.000000000000), // NOLINT
+ MakeJ3(-0.500000021132, 0.000000000000, -0.866025391584, 0.000000000000), // NOLINT
+ MakeJ3(-0.435596000136, 0.789149175666, 0.306185986727, -0.659739424151), // NOLINT
+ MakeJ3( 0.789149175666, 0.435596000136, 0.306185964313, 0.047366829780), // NOLINT
+ MakeJ3( 0.433012359189, 0.000000000000, -0.249999816229, -0.750000183771) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+}
+
+// Test ZXZ/313 Intrinsic Euler Angles to rotation matrix conversion using Jets
+// The two ZXZ test cases specifically cover handling of proper Euler Sequences
+// i.e. last axis of rotation is same as the first
+TEST(EulerAngles, Intrinsic313EulerSequenceToRotationMatrixForJets) {
+ J3 euler_angles[3];
+ J3 rotation_matrix[9];
+
+ ArrayToArrayOfJets(sample_euler[0], euler_angles);
+ EulerAnglesToRotation<IntrinsicZXZ>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3( 0.435595832833, -0.659739379323, 0.306186321334, -0.789149008363), // NOLINT
+ MakeJ3(-0.789149008363, 0.047367454164, 0.306186298920, -0.435595832833), // NOLINT
+ MakeJ3( 0.433012832394, 0.750000183771, 0.249999816229, 0.000000000000), // NOLINT
+ MakeJ3( 0.659739379323, 0.435595832833, -0.530330235247, -0.047367454164), // NOLINT
+ MakeJ3(-0.047367454164, -0.789149008363, -0.530330196424, -0.659739379323), // NOLINT
+ MakeJ3(-0.750000183771, 0.433012832394, -0.433012359189, 0.000000000000), // NOLINT
+ MakeJ3( 0.612372616786, 0.000000000000, 0.353553128699, 0.612372571957), // NOLINT
+ MakeJ3( 0.612372571957, 0.000000000000, 0.353553102817, -0.612372616786), // NOLINT
+ MakeJ3( 0.499999611325, 0.000000000000, -0.866025628186, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_euler[1], euler_angles);
+ EulerAnglesToRotation<IntrinsicZXZ>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3( 0.625000065470, -0.649518902838, 0.216506425348, -0.649518902838), // NOLINT
+ MakeJ3(-0.649518902838, -0.124999676795, 0.375000107735, -0.625000065470), // NOLINT
+ MakeJ3( 0.433012832394, 0.750000183771, 0.249999816229, 0.000000000000), // NOLINT
+ MakeJ3( 0.649518902838, 0.625000065470, -0.375000107735, 0.124999676795), // NOLINT
+ MakeJ3( 0.124999676795, -0.649518902838, -0.649519202838, -0.649518902838), // NOLINT
+ MakeJ3(-0.750000183771, 0.433012832394, -0.433012359189, 0.000000000000), // NOLINT
+ MakeJ3( 0.433012832394, 0.000000000000, 0.249999816229, 0.750000183771), // NOLINT
+ MakeJ3( 0.750000183771, 0.000000000000, 0.433012359189, -0.433012832394), // NOLINT
+ MakeJ3( 0.499999611325, 0.000000000000, -0.866025628186, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_euler[2], euler_angles);
+ EulerAnglesToRotation<IntrinsicZXZ>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3(-0.176777132430, -0.883883325124, 0.306186321334, -0.918558558685), // NOLINT
+ MakeJ3(-0.918558558685, 0.306186652473, 0.176776571821, 0.176777132430), // NOLINT
+ MakeJ3( 0.353553418477, 0.353553392595, 0.612372449483, 0.000000000000), // NOLINT
+ MakeJ3( 0.883883325124, -0.176777132430, -0.306186298920, -0.306186652473), // NOLINT
+ MakeJ3(-0.306186652473, -0.918558558685, -0.176776558880, -0.883883325124), // NOLINT
+ MakeJ3(-0.353553392595, 0.353553418477, -0.612372404654, 0.000000000000), // NOLINT
+ MakeJ3( 0.433012832394, 0.000000000000, 0.750000183771, 0.249999816229), // NOLINT
+ MakeJ3( 0.249999816229, 0.000000000000, 0.433012359189, -0.433012832394), // NOLINT
+ MakeJ3( 0.866025391584, 0.000000000000, -0.500000021132, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+}
+
+// Test ZXZ/313 Extrinsic Euler Angles to rotation matrix conversion using Jets
+TEST(EulerAngles, Extrinsic313EulerSequenceToRotationMatrixForJets) {
+ J3 euler_angles[3];
+ J3 rotation_matrix[9];
+
+ ArrayToArrayOfJets(sample_euler[0], euler_angles);
+ EulerAnglesToRotation<ExtrinsicZXZ>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3( 0.435595832833, -0.659739379323, 0.306186321334, -0.789149008363), // NOLINT
+ MakeJ3(-0.659739379323, -0.435595832833, 0.530330235247, 0.047367454164), // NOLINT
+ MakeJ3( 0.612372616786, 0.000000000000, 0.353553128699, 0.612372571957), // NOLINT
+ MakeJ3( 0.789149008363, -0.047367454164, -0.306186298920, 0.435595832833), // NOLINT
+ MakeJ3(-0.047367454164, -0.789149008363, -0.530330196424, -0.659739379323), // NOLINT
+ MakeJ3(-0.612372571957, 0.000000000000, -0.353553102817, 0.612372616786), // NOLINT
+ MakeJ3( 0.433012832394, 0.750000183771, 0.249999816229, 0.000000000000), // NOLINT
+ MakeJ3( 0.750000183771, -0.433012832394, 0.433012359189, 0.000000000000), // NOLINT
+ MakeJ3( 0.499999611325, 0.000000000000, -0.866025628186, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_euler[1], euler_angles);
+ EulerAnglesToRotation<ExtrinsicZXZ>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3( 0.625000065470, -0.649518902838, 0.216506425348, -0.649518902838), // NOLINT
+ MakeJ3(-0.649518902838, -0.625000065470, 0.375000107735, -0.124999676795), // NOLINT
+ MakeJ3( 0.433012832394, 0.000000000000, 0.249999816229, 0.750000183771), // NOLINT
+ MakeJ3( 0.649518902838, 0.124999676795, -0.375000107735, 0.625000065470), // NOLINT
+ MakeJ3( 0.124999676795, -0.649518902838, -0.649519202838, -0.649518902838), // NOLINT
+ MakeJ3(-0.750000183771, 0.000000000000, -0.433012359189, 0.433012832394), // NOLINT
+ MakeJ3( 0.433012832394, 0.750000183771, 0.249999816229, 0.000000000000), // NOLINT
+ MakeJ3( 0.750000183771, -0.433012832394, 0.433012359189, 0.000000000000), // NOLINT
+ MakeJ3( 0.499999611325, 0.000000000000, -0.866025628186, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_euler[2], euler_angles);
+ EulerAnglesToRotation<ExtrinsicZXZ>(euler_angles, rotation_matrix);
+ {
+ // clang-format off
+ const J3 expected[] = {
+ MakeJ3(-0.176777132430, -0.883883325124, 0.306186321334, -0.918558558685), // NOLINT
+ MakeJ3(-0.883883325124, 0.176777132430, 0.306186298920, 0.306186652473), // NOLINT
+ MakeJ3( 0.433012832394, 0.000000000000, 0.750000183771, 0.249999816229), // NOLINT
+ MakeJ3( 0.918558558685, -0.306186652473, -0.176776571821, -0.176777132430), // NOLINT
+ MakeJ3(-0.306186652473, -0.918558558685, -0.176776558880, -0.883883325124), // NOLINT
+ MakeJ3(-0.249999816229, 0.000000000000, -0.433012359189, 0.433012832394), // NOLINT
+ MakeJ3( 0.353553418477, 0.353553392595, 0.612372449483, 0.000000000000), // NOLINT
+ MakeJ3( 0.353553392595, -0.353553418477, 0.612372404654, 0.000000000000), // NOLINT
+ MakeJ3( 0.866025391584, 0.000000000000, -0.500000021132, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(rotation_matrix,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+}
+
+using J9 = Jet<double, 9>;
+
+// The following 4 test cases cover the conversion of rotation matrices to
+// Euler Angles for Jets.
+//
+// The dual parts (with dimension 9) of the resultant array of Jets contain the
+// derivative of each Euler angle w.r.t. each of the 9 elements of the rotation
+// matrix, or a 9-by-1 array formed from flattening the rotation matrix. In
+// other words, for each element in angles = RotationMatrixToEulerAngles(R), we
+// have angles.v = jacobian(angles, [R11 R12 R13 R21 ... R32 R33]);
+//
+// Note: the order of elements in v depends on whether the rotation matrix is
+// flattened row-wise or column-wise
+//
+// The test data (dual parts of the Jets) is generated by analytically
+// differentiating the formulas for Rotation Matrix to Euler Angle conversion
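+//
+// With the row-major flattening above, angles[k].v[3 * i + j] holds
+// d(angles[k]) / dR_(i+1)(j+1); e.g. v[1] is the derivative with respect to
+// R12.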
+
+// clang-format off
+static double sample_matrices[][9] = {
+ { 0.433012359189, 0.176776842652, 0.883883614902, 0.249999816229, 0.918558725988, -0.306186150563, -0.866025628186, 0.353553128699, 0.353553102817}, // NOLINT
+ { 0.433012359189, -0.058012606358, 0.899519223971, 0.249999816229, 0.966506404215, -0.058012606358, -0.866025628186, 0.249999816229, 0.433012359189}, // NOLINT
+ { 0.612372404654, -0.047366829780, 0.789149175666, 0.612372449483, 0.659739424151, -0.435596000136, -0.500000021132, 0.750000183771, 0.433012359189} // NOLINT
+};
+// clang-format on
+
+// Test rotation matrix to ZXY/312 Intrinsic Euler Angles conversion using Jets
+// The two ZXY test cases specifically cover handling of Tait-Bryan angles
+// i.e. last axis of rotation is different from the first
+TEST(EulerAngles, RotationMatrixToIntrinsic312EulerSequenceForJets) {
+ J9 euler_angles[3];
+ J9 rotation_matrix[9];
+
+ ArrayToArrayOfJets(sample_matrices[0], rotation_matrix);
+ RotationMatrixToEulerAngles<IntrinsicZXY>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>(-0.190125743401, 0.000000000000, -1.049781178951, 0.000000000000, 0.000000000000, 0.202030634558, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000), // NOLINT
+ MakeJet<9>( 0.361366843930, 0.000000000000, -0.066815309609, 0.000000000000, 0.000000000000, -0.347182270882, 0.000000000000, 0.000000000000, 0.935414445680, 0.000000000000), // NOLINT
+ MakeJet<9>( 1.183200015636, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.404060603418, 0.000000000000, -0.989743365598) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_matrices[1], rotation_matrix);
+ RotationMatrixToEulerAngles<IntrinsicZXY>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>( 0.059951064811, 0.000000000000, -1.030940063452, 0.000000000000, 0.000000000000, -0.061880107384, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000), // NOLINT
+ MakeJet<9>( 0.252680065344, 0.000000000000, 0.014978778808, 0.000000000000, 0.000000000000, -0.249550684831, 0.000000000000, 0.000000000000, 0.968245884001, 0.000000000000), // NOLINT
+ MakeJet<9>( 1.107149138016, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.461879804532, 0.000000000000, -0.923760579526) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_matrices[2], rotation_matrix);
+ RotationMatrixToEulerAngles<IntrinsicZXY>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>( 0.071673287221, 0.000000000000, -1.507976776767, 0.000000000000, 0.000000000000, -0.108267107713, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000), // NOLINT
+ MakeJet<9>( 0.848062356818, 0.000000000000, 0.053708966648, 0.000000000000, 0.000000000000, -0.748074610289, 0.000000000000, 0.000000000000, 0.661437619389, 0.000000000000), // NOLINT
+ MakeJet<9>( 0.857072360427, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.989743158900, 0.000000000000, -1.142857911244) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+}
+
+// Test rotation matrix to ZXY/312 Extrinsic Euler Angles conversion using Jets
+TEST(EulerAngles, RotationMatrixToExtrinsic312EulerSequenceForJets) {
+ J9 euler_angles[3];
+ J9 rotation_matrix[9];
+
+ ArrayToArrayOfJets(sample_matrices[0], rotation_matrix);
+ RotationMatrixToEulerAngles<ExtrinsicZXY>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>( 0.265728912717, 0.000000000000, 0.000000000000, 0.000000000000, 1.013581996386, -0.275861853641, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000), // NOLINT
+ MakeJet<9>( 0.311184173598, 0.000000000000, 0.000000000000, -0.284286741927, 0.000000000000, 0.000000000000, -0.951971659874, 0.000000000000, 0.000000000000, -0.113714586405), // NOLINT
+ MakeJet<9>( 1.190290284357, 0.000000000000, 0.000000000000, 0.390127543992, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.975319806582) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_matrices[1], rotation_matrix);
+ RotationMatrixToEulerAngles<ExtrinsicZXY>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>( 0.253115668605, 0.000000000000, 0.000000000000, 0.000000000000, 0.969770129215, -0.250844022378, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000), // NOLINT
+ MakeJet<9>( 0.058045195612, 0.000000000000, 0.000000000000, -0.052271487648, 0.000000000000, 0.000000000000, -0.998315850572, 0.000000000000, 0.000000000000, -0.025162553041), // NOLINT
+ MakeJet<9>( 1.122153748896, 0.000000000000, 0.000000000000, 0.434474567050, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.902556744846) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_matrices[2], rotation_matrix);
+ RotationMatrixToEulerAngles<ExtrinsicZXY>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>( 0.748180444286, 0.000000000000, 0.000000000000, 0.000000000000, 0.814235652244, -0.755776390750, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000), // NOLINT
+ MakeJet<9>( 0.450700288478, 0.000000000000, 0.000000000000, -0.381884322045, 0.000000000000, 0.000000000000, -0.900142280234, 0.000000000000, 0.000000000000, -0.209542930950), // NOLINT
+ MakeJet<9>( 1.068945699497, 0.000000000000, 0.000000000000, 0.534414175972, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.973950275281) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+}
+
+// Test rotation matrix to ZXZ/313 Intrinsic Euler Angles conversion using Jets
+// The two ZXZ test cases specifically cover handling of proper Euler Sequences
+// i.e. last axis of rotation is same as the first
+TEST(EulerAngles, RotationMatrixToIntrinsic313EulerSequenceForJets) {
+ J9 euler_angles[3];
+ J9 rotation_matrix[9];
+
+ ArrayToArrayOfJets(sample_matrices[0], rotation_matrix);
+ RotationMatrixToEulerAngles<IntrinsicZXZ>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>( 1.237323270947, 0.000000000000, 0.000000000000, 0.349926947837, 0.000000000000, 0.000000000000, 1.010152467826, 0.000000000000, 0.000000000000, 0.000000000000), // NOLINT
+ MakeJet<9>( 1.209429510533, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.327326615680, 0.133630397662, -0.935414455462), // NOLINT
+ MakeJet<9>(-1.183199990019, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.404060624546, 0.989743344897, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_matrices[1], rotation_matrix);
+ RotationMatrixToEulerAngles<IntrinsicZXZ>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>( 1.506392616830, 0.000000000000, 0.000000000000, 0.071400104821, 0.000000000000, 0.000000000000, 1.107100178948, 0.000000000000, 0.000000000000, 0.000000000000), // NOLINT
+ MakeJet<9>( 1.122964310061, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.416024849727, 0.120095910090, -0.901387983495), // NOLINT
+ MakeJet<9>(-1.289761690216, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.307691969119, 1.065877306886, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_matrices[2], rotation_matrix);
+ RotationMatrixToEulerAngles<IntrinsicZXZ>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>( 1.066432836578, 0.000000000000, 0.000000000000, 0.536117958181, 0.000000000000, 0.000000000000, 0.971260169116, 0.000000000000, 0.000000000000, 0.000000000000), // NOLINT
+ MakeJet<9>( 1.122964310061, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.240192006893, 0.360288083393, -0.901387983495), // NOLINT
+ MakeJet<9>(-0.588002509965, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.923076812076, 0.615384416607, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+}
+
+// Test rotation matrix to ZXZ/313 Extrinsic Euler Angles conversion using Jets
+TEST(EulerAngles, RotationMatrixToExtrinsic313EulerSequenceForJets) {
+ J9 euler_angles[3];
+ J9 rotation_matrix[9];
+
+ ArrayToArrayOfJets(sample_matrices[0], rotation_matrix);
+ RotationMatrixToEulerAngles<ExtrinsicZXZ>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>(-1.183199990019, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.404060624546, 0.989743344897, 0.000000000000), // NOLINT
+ MakeJet<9>( 1.209429510533, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.327326615680, 0.133630397662, -0.935414455462), // NOLINT
+ MakeJet<9>( 1.237323270947, 0.000000000000, 0.000000000000, 0.349926947837, 0.000000000000, 0.000000000000, 1.010152467826, 0.000000000000, 0.000000000000, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_matrices[1], rotation_matrix);
+ RotationMatrixToEulerAngles<ExtrinsicZXZ>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>(-1.289761690216, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.307691969119, 1.065877306886, 0.000000000000), // NOLINT
+ MakeJet<9>( 1.122964310061, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.416024849727, 0.120095910090, -0.901387983495), // NOLINT
+ MakeJet<9>( 1.506392616830, 0.000000000000, 0.000000000000, 0.071400104821, 0.000000000000, 0.000000000000, 1.107100178948, 0.000000000000, 0.000000000000, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
+
+ ArrayToArrayOfJets(sample_matrices[2], rotation_matrix);
+ RotationMatrixToEulerAngles<ExtrinsicZXZ>(rotation_matrix, euler_angles);
+ {
+ // clang-format off
+ const J9 expected[] = {
+ MakeJet<9>(-0.588002509965, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.923076812076, 0.615384416607, 0.000000000000), // NOLINT
+ MakeJet<9>( 1.122964310061, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, 0.000000000000, -0.240192006893, 0.360288083393, -0.901387983495), // NOLINT
+ MakeJet<9>( 1.066432836578, 0.000000000000, 0.000000000000, 0.536117958181, 0.000000000000, 0.000000000000, 0.971260169116, 0.000000000000, 0.000000000000, 0.000000000000) // NOLINT
+ };
+ // clang-format on
+ EXPECT_THAT(euler_angles,
+ testing::Pointwise(JetClose(kLooseTolerance), expected));
+ }
}
TEST(Quaternion, RotatePointGivesSameAnswerAsRotationByMatrixCanned) {
@@ -929,13 +1677,15 @@
// Verify that (a * b) * c == a * (b * c).
TEST(Quaternion, MultiplicationIsAssociative) {
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution{-1.0, 1.0};
double a[4];
double b[4];
double c[4];
for (int i = 0; i < 4; ++i) {
- a[i] = 2 * RandDouble() - 1;
- b[i] = 2 * RandDouble() - 1;
- c[i] = 2 * RandDouble() - 1;
+ a[i] = uniform_distribution(prng);
+ b[i] = uniform_distribution(prng);
+ c[i] = uniform_distribution(prng);
}
double ab[4];
@@ -955,6 +1705,8 @@
}
TEST(AngleAxis, RotatePointGivesSameAnswerAsRotationMatrix) {
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution{-1.0, 1.0};
double angle_axis[3];
double R[9];
double p[3];
@@ -964,16 +1716,15 @@
for (int i = 0; i < 10000; ++i) {
double theta = (2.0 * i * 0.0011 - 1.0) * kPi;
for (int j = 0; j < 50; ++j) {
- double norm2 = 0.0;
for (int k = 0; k < 3; ++k) {
- angle_axis[k] = 2.0 * RandDouble() - 1.0;
- p[k] = 2.0 * RandDouble() - 1.0;
- norm2 = angle_axis[k] * angle_axis[k];
+ angle_axis[k] = uniform_distribution(prng);
+ p[k] = uniform_distribution(prng);
}
- const double inv_norm = theta / sqrt(norm2);
- for (int k = 0; k < 3; ++k) {
- angle_axis[k] *= inv_norm;
+ const double inv_norm =
+ theta / std::hypot(angle_axis[0], angle_axis[1], angle_axis[2]);
+ for (double& angle_axi : angle_axis) {
+ angle_axi *= inv_norm;
}
AngleAxisToRotationMatrix(angle_axis, R);
@@ -998,7 +1749,22 @@
}
}
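+// Rotating a point by an identity quaternion of Jets should leave the point
+// unchanged and, in particular, must not introduce NaNs in the derivative
+// (dual) parts.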
+TEST(Quaternion, UnitQuaternion) {
+ using Jet = ceres::Jet<double, 4>;
+ std::array<Jet, 4> quaternion = {
+ Jet(1.0, 0), Jet(0.0, 1), Jet(0.0, 2), Jet(0.0, 3)};
+ std::array<Jet, 3> point = {Jet(0.0), Jet(0.0), Jet(0.0)};
+ std::array<Jet, 3> rotated_point;
+ QuaternionRotatePoint(quaternion.data(), point.data(), rotated_point.data());
+ for (int i = 0; i < 3; ++i) {
+ EXPECT_EQ(rotated_point[i], point[i]);
+ EXPECT_FALSE(rotated_point[i].v.array().isNaN().any());
+ }
+}
+
TEST(AngleAxis, NearZeroRotatePointGivesSameAnswerAsRotationMatrix) {
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution{-1.0, 1.0};
double angle_axis[3];
double R[9];
double p[3];
@@ -1008,15 +1774,15 @@
for (int i = 0; i < 10000; ++i) {
double norm2 = 0.0;
for (int k = 0; k < 3; ++k) {
- angle_axis[k] = 2.0 * RandDouble() - 1.0;
- p[k] = 2.0 * RandDouble() - 1.0;
+ angle_axis[k] = uniform_distribution(prng);
+ p[k] = uniform_distribution(prng);
norm2 = angle_axis[k] * angle_axis[k];
}
double theta = (2.0 * i * 0.0001 - 1.0) * 1e-16;
const double inv_norm = theta / sqrt(norm2);
- for (int k = 0; k < 3; ++k) {
- angle_axis[k] *= inv_norm;
+ for (double& angle_axi : angle_axis) {
+ angle_axi *= inv_norm;
}
AngleAxisToRotationMatrix(angle_axis, R);
@@ -1134,7 +1900,12 @@
}
TEST(RotationMatrixToAngleAxis, ExhaustiveRoundTrip) {
- const double kMaxSmallAngle = 1e-8;
+ constexpr double kMaxSmallAngle = 1e-8;
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> uniform_distribution1{
+ kPi - kMaxSmallAngle, kPi};
+ std::uniform_real_distribution<double> uniform_distribution2{
+ -1.0, 2.0 * kMaxSmallAngle - 1.0};
const int kNumSteps = 1000;
for (int i = 0; i < kNumSteps; ++i) {
const double theta = static_cast<double>(i) / kNumSteps * 2.0 * kPi;
@@ -1144,10 +1915,10 @@
CheckRotationMatrixToAngleAxisRoundTrip(theta, phi, kPi);
// Rotation of angle approximately Pi.
CheckRotationMatrixToAngleAxisRoundTrip(
- theta, phi, kPi - kMaxSmallAngle * RandDouble());
+ theta, phi, uniform_distribution1(prng));
// Rotations of angle approximately zero.
CheckRotationMatrixToAngleAxisRoundTrip(
- theta, phi, kMaxSmallAngle * 2.0 * RandDouble() - 1.0);
+ theta, phi, uniform_distribution2(prng));
}
}
}
diff --git a/internal/ceres/schur_complement_solver.cc b/internal/ceres/schur_complement_solver.cc
index 65e7854..e113040 100644
--- a/internal/ceres/schur_complement_solver.cc
+++ b/internal/ceres/schur_complement_solver.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,6 +34,7 @@
#include <ctime>
#include <memory>
#include <set>
+#include <utility>
#include <vector>
#include "Eigen/Dense"
@@ -46,75 +47,56 @@
#include "ceres/conjugate_gradients_solver.h"
#include "ceres/detect_structure.h"
#include "ceres/internal/eigen.h"
-#include "ceres/lapack.h"
#include "ceres/linear_solver.h"
#include "ceres/sparse_cholesky.h"
#include "ceres/triplet_sparse_matrix.h"
#include "ceres/types.h"
#include "ceres/wall_time.h"
-namespace ceres {
-namespace internal {
-
-using std::make_pair;
-using std::pair;
-using std::set;
-using std::vector;
-
+namespace ceres::internal {
namespace {
-class BlockRandomAccessSparseMatrixAdapter : public LinearOperator {
+class BlockRandomAccessSparseMatrixAdapter final
+ : public ConjugateGradientsLinearOperator<Vector> {
public:
explicit BlockRandomAccessSparseMatrixAdapter(
const BlockRandomAccessSparseMatrix& m)
: m_(m) {}
- virtual ~BlockRandomAccessSparseMatrixAdapter() {}
-
- // y = y + Ax;
- void RightMultiply(const double* x, double* y) const final {
- m_.SymmetricRightMultiply(x, y);
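+  // y = y + Ax;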
+ void RightMultiplyAndAccumulate(const Vector& x, Vector& y) final {
+ m_.SymmetricRightMultiplyAndAccumulate(x.data(), y.data());
}
- // y = y + A'x;
- void LeftMultiply(const double* x, double* y) const final {
- m_.SymmetricRightMultiply(x, y);
- }
-
- int num_rows() const final { return m_.num_rows(); }
- int num_cols() const final { return m_.num_rows(); }
-
private:
const BlockRandomAccessSparseMatrix& m_;
};
-class BlockRandomAccessDiagonalMatrixAdapter : public LinearOperator {
+class BlockRandomAccessDiagonalMatrixAdapter final
+ : public ConjugateGradientsLinearOperator<Vector> {
public:
explicit BlockRandomAccessDiagonalMatrixAdapter(
const BlockRandomAccessDiagonalMatrix& m)
: m_(m) {}
- virtual ~BlockRandomAccessDiagonalMatrixAdapter() {}
-
// y = y + Ax;
- void RightMultiply(const double* x, double* y) const final {
- m_.RightMultiply(x, y);
+ void RightMultiplyAndAccumulate(const Vector& x, Vector& y) final {
+ m_.RightMultiplyAndAccumulate(x.data(), y.data());
}
- // y = y + A'x;
- void LeftMultiply(const double* x, double* y) const final {
- m_.RightMultiply(x, y);
- }
-
- int num_rows() const final { return m_.num_rows(); }
- int num_cols() const final { return m_.num_rows(); }
-
private:
const BlockRandomAccessDiagonalMatrix& m_;
};
} // namespace
+SchurComplementSolver::SchurComplementSolver(
+ const LinearSolver::Options& options)
+ : options_(options) {
+ CHECK_GT(options.elimination_groups.size(), 1);
+ CHECK_GT(options.elimination_groups[0], 0);
+ CHECK(options.context != nullptr);
+}
+
LinearSolver::Summary SchurComplementSolver::SolveImpl(
BlockSparseMatrix* A,
const double* b,
@@ -123,7 +105,7 @@
EventLogger event_logger("SchurComplementSolver::Solve");
const CompressedRowBlockStructure* bs = A->block_structure();
- if (eliminator_.get() == NULL) {
+ if (eliminator_ == nullptr) {
const int num_eliminate_blocks = options_.elimination_groups[0];
const int num_f_blocks = bs->cols.size() - num_eliminate_blocks;
@@ -141,9 +123,9 @@
// mechanism that does not cause binary bloat.
if (options_.row_block_size == 2 && options_.e_block_size == 3 &&
options_.f_block_size == 6 && num_f_blocks == 1) {
- eliminator_.reset(new SchurEliminatorForOneFBlock<2, 3, 6>);
+ eliminator_ = std::make_unique<SchurEliminatorForOneFBlock<2, 3, 6>>();
} else {
- eliminator_.reset(SchurEliminatorBase::Create(options_));
+ eliminator_ = SchurEliminatorBase::Create(options_);
}
CHECK(eliminator_);
@@ -158,7 +140,7 @@
b,
per_solve_options.D,
lhs_.get(),
- rhs_.get());
+ rhs_.data());
event_logger.AddEvent("Eliminate");
double* reduced_solution = x + A->num_cols() - lhs_->num_cols();
@@ -166,7 +148,7 @@
SolveReducedLinearSystem(per_solve_options, reduced_solution);
event_logger.AddEvent("ReducedSolve");
- if (summary.termination_type == LINEAR_SOLVER_SUCCESS) {
+ if (summary.termination_type == LinearSolverTerminationType::SUCCESS) {
eliminator_->BackSubstitute(
BlockSparseMatrixData(*A), b, per_solve_options.D, reduced_solution, x);
event_logger.AddEvent("BackSubstitute");
@@ -174,6 +156,12 @@
return summary;
}
+DenseSchurComplementSolver::DenseSchurComplementSolver(
+ const LinearSolver::Options& options)
+ : SchurComplementSolver(options),
+ cholesky_(DenseCholesky::Create(options)) {}
+
+DenseSchurComplementSolver::~DenseSchurComplementSolver() = default;
// Initialize a BlockRandomAccessDenseMatrix to store the Schur
// complement.
@@ -181,28 +169,24 @@
const CompressedRowBlockStructure* bs) {
const int num_eliminate_blocks = options().elimination_groups[0];
const int num_col_blocks = bs->cols.size();
-
- vector<int> blocks(num_col_blocks - num_eliminate_blocks, 0);
- for (int i = num_eliminate_blocks, j = 0; i < num_col_blocks; ++i, ++j) {
- blocks[j] = bs->cols[i].size;
- }
-
- set_lhs(new BlockRandomAccessDenseMatrix(blocks));
- set_rhs(new double[lhs()->num_rows()]);
+ auto blocks = Tail(bs->cols, num_col_blocks - num_eliminate_blocks);
+ set_lhs(std::make_unique<BlockRandomAccessDenseMatrix>(
+ blocks, options().context, options().num_threads));
+ ResizeRhs(lhs()->num_rows());
}
// Solve the system Sx = r, assuming that the matrix S is stored in a
// BlockRandomAccessDenseMatrix. The linear system is solved using
// Eigen's Cholesky factorization.
LinearSolver::Summary DenseSchurComplementSolver::SolveReducedLinearSystem(
- const LinearSolver::PerSolveOptions& per_solve_options, double* solution) {
+ const LinearSolver::PerSolveOptions& /*per_solve_options*/,
+ double* solution) {
LinearSolver::Summary summary;
summary.num_iterations = 0;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
summary.message = "Success.";
- const BlockRandomAccessDenseMatrix* m =
- down_cast<const BlockRandomAccessDenseMatrix*>(lhs());
+ auto* m = down_cast<BlockRandomAccessDenseMatrix*>(mutable_lhs());
const int num_rows = m->num_rows();
// The case where there are no f blocks, and the system is block
@@ -212,26 +196,8 @@
}
summary.num_iterations = 1;
-
- if (options().dense_linear_algebra_library_type == EIGEN) {
- Eigen::LLT<Matrix, Eigen::Upper> llt =
- ConstMatrixRef(m->values(), num_rows, num_rows)
- .selfadjointView<Eigen::Upper>()
- .llt();
- if (llt.info() != Eigen::Success) {
- summary.termination_type = LINEAR_SOLVER_FAILURE;
- summary.message =
- "Eigen failure. Unable to perform dense Cholesky factorization.";
- return summary;
- }
-
- VectorRef(solution, num_rows) = llt.solve(ConstVectorRef(rhs(), num_rows));
- } else {
- VectorRef(solution, num_rows) = ConstVectorRef(rhs(), num_rows);
- summary.termination_type = LAPACK::SolveInPlaceUsingCholesky(
- num_rows, m->values(), solution, &summary.message);
- }
-
+ summary.termination_type = cholesky_->FactorAndSolve(
+ num_rows, m->mutable_values(), rhs().data(), solution, &summary.message);
return summary;
}
@@ -243,7 +209,14 @@
}
}
-SparseSchurComplementSolver::~SparseSchurComplementSolver() {}
+SparseSchurComplementSolver::~SparseSchurComplementSolver() {
+ for (int i = 0; i < 4; ++i) {
+ if (scratch_[i]) {
+ delete scratch_[i];
+ scratch_[i] = nullptr;
+ }
+ }
+}
// Determine the non-zero blocks in the Schur Complement matrix, and
// initialize a BlockRandomAccessSparseMatrix object.
@@ -253,14 +226,11 @@
const int num_col_blocks = bs->cols.size();
const int num_row_blocks = bs->rows.size();
- blocks_.resize(num_col_blocks - num_eliminate_blocks, 0);
- for (int i = num_eliminate_blocks; i < num_col_blocks; ++i) {
- blocks_[i - num_eliminate_blocks] = bs->cols[i].size;
- }
+ blocks_ = Tail(bs->cols, num_col_blocks - num_eliminate_blocks);
- set<pair<int, int>> block_pairs;
+ std::set<std::pair<int, int>> block_pairs;
for (int i = 0; i < blocks_.size(); ++i) {
- block_pairs.insert(make_pair(i, i));
+ block_pairs.emplace(i, i);
}
int r = 0;
@@ -269,7 +239,7 @@
if (e_block_id >= num_eliminate_blocks) {
break;
}
- vector<int> f_blocks;
+ std::vector<int> f_blocks;
// Add to the chunk until the first block in the row is
// different than the one in the first row for the chunk.
@@ -287,11 +257,12 @@
}
}
- sort(f_blocks.begin(), f_blocks.end());
- f_blocks.erase(unique(f_blocks.begin(), f_blocks.end()), f_blocks.end());
+ std::sort(f_blocks.begin(), f_blocks.end());
+ f_blocks.erase(std::unique(f_blocks.begin(), f_blocks.end()),
+ f_blocks.end());
for (int i = 0; i < f_blocks.size(); ++i) {
for (int j = i + 1; j < f_blocks.size(); ++j) {
- block_pairs.insert(make_pair(f_blocks[i], f_blocks[j]));
+ block_pairs.emplace(f_blocks[i], f_blocks[j]);
}
}
}
@@ -303,17 +274,18 @@
CHECK_GE(row.cells.front().block_id, num_eliminate_blocks);
for (int i = 0; i < row.cells.size(); ++i) {
int r_block1_id = row.cells[i].block_id - num_eliminate_blocks;
- for (int j = 0; j < row.cells.size(); ++j) {
- int r_block2_id = row.cells[j].block_id - num_eliminate_blocks;
+ for (const auto& cell : row.cells) {
+ int r_block2_id = cell.block_id - num_eliminate_blocks;
if (r_block1_id <= r_block2_id) {
- block_pairs.insert(make_pair(r_block1_id, r_block2_id));
+ block_pairs.emplace(r_block1_id, r_block2_id);
}
}
}
}
- set_lhs(new BlockRandomAccessSparseMatrix(blocks_, block_pairs));
- set_rhs(new double[lhs()->num_rows()]);
+ set_lhs(std::make_unique<BlockRandomAccessSparseMatrix>(
+ blocks_, block_pairs, options().context, options().num_threads));
+ ResizeRhs(lhs()->num_rows());
}
LinearSolver::Summary SparseSchurComplementSolver::SolveReducedLinearSystem(
@@ -325,33 +297,39 @@
LinearSolver::Summary summary;
summary.num_iterations = 0;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
summary.message = "Success.";
- const TripletSparseMatrix* tsm =
+ const BlockSparseMatrix* bsm =
down_cast<const BlockRandomAccessSparseMatrix*>(lhs())->matrix();
- if (tsm->num_rows() == 0) {
+ if (bsm->num_rows() == 0) {
return summary;
}
- std::unique_ptr<CompressedRowSparseMatrix> lhs;
const CompressedRowSparseMatrix::StorageType storage_type =
sparse_cholesky_->StorageType();
- if (storage_type == CompressedRowSparseMatrix::UPPER_TRIANGULAR) {
- lhs.reset(CompressedRowSparseMatrix::FromTripletSparseMatrix(*tsm));
- lhs->set_storage_type(CompressedRowSparseMatrix::UPPER_TRIANGULAR);
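+  // The sparsity structure of the Schur complement is fixed across calls, so
+  // the compressed row sparse matrix is created once and only its values are
+  // updated on subsequent solves.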
+ if (storage_type ==
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR) {
+ if (!crs_lhs_) {
+ crs_lhs_ = bsm->ToCompressedRowSparseMatrix();
+ crs_lhs_->set_storage_type(
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR);
+ } else {
+ bsm->UpdateCompressedRowSparseMatrix(crs_lhs_.get());
+ }
} else {
- lhs.reset(
- CompressedRowSparseMatrix::FromTripletSparseMatrixTransposed(*tsm));
- lhs->set_storage_type(CompressedRowSparseMatrix::LOWER_TRIANGULAR);
+ if (!crs_lhs_) {
+ crs_lhs_ = bsm->ToCompressedRowSparseMatrixTranspose();
+ crs_lhs_->set_storage_type(
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR);
+ } else {
+ bsm->UpdateCompressedRowSparseMatrixTranspose(crs_lhs_.get());
+ }
}
- *lhs->mutable_col_blocks() = blocks_;
- *lhs->mutable_row_blocks() = blocks_;
-
summary.num_iterations = 1;
summary.termination_type = sparse_cholesky_->FactorAndSolve(
- lhs.get(), rhs(), solution, &summary.message);
+ crs_lhs_.get(), rhs().data(), solution, &summary.message);
return summary;
}
@@ -365,7 +343,7 @@
if (num_rows == 0) {
LinearSolver::Summary summary;
summary.num_iterations = 0;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
summary.message = "Success.";
return summary;
}
@@ -373,17 +351,17 @@
// Only SCHUR_JACOBI is supported over here right now.
CHECK_EQ(options().preconditioner_type, SCHUR_JACOBI);
- if (preconditioner_.get() == NULL) {
- preconditioner_.reset(new BlockRandomAccessDiagonalMatrix(blocks_));
+ if (preconditioner_ == nullptr) {
+ preconditioner_ = std::make_unique<BlockRandomAccessDiagonalMatrix>(
+ blocks_, options().context, options().num_threads);
}
- BlockRandomAccessSparseMatrix* sc = down_cast<BlockRandomAccessSparseMatrix*>(
- const_cast<BlockRandomAccessMatrix*>(lhs()));
+ auto* sc = down_cast<BlockRandomAccessSparseMatrix*>(mutable_lhs());
// Extract block diagonal from the Schur complement to construct the
// schur_jacobi preconditioner.
for (int i = 0; i < blocks_.size(); ++i) {
- const int block_size = blocks_[i];
+ const int block_size = blocks_[i].size;
int sc_r, sc_c, sc_row_stride, sc_col_stride;
CellInfo* sc_cell_info =
@@ -404,24 +382,28 @@
VectorRef(solution, num_rows).setZero();
- std::unique_ptr<LinearOperator> lhs_adapter(
- new BlockRandomAccessSparseMatrixAdapter(*sc));
- std::unique_ptr<LinearOperator> preconditioner_adapter(
- new BlockRandomAccessDiagonalMatrixAdapter(*preconditioner_));
+ auto lhs = std::make_unique<BlockRandomAccessSparseMatrixAdapter>(*sc);
+ auto preconditioner =
+ std::make_unique<BlockRandomAccessDiagonalMatrixAdapter>(
+ *preconditioner_);
- LinearSolver::Options cg_options;
+ ConjugateGradientsSolverOptions cg_options;
cg_options.min_num_iterations = options().min_num_iterations;
cg_options.max_num_iterations = options().max_num_iterations;
- ConjugateGradientsSolver cg_solver(cg_options);
+ cg_options.residual_reset_period = options().residual_reset_period;
+ cg_options.q_tolerance = per_solve_options.q_tolerance;
+ cg_options.r_tolerance = per_solve_options.r_tolerance;
- LinearSolver::PerSolveOptions cg_per_solve_options;
- cg_per_solve_options.r_tolerance = per_solve_options.r_tolerance;
- cg_per_solve_options.q_tolerance = per_solve_options.q_tolerance;
- cg_per_solve_options.preconditioner = preconditioner_adapter.get();
-
- return cg_solver.Solve(
- lhs_adapter.get(), rhs(), cg_per_solve_options, solution);
+ cg_solution_ = Vector::Zero(sc->num_rows());
+ for (int i = 0; i < 4; ++i) {
+ if (scratch_[i] == nullptr) {
+ scratch_[i] = new Vector(sc->num_rows());
+ }
+ }
+ auto summary = ConjugateGradientsSolver<Vector>(
+ cg_options, *lhs, rhs(), *preconditioner, scratch_, cg_solution_);
+ VectorRef(solution, sc->num_rows()) = cg_solution_;
+ return summary;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/schur_complement_solver.h b/internal/ceres/schur_complement_solver.h
index 3bfa22f..5e11b94 100644
--- a/internal/ceres/schur_complement_solver.h
+++ b/internal/ceres/schur_complement_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,7 +40,9 @@
#include "ceres/block_random_access_matrix.h"
#include "ceres/block_sparse_matrix.h"
#include "ceres/block_structure.h"
-#include "ceres/internal/port.h"
+#include "ceres/dense_cholesky.h"
+#include "ceres/internal/config.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
#include "ceres/schur_eliminator.h"
#include "ceres/types.h"
@@ -50,8 +52,9 @@
#include "Eigen/SparseCholesky"
#endif
-namespace ceres {
-namespace internal {
+#include "ceres/internal/disable_warnings.h"
+
+namespace ceres::internal {
class BlockSparseMatrix;
class SparseCholesky;
@@ -62,7 +65,7 @@
//
// E y + F z = b
//
-// Where x = [y;z] is a partition of the variables. The paritioning
+// Where x = [y;z] is a partition of the variables. The partitioning
// of the variables is such that, E'E is a block diagonal
// matrix. Further, the rows of A are ordered so that for every
// variable block in y, all the rows containing that variable block
@@ -107,20 +110,12 @@
// set to DENSE_SCHUR and SPARSE_SCHUR
// respectively. LinearSolver::Options::elimination_groups[0] should
// be at least 1.
-class CERES_EXPORT_INTERNAL SchurComplementSolver
- : public BlockSparseMatrixSolver {
+class CERES_NO_EXPORT SchurComplementSolver : public BlockSparseMatrixSolver {
public:
- explicit SchurComplementSolver(const LinearSolver::Options& options)
- : options_(options) {
- CHECK_GT(options.elimination_groups.size(), 1);
- CHECK_GT(options.elimination_groups[0], 0);
- CHECK(options.context != NULL);
- }
+ explicit SchurComplementSolver(const LinearSolver::Options& options);
SchurComplementSolver(const SchurComplementSolver&) = delete;
void operator=(const SchurComplementSolver&) = delete;
- // LinearSolver methods
- virtual ~SchurComplementSolver() {}
LinearSolver::Summary SolveImpl(
BlockSparseMatrix* A,
const double* b,
@@ -130,10 +125,13 @@
protected:
const LinearSolver::Options& options() const { return options_; }
+ void set_lhs(std::unique_ptr<BlockRandomAccessMatrix> lhs) {
+ lhs_ = std::move(lhs);
+ }
const BlockRandomAccessMatrix* lhs() const { return lhs_.get(); }
- void set_lhs(BlockRandomAccessMatrix* lhs) { lhs_.reset(lhs); }
- const double* rhs() const { return rhs_.get(); }
- void set_rhs(double* rhs) { rhs_.reset(rhs); }
+ BlockRandomAccessMatrix* mutable_lhs() { return lhs_.get(); }
+ void ResizeRhs(int n) { rhs_.resize(n); }
+ const Vector& rhs() const { return rhs_; }
private:
virtual void InitStorage(const CompressedRowBlockStructure* bs) = 0;
@@ -145,34 +143,37 @@
std::unique_ptr<SchurEliminatorBase> eliminator_;
std::unique_ptr<BlockRandomAccessMatrix> lhs_;
- std::unique_ptr<double[]> rhs_;
+ Vector rhs_;
};
// Dense Cholesky factorization based solver.
-class DenseSchurComplementSolver : public SchurComplementSolver {
+class CERES_NO_EXPORT DenseSchurComplementSolver final
+ : public SchurComplementSolver {
public:
- explicit DenseSchurComplementSolver(const LinearSolver::Options& options)
- : SchurComplementSolver(options) {}
+ explicit DenseSchurComplementSolver(const LinearSolver::Options& options);
DenseSchurComplementSolver(const DenseSchurComplementSolver&) = delete;
void operator=(const DenseSchurComplementSolver&) = delete;
- virtual ~DenseSchurComplementSolver() {}
+ ~DenseSchurComplementSolver() override;
private:
void InitStorage(const CompressedRowBlockStructure* bs) final;
LinearSolver::Summary SolveReducedLinearSystem(
const LinearSolver::PerSolveOptions& per_solve_options,
double* solution) final;
+
+ std::unique_ptr<DenseCholesky> cholesky_;
};
// Sparse Cholesky factorization based solver.
-class SparseSchurComplementSolver : public SchurComplementSolver {
+class CERES_NO_EXPORT SparseSchurComplementSolver final
+ : public SchurComplementSolver {
public:
explicit SparseSchurComplementSolver(const LinearSolver::Options& options);
SparseSchurComplementSolver(const SparseSchurComplementSolver&) = delete;
void operator=(const SparseSchurComplementSolver&) = delete;
- virtual ~SparseSchurComplementSolver();
+ ~SparseSchurComplementSolver() override;
private:
void InitStorage(const CompressedRowBlockStructure* bs) final;
@@ -182,13 +183,16 @@
LinearSolver::Summary SolveReducedLinearSystemUsingConjugateGradients(
const LinearSolver::PerSolveOptions& per_solve_options, double* solution);
- // Size of the blocks in the Schur complement.
- std::vector<int> blocks_;
+ std::vector<Block> blocks_;
std::unique_ptr<SparseCholesky> sparse_cholesky_;
std::unique_ptr<BlockRandomAccessDiagonalMatrix> preconditioner_;
+ std::unique_ptr<CompressedRowSparseMatrix> crs_lhs_;
+ Vector cg_solution_;
+ Vector* scratch_[4] = {nullptr, nullptr, nullptr, nullptr};
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_SCHUR_COMPLEMENT_SOLVER_H_
diff --git a/internal/ceres/schur_complement_solver_test.cc b/internal/ceres/schur_complement_solver_test.cc
index 550733e..7c5ce28 100644
--- a/internal/ceres/schur_complement_solver_test.cc
+++ b/internal/ceres/schur_complement_solver_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -45,19 +45,18 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class SchurComplementSolverTest : public ::testing::Test {
protected:
void SetUpFromProblemId(int problem_id) {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(problem_id));
+ std::unique_ptr<LinearLeastSquaresProblem> problem =
+ CreateLinearLeastSquaresProblemFromId(problem_id);
CHECK(problem != nullptr);
A.reset(down_cast<BlockSparseMatrix*>(problem->A.release()));
- b.reset(problem->b.release());
- D.reset(problem->D.release());
+ b = std::move(problem->b);
+ D = std::move(problem->D);
num_cols = A->num_cols();
num_rows = A->num_rows();
@@ -94,7 +93,7 @@
ceres::LinearSolverType linear_solver_type,
ceres::DenseLinearAlgebraLibraryType dense_linear_algebra_library_type,
ceres::SparseLinearAlgebraLibraryType sparse_linear_algebra_library_type,
- bool use_postordering) {
+ ceres::internal::OrderingType ordering_type) {
SetUpFromProblemId(problem_id);
LinearSolver::Options options;
options.elimination_groups.push_back(num_eliminate_blocks);
@@ -105,7 +104,7 @@
dense_linear_algebra_library_type;
options.sparse_linear_algebra_library_type =
sparse_linear_algebra_library_type;
- options.use_postordering = use_postordering;
+ options.ordering_type = ordering_type;
ContextImpl context;
options.context = &context;
DetectStructure(*A->block_structure(),
@@ -123,7 +122,7 @@
}
summary = solver->Solve(A.get(), b.get(), per_solve_options, x.data());
- EXPECT_EQ(summary.termination_type, LINEAR_SOLVER_SUCCESS);
+ EXPECT_EQ(summary.termination_type, LinearSolverTerminationType::SUCCESS);
if (regularization) {
ASSERT_NEAR((sol_d - x).norm() / num_cols, 0, 1e-10)
@@ -151,98 +150,173 @@
// TODO(sameeragarwal): Refactor these using value parameterized tests.
// TODO(sameeragarwal): More extensive tests using random matrices.
TEST_F(SchurComplementSolverTest, DenseSchurWithEigenSmallProblem) {
- ComputeAndCompareSolutions(2, false, DENSE_SCHUR, EIGEN, SUITE_SPARSE, true);
- ComputeAndCompareSolutions(2, true, DENSE_SCHUR, EIGEN, SUITE_SPARSE, true);
+ ComputeAndCompareSolutions(
+ 2, false, DENSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NATURAL);
+ ComputeAndCompareSolutions(
+ 2, true, DENSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NATURAL);
}
TEST_F(SchurComplementSolverTest, DenseSchurWithEigenLargeProblem) {
- ComputeAndCompareSolutions(3, false, DENSE_SCHUR, EIGEN, SUITE_SPARSE, true);
- ComputeAndCompareSolutions(3, true, DENSE_SCHUR, EIGEN, SUITE_SPARSE, true);
+ ComputeAndCompareSolutions(
+ 3, false, DENSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NATURAL);
+ ComputeAndCompareSolutions(
+ 3, true, DENSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NATURAL);
}
TEST_F(SchurComplementSolverTest, DenseSchurWithEigenVaryingFBlockSize) {
- ComputeAndCompareSolutions(4, true, DENSE_SCHUR, EIGEN, SUITE_SPARSE, true);
+ ComputeAndCompareSolutions(
+ 4, true, DENSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NATURAL);
}
#ifndef CERES_NO_LAPACK
TEST_F(SchurComplementSolverTest, DenseSchurWithLAPACKSmallProblem) {
- ComputeAndCompareSolutions(2, false, DENSE_SCHUR, LAPACK, SUITE_SPARSE, true);
- ComputeAndCompareSolutions(2, true, DENSE_SCHUR, LAPACK, SUITE_SPARSE, true);
+ ComputeAndCompareSolutions(
+ 2, false, DENSE_SCHUR, LAPACK, SUITE_SPARSE, OrderingType::NATURAL);
+ ComputeAndCompareSolutions(
+ 2, true, DENSE_SCHUR, LAPACK, SUITE_SPARSE, OrderingType::NATURAL);
}
TEST_F(SchurComplementSolverTest, DenseSchurWithLAPACKLargeProblem) {
- ComputeAndCompareSolutions(3, false, DENSE_SCHUR, LAPACK, SUITE_SPARSE, true);
- ComputeAndCompareSolutions(3, true, DENSE_SCHUR, LAPACK, SUITE_SPARSE, true);
+ ComputeAndCompareSolutions(
+ 3, false, DENSE_SCHUR, LAPACK, SUITE_SPARSE, OrderingType::NATURAL);
+ ComputeAndCompareSolutions(
+ 3, true, DENSE_SCHUR, LAPACK, SUITE_SPARSE, OrderingType::NATURAL);
}
#endif
#ifndef CERES_NO_SUITESPARSE
TEST_F(SchurComplementSolverTest,
- SparseSchurWithSuiteSparseSmallProblemNoPostOrdering) {
+ SparseSchurWithSuiteSparseSmallProblemNATURAL) {
ComputeAndCompareSolutions(
- 2, false, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, false);
- ComputeAndCompareSolutions(2, true, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, false);
-}
-
-TEST_F(SchurComplementSolverTest,
- SparseSchurWithSuiteSparseSmallProblemPostOrdering) {
- ComputeAndCompareSolutions(2, false, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, true);
- ComputeAndCompareSolutions(2, true, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, true);
-}
-
-TEST_F(SchurComplementSolverTest,
- SparseSchurWithSuiteSparseLargeProblemNoPostOrdering) {
+ 2, false, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NATURAL);
ComputeAndCompareSolutions(
- 3, false, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, false);
- ComputeAndCompareSolutions(3, true, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, false);
+ 2, true, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NATURAL);
}
TEST_F(SchurComplementSolverTest,
- SparseSchurWithSuiteSparseLargeProblemPostOrdering) {
- ComputeAndCompareSolutions(3, false, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, true);
- ComputeAndCompareSolutions(3, true, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, true);
+ SparseSchurWithSuiteSparseLargeProblemNATURAL) {
+ ComputeAndCompareSolutions(
+ 3, false, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NATURAL);
+ ComputeAndCompareSolutions(
+ 3, true, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NATURAL);
}
+
+TEST_F(SchurComplementSolverTest, SparseSchurWithSuiteSparseSmallProblemAMD) {
+ ComputeAndCompareSolutions(
+ 2, false, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::AMD);
+ ComputeAndCompareSolutions(
+ 2, true, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::AMD);
+}
+
+TEST_F(SchurComplementSolverTest, SparseSchurWithSuiteSparseLargeProblemAMD) {
+ ComputeAndCompareSolutions(
+ 3, false, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::AMD);
+ ComputeAndCompareSolutions(
+ 3, true, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::AMD);
+}
+
+#ifndef CERES_NO_EIGEN_METIS
+TEST_F(SchurComplementSolverTest,
+ SparseSchurWithSuiteSparseSmallProblemNESDIS) {
+ ComputeAndCompareSolutions(
+ 2, false, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NESDIS);
+ ComputeAndCompareSolutions(
+ 2, true, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NESDIS);
+}
+TEST_F(SchurComplementSolverTest,
+ SparseSchurWithSuiteSparseLargeProblemNESDIS) {
+ ComputeAndCompareSolutions(
+ 3, false, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NESDIS);
+ ComputeAndCompareSolutions(
+ 3, true, SPARSE_SCHUR, EIGEN, SUITE_SPARSE, OrderingType::NESDIS);
+}
+#endif // CERES_NO_EIGEN_METIS
#endif // CERES_NO_SUITESPARSE
-#ifndef CERES_NO_CXSPARSE
-TEST_F(SchurComplementSolverTest, SparseSchurWithCXSparseSmallProblem) {
- ComputeAndCompareSolutions(2, false, SPARSE_SCHUR, EIGEN, CX_SPARSE, true);
- ComputeAndCompareSolutions(2, true, SPARSE_SCHUR, EIGEN, CX_SPARSE, true);
-}
-
-TEST_F(SchurComplementSolverTest, SparseSchurWithCXSparseLargeProblem) {
- ComputeAndCompareSolutions(3, false, SPARSE_SCHUR, EIGEN, CX_SPARSE, true);
- ComputeAndCompareSolutions(3, true, SPARSE_SCHUR, EIGEN, CX_SPARSE, true);
-}
-#endif // CERES_NO_CXSPARSE
-
#ifndef CERES_NO_ACCELERATE_SPARSE
-TEST_F(SchurComplementSolverTest, SparseSchurWithAccelerateSparseSmallProblem) {
+TEST_F(SchurComplementSolverTest,
+ SparseSchurWithAccelerateSparseSmallProblemAMD) {
ComputeAndCompareSolutions(
- 2, false, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, true);
+ 2, false, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, OrderingType::AMD);
ComputeAndCompareSolutions(
- 2, true, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, true);
+ 2, true, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, OrderingType::AMD);
}
-TEST_F(SchurComplementSolverTest, SparseSchurWithAccelerateSparseLargeProblem) {
+TEST_F(SchurComplementSolverTest,
+ SparseSchurWithAccelerateSparseSmallProblemNESDIS) {
ComputeAndCompareSolutions(
- 3, false, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, true);
+ 2, false, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, OrderingType::NESDIS);
ComputeAndCompareSolutions(
- 3, true, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, true);
+ 2, true, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, OrderingType::NESDIS);
+}
+
+TEST_F(SchurComplementSolverTest,
+ SparseSchurWithAccelerateSparseLargeProblemAMD) {
+ ComputeAndCompareSolutions(
+ 3, false, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, OrderingType::AMD);
+ ComputeAndCompareSolutions(
+ 3, true, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, OrderingType::AMD);
+}
+
+TEST_F(SchurComplementSolverTest,
+ SparseSchurWithAccelerateSparseLargeProblemNESDIS) {
+ ComputeAndCompareSolutions(
+ 3, false, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, OrderingType::NESDIS);
+ ComputeAndCompareSolutions(
+ 3, true, SPARSE_SCHUR, EIGEN, ACCELERATE_SPARSE, OrderingType::NESDIS);
}
#endif // CERES_NO_ACCELERATE_SPARSE
#ifdef CERES_USE_EIGEN_SPARSE
-TEST_F(SchurComplementSolverTest, SparseSchurWithEigenSparseSmallProblem) {
- ComputeAndCompareSolutions(2, false, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, true);
- ComputeAndCompareSolutions(2, true, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, true);
+TEST_F(SchurComplementSolverTest, SparseSchurWithEigenSparseSmallProblemAMD) {
+ ComputeAndCompareSolutions(
+ 2, false, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::AMD);
+ ComputeAndCompareSolutions(
+ 2, true, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::AMD);
}
-TEST_F(SchurComplementSolverTest, SparseSchurWithEigenSparseLargeProblem) {
- ComputeAndCompareSolutions(3, false, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, true);
- ComputeAndCompareSolutions(3, true, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, true);
+#ifndef CERES_NO_EIGEN_METIS
+TEST_F(SchurComplementSolverTest,
+ SparseSchurWithEigenSparseSmallProblemNESDIS) {
+ ComputeAndCompareSolutions(
+ 2, false, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::NESDIS);
+ ComputeAndCompareSolutions(
+ 2, true, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::NESDIS);
+}
+#endif
+
+TEST_F(SchurComplementSolverTest,
+ SparseSchurWithEigenSparseSmallProblemNATURAL) {
+ ComputeAndCompareSolutions(
+ 2, false, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::NATURAL);
+ ComputeAndCompareSolutions(
+ 2, true, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::NATURAL);
+}
+
+TEST_F(SchurComplementSolverTest, SparseSchurWithEigenSparseLargeProblemAMD) {
+ ComputeAndCompareSolutions(
+ 3, false, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::AMD);
+ ComputeAndCompareSolutions(
+ 3, true, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::AMD);
+}
+
+#ifndef CERES_NO_EIGEN_METIS
+TEST_F(SchurComplementSolverTest,
+ SparseSchurWithEigenSparseLargeProblemNESDIS) {
+ ComputeAndCompareSolutions(
+ 3, false, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::NESDIS);
+ ComputeAndCompareSolutions(
+ 3, true, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::NESDIS);
+}
+#endif
+
+TEST_F(SchurComplementSolverTest,
+ SparseSchurWithEigenSparseLargeProblemNATURAL) {
+ ComputeAndCompareSolutions(
+ 3, false, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::NATURAL);
+ ComputeAndCompareSolutions(
+ 3, true, SPARSE_SCHUR, EIGEN, EIGEN_SPARSE, OrderingType::NATURAL);
}
#endif // CERES_USE_EIGEN_SPARSE
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/schur_eliminator.cc b/internal/ceres/schur_eliminator.cc
index 613ae95..cb079b5 100644
--- a/internal/ceres/schur_eliminator.cc
+++ b/internal/ceres/schur_eliminator.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,122 +39,125 @@
//
// This file is generated using generate_template_specializations.py.
+#include <memory>
+
#include "ceres/linear_solver.h"
#include "ceres/schur_eliminator.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-SchurEliminatorBase* SchurEliminatorBase::Create(
+SchurEliminatorBase::~SchurEliminatorBase() = default;
+
+std::unique_ptr<SchurEliminatorBase> SchurEliminatorBase::Create(
const LinearSolver::Options& options) {
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
if ((options.row_block_size == 2) &&
(options.e_block_size == 2) &&
(options.f_block_size == 2)) {
- return new SchurEliminator<2, 2, 2>(options);
+ return std::make_unique<SchurEliminator<2, 2, 2>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 2) &&
(options.f_block_size == 3)) {
- return new SchurEliminator<2, 2, 3>(options);
+ return std::make_unique<SchurEliminator<2, 2, 3>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 2) &&
(options.f_block_size == 4)) {
- return new SchurEliminator<2, 2, 4>(options);
+ return std::make_unique<SchurEliminator<2, 2, 4>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 2)) {
- return new SchurEliminator<2, 2, Eigen::Dynamic>(options);
+ return std::make_unique<SchurEliminator<2, 2, Eigen::Dynamic>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 3) &&
(options.f_block_size == 3)) {
- return new SchurEliminator<2, 3, 3>(options);
+ return std::make_unique<SchurEliminator<2, 3, 3>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 3) &&
(options.f_block_size == 4)) {
- return new SchurEliminator<2, 3, 4>(options);
+ return std::make_unique<SchurEliminator<2, 3, 4>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 3) &&
(options.f_block_size == 6)) {
- return new SchurEliminator<2, 3, 6>(options);
+ return std::make_unique<SchurEliminator<2, 3, 6>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 3) &&
(options.f_block_size == 9)) {
- return new SchurEliminator<2, 3, 9>(options);
+ return std::make_unique<SchurEliminator<2, 3, 9>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 3)) {
- return new SchurEliminator<2, 3, Eigen::Dynamic>(options);
+ return std::make_unique<SchurEliminator<2, 3, Eigen::Dynamic>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4) &&
(options.f_block_size == 3)) {
- return new SchurEliminator<2, 4, 3>(options);
+ return std::make_unique<SchurEliminator<2, 4, 3>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4) &&
(options.f_block_size == 4)) {
- return new SchurEliminator<2, 4, 4>(options);
+ return std::make_unique<SchurEliminator<2, 4, 4>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4) &&
(options.f_block_size == 6)) {
- return new SchurEliminator<2, 4, 6>(options);
+ return std::make_unique<SchurEliminator<2, 4, 6>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4) &&
(options.f_block_size == 8)) {
- return new SchurEliminator<2, 4, 8>(options);
+ return std::make_unique<SchurEliminator<2, 4, 8>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4) &&
(options.f_block_size == 9)) {
- return new SchurEliminator<2, 4, 9>(options);
+ return std::make_unique<SchurEliminator<2, 4, 9>>(options);
}
if ((options.row_block_size == 2) &&
(options.e_block_size == 4)) {
- return new SchurEliminator<2, 4, Eigen::Dynamic>(options);
+ return std::make_unique<SchurEliminator<2, 4, Eigen::Dynamic>>(options);
}
if (options.row_block_size == 2) {
- return new SchurEliminator<2, Eigen::Dynamic, Eigen::Dynamic>(options);
+ return std::make_unique<SchurEliminator<2, Eigen::Dynamic, Eigen::Dynamic>>(options);
}
if ((options.row_block_size == 3) &&
(options.e_block_size == 3) &&
(options.f_block_size == 3)) {
- return new SchurEliminator<3, 3, 3>(options);
+ return std::make_unique<SchurEliminator<3, 3, 3>>(options);
}
if ((options.row_block_size == 4) &&
(options.e_block_size == 4) &&
(options.f_block_size == 2)) {
- return new SchurEliminator<4, 4, 2>(options);
+ return std::make_unique<SchurEliminator<4, 4, 2>>(options);
}
if ((options.row_block_size == 4) &&
(options.e_block_size == 4) &&
(options.f_block_size == 3)) {
- return new SchurEliminator<4, 4, 3>(options);
+ return std::make_unique<SchurEliminator<4, 4, 3>>(options);
}
if ((options.row_block_size == 4) &&
(options.e_block_size == 4) &&
(options.f_block_size == 4)) {
- return new SchurEliminator<4, 4, 4>(options);
+ return std::make_unique<SchurEliminator<4, 4, 4>>(options);
}
if ((options.row_block_size == 4) &&
(options.e_block_size == 4)) {
- return new SchurEliminator<4, 4, Eigen::Dynamic>(options);
+ return std::make_unique<SchurEliminator<4, 4, Eigen::Dynamic>>(options);
}
#endif
VLOG(1) << "Template specializations not found for <"
<< options.row_block_size << "," << options.e_block_size << ","
<< options.f_block_size << ">";
- return new SchurEliminator<Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic>(
- options);
+ return std::make_unique<SchurEliminator<Eigen::Dynamic,
+ Eigen::Dynamic,
+ Eigen::Dynamic>>(options);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
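For reference, SchurEliminatorBase::Create() now transfers ownership via std::unique_ptr instead of a raw pointer. A minimal sketch of a call site under that contract (the block sizes and include paths are illustrative; in practice the sizes come from DetectStructure() as in the tests above):

#include <memory>

#include "ceres/context_impl.h"
#include "ceres/linear_solver.h"
#include "ceres/schur_eliminator.h"
#include "glog/logging.h"

void ExampleCreateEliminator(ceres::internal::ContextImpl* context) {
  ceres::internal::LinearSolver::Options options;
  options.context = context;
  options.elimination_groups.push_back(/*num_e_blocks=*/10);
  options.row_block_size = 2;  // typically filled in by DetectStructure()
  options.e_block_size = 3;
  options.f_block_size = 6;
  // Dispatches to SchurEliminator<2, 3, 6>; falls back to the fully dynamic
  // specialization when no matching instantiation was compiled in.
  std::unique_ptr<ceres::internal::SchurEliminatorBase> eliminator =
      ceres::internal::SchurEliminatorBase::Create(options);
  CHECK(eliminator != nullptr);
}  // eliminator is destroyed here; no manual delete required.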
diff --git a/internal/ceres/schur_eliminator.h b/internal/ceres/schur_eliminator.h
index 42c016e..3832fe6 100644
--- a/internal/ceres/schur_eliminator.h
+++ b/internal/ceres/schur_eliminator.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,12 +40,13 @@
#include "ceres/block_random_access_matrix.h"
#include "ceres/block_sparse_matrix.h"
#include "ceres/block_structure.h"
+#include "ceres/internal/config.h"
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Classes implementing the SchurEliminatorBase interface implement
// variable elimination for linear least squares problems. Assuming
@@ -56,9 +57,8 @@
// Where x = [y;z] is a partition of the variables. The partitioning
// of the variables is such that, E'E is a block diagonal matrix. Or
// in other words, the parameter blocks in E form an independent set
-// of the of the graph implied by the block matrix A'A. Then, this
-// class provides the functionality to compute the Schur complement
-// system
+// of the graph implied by the block matrix A'A. Then, this class
+// provides the functionality to compute the Schur complement system
//
// S z = r
//
@@ -164,13 +164,13 @@
// 2008 for an example of such use].
//
// Example usage: Please see schur_complement_solver.cc
-class CERES_EXPORT_INTERNAL SchurEliminatorBase {
+class CERES_NO_EXPORT SchurEliminatorBase {
public:
- virtual ~SchurEliminatorBase() {}
+ virtual ~SchurEliminatorBase();
- // Initialize the eliminator. It is the user's responsibilty to call
+ // Initialize the eliminator. It is the user's responsibility to call
// this function before calling Eliminate or BackSubstitute. It is
- // also the caller's responsibilty to ensure that the
+ // also the caller's responsibility to ensure that the
// CompressedRowBlockStructure object passed to this method is the
// same one (or is equivalent to) the one associated with the
// BlockSparseMatrix objects below.
@@ -210,7 +210,8 @@
const double* z,
double* y) = 0;
// Factory
- static SchurEliminatorBase* Create(const LinearSolver::Options& options);
+ static std::unique_ptr<SchurEliminatorBase> Create(
+ const LinearSolver::Options& options);
};
// Templated implementation of the SchurEliminatorBase interface. The
@@ -223,7 +224,7 @@
template <int kRowBlockSize = Eigen::Dynamic,
int kEBlockSize = Eigen::Dynamic,
int kFBlockSize = Eigen::Dynamic>
-class SchurEliminator : public SchurEliminatorBase {
+class CERES_NO_EXPORT SchurEliminator final : public SchurEliminatorBase {
public:
explicit SchurEliminator(const LinearSolver::Options& options)
: num_threads_(options.num_threads), context_(options.context) {
@@ -231,7 +232,7 @@
}
// SchurEliminatorBase Interface
- virtual ~SchurEliminator();
+ ~SchurEliminator() override;
void Init(int num_eliminate_blocks,
bool assume_full_rank_ete,
const CompressedRowBlockStructure* bs) final;
@@ -272,9 +273,9 @@
// buffer_layout[z1] = 0
// buffer_layout[z5] = y1 * z1
// buffer_layout[z2] = y1 * z1 + y1 * z5
- typedef std::map<int, int> BufferLayoutType;
+ using BufferLayoutType = std::map<int, int>;
struct Chunk {
- Chunk() : size(0) {}
+ explicit Chunk(int start) : size(0), start(start) {}
int size;
int start;
BufferLayoutType buffer_layout;
@@ -378,11 +379,12 @@
template <int kRowBlockSize = Eigen::Dynamic,
int kEBlockSize = Eigen::Dynamic,
int kFBlockSize = Eigen::Dynamic>
-class SchurEliminatorForOneFBlock : public SchurEliminatorBase {
+class CERES_NO_EXPORT SchurEliminatorForOneFBlock final
+ : public SchurEliminatorBase {
public:
- virtual ~SchurEliminatorForOneFBlock() {}
+ // TODO(sameeragarwal) Find out why "assume_full_rank_ete" is not used here
void Init(int num_eliminate_blocks,
- bool assume_full_rank_ete,
+ bool /*assume_full_rank_ete*/,
const CompressedRowBlockStructure* bs) override {
CHECK_GT(num_eliminate_blocks, 0)
<< "SchurComplementSolver cannot be initialized with "
@@ -445,7 +447,7 @@
const CompressedRowBlockStructure* bs = A.block_structure();
const double* values = A.values();
- // Add the diagonal to the schur complement.
+ // Add the diagonal to the Schur complement.
if (D != nullptr) {
typename EigenTypes<kFBlockSize>::ConstVectorRef diag(
D + bs->cols[num_eliminate_blocks_].position, kFBlockSize);
@@ -477,14 +479,14 @@
const Chunk& chunk = chunks_[i];
const int e_block_id = bs->rows[chunk.start].cells.front().block_id;
- // Naming covention, e_t_e = e_block.transpose() * e_block;
+ // Naming convention, e_t_e = e_block.transpose() * e_block;
Eigen::Matrix<double, kEBlockSize, kEBlockSize> e_t_e;
Eigen::Matrix<double, kEBlockSize, kFBlockSize> e_t_f;
Eigen::Matrix<double, kEBlockSize, 1> e_t_b;
Eigen::Matrix<double, kFBlockSize, 1> f_t_b;
// Add the square of the diagonal to e_t_e.
- if (D != NULL) {
+ if (D != nullptr) {
const typename EigenTypes<kEBlockSize>::ConstVectorRef diag(
D + bs->cols[e_block_id].position, kEBlockSize);
e_t_e = diag.array().square().matrix().asDiagonal();
@@ -568,7 +570,7 @@
// y_i = e_t_e_inverse * sum_i e_i^T * (b_i - f_i * z);
void BackSubstitute(const BlockSparseMatrixData& A,
const double* b,
- const double* D,
+ const double* /*D*/,
const double* z_ptr,
double* y) override {
typename EigenTypes<kFBlockSize>::ConstVectorRef z(z_ptr, kFBlockSize);
@@ -621,7 +623,8 @@
std::vector<double> e_t_e_inverse_matrices_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_SCHUR_ELIMINATOR_H_
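For reference, the system the class comment above alludes to, written out (regularization via D omitted for brevity; this is the standard least-squares Schur complement derivation, not code from the patch):

\[
\min_{y,z}\;\|E y + F z - b\|^2
\;\Longrightarrow\;
\begin{bmatrix} E^\top E & E^\top F \\ F^\top E & F^\top F \end{bmatrix}
\begin{bmatrix} y \\ z \end{bmatrix}
=
\begin{bmatrix} E^\top b \\ F^\top b \end{bmatrix},
\]
\[
S = F^\top F - F^\top E\,(E^\top E)^{-1}E^\top F, \qquad
r = F^\top b - F^\top E\,(E^\top E)^{-1}E^\top b,
\]

so Eliminate() forms \(S z = r\) and BackSubstitute() recovers \(y = (E^\top E)^{-1}E^\top (b - F z)\).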
diff --git a/internal/ceres/schur_eliminator_benchmark.cc b/internal/ceres/schur_eliminator_benchmark.cc
index 6307025..78aa580 100644
--- a/internal/ceres/schur_eliminator_benchmark.cc
+++ b/internal/ceres/schur_eliminator_benchmark.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,16 +28,19 @@
//
// Authors: sameeragarwal@google.com (Sameer Agarwal)
+#include <algorithm>
+#include <memory>
+#include <random>
+#include <vector>
+
#include "Eigen/Dense"
#include "benchmark/benchmark.h"
#include "ceres/block_random_access_dense_matrix.h"
#include "ceres/block_sparse_matrix.h"
#include "ceres/block_structure.h"
-#include "ceres/random.h"
#include "ceres/schur_eliminator.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
constexpr int kRowBlockSize = 2;
constexpr int kEBlockSize = 3;
@@ -46,7 +49,7 @@
class BenchmarkData {
public:
explicit BenchmarkData(const int num_e_blocks) {
- CompressedRowBlockStructure* bs = new CompressedRowBlockStructure;
+ auto* bs = new CompressedRowBlockStructure;
bs->cols.resize(num_e_blocks + 1);
int col_pos = 0;
for (int i = 0; i < num_e_blocks; ++i) {
@@ -88,17 +91,18 @@
}
}
- matrix_.reset(new BlockSparseMatrix(bs));
+ matrix_ = std::make_unique<BlockSparseMatrix>(bs);
double* values = matrix_->mutable_values();
- for (int i = 0; i < matrix_->num_nonzeros(); ++i) {
- values[i] = RandNormal();
- }
+ std::generate_n(values, matrix_->num_nonzeros(), [this] {
+ return standard_normal_(prng_);
+ });
b_.resize(matrix_->num_rows());
b_.setRandom();
- std::vector<int> blocks(1, kFBlockSize);
- lhs_.reset(new BlockRandomAccessDenseMatrix(blocks));
+ std::vector<Block> blocks;
+ blocks.emplace_back(kFBlockSize, 0);
+ lhs_ = std::make_unique<BlockRandomAccessDenseMatrix>(blocks, &context_, 1);
diagonal_.resize(matrix_->num_cols());
diagonal_.setOnes();
rhs_.resize(kFBlockSize);
@@ -117,7 +121,11 @@
Vector* mutable_y() { return &y_; }
Vector* mutable_z() { return &z_; }
+ ContextImpl* context() { return &context_; }
+
private:
+ ContextImpl context_;
+
std::unique_ptr<BlockSparseMatrix> matrix_;
Vector b_;
std::unique_ptr<BlockRandomAccessDenseMatrix> lhs_;
@@ -125,18 +133,19 @@
Vector diagonal_;
Vector z_;
Vector y_;
+ std::mt19937 prng_;
+ std::normal_distribution<> standard_normal_;
};
-void BM_SchurEliminatorEliminate(benchmark::State& state) {
+static void BM_SchurEliminatorEliminate(benchmark::State& state) {
const int num_e_blocks = state.range(0);
BenchmarkData data(num_e_blocks);
- ContextImpl context;
LinearSolver::Options linear_solver_options;
linear_solver_options.e_block_size = kEBlockSize;
linear_solver_options.row_block_size = kRowBlockSize;
linear_solver_options.f_block_size = kFBlockSize;
- linear_solver_options.context = &context;
+ linear_solver_options.context = data.context();
std::unique_ptr<SchurEliminatorBase> eliminator(
SchurEliminatorBase::Create(linear_solver_options));
@@ -150,16 +159,15 @@
}
}
-void BM_SchurEliminatorBackSubstitute(benchmark::State& state) {
+static void BM_SchurEliminatorBackSubstitute(benchmark::State& state) {
const int num_e_blocks = state.range(0);
BenchmarkData data(num_e_blocks);
- ContextImpl context;
LinearSolver::Options linear_solver_options;
linear_solver_options.e_block_size = kEBlockSize;
linear_solver_options.row_block_size = kRowBlockSize;
linear_solver_options.f_block_size = kFBlockSize;
- linear_solver_options.context = &context;
+ linear_solver_options.context = data.context();
std::unique_ptr<SchurEliminatorBase> eliminator(
SchurEliminatorBase::Create(linear_solver_options));
@@ -178,7 +186,7 @@
}
}
-void BM_SchurEliminatorForOneFBlockEliminate(benchmark::State& state) {
+static void BM_SchurEliminatorForOneFBlockEliminate(benchmark::State& state) {
const int num_e_blocks = state.range(0);
BenchmarkData data(num_e_blocks);
SchurEliminatorForOneFBlock<2, 3, 6> eliminator;
@@ -192,7 +200,8 @@
}
}
-void BM_SchurEliminatorForOneFBlockBackSubstitute(benchmark::State& state) {
+static void BM_SchurEliminatorForOneFBlockBackSubstitute(
+ benchmark::State& state) {
const int num_e_blocks = state.range(0);
BenchmarkData data(num_e_blocks);
SchurEliminatorForOneFBlock<2, 3, 6> eliminator;
@@ -216,7 +225,6 @@
BENCHMARK(BM_SchurEliminatorBackSubstitute)->Range(10, 10000);
BENCHMARK(BM_SchurEliminatorForOneFBlockBackSubstitute)->Range(10, 10000);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
BENCHMARK_MAIN();
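The benchmark above (and the tests later in this patch) replace ceres/random with the standard <random> facilities. A self-contained sketch of the fill pattern, using only the standard library:

#include <algorithm>
#include <random>
#include <vector>

int main() {
  std::vector<double> values(1000);
  std::mt19937 prng;                           // deterministic default seed
  std::normal_distribution<> standard_normal;  // mean 0, stddev 1
  std::generate_n(values.data(), values.size(),
                  [&] { return standard_normal(prng); });
  return 0;
}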
diff --git a/internal/ceres/schur_eliminator_impl.h b/internal/ceres/schur_eliminator_impl.h
index 1f0b4fa..ef5ce66 100644
--- a/internal/ceres/schur_eliminator_impl.h
+++ b/internal/ceres/schur_eliminator_impl.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -47,7 +47,7 @@
// This include must come before any #ifndef check on Ceres compile options.
// clang-format off
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
// clang-format on
#include <algorithm>
@@ -69,8 +69,7 @@
#include "ceres/thread_token_provider.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
SchurEliminator<kRowBlockSize, kEBlockSize, kFBlockSize>::~SchurEliminator() {
@@ -107,7 +106,7 @@
}
// TODO(sameeragarwal): Now that we may have subset block structure,
- // we need to make sure that we account for the fact that somep
+ // we need to make sure that we account for the fact that some
// point blocks only have a "diagonal" row and nothing more.
//
// This likely requires a slightly different algorithm, which works
@@ -125,10 +124,8 @@
break;
}
- chunks_.push_back(Chunk());
+ chunks_.push_back(Chunk(r));
Chunk& chunk = chunks_.back();
- chunk.size = 0;
- chunk.start = r;
int buffer_size = 0;
const int e_block_size = bs->cols[chunk_block_id].size;
@@ -161,12 +158,13 @@
uneliminated_row_begins_ = chunk.start + chunk.size;
- buffer_.reset(new double[buffer_size_ * num_threads_]);
+ buffer_ = std::make_unique<double[]>(buffer_size_ * num_threads_);
// chunk_outer_product_buffer_ only needs to store e_block_size *
// f_block_size, which is always less than buffer_size_, so we just
// allocate buffer_size_ per thread.
- chunk_outer_product_buffer_.reset(new double[buffer_size_ * num_threads_]);
+ chunk_outer_product_buffer_ =
+ std::make_unique<double[]>(buffer_size_ * num_threads_);
STLDeleteElements(&rhs_locks_);
rhs_locks_.resize(num_col_blocks - num_eliminate_blocks_);
@@ -193,7 +191,7 @@
const int num_col_blocks = bs->cols.size();
// Add the diagonal to the schur complement.
- if (D != NULL) {
+ if (D != nullptr) {
ParallelFor(context_,
num_eliminate_blocks_,
num_col_blocks,
@@ -203,12 +201,10 @@
int r, c, row_stride, col_stride;
CellInfo* cell_info = lhs->GetCell(
block_id, block_id, &r, &c, &row_stride, &col_stride);
- if (cell_info != NULL) {
+ if (cell_info != nullptr) {
const int block_size = bs->cols[i].size;
typename EigenTypes<Eigen::Dynamic>::ConstVectorRef diag(
D + bs->cols[i].position, block_size);
-
- std::lock_guard<std::mutex> l(cell_info->m);
MatrixRef m(cell_info->values, row_stride, col_stride);
m.block(r, c, block_size, block_size).diagonal() +=
diag.array().square().matrix();
@@ -245,7 +241,7 @@
typename EigenTypes<kEBlockSize, kEBlockSize>::Matrix ete(e_block_size,
e_block_size);
- if (D != NULL) {
+ if (D != nullptr) {
const typename EigenTypes<kEBlockSize>::ConstVectorRef diag(
D + bs->cols[e_block_id].position, e_block_size);
ete = diag.array().square().matrix().asDiagonal();
@@ -302,7 +298,7 @@
thread_id, bs, inverse_ete, buffer, chunk.buffer_layout, lhs);
});
- // For rows with no e_blocks, the schur complement update reduces to
+ // For rows with no e_blocks, the Schur complement update reduces to
// S += F'F.
NoEBlockRowsUpdate(A, b, uneliminated_row_begins_, lhs, rhs);
}
@@ -327,7 +323,7 @@
typename EigenTypes<kEBlockSize, kEBlockSize>::Matrix ete(e_block_size,
e_block_size);
- if (D != NULL) {
+ if (D != nullptr) {
const typename EigenTypes<kEBlockSize>::ConstVectorRef diag(
D + bs->cols[e_block_id].position, e_block_size);
ete = diag.array().square().matrix().asDiagonal();
@@ -411,7 +407,7 @@
const int block_id = row.cells[c].block_id;
const int block_size = bs->cols[block_id].size;
const int block = block_id - num_eliminate_blocks_;
- std::lock_guard<std::mutex> l(*rhs_locks_[block]);
+ auto lock = MakeConditionalLock(num_threads_, *rhs_locks_[block]);
// clang-format off
MatrixTransposeVectorMultiply<kRowBlockSize, kFBlockSize, 1>(
values + row.cells[c].position,
@@ -434,7 +430,7 @@
//
// ete = y11 * y11' + y12 * y12'
//
-// and the off diagonal blocks in the Guass Newton Hessian.
+// and the off diagonal blocks in the Gauss Newton Hessian.
//
// buffer = [y11'(z11 + z12), y12' * z22, y11' * z51]
//
@@ -525,7 +521,7 @@
// computation of the right-hand matrix product, but memory
// references to the left hand side.
const int e_block_size = inverse_ete.rows();
- BufferLayoutType::const_iterator it1 = buffer_layout.begin();
+ auto it1 = buffer_layout.begin();
double* b1_transpose_inverse_ete =
chunk_outer_product_buffer_.get() + thread_id * buffer_size_;
@@ -542,16 +538,16 @@
b1_transpose_inverse_ete, 0, 0, block1_size, e_block_size);
// clang-format on
- BufferLayoutType::const_iterator it2 = it1;
+ auto it2 = it1;
for (; it2 != buffer_layout.end(); ++it2) {
const int block2 = it2->first - num_eliminate_blocks_;
int r, c, row_stride, col_stride;
CellInfo* cell_info =
lhs->GetCell(block1, block2, &r, &c, &row_stride, &col_stride);
- if (cell_info != NULL) {
+ if (cell_info != nullptr) {
const int block2_size = bs->cols[it2->first].size;
- std::lock_guard<std::mutex> l(cell_info->m);
+ auto lock = MakeConditionalLock(num_threads_, cell_info->m);
// clang-format off
MatrixMatrixMultiply
<kFBlockSize, kEBlockSize, kEBlockSize, kFBlockSize, -1>(
@@ -564,7 +560,7 @@
}
}
-// For rows with no e_blocks, the schur complement update reduces to S
+// For rows with no e_blocks, the Schur complement update reduces to S
// += F'F. This function iterates over the rows of A with no e_block,
// and calls NoEBlockRowOuterProduct on each row.
template <int kRowBlockSize, int kEBlockSize, int kFBlockSize>
@@ -597,7 +593,7 @@
}
// A row r of A, which has no e_blocks gets added to the Schur
-// Complement as S += r r'. This function is responsible for computing
+// complement as S += r r'. This function is responsible for computing
// the contribution of a single row r to the Schur complement. It is
// very similar in structure to EBlockRowOuterProduct except for
// one difference. It does not use any of the template
@@ -627,8 +623,8 @@
int r, c, row_stride, col_stride;
CellInfo* cell_info =
lhs->GetCell(block1, block1, &r, &c, &row_stride, &col_stride);
- if (cell_info != NULL) {
- std::lock_guard<std::mutex> l(cell_info->m);
+ if (cell_info != nullptr) {
+ auto lock = MakeConditionalLock(num_threads_, cell_info->m);
// This multiply currently ignores the fact that this is a
// symmetric outer product.
// clang-format off
@@ -647,9 +643,9 @@
int r, c, row_stride, col_stride;
CellInfo* cell_info =
lhs->GetCell(block1, block2, &r, &c, &row_stride, &col_stride);
- if (cell_info != NULL) {
+ if (cell_info != nullptr) {
const int block2_size = bs->cols[row.cells[j].block_id].size;
- std::lock_guard<std::mutex> l(cell_info->m);
+ auto lock = MakeConditionalLock(num_threads_, cell_info->m);
// clang-format off
MatrixTransposeMatrixMultiply
<Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic, 1>(
@@ -682,8 +678,8 @@
int r, c, row_stride, col_stride;
CellInfo* cell_info =
lhs->GetCell(block1, block1, &r, &c, &row_stride, &col_stride);
- if (cell_info != NULL) {
- std::lock_guard<std::mutex> l(cell_info->m);
+ if (cell_info != nullptr) {
+ auto lock = MakeConditionalLock(num_threads_, cell_info->m);
// block += b1.transpose() * b1;
// clang-format off
MatrixTransposeMatrixMultiply
@@ -702,9 +698,9 @@
int r, c, row_stride, col_stride;
CellInfo* cell_info =
lhs->GetCell(block1, block2, &r, &c, &row_stride, &col_stride);
- if (cell_info != NULL) {
+ if (cell_info != nullptr) {
// block += b1.transpose() * b2;
- std::lock_guard<std::mutex> l(cell_info->m);
+ auto lock = MakeConditionalLock(num_threads_, cell_info->m);
// clang-format off
MatrixTransposeMatrixMultiply
<kRowBlockSize, kFBlockSize, kRowBlockSize, kFBlockSize, 1>(
@@ -717,7 +713,6 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_SCHUR_ELIMINATOR_IMPL_H_
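The std::lock_guard uses above become MakeConditionalLock(num_threads_, mutex), so the single-threaded path skips mutex traffic entirely. The real helper lives elsewhere in Ceres; a hypothetical equivalent, shown only to illustrate the idea:

#include <mutex>

// Illustration only; the actual MakeConditionalLock in Ceres may differ.
inline std::unique_lock<std::mutex> MakeConditionalLockSketch(int num_threads,
                                                              std::mutex& m) {
  // Engage the lock only when more than one thread can touch the same cell.
  return num_threads > 1 ? std::unique_lock<std::mutex>(m)
                         : std::unique_lock<std::mutex>();
}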
diff --git a/internal/ceres/schur_eliminator_template.py b/internal/ceres/schur_eliminator_template.py
index 5051595..99e6f3e 100644
--- a/internal/ceres/schur_eliminator_template.py
+++ b/internal/ceres/schur_eliminator_template.py
@@ -1,5 +1,5 @@
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2017 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -51,7 +51,7 @@
# Set of template specializations to generate
HEADER = """// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -95,57 +95,56 @@
DYNAMIC_FILE = """
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<%s, %s, %s>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
"""
SPECIALIZATION_FILE = """
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
#include "ceres/schur_eliminator_impl.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template class SchurEliminator<%s, %s, %s>;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_RESTRICT_SCHUR_SPECIALIZATION
"""
FACTORY_FILE_HEADER = """
+#include <memory>
+
#include "ceres/linear_solver.h"
#include "ceres/schur_eliminator.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-SchurEliminatorBase* SchurEliminatorBase::Create(
+SchurEliminatorBase::~SchurEliminatorBase() = default;
+
+std::unique_ptr<SchurEliminatorBase> SchurEliminatorBase::Create(
const LinearSolver::Options& options) {
#ifndef CERES_RESTRICT_SCHUR_SPECIALIZATION
"""
-FACTORY = """ return new SchurEliminator<%s, %s, %s>(options);"""
+FACTORY = """ return std::make_unique<SchurEliminator<%s, %s, %s>>(options);"""
FACTORY_FOOTER = """
#endif
VLOG(1) << "Template specializations not found for <"
<< options.row_block_size << "," << options.e_block_size << ","
<< options.f_block_size << ">";
- return new SchurEliminator<Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic>(
- options);
+ return std::make_unique<SchurEliminator<Eigen::Dynamic,
+ Eigen::Dynamic,
+ Eigen::Dynamic>>(options);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
"""
diff --git a/internal/ceres/schur_eliminator_test.cc b/internal/ceres/schur_eliminator_test.cc
index 6383ced..bdf8b8c 100644
--- a/internal/ceres/schur_eliminator_test.cc
+++ b/internal/ceres/schur_eliminator_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,7 +30,10 @@
#include "ceres/schur_eliminator.h"
+#include <algorithm>
#include <memory>
+#include <random>
+#include <vector>
#include "Eigen/Dense"
#include "ceres/block_random_access_dense_matrix.h"
@@ -41,7 +44,6 @@
#include "ceres/detect_structure.h"
#include "ceres/internal/eigen.h"
#include "ceres/linear_least_squares_problems.h"
-#include "ceres/random.h"
#include "ceres/test_util.h"
#include "ceres/triplet_sparse_matrix.h"
#include "ceres/types.h"
@@ -51,22 +53,20 @@
// TODO(sameeragarwal): Reduce the size of these tests and redo the
// parameterization to be more efficient.
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class SchurEliminatorTest : public ::testing::Test {
protected:
void SetUpFromId(int id) {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(id));
+ auto problem = CreateLinearLeastSquaresProblemFromId(id);
CHECK(problem != nullptr);
SetupHelper(problem.get());
}
void SetupHelper(LinearLeastSquaresProblem* problem) {
A.reset(down_cast<BlockSparseMatrix*>(problem->A.release()));
- b.reset(problem->b.release());
- D.reset(problem->D.release());
+ b = std::move(problem->b);
+ D = std::move(problem->D);
num_eliminate_blocks = problem->num_eliminate_blocks;
num_eliminate_cols = 0;
@@ -126,12 +126,8 @@
const double relative_tolerance) {
const CompressedRowBlockStructure* bs = A->block_structure();
const int num_col_blocks = bs->cols.size();
- std::vector<int> blocks(num_col_blocks - num_eliminate_blocks, 0);
- for (int i = num_eliminate_blocks; i < num_col_blocks; ++i) {
- blocks[i - num_eliminate_blocks] = bs->cols[i].size;
- }
-
- BlockRandomAccessDenseMatrix lhs(blocks);
+ auto blocks = Tail(bs->cols, num_col_blocks - num_eliminate_blocks);
+ BlockRandomAccessDenseMatrix lhs(blocks, &context_, 1);
const int num_cols = A->num_cols();
const int schur_size = lhs.num_rows();
@@ -139,8 +135,7 @@
Vector rhs(schur_size);
LinearSolver::Options options;
- ContextImpl context;
- options.context = &context;
+ options.context = &context_;
options.elimination_groups.push_back(num_eliminate_blocks);
if (use_static_structure) {
DetectStructure(*bs,
@@ -150,8 +145,8 @@
&options.f_block_size);
}
- std::unique_ptr<SchurEliminatorBase> eliminator;
- eliminator.reset(SchurEliminatorBase::Create(options));
+ std::unique_ptr<SchurEliminatorBase> eliminator =
+ SchurEliminatorBase::Create(options);
const bool kFullRankETE = true;
eliminator->Init(num_eliminate_blocks, kFullRankETE, A->block_structure());
eliminator->Eliminate(
@@ -182,6 +177,8 @@
relative_tolerance);
}
+ ContextImpl context_;
+
std::unique_ptr<BlockSparseMatrix> A;
std::unique_ptr<double[]> b;
std::unique_ptr<double[]> D;
@@ -228,7 +225,9 @@
constexpr int kFBlockSize = 6;
constexpr int num_e_blocks = 5;
- CompressedRowBlockStructure* bs = new CompressedRowBlockStructure;
+ ContextImpl context;
+
+ auto* bs = new CompressedRowBlockStructure;
bs->cols.resize(num_e_blocks + 1);
int col_pos = 0;
for (int i = 0; i < num_e_blocks; ++i) {
@@ -284,9 +283,11 @@
BlockSparseMatrix matrix(bs);
double* values = matrix.mutable_values();
- for (int i = 0; i < matrix.num_nonzeros(); ++i) {
- values[i] = RandNormal();
- }
+ std::mt19937 prng;
+ std::normal_distribution<> standard_normal;
+ std::generate_n(values, matrix.num_nonzeros(), [&prng, &standard_normal] {
+ return standard_normal(prng);
+ });
Vector b(matrix.num_rows());
b.setRandom();
@@ -294,9 +295,10 @@
Vector diagonal(matrix.num_cols());
diagonal.setOnes();
- std::vector<int> blocks(1, kFBlockSize);
- BlockRandomAccessDenseMatrix actual_lhs(blocks);
- BlockRandomAccessDenseMatrix expected_lhs(blocks);
+ std::vector<Block> blocks;
+ blocks.emplace_back(kFBlockSize, 0);
+ BlockRandomAccessDenseMatrix actual_lhs(blocks, &context, 1);
+ BlockRandomAccessDenseMatrix expected_lhs(blocks, &context, 1);
Vector actual_rhs(kFBlockSize);
Vector expected_rhs(kFBlockSize);
@@ -308,7 +310,6 @@
expected_e_sol.setZero();
{
- ContextImpl context;
LinearSolver::Options linear_solver_options;
linear_solver_options.e_block_size = kEBlockSize;
linear_solver_options.row_block_size = kRowBlockSize;
@@ -369,5 +370,4 @@
<< actual_e_sol;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/schur_jacobi_preconditioner.cc b/internal/ceres/schur_jacobi_preconditioner.cc
index 89d770b..fbe258d 100644
--- a/internal/ceres/schur_jacobi_preconditioner.cc
+++ b/internal/ceres/schur_jacobi_preconditioner.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,6 +30,7 @@
#include "ceres/schur_jacobi_preconditioner.h"
+#include <memory>
#include <utility>
#include <vector>
@@ -39,30 +40,32 @@
#include "ceres/schur_eliminator.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
SchurJacobiPreconditioner::SchurJacobiPreconditioner(
- const CompressedRowBlockStructure& bs,
- const Preconditioner::Options& options)
- : options_(options) {
+ const CompressedRowBlockStructure& bs, Preconditioner::Options options)
+ : options_(std::move(options)) {
CHECK_GT(options_.elimination_groups.size(), 1);
CHECK_GT(options_.elimination_groups[0], 0);
const int num_blocks = bs.cols.size() - options_.elimination_groups[0];
CHECK_GT(num_blocks, 0) << "Jacobian should have at least 1 f_block for "
<< "SCHUR_JACOBI preconditioner.";
- CHECK(options_.context != NULL);
+ CHECK(options_.context != nullptr);
- std::vector<int> blocks(num_blocks);
+ std::vector<Block> blocks(num_blocks);
+ int position = 0;
for (int i = 0; i < num_blocks; ++i) {
- blocks[i] = bs.cols[i + options_.elimination_groups[0]].size;
+ blocks[i] =
+ Block(bs.cols[i + options_.elimination_groups[0]].size, position);
+ position += blocks[i].size;
}
- m_.reset(new BlockRandomAccessDiagonalMatrix(blocks));
+ m_ = std::make_unique<BlockRandomAccessDiagonalMatrix>(
+ blocks, options_.context, options_.num_threads);
InitEliminator(bs);
}
-SchurJacobiPreconditioner::~SchurJacobiPreconditioner() {}
+SchurJacobiPreconditioner::~SchurJacobiPreconditioner() = default;
// Initialize the SchurEliminator.
void SchurJacobiPreconditioner::InitEliminator(
@@ -74,7 +77,7 @@
eliminator_options.f_block_size = options_.f_block_size;
eliminator_options.row_block_size = options_.row_block_size;
eliminator_options.context = options_.context;
- eliminator_.reset(SchurEliminatorBase::Create(eliminator_options));
+ eliminator_ = SchurEliminatorBase::Create(eliminator_options);
const bool kFullRankETE = true;
eliminator_->Init(
eliminator_options.elimination_groups[0], kFullRankETE, &bs);
@@ -93,12 +96,11 @@
return true;
}
-void SchurJacobiPreconditioner::RightMultiply(const double* x,
- double* y) const {
- m_->RightMultiply(x, y);
+void SchurJacobiPreconditioner::RightMultiplyAndAccumulate(const double* x,
+ double* y) const {
+ m_->RightMultiplyAndAccumulate(x, y);
}
int SchurJacobiPreconditioner::num_rows() const { return m_->num_rows(); }
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
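The preconditioner now describes its diagonal with Block objects carrying an explicit (size, position) pair instead of bare sizes. A minimal sketch of the running-offset construction used above (the Block stand-in mirrors the Block(size, position) constructor seen in this patch):

#include <vector>

struct Block {  // stand-in matching Block(size, position) as used above
  Block(int size, int position) : size(size), position(position) {}
  int size;
  int position;
};

std::vector<Block> MakeBlocks(const std::vector<int>& sizes) {
  std::vector<Block> blocks;
  blocks.reserve(sizes.size());
  int position = 0;
  for (int size : sizes) {
    blocks.emplace_back(size, position);  // each block records its offset
    position += size;                     // next block starts after this one
  }
  return blocks;
}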
diff --git a/internal/ceres/schur_jacobi_preconditioner.h b/internal/ceres/schur_jacobi_preconditioner.h
index 372b790..b540bc0 100644
--- a/internal/ceres/schur_jacobi_preconditioner.h
+++ b/internal/ceres/schur_jacobi_preconditioner.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -43,10 +43,11 @@
#include <utility>
#include <vector>
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/preconditioner.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class BlockRandomAccessDiagonalMatrix;
class BlockSparseMatrix;
@@ -69,10 +70,13 @@
// options.elimination_groups.push_back(num_cameras);
// SchurJacobiPreconditioner preconditioner(
// *A.block_structure(), options);
-// preconditioner.Update(A, NULL);
-// preconditioner.RightMultiply(x, y);
+// preconditioner.Update(A, nullptr);
+// preconditioner.RightMultiplyAndAccumulate(x, y);
//
-class SchurJacobiPreconditioner : public BlockSparseMatrixPreconditioner {
+// TODO(https://github.com/ceres-solver/ceres-solver/issues/935):
+// SchurJacobiPreconditioner::RightMultiply will benefit from multithreading
+class CERES_NO_EXPORT SchurJacobiPreconditioner
+ : public BlockSparseMatrixPreconditioner {
public:
// Initialize the symbolic structure of the preconditioner. bs is
// the block structure of the linear system to be solved. It is used
@@ -81,14 +85,14 @@
// It has the same structural requirement as other Schur complement
// based solvers. Please see schur_eliminator.h for more details.
SchurJacobiPreconditioner(const CompressedRowBlockStructure& bs,
- const Preconditioner::Options& options);
+ Preconditioner::Options options);
SchurJacobiPreconditioner(const SchurJacobiPreconditioner&) = delete;
void operator=(const SchurJacobiPreconditioner&) = delete;
- virtual ~SchurJacobiPreconditioner();
+ ~SchurJacobiPreconditioner() override;
// Preconditioner interface.
- void RightMultiply(const double* x, double* y) const final;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final;
int num_rows() const final;
private:
@@ -101,7 +105,8 @@
std::unique_ptr<BlockRandomAccessDiagonalMatrix> m_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_SCHUR_JACOBI_PRECONDITIONER_H_
diff --git a/internal/ceres/schur_templates.cc b/internal/ceres/schur_templates.cc
index bcf0d14..95df671 100644
--- a/internal/ceres/schur_templates.cc
+++ b/internal/ceres/schur_templates.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
diff --git a/internal/ceres/schur_templates.h b/internal/ceres/schur_templates.h
index 90aee0a..218fb51 100644
--- a/internal/ceres/schur_templates.h
+++ b/internal/ceres/schur_templates.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,15 +32,16 @@
#ifndef CERES_INTERNAL_SCHUR_TEMPLATES_H_
#define CERES_INTERNAL_SCHUR_TEMPLATES_H_
+#include "ceres/internal/config.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
+CERES_NO_EXPORT
void GetBestSchurTemplateSpecialization(int* row_block_size,
int* e_block_size,
int* f_block_size);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_SCHUR_TEMPLATES_H_
diff --git a/internal/ceres/scoped_thread_token.h b/internal/ceres/scoped_thread_token.h
index c167397..76da95b 100644
--- a/internal/ceres/scoped_thread_token.h
+++ b/internal/ceres/scoped_thread_token.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,31 +31,29 @@
#ifndef CERES_INTERNAL_SCOPED_THREAD_TOKEN_H_
#define CERES_INTERNAL_SCOPED_THREAD_TOKEN_H_
+#include "ceres/internal/export.h"
#include "ceres/thread_token_provider.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Helper class for ThreadTokenProvider. This object acquires a token in its
// constructor and puts that token back with destruction.
-class ScopedThreadToken {
+class CERES_NO_EXPORT ScopedThreadToken {
public:
- ScopedThreadToken(ThreadTokenProvider* provider)
+ explicit ScopedThreadToken(ThreadTokenProvider* provider)
: provider_(provider), token_(provider->Acquire()) {}
~ScopedThreadToken() { provider_->Release(token_); }
+ ScopedThreadToken(ScopedThreadToken&) = delete;
+ ScopedThreadToken& operator=(ScopedThreadToken&) = delete;
int token() const { return token_; }
private:
ThreadTokenProvider* provider_;
int token_;
-
- ScopedThreadToken(ScopedThreadToken&);
- ScopedThreadToken& operator=(ScopedThreadToken&);
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_SCOPED_THREAD_TOKEN_H_
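A brief usage sketch of the RAII token above (the worker function and the per-thread scratch it indexes are illustrative, not from the patch):

#include "ceres/scoped_thread_token.h"
#include "ceres/thread_token_provider.h"

void Worker(ceres::internal::ThreadTokenProvider* provider) {
  ceres::internal::ScopedThreadToken token(provider);  // Acquire() on entry
  const int slot = token.token();  // index into per-thread scratch space
  (void)slot;
  // ... do work ...
}  // Release() runs automatically when token goes out of scope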
diff --git a/internal/ceres/scratch_evaluate_preparer.cc b/internal/ceres/scratch_evaluate_preparer.cc
index 9905b22..86cad93 100644
--- a/internal/ceres/scratch_evaluate_preparer.cc
+++ b/internal/ceres/scratch_evaluate_preparer.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,26 +30,28 @@
#include "ceres/scratch_evaluate_preparer.h"
+#include <memory>
+
#include "ceres/parameter_block.h"
#include "ceres/program.h"
#include "ceres/residual_block.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-ScratchEvaluatePreparer* ScratchEvaluatePreparer::Create(const Program& program,
- int num_threads) {
- ScratchEvaluatePreparer* preparers = new ScratchEvaluatePreparer[num_threads];
+std::unique_ptr<ScratchEvaluatePreparer[]> ScratchEvaluatePreparer::Create(
+ const Program& program, unsigned num_threads) {
+ auto preparers = std::make_unique<ScratchEvaluatePreparer[]>(num_threads);
int max_derivatives_per_residual_block =
program.MaxDerivativesPerResidualBlock();
- for (int i = 0; i < num_threads; i++) {
+ for (unsigned i = 0; i < num_threads; i++) {
preparers[i].Init(max_derivatives_per_residual_block);
}
return preparers;
}
void ScratchEvaluatePreparer::Init(int max_derivatives_per_residual_block) {
- jacobian_scratch_.reset(new double[max_derivatives_per_residual_block]);
+ jacobian_scratch_ = std::make_unique<double[]>(
+ static_cast<std::size_t>(max_derivatives_per_residual_block));
}
// Point the jacobian blocks into the scratch area of this evaluate preparer.
@@ -64,13 +66,12 @@
const ParameterBlock* parameter_block =
residual_block->parameter_blocks()[j];
if (parameter_block->IsConstant()) {
- jacobians[j] = NULL;
+ jacobians[j] = nullptr;
} else {
jacobians[j] = jacobian_block_cursor;
- jacobian_block_cursor += num_residuals * parameter_block->LocalSize();
+ jacobian_block_cursor += num_residuals * parameter_block->TangentSize();
}
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/scratch_evaluate_preparer.h b/internal/ceres/scratch_evaluate_preparer.h
index 2d2745d..a7fd8a8 100644
--- a/internal/ceres/scratch_evaluate_preparer.h
+++ b/internal/ceres/scratch_evaluate_preparer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,18 +37,20 @@
#include <memory>
-namespace ceres {
-namespace internal {
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
+
+namespace ceres::internal {
class Program;
class ResidualBlock;
class SparseMatrix;
-class ScratchEvaluatePreparer {
+class CERES_NO_EXPORT ScratchEvaluatePreparer {
public:
// Create num_threads ScratchEvaluatePreparers.
- static ScratchEvaluatePreparer* Create(const Program& program,
- int num_threads);
+ static std::unique_ptr<ScratchEvaluatePreparer[]> Create(
+ const Program& program, unsigned num_threads);
// EvaluatePreparer interface
void Init(int max_derivatives_per_residual_block);
@@ -63,7 +65,8 @@
std::unique_ptr<double[]> jacobian_scratch_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_SCRATCH_EVALUATE_PREPARER_H_
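Create() and Init() above swap raw new[] for std::make_unique of array type, which value-initializes the buffer and makes ownership explicit. A minimal sketch of the idiom (names are illustrative):

#include <cstddef>
#include <memory>

std::unique_ptr<double[]> MakeScratch(std::size_t n) {
  auto scratch = std::make_unique<double[]>(n);  // value-initialized buffer
  return scratch;                                // caller owns; no delete[]
}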
diff --git a/internal/ceres/single_linkage_clustering.cc b/internal/ceres/single_linkage_clustering.cc
index 0e78131..06e76df 100644
--- a/internal/ceres/single_linkage_clustering.cc
+++ b/internal/ceres/single_linkage_clustering.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,8 +36,7 @@
#include "ceres/graph.h"
#include "ceres/graph_algorithms.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
int ComputeSingleLinkageClustering(
const SingleLinkageClusteringOptions& options,
@@ -91,5 +90,4 @@
return num_clusters;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/single_linkage_clustering.h b/internal/ceres/single_linkage_clustering.h
index e891a9e..3f49540 100644
--- a/internal/ceres/single_linkage_clustering.h
+++ b/internal/ceres/single_linkage_clustering.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,10 +34,10 @@
#include <unordered_map>
#include "ceres/graph.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
struct SingleLinkageClusteringOptions {
// Graph edges with edge weight less than min_similarity are ignored
@@ -55,12 +55,13 @@
//
// The return value of this function is the number of clusters
// identified by the algorithm.
-int CERES_EXPORT_INTERNAL
-ComputeSingleLinkageClustering(const SingleLinkageClusteringOptions& options,
- const WeightedGraph<int>& graph,
- std::unordered_map<int, int>* membership);
+CERES_NO_EXPORT int ComputeSingleLinkageClustering(
+ const SingleLinkageClusteringOptions& options,
+ const WeightedGraph<int>& graph,
+ std::unordered_map<int, int>* membership);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_SINGLE_LINKAGE_CLUSTERING_H_
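A usage sketch for the clustering entry point above, assuming WeightedGraph<int> exposes AddVertex() and AddEdge(v1, v2, weight) as exercised by the accompanying test:

#include <unordered_map>

#include "ceres/graph.h"
#include "ceres/single_linkage_clustering.h"

int ExampleClusterCount() {
  ceres::internal::WeightedGraph<int> graph;
  for (int i = 0; i < 4; ++i) graph.AddVertex(i);
  graph.AddEdge(0, 1, 1.0);  // at or above min_similarity: 0 and 1 merge
  graph.AddEdge(2, 3, 0.5);  // below min_similarity: edge is ignored
  ceres::internal::SingleLinkageClusteringOptions options;
  options.min_similarity = 0.9;
  std::unordered_map<int, int> membership;
  return ceres::internal::ComputeSingleLinkageClustering(
      options, graph, &membership);  // returns the number of clusters
}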
diff --git a/internal/ceres/single_linkage_clustering_test.cc b/internal/ceres/single_linkage_clustering_test.cc
index 28c7c41..cc16cb4 100644
--- a/internal/ceres/single_linkage_clustering_test.cc
+++ b/internal/ceres/single_linkage_clustering_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,8 +35,7 @@
#include "ceres/graph.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST(SingleLinkageClustering, GraphHasTwoComponents) {
WeightedGraph<int> graph;
@@ -122,5 +121,4 @@
EXPECT_EQ(membership[4], membership[5]);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/small_blas.h b/internal/ceres/small_blas.h
index 4ee9229..fb8d7fa 100644
--- a/internal/ceres/small_blas.h
+++ b/internal/ceres/small_blas.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,12 +36,11 @@
#define CERES_INTERNAL_SMALL_BLAS_H_
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "glog/logging.h"
#include "small_blas_generic.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// The following three macros are used to share code and reduce
// template junk across the various GEMM variants.
@@ -210,7 +209,7 @@
// Process the couple columns in remainder if present.
if (NUM_COL_C & 2) {
- int col = NUM_COL_C & (int)(~(span - 1));
+ int col = NUM_COL_C & (~(span - 1));
const double* pa = &A[0];
for (int row = 0; row < NUM_ROW_C; ++row, pa += NUM_COL_A) {
const double* pb = &B[col];
@@ -232,7 +231,7 @@
}
// Calculate the main part with multiples of 4.
- int col_m = NUM_COL_C & (int)(~(span - 1));
+ int col_m = NUM_COL_C & (~(span - 1));
for (int col = 0; col < col_m; col += span) {
for (int row = 0; row < NUM_ROW_C; ++row) {
const int index = (row + start_row_c) * col_stride_c + start_col_c + col;
@@ -315,7 +314,7 @@
// Process the couple columns in remainder if present.
if (NUM_COL_C & 2) {
- int col = NUM_COL_C & (int)(~(span - 1));
+ int col = NUM_COL_C & (~(span - 1));
for (int row = 0; row < NUM_ROW_C; ++row) {
const double* pa = &A[row];
const double* pb = &B[col];
@@ -339,7 +338,7 @@
}
// Process the main part with multiples of 4.
- int col_m = NUM_COL_C & (int)(~(span - 1));
+ int col_m = NUM_COL_C & (~(span - 1));
for (int col = 0; col < col_m; col += span) {
for (int row = 0; row < NUM_ROW_C; ++row) {
const int index = (row + start_row_c) * col_stride_c + start_col_c + col;
@@ -435,7 +434,7 @@
// Process the couple rows in remainder if present.
if (NUM_ROW_A & 2) {
- int row = NUM_ROW_A & (int)(~(span - 1));
+ int row = NUM_ROW_A & (~(span - 1));
const double* pa1 = &A[row * NUM_COL_A];
const double* pa2 = pa1 + NUM_COL_A;
const double* pb = &b[0];
@@ -454,7 +453,7 @@
}
// Calculate the main part with multiples of 4.
- int row_m = NUM_ROW_A & (int)(~(span - 1));
+ int row_m = NUM_ROW_A & (~(span - 1));
for (int row = 0; row < row_m; row += span) {
// clang-format off
MVM_mat4x1(NUM_COL_A, &A[row * NUM_COL_A], NUM_COL_A,
@@ -522,7 +521,7 @@
// Process the couple columns in remainder if present.
if (NUM_COL_A & 2) {
- int row = NUM_COL_A & (int)(~(span - 1));
+ int row = NUM_COL_A & (~(span - 1));
const double* pa = &A[row];
const double* pb = &b[0];
double tmp1 = 0.0, tmp2 = 0.0;
@@ -543,7 +542,7 @@
}
// Calculate the main part with multiples of 4.
- int row_m = NUM_COL_A & (int)(~(span - 1));
+ int row_m = NUM_COL_A & (~(span - 1));
for (int row = 0; row < row_m; row += span) {
// clang-format off
MTV_mat4x1(NUM_ROW_A, &A[row], NUM_COL_A,
@@ -561,7 +560,6 @@
#undef CERES_GEMM_STORE_SINGLE
#undef CERES_GEMM_STORE_PAIR
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_SMALL_BLAS_H_
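The kernels above repeatedly split a column (or row) count into a multiple-of-span main part plus a short tail using the expression n & ~(span - 1). As a sketch, assuming only that span is a power of two (4 here, as in small_blas.h), the expression rounds n down to a multiple of span, so the (int) cast dropped in this change was a no-op:

#include <cstdio>

int main() {
  constexpr int span = 4;  // Same unrolling width as the kernels above.
  const int samples[] = {5, 8, 11, 15};
  for (int n : samples) {
    const int main_part = n & ~(span - 1);  // Largest multiple of span <= n.
    const int remainder = n & (span - 1);   // Equivalent to n % span.
    std::printf("n=%2d  main=%2d  remainder=%d\n", n, main_part, remainder);
  }
  return 0;
}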
diff --git a/internal/ceres/small_blas_gemm_benchmark.cc b/internal/ceres/small_blas_gemm_benchmark.cc
index aa6c41d..ea0ecdf 100644
--- a/internal/ceres/small_blas_gemm_benchmark.cc
+++ b/internal/ceres/small_blas_gemm_benchmark.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,8 +34,7 @@
#include "benchmark/benchmark.h"
#include "ceres/small_blas.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Benchmarking matrix-matrix multiply routines and optimizing memory
// access requires that we make sure that they are not just sitting in
@@ -50,7 +49,7 @@
MatrixMatrixMultiplyData(
int a_rows, int a_cols, int b_rows, int b_cols, int c_rows, int c_cols)
: num_elements_(1000),
- a_size_(a_rows * a_cols),
+ a_size_(num_elements_ * a_rows * a_cols),
b_size_(b_rows * b_cols),
c_size_(c_rows * c_cols),
a_(num_elements_ * a_size_, 1.00001),
@@ -72,99 +71,147 @@
std::vector<double> c_;
};
-static void MatrixMatrixMultiplySizeArguments(
- benchmark::internal::Benchmark* benchmark) {
- const std::vector<int> b_rows = {1, 2, 3, 4, 6, 8};
- const std::vector<int> b_cols = {1, 2, 3, 4, 8, 12, 15};
- const std::vector<int> c_cols = b_cols;
- for (int i : b_rows) {
- for (int j : b_cols) {
- for (int k : c_cols) {
- benchmark->Args({i, j, k});
- }
- }
- }
-}
+#define GEMM_KIND_EQ 0
+#define GEMM_KIND_ADD 1
+#define GEMM_KIND_SUB -1
-void BM_MatrixMatrixMultiplyDynamic(benchmark::State& state) {
- const int i = state.range(0);
- const int j = state.range(1);
- const int k = state.range(2);
+#define BENCHMARK_MM_FN(FN, M, N, K, NAME, MT, NT, KT) \
+ void static BM_##FN##_##NAME##_##M##x##N##x##K(benchmark::State& state) { \
+ const int b_rows = M; \
+ const int b_cols = N; \
+ const int c_rows = b_cols; \
+ const int c_cols = K; \
+ const int a_rows = b_rows; \
+ const int a_cols = c_cols; \
+ MatrixMatrixMultiplyData data( \
+ a_rows, a_cols, b_rows, b_cols, c_rows, c_cols); \
+ const int num_elements = data.num_elements(); \
+ int iter = 0; \
+ for (auto _ : state) { \
+ FN<MT, KT, KT, NT, GEMM_KIND_ADD>(data.GetB(iter), \
+ b_rows, \
+ b_cols, \
+ data.GetC(iter), \
+ c_rows, \
+ c_cols, \
+ data.GetA(iter), \
+ 512, \
+ 512, \
+ a_rows, \
+ a_cols); \
+ iter = (iter + 1) % num_elements; \
+ } \
+ } \
+ BENCHMARK(BM_##FN##_##NAME##_##M##x##N##x##K);
- const int b_rows = i;
- const int b_cols = j;
- const int c_rows = b_cols;
- const int c_cols = k;
- const int a_rows = b_rows;
- const int a_cols = c_cols;
+#define BENCHMARK_STATIC_MM_FN(FN, M, N, K) \
+ BENCHMARK_MM_FN(FN, M, N, K, Static, M, N, K)
+#define BENCHMARK_DYNAMIC_MM_FN(FN, M, N, K) \
+ BENCHMARK_MM_FN( \
+ FN, M, N, K, Dynamic, Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic)
- MatrixMatrixMultiplyData data(a_rows, a_cols, b_rows, b_cols, c_rows, c_cols);
- const int num_elements = data.num_elements();
+#define BENCHMARK_MTM_FN(FN, M, N, K, NAME, MT, NT, KT) \
+ void static BM_##FN##_##NAME##_##M##x##N##x##K(benchmark::State& state) { \
+ const int b_rows = M; \
+ const int b_cols = N; \
+ const int c_rows = b_rows; \
+ const int c_cols = K; \
+ const int a_rows = b_cols; \
+ const int a_cols = c_cols; \
+ MatrixMatrixMultiplyData data( \
+ a_rows, a_cols, b_rows, b_cols, c_rows, c_cols); \
+ const int num_elements = data.num_elements(); \
+ int iter = 0; \
+ for (auto _ : state) { \
+ FN<KT, MT, KT, NT, GEMM_KIND_ADD>(data.GetB(iter), \
+ b_rows, \
+ b_cols, \
+ data.GetC(iter), \
+ c_rows, \
+ c_cols, \
+ data.GetA(iter), \
+ 0, \
+ 0, \
+ a_rows, \
+ a_cols); \
+ iter = (iter + 1) % num_elements; \
+ } \
+ } \
+ BENCHMARK(BM_##FN##_##NAME##_##M##x##N##x##K);
- int iter = 0;
- for (auto _ : state) {
- // a += b * c
- // clang-format off
- MatrixMatrixMultiply
- <Eigen::Dynamic, Eigen::Dynamic,Eigen::Dynamic,Eigen::Dynamic, 1>
- (data.GetB(iter), b_rows, b_cols,
- data.GetC(iter), c_rows, c_cols,
- data.GetA(iter), 0, 0, a_rows, a_cols);
- // clang-format on
- iter = (iter + 1) % num_elements;
- }
-}
+#define BENCHMARK_STATIC_MMT_FN(FN, M, N, K) \
+ BENCHMARK_MTM_FN(FN, M, N, K, Static, M, N, K)
+#define BENCHMARK_DYNAMIC_MMT_FN(FN, M, N, K) \
+ BENCHMARK_MTM_FN( \
+ FN, M, N, K, Dynamic, Eigen::Dynamic, Eigen::Dynamic, Eigen::Dynamic)
-BENCHMARK(BM_MatrixMatrixMultiplyDynamic)
- ->Apply(MatrixMatrixMultiplySizeArguments);
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyEigen, 2, 3, 4)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyEigen, 3, 3, 3)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyEigen, 4, 4, 4)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyEigen, 8, 8, 8)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyEigen, 9, 9, 3)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyEigen, 9, 3, 3)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyEigen, 3, 9, 9)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyEigen, 2, 3, 4)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyEigen, 3, 3, 3)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyEigen, 4, 4, 4)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyEigen, 8, 8, 8)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyEigen, 9, 9, 3)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyEigen, 9, 3, 3)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyEigen, 3, 9, 9)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyNaive, 2, 3, 4)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyNaive, 3, 3, 3)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyNaive, 4, 4, 4)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyNaive, 8, 8, 8)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyNaive, 9, 9, 3)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyNaive, 9, 3, 3)
+BENCHMARK_STATIC_MM_FN(MatrixMatrixMultiplyNaive, 3, 9, 9)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyNaive, 2, 3, 4)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyNaive, 3, 3, 3)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyNaive, 4, 4, 4)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyNaive, 8, 8, 8)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyNaive, 9, 9, 3)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyNaive, 9, 3, 3)
+BENCHMARK_DYNAMIC_MM_FN(MatrixMatrixMultiplyNaive, 3, 9, 9)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 2, 3, 4)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 3, 3, 3)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 4, 4, 4)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 8, 8, 8)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 9, 9, 3)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 9, 3, 3)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 3, 9, 9)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 2, 3, 4)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 3, 3, 3)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 4, 4, 4)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 8, 8, 8)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 9, 9, 3)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 9, 3, 3)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyEigen, 3, 9, 9)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 2, 3, 4)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 3, 3, 3)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 4, 4, 4)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 8, 8, 8)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 9, 9, 3)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 9, 3, 3)
+BENCHMARK_STATIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 3, 9, 9)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 2, 3, 4)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 3, 3, 3)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 4, 4, 4)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 8, 8, 8)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 9, 9, 3)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 9, 3, 3)
+BENCHMARK_DYNAMIC_MMT_FN(MatrixTransposeMatrixMultiplyNaive, 3, 9, 9)
-static void MatrixTransposeMatrixMultiplySizeArguments(
- benchmark::internal::Benchmark* benchmark) {
- std::vector<int> b_rows = {1, 2, 3, 4, 6, 8};
- std::vector<int> b_cols = {1, 2, 3, 4, 8, 12, 15};
- std::vector<int> c_cols = b_rows;
- for (int i : b_rows) {
- for (int j : b_cols) {
- for (int k : c_cols) {
- benchmark->Args({i, j, k});
- }
- }
- }
-}
+#undef GEMM_KIND_EQ
+#undef GEMM_KIND_ADD
+#undef GEMM_KIND_SUB
+#undef BENCHMARK_MM_FN
+#undef BENCHMARK_STATIC_MM_FN
+#undef BENCHMARK_DYNAMIC_MM_FN
+#undef BENCHMARK_MTM_FN
+#undef BENCHMARK_DYNAMIC_MMT_FN
+#undef BENCHMARK_STATIC_MMT_FN
-void BM_MatrixTransposeMatrixMultiplyDynamic(benchmark::State& state) {
- const int i = state.range(0);
- const int j = state.range(1);
- const int k = state.range(2);
-
- const int b_rows = i;
- const int b_cols = j;
- const int c_rows = b_rows;
- const int c_cols = k;
- const int a_rows = b_cols;
- const int a_cols = c_cols;
-
- MatrixMatrixMultiplyData data(a_rows, a_cols, b_rows, b_cols, c_rows, c_cols);
- const int num_elements = data.num_elements();
-
- int iter = 0;
- for (auto _ : state) {
- // a += b' * c
- // clang-format off
- MatrixTransposeMatrixMultiply
- <Eigen::Dynamic,Eigen::Dynamic,Eigen::Dynamic,Eigen::Dynamic, 1>
- (data.GetB(iter), b_rows, b_cols,
- data.GetC(iter), c_rows, c_cols,
- data.GetA(iter), 0, 0, a_rows, a_cols);
- // clang-format on
- iter = (iter + 1) % num_elements;
- }
-}
-
-BENCHMARK(BM_MatrixTransposeMatrixMultiplyDynamic)
- ->Apply(MatrixTransposeMatrixMultiplySizeArguments);
-
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
BENCHMARK_MAIN();
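For context, here is a minimal sketch of the fixed-size registration pattern that the BENCHMARK_*_MM_FN macros above expand to, assuming only the Google Benchmark library. AddKernel is a hypothetical stand-in for the Ceres GEMM routines; the point is the token-pasted benchmark name and the per-iteration loop over pre-allocated data.

#include <vector>

#include "benchmark/benchmark.h"

// Trivial placeholder kernel with compile-time dimensions.
template <int kRows, int kCols>
static void AddKernel(const double* a, double* b) {
  for (int i = 0; i < kRows * kCols; ++i) b[i] += a[i];
}

// Stamps out one benchmark per (M, N) pair, named BM_AddKernel_MxN.
#define BENCHMARK_ADD_KERNEL(M, N)                              \
  static void BM_AddKernel_##M##x##N(benchmark::State& state) { \
    std::vector<double> a(M * N, 1.0), b(M * N, 0.0);           \
    for (auto _ : state) {                                      \
      AddKernel<M, N>(a.data(), b.data());                      \
      benchmark::DoNotOptimize(b.data());                       \
    }                                                           \
  }                                                             \
  BENCHMARK(BM_AddKernel_##M##x##N);

BENCHMARK_ADD_KERNEL(4, 4)
BENCHMARK_ADD_KERNEL(8, 8)

#undef BENCHMARK_ADD_KERNEL

BENCHMARK_MAIN();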
diff --git a/internal/ceres/small_blas_gemv_benchmark.cc b/internal/ceres/small_blas_gemv_benchmark.cc
index 4b587bf..6bf584d 100644
--- a/internal/ceres/small_blas_gemv_benchmark.cc
+++ b/internal/ceres/small_blas_gemv_benchmark.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -78,7 +78,7 @@
}
}
-void BM_MatrixVectorMultiply(benchmark::State& state) {
+static void BM_MatrixVectorMultiply(benchmark::State& state) {
const int rows = state.range(0);
const int cols = state.range(1);
MatrixVectorMultiplyData data(rows, cols);
@@ -94,7 +94,7 @@
BENCHMARK(BM_MatrixVectorMultiply)->Apply(MatrixSizeArguments);
-void BM_MatrixTransposeVectorMultiply(benchmark::State& state) {
+static void BM_MatrixTransposeVectorMultiply(benchmark::State& state) {
const int rows = state.range(0);
const int cols = state.range(1);
MatrixVectorMultiplyData data(cols, rows);
diff --git a/internal/ceres/small_blas_generic.h b/internal/ceres/small_blas_generic.h
index 3f3ea42..93ee338 100644
--- a/internal/ceres/small_blas_generic.h
+++ b/internal/ceres/small_blas_generic.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,38 +35,35 @@
#ifndef CERES_INTERNAL_SMALL_BLAS_GENERIC_H_
#define CERES_INTERNAL_SMALL_BLAS_GENERIC_H_
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// The following macros are used to share code
-#define CERES_GEMM_OPT_NAIVE_HEADER \
- double c0 = 0.0; \
- double c1 = 0.0; \
- double c2 = 0.0; \
- double c3 = 0.0; \
- const double* pa = a; \
- const double* pb = b; \
- const int span = 4; \
- int col_r = col_a & (span - 1); \
+#define CERES_GEMM_OPT_NAIVE_HEADER \
+ double cvec4[4] = {0.0, 0.0, 0.0, 0.0}; \
+ const double* pa = a; \
+ const double* pb = b; \
+ const int span = 4; \
+ int col_r = col_a & (span - 1); \
int col_m = col_a - col_r;
#define CERES_GEMM_OPT_STORE_MAT1X4 \
if (kOperation > 0) { \
- *c++ += c0; \
- *c++ += c1; \
- *c++ += c2; \
- *c++ += c3; \
+ c[0] += cvec4[0]; \
+ c[1] += cvec4[1]; \
+ c[2] += cvec4[2]; \
+ c[3] += cvec4[3]; \
} else if (kOperation < 0) { \
- *c++ -= c0; \
- *c++ -= c1; \
- *c++ -= c2; \
- *c++ -= c3; \
+ c[0] -= cvec4[0]; \
+ c[1] -= cvec4[1]; \
+ c[2] -= cvec4[2]; \
+ c[3] -= cvec4[3]; \
} else { \
- *c++ = c0; \
- *c++ = c1; \
- *c++ = c2; \
- *c++ = c3; \
- }
+ c[0] = cvec4[0]; \
+ c[1] = cvec4[1]; \
+ c[2] = cvec4[2]; \
+ c[3] = cvec4[3]; \
+ } \
+ c += 4;
// Matrix-Matrix Multiplication
// Figure out 1x4 of Matrix C in one batch
@@ -100,10 +97,11 @@
#define CERES_GEMM_OPT_MMM_MAT1X4_MUL \
av = pa[k]; \
pb = b + bi; \
- c0 += av * *pb++; \
- c1 += av * *pb++; \
- c2 += av * *pb++; \
- c3 += av * *pb++; \
+ cvec4[0] += av * pb[0]; \
+ cvec4[1] += av * pb[1]; \
+ cvec4[2] += av * pb[2]; \
+ cvec4[3] += av * pb[3]; \
+ pb += 4; \
bi += col_stride_b; \
k++;
@@ -167,10 +165,11 @@
#define CERES_GEMM_OPT_MTM_MAT1X4_MUL \
av = pa[ai]; \
pb = b + bi; \
- c0 += av * *pb++; \
- c1 += av * *pb++; \
- c2 += av * *pb++; \
- c3 += av * *pb++; \
+ cvec4[0] += av * pb[0]; \
+ cvec4[1] += av * pb[1]; \
+ cvec4[2] += av * pb[2]; \
+ cvec4[3] += av * pb[3]; \
+ pb += 4; \
ai += col_stride_a; \
bi += col_stride_b;
@@ -219,13 +218,13 @@
double bv = 0.0;
// clang-format off
-#define CERES_GEMM_OPT_MVM_MAT4X1_MUL \
- bv = *pb; \
- c0 += *(pa ) * bv; \
- c1 += *(pa + col_stride_a ) * bv; \
- c2 += *(pa + col_stride_a * 2) * bv; \
- c3 += *(pa + col_stride_a * 3) * bv; \
- pa++; \
+#define CERES_GEMM_OPT_MVM_MAT4X1_MUL \
+ bv = *pb; \
+ cvec4[0] += *(pa ) * bv; \
+ cvec4[1] += *(pa + col_stride_a ) * bv; \
+ cvec4[2] += *(pa + col_stride_a * 2) * bv; \
+ cvec4[3] += *(pa + col_stride_a * 3) * bv; \
+ pa++; \
pb++;
// clang-format on
@@ -283,16 +282,14 @@
CERES_GEMM_OPT_NAIVE_HEADER
double bv = 0.0;
- // clang-format off
#define CERES_GEMM_OPT_MTV_MAT4X1_MUL \
bv = *pb; \
- c0 += *(pa ) * bv; \
- c1 += *(pa + 1) * bv; \
- c2 += *(pa + 2) * bv; \
- c3 += *(pa + 3) * bv; \
+ cvec4[0] += pa[0] * bv; \
+ cvec4[1] += pa[1] * bv; \
+ cvec4[2] += pa[2] * bv; \
+ cvec4[3] += pa[3] * bv; \
pa += col_stride_a; \
pb++;
- // clang-format on
for (int k = 0; k < col_m; k += span) {
CERES_GEMM_OPT_MTV_MAT4X1_MUL
@@ -313,7 +310,6 @@
#undef CERES_GEMM_OPT_NAIVE_HEADER
#undef CERES_GEMM_OPT_STORE_MAT1X4
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_SMALL_BLAS_GENERIC_H_
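The macro rewrite above replaces the four scalar accumulators c0..c3 with a cvec4[4] array and indexed loads in place of repeated pointer post-increments. A plain-function sketch of the same 1x4 accumulation (illustrative only, not Ceres code; it covers the kOperation > 0 case of CERES_GEMM_OPT_STORE_MAT1X4):

#include <cstdio>

// Accumulates c[0..3] += sum_k a[k] * B(k, 0..3) for a row-major B with
// col_stride_b columns, mirroring CERES_GEMM_OPT_MMM_MAT1X4_MUL.
static void Mat1x4Accumulate(const double* a, int col_a,
                             const double* b, int col_stride_b,
                             double* c) {
  double cvec4[4] = {0.0, 0.0, 0.0, 0.0};
  for (int k = 0; k < col_a; ++k) {
    const double av = a[k];
    const double* pb = b + k * col_stride_b;
    cvec4[0] += av * pb[0];
    cvec4[1] += av * pb[1];
    cvec4[2] += av * pb[2];
    cvec4[3] += av * pb[3];
  }
  for (int i = 0; i < 4; ++i) c[i] += cvec4[i];
}

int main() {
  const double a[2] = {1.0, 2.0};
  const double b[8] = {1, 2, 3, 4,   // Row 0 of B.
                       5, 6, 7, 8};  // Row 1 of B.
  double c[4] = {0.0, 0.0, 0.0, 0.0};
  Mat1x4Accumulate(a, 2, b, 4, c);
  std::printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);  // 11 14 17 20
  return 0;
}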
diff --git a/internal/ceres/small_blas_test.cc b/internal/ceres/small_blas_test.cc
index 6f819c4..97922aa 100644
--- a/internal/ceres/small_blas_test.cc
+++ b/internal/ceres/small_blas_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,6 +31,7 @@
#include "ceres/small_blas.h"
#include <limits>
+#include <string>
#include "ceres/internal/eigen.h"
#include "gtest/gtest.h"
@@ -38,341 +39,473 @@
namespace ceres {
namespace internal {
-const double kTolerance = 3.0 * std::numeric_limits<double>::epsilon();
+const double kTolerance = 5.0 * std::numeric_limits<double>::epsilon();
-TEST(BLAS, MatrixMatrixMultiply) {
- const int kRowA = 3;
- const int kColA = 5;
- Matrix A(kRowA, kColA);
- A.setOnes();
+// Static or dynamic problem types.
+enum class DimType { Static, Dynamic };
- const int kRowB = 5;
- const int kColB = 7;
- Matrix B(kRowB, kColB);
- B.setOnes();
+// Constructs matrix functor type.
+#define MATRIX_FUN_TY(FN) \
+ template <int kRowA, \
+ int kColA, \
+ int kRowB, \
+ int kColB, \
+ int kOperation, \
+ DimType kDimType> \
+ struct FN##Ty { \
+ void operator()(const double* A, \
+ const int num_row_a, \
+ const int num_col_a, \
+ const double* B, \
+ const int num_row_b, \
+ const int num_col_b, \
+ double* C, \
+ const int start_row_c, \
+ const int start_col_c, \
+ const int row_stride_c, \
+ const int col_stride_c) { \
+ if (kDimType == DimType::Static) { \
+ FN<kRowA, kColA, kRowB, kColB, kOperation>(A, \
+ num_row_a, \
+ num_col_a, \
+ B, \
+ num_row_b, \
+ num_col_b, \
+ C, \
+ start_row_c, \
+ start_col_c, \
+ row_stride_c, \
+ col_stride_c); \
+ } else { \
+ FN<Eigen::Dynamic, \
+ Eigen::Dynamic, \
+ Eigen::Dynamic, \
+ Eigen::Dynamic, \
+ kOperation>(A, \
+ num_row_a, \
+ num_col_a, \
+ B, \
+ num_row_b, \
+ num_col_b, \
+ C, \
+ start_row_c, \
+ start_col_c, \
+ row_stride_c, \
+ col_stride_c); \
+ } \
+ } \
+ };
- for (int row_stride_c = kRowA; row_stride_c < 3 * kRowA; ++row_stride_c) {
- for (int col_stride_c = kColB; col_stride_c < 3 * kColB; ++col_stride_c) {
- Matrix C(row_stride_c, col_stride_c);
- C.setOnes();
+MATRIX_FUN_TY(MatrixMatrixMultiply)
+MATRIX_FUN_TY(MatrixMatrixMultiplyNaive)
+MATRIX_FUN_TY(MatrixTransposeMatrixMultiply)
+MATRIX_FUN_TY(MatrixTransposeMatrixMultiplyNaive)
- Matrix C_plus = C;
- Matrix C_minus = C;
- Matrix C_assign = C;
+#undef MATRIX_FUN_TY
- Matrix C_plus_ref = C;
- Matrix C_minus_ref = C;
- Matrix C_assign_ref = C;
- // clang-format off
- for (int start_row_c = 0; start_row_c + kRowA < row_stride_c; ++start_row_c) {
- for (int start_col_c = 0; start_col_c + kColB < col_stride_c; ++start_col_c) {
- C_plus_ref.block(start_row_c, start_col_c, kRowA, kColB) +=
- A * B;
-
- MatrixMatrixMultiply<kRowA, kColA, kRowB, kColB, 1>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_plus.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
-
- EXPECT_NEAR((C_plus_ref - C_plus).norm(), 0.0, kTolerance)
- << "C += A * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_plus_ref << "\n"
- << "C: \n" << C_plus;
-
- C_minus_ref.block(start_row_c, start_col_c, kRowA, kColB) -=
- A * B;
-
- MatrixMatrixMultiply<kRowA, kColA, kRowB, kColB, -1>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_minus.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
-
- EXPECT_NEAR((C_minus_ref - C_minus).norm(), 0.0, kTolerance)
- << "C -= A * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_minus_ref << "\n"
- << "C: \n" << C_minus;
-
- C_assign_ref.block(start_row_c, start_col_c, kRowA, kColB) =
- A * B;
-
- MatrixMatrixMultiply<kRowA, kColA, kRowB, kColB, 0>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_assign.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
-
- EXPECT_NEAR((C_assign_ref - C_assign).norm(), 0.0, kTolerance)
- << "C = A * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_assign_ref << "\n"
- << "C: \n" << C_assign;
- }
- }
- // clang-format on
+// Initializes matrix entries.
+static void initMatrix(Matrix& mat) {
+ for (int i = 0; i < mat.rows(); ++i) {
+ for (int j = 0; j < mat.cols(); ++j) {
+ mat(i, j) = i + j + 1;
}
}
}
-TEST(BLAS, MatrixTransposeMatrixMultiply) {
- const int kRowA = 5;
- const int kColA = 3;
- Matrix A(kRowA, kColA);
- A.setOnes();
+template <int kRowA,
+ int kColA,
+ int kColB,
+ DimType kDimType,
+ template <int, int, int, int, int, DimType>
+ class FunctorTy>
+struct TestMatrixFunctions {
+ void operator()() {
+ Matrix A(kRowA, kColA);
+ initMatrix(A);
+ const int kRowB = kColA;
+ Matrix B(kRowB, kColB);
+ initMatrix(B);
- const int kRowB = 5;
- const int kColB = 7;
- Matrix B(kRowB, kColB);
- B.setOnes();
+ for (int row_stride_c = kRowA; row_stride_c < 3 * kRowA; ++row_stride_c) {
+ for (int col_stride_c = kColB; col_stride_c < 3 * kColB; ++col_stride_c) {
+ Matrix C(row_stride_c, col_stride_c);
+ C.setOnes();
- for (int row_stride_c = kColA; row_stride_c < 3 * kColA; ++row_stride_c) {
- for (int col_stride_c = kColB; col_stride_c < 3 * kColB; ++col_stride_c) {
- Matrix C(row_stride_c, col_stride_c);
- C.setOnes();
+ Matrix C_plus = C;
+ Matrix C_minus = C;
+ Matrix C_assign = C;
- Matrix C_plus = C;
- Matrix C_minus = C;
- Matrix C_assign = C;
+ Matrix C_plus_ref = C;
+ Matrix C_minus_ref = C;
+ Matrix C_assign_ref = C;
- Matrix C_plus_ref = C;
- Matrix C_minus_ref = C;
- Matrix C_assign_ref = C;
- // clang-format off
- for (int start_row_c = 0; start_row_c + kColA < row_stride_c; ++start_row_c) {
- for (int start_col_c = 0; start_col_c + kColB < col_stride_c; ++start_col_c) {
- C_plus_ref.block(start_row_c, start_col_c, kColA, kColB) +=
- A.transpose() * B;
+ for (int start_row_c = 0; start_row_c + kRowA < row_stride_c;
+ ++start_row_c) {
+ for (int start_col_c = 0; start_col_c + kColB < col_stride_c;
+ ++start_col_c) {
+ C_plus_ref.block(start_row_c, start_col_c, kRowA, kColB) += A * B;
+ FunctorTy<kRowA, kColA, kRowB, kColB, 1, kDimType>()(A.data(),
+ kRowA,
+ kColA,
+ B.data(),
+ kRowB,
+ kColB,
+ C_plus.data(),
+ start_row_c,
+ start_col_c,
+ row_stride_c,
+ col_stride_c);
- MatrixTransposeMatrixMultiply<kRowA, kColA, kRowB, kColB, 1>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_plus.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
+ EXPECT_NEAR((C_plus_ref - C_plus).norm(), 0.0, kTolerance)
+ << "C += A * B \n"
+ << "row_stride_c : " << row_stride_c << "\n"
+ << "col_stride_c : " << col_stride_c << "\n"
+ << "start_row_c : " << start_row_c << "\n"
+ << "start_col_c : " << start_col_c << "\n"
+ << "Cref : \n"
+ << C_plus_ref << "\n"
+ << "C: \n"
+ << C_plus;
- EXPECT_NEAR((C_plus_ref - C_plus).norm(), 0.0, kTolerance)
- << "C += A' * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_plus_ref << "\n"
- << "C: \n" << C_plus;
+ C_minus_ref.block(start_row_c, start_col_c, kRowA, kColB) -= A * B;
+ FunctorTy<kRowA, kColA, kRowB, kColB, -1, kDimType>()(
+ A.data(),
+ kRowA,
+ kColA,
+ B.data(),
+ kRowB,
+ kColB,
+ C_minus.data(),
+ start_row_c,
+ start_col_c,
+ row_stride_c,
+ col_stride_c);
- C_minus_ref.block(start_row_c, start_col_c, kColA, kColB) -=
- A.transpose() * B;
+ EXPECT_NEAR((C_minus_ref - C_minus).norm(), 0.0, kTolerance)
+ << "C -= A * B \n"
+ << "row_stride_c : " << row_stride_c << "\n"
+ << "col_stride_c : " << col_stride_c << "\n"
+ << "start_row_c : " << start_row_c << "\n"
+ << "start_col_c : " << start_col_c << "\n"
+ << "Cref : \n"
+ << C_minus_ref << "\n"
+ << "C: \n"
+ << C_minus;
- MatrixTransposeMatrixMultiply<kRowA, kColA, kRowB, kColB, -1>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_minus.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
+ C_assign_ref.block(start_row_c, start_col_c, kRowA, kColB) = A * B;
- EXPECT_NEAR((C_minus_ref - C_minus).norm(), 0.0, kTolerance)
- << "C -= A' * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_minus_ref << "\n"
- << "C: \n" << C_minus;
+ FunctorTy<kRowA, kColA, kRowB, kColB, 0, kDimType>()(
+ A.data(),
+ kRowA,
+ kColA,
+ B.data(),
+ kRowB,
+ kColB,
+ C_assign.data(),
+ start_row_c,
+ start_col_c,
+ row_stride_c,
+ col_stride_c);
- C_assign_ref.block(start_row_c, start_col_c, kColA, kColB) =
- A.transpose() * B;
-
- MatrixTransposeMatrixMultiply<kRowA, kColA, kRowB, kColB, 0>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_assign.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
-
- EXPECT_NEAR((C_assign_ref - C_assign).norm(), 0.0, kTolerance)
- << "C = A' * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_assign_ref << "\n"
- << "C: \n" << C_assign;
+ EXPECT_NEAR((C_assign_ref - C_assign).norm(), 0.0, kTolerance)
+ << "C = A * B \n"
+ << "row_stride_c : " << row_stride_c << "\n"
+ << "col_stride_c : " << col_stride_c << "\n"
+ << "start_row_c : " << start_row_c << "\n"
+ << "start_col_c : " << start_col_c << "\n"
+ << "Cref : \n"
+ << C_assign_ref << "\n"
+ << "C: \n"
+ << C_assign;
+ }
}
}
- // clang-format on
}
}
+};
+
+template <int kRowA,
+ int kColA,
+ int kColB,
+ DimType kDimType,
+ template <int, int, int, int, int, DimType>
+ class FunctorTy>
+struct TestMatrixTransposeFunctions {
+ void operator()() {
+ Matrix A(kRowA, kColA);
+ initMatrix(A);
+ const int kRowB = kRowA;
+ Matrix B(kRowB, kColB);
+ initMatrix(B);
+
+ for (int row_stride_c = kColA; row_stride_c < 3 * kColA; ++row_stride_c) {
+ for (int col_stride_c = kColB; col_stride_c < 3 * kColB; ++col_stride_c) {
+ Matrix C(row_stride_c, col_stride_c);
+ C.setOnes();
+
+ Matrix C_plus = C;
+ Matrix C_minus = C;
+ Matrix C_assign = C;
+
+ Matrix C_plus_ref = C;
+ Matrix C_minus_ref = C;
+ Matrix C_assign_ref = C;
+ for (int start_row_c = 0; start_row_c + kColA < row_stride_c;
+ ++start_row_c) {
+ for (int start_col_c = 0; start_col_c + kColB < col_stride_c;
+ ++start_col_c) {
+ C_plus_ref.block(start_row_c, start_col_c, kColA, kColB) +=
+ A.transpose() * B;
+
+ FunctorTy<kRowA, kColA, kRowB, kColB, 1, kDimType>()(A.data(),
+ kRowA,
+ kColA,
+ B.data(),
+ kRowB,
+ kColB,
+ C_plus.data(),
+ start_row_c,
+ start_col_c,
+ row_stride_c,
+ col_stride_c);
+
+ EXPECT_NEAR((C_plus_ref - C_plus).norm(), 0.0, kTolerance)
+ << "C += A' * B \n"
+ << "row_stride_c : " << row_stride_c << "\n"
+ << "col_stride_c : " << col_stride_c << "\n"
+ << "start_row_c : " << start_row_c << "\n"
+ << "start_col_c : " << start_col_c << "\n"
+ << "Cref : \n"
+ << C_plus_ref << "\n"
+ << "C: \n"
+ << C_plus;
+
+ C_minus_ref.block(start_row_c, start_col_c, kColA, kColB) -=
+ A.transpose() * B;
+
+ FunctorTy<kRowA, kColA, kRowB, kColB, -1, kDimType>()(
+ A.data(),
+ kRowA,
+ kColA,
+ B.data(),
+ kRowB,
+ kColB,
+ C_minus.data(),
+ start_row_c,
+ start_col_c,
+ row_stride_c,
+ col_stride_c);
+
+ EXPECT_NEAR((C_minus_ref - C_minus).norm(), 0.0, kTolerance)
+ << "C -= A' * B \n"
+ << "row_stride_c : " << row_stride_c << "\n"
+ << "col_stride_c : " << col_stride_c << "\n"
+ << "start_row_c : " << start_row_c << "\n"
+ << "start_col_c : " << start_col_c << "\n"
+ << "Cref : \n"
+ << C_minus_ref << "\n"
+ << "C: \n"
+ << C_minus;
+
+ C_assign_ref.block(start_row_c, start_col_c, kColA, kColB) =
+ A.transpose() * B;
+
+ FunctorTy<kRowA, kColA, kRowB, kColB, 0, kDimType>()(
+ A.data(),
+ kRowA,
+ kColA,
+ B.data(),
+ kRowB,
+ kColB,
+ C_assign.data(),
+ start_row_c,
+ start_col_c,
+ row_stride_c,
+ col_stride_c);
+
+ EXPECT_NEAR((C_assign_ref - C_assign).norm(), 0.0, kTolerance)
+ << "C = A' * B \n"
+ << "row_stride_c : " << row_stride_c << "\n"
+ << "col_stride_c : " << col_stride_c << "\n"
+ << "start_row_c : " << start_row_c << "\n"
+ << "start_col_c : " << start_col_c << "\n"
+ << "Cref : \n"
+ << C_assign_ref << "\n"
+ << "C: \n"
+ << C_assign;
+ }
+ }
+ }
+ }
+ }
+};
+
+TEST(BLAS, MatrixMatrixMultiply_5_3_7) {
+ TestMatrixFunctions<5, 3, 7, DimType::Static, MatrixMatrixMultiplyTy>()();
}
-// TODO(sameeragarwal): Dedup and reduce the amount of duplication of
-// test code in this file.
-
-TEST(BLAS, MatrixMatrixMultiplyNaive) {
- const int kRowA = 3;
- const int kColA = 5;
- Matrix A(kRowA, kColA);
- A.setOnes();
-
- const int kRowB = 5;
- const int kColB = 7;
- Matrix B(kRowB, kColB);
- B.setOnes();
-
- for (int row_stride_c = kRowA; row_stride_c < 3 * kRowA; ++row_stride_c) {
- for (int col_stride_c = kColB; col_stride_c < 3 * kColB; ++col_stride_c) {
- Matrix C(row_stride_c, col_stride_c);
- C.setOnes();
-
- Matrix C_plus = C;
- Matrix C_minus = C;
- Matrix C_assign = C;
-
- Matrix C_plus_ref = C;
- Matrix C_minus_ref = C;
- Matrix C_assign_ref = C;
- // clang-format off
- for (int start_row_c = 0; start_row_c + kRowA < row_stride_c; ++start_row_c) {
- for (int start_col_c = 0; start_col_c + kColB < col_stride_c; ++start_col_c) {
- C_plus_ref.block(start_row_c, start_col_c, kRowA, kColB) +=
- A * B;
-
- MatrixMatrixMultiplyNaive<kRowA, kColA, kRowB, kColB, 1>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_plus.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
-
- EXPECT_NEAR((C_plus_ref - C_plus).norm(), 0.0, kTolerance)
- << "C += A * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_plus_ref << "\n"
- << "C: \n" << C_plus;
-
- C_minus_ref.block(start_row_c, start_col_c, kRowA, kColB) -=
- A * B;
-
- MatrixMatrixMultiplyNaive<kRowA, kColA, kRowB, kColB, -1>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_minus.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
-
- EXPECT_NEAR((C_minus_ref - C_minus).norm(), 0.0, kTolerance)
- << "C -= A * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_minus_ref << "\n"
- << "C: \n" << C_minus;
-
- C_assign_ref.block(start_row_c, start_col_c, kRowA, kColB) =
- A * B;
-
- MatrixMatrixMultiplyNaive<kRowA, kColA, kRowB, kColB, 0>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_assign.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
-
- EXPECT_NEAR((C_assign_ref - C_assign).norm(), 0.0, kTolerance)
- << "C = A * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_assign_ref << "\n"
- << "C: \n" << C_assign;
- }
- }
- // clang-format on
- }
- }
+TEST(BLAS, MatrixMatrixMultiply_5_3_7_Dynamic) {
+ TestMatrixFunctions<5, 3, 7, DimType::Dynamic, MatrixMatrixMultiplyTy>()();
}
-TEST(BLAS, MatrixTransposeMatrixMultiplyNaive) {
- const int kRowA = 5;
- const int kColA = 3;
- Matrix A(kRowA, kColA);
- A.setOnes();
+TEST(BLAS, MatrixMatrixMultiply_1_1_1) {
+ TestMatrixFunctions<1, 1, 1, DimType::Static, MatrixMatrixMultiplyTy>()();
+}
- const int kRowB = 5;
- const int kColB = 7;
- Matrix B(kRowB, kColB);
- B.setOnes();
+TEST(BLAS, MatrixMatrixMultiply_1_1_1_Dynamic) {
+ TestMatrixFunctions<1, 1, 1, DimType::Dynamic, MatrixMatrixMultiplyTy>()();
+}
- for (int row_stride_c = kColA; row_stride_c < 3 * kColA; ++row_stride_c) {
- for (int col_stride_c = kColB; col_stride_c < 3 * kColB; ++col_stride_c) {
- Matrix C(row_stride_c, col_stride_c);
- C.setOnes();
+TEST(BLAS, MatrixMatrixMultiply_9_9_9) {
+ TestMatrixFunctions<9, 9, 9, DimType::Static, MatrixMatrixMultiplyTy>()();
+}
- Matrix C_plus = C;
- Matrix C_minus = C;
- Matrix C_assign = C;
+TEST(BLAS, MatrixMatrixMultiply_9_9_9_Dynamic) {
+ TestMatrixFunctions<9, 9, 9, DimType::Dynamic, MatrixMatrixMultiplyTy>()();
+}
- Matrix C_plus_ref = C;
- Matrix C_minus_ref = C;
- Matrix C_assign_ref = C;
- // clang-format off
- for (int start_row_c = 0; start_row_c + kColA < row_stride_c; ++start_row_c) {
- for (int start_col_c = 0; start_col_c + kColB < col_stride_c; ++start_col_c) {
- C_plus_ref.block(start_row_c, start_col_c, kColA, kColB) +=
- A.transpose() * B;
+TEST(BLAS, MatrixMatrixMultiplyNaive_5_3_7) {
+ TestMatrixFunctions<5,
+ 3,
+ 7,
+ DimType::Static,
+ MatrixMatrixMultiplyNaiveTy>()();
+}
- MatrixTransposeMatrixMultiplyNaive<kRowA, kColA, kRowB, kColB, 1>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_plus.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
+TEST(BLAS, MatrixMatrixMultiplyNaive_5_3_7_Dynamic) {
+ TestMatrixFunctions<5,
+ 3,
+ 7,
+ DimType::Dynamic,
+ MatrixMatrixMultiplyNaiveTy>()();
+}
- EXPECT_NEAR((C_plus_ref - C_plus).norm(), 0.0, kTolerance)
- << "C += A' * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_plus_ref << "\n"
- << "C: \n" << C_plus;
+TEST(BLAS, MatrixMatrixMultiplyNaive_1_1_1) {
+ TestMatrixFunctions<1,
+ 1,
+ 1,
+ DimType::Static,
+ MatrixMatrixMultiplyNaiveTy>()();
+}
- C_minus_ref.block(start_row_c, start_col_c, kColA, kColB) -=
- A.transpose() * B;
+TEST(BLAS, MatrixMatrixMultiplyNaive_1_1_1_Dynamic) {
+ TestMatrixFunctions<1,
+ 1,
+ 1,
+ DimType::Dynamic,
+ MatrixMatrixMultiplyNaiveTy>()();
+}
- MatrixTransposeMatrixMultiplyNaive<kRowA, kColA, kRowB, kColB, -1>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_minus.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
+TEST(BLAS, MatrixMatrixMultiplyNaive_9_9_9) {
+ TestMatrixFunctions<9,
+ 9,
+ 9,
+ DimType::Static,
+ MatrixMatrixMultiplyNaiveTy>()();
+}
- EXPECT_NEAR((C_minus_ref - C_minus).norm(), 0.0, kTolerance)
- << "C -= A' * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_minus_ref << "\n"
- << "C: \n" << C_minus;
+TEST(BLAS, MatrixMatrixMultiplyNaive_9_9_9_Dynamic) {
+ TestMatrixFunctions<9,
+ 9,
+ 9,
+ DimType::Dynamic,
+ MatrixMatrixMultiplyNaiveTy>()();
+}
- C_assign_ref.block(start_row_c, start_col_c, kColA, kColB) =
- A.transpose() * B;
+TEST(BLAS, MatrixTransposeMatrixMultiply_5_3_7) {
+ TestMatrixTransposeFunctions<5,
+ 3,
+ 7,
+ DimType::Static,
+ MatrixTransposeMatrixMultiplyTy>()();
+}
- MatrixTransposeMatrixMultiplyNaive<kRowA, kColA, kRowB, kColB, 0>(
- A.data(), kRowA, kColA,
- B.data(), kRowB, kColB,
- C_assign.data(), start_row_c, start_col_c, row_stride_c, col_stride_c);
+TEST(BLAS, MatrixTransposeMatrixMultiply_5_3_7_Dynamic) {
+ TestMatrixTransposeFunctions<5,
+ 3,
+ 7,
+ DimType::Dynamic,
+ MatrixTransposeMatrixMultiplyTy>()();
+}
- EXPECT_NEAR((C_assign_ref - C_assign).norm(), 0.0, kTolerance)
- << "C = A' * B \n"
- << "row_stride_c : " << row_stride_c << "\n"
- << "col_stride_c : " << col_stride_c << "\n"
- << "start_row_c : " << start_row_c << "\n"
- << "start_col_c : " << start_col_c << "\n"
- << "Cref : \n" << C_assign_ref << "\n"
- << "C: \n" << C_assign;
- }
- }
- // clang-format on
- }
- }
+TEST(BLAS, MatrixTransposeMatrixMultiply_1_1_1) {
+ TestMatrixTransposeFunctions<1,
+ 1,
+ 1,
+ DimType::Static,
+ MatrixTransposeMatrixMultiplyTy>()();
+}
+
+TEST(BLAS, MatrixTransposeMatrixMultiply_1_1_1_Dynamic) {
+ TestMatrixTransposeFunctions<1,
+ 1,
+ 1,
+ DimType::Dynamic,
+ MatrixTransposeMatrixMultiplyTy>()();
+}
+
+TEST(BLAS, MatrixTransposeMatrixMultiply_9_9_9) {
+ TestMatrixTransposeFunctions<9,
+ 9,
+ 9,
+ DimType::Static,
+ MatrixTransposeMatrixMultiplyTy>()();
+}
+
+TEST(BLAS, MatrixTransposeMatrixMultiply_9_9_9_Dynamic) {
+ TestMatrixTransposeFunctions<9,
+ 9,
+ 9,
+ DimType::Dynamic,
+ MatrixTransposeMatrixMultiplyTy>()();
+}
+
+TEST(BLAS, MatrixTransposeMatrixMultiplyNaive_5_3_7) {
+ TestMatrixTransposeFunctions<5,
+ 3,
+ 7,
+ DimType::Static,
+ MatrixTransposeMatrixMultiplyNaiveTy>()();
+}
+
+TEST(BLAS, MatrixTransposeMatrixMultiplyNaive_5_3_7_Dynamic) {
+ TestMatrixTransposeFunctions<5,
+ 3,
+ 7,
+ DimType::Dynamic,
+ MatrixTransposeMatrixMultiplyNaiveTy>()();
+}
+
+TEST(BLAS, MatrixTransposeMatrixMultiplyNaive_1_1_1) {
+ TestMatrixTransposeFunctions<1,
+ 1,
+ 1,
+ DimType::Static,
+ MatrixTransposeMatrixMultiplyNaiveTy>()();
+}
+
+TEST(BLAS, MatrixTransposeMatrixMultiplyNaive_1_1_1_Dynamic) {
+ TestMatrixTransposeFunctions<1,
+ 1,
+ 1,
+ DimType::Dynamic,
+ MatrixTransposeMatrixMultiplyNaiveTy>()();
+}
+
+TEST(BLAS, MatrixTransposeMatrixMultiplyNaive_9_9_9) {
+ TestMatrixTransposeFunctions<9,
+ 9,
+ 9,
+ DimType::Static,
+ MatrixTransposeMatrixMultiplyNaiveTy>()();
+}
+
+TEST(BLAS, MatrixTransposeMatrixMultiplyNaive_9_9_9_Dynamic) {
+ TestMatrixTransposeFunctions<9,
+ 9,
+ 9,
+ DimType::Dynamic,
+ MatrixTransposeMatrixMultiplyNaiveTy>()();
}
TEST(BLAS, MatrixVectorMultiply) {
@@ -412,7 +545,7 @@
b.data(),
c_minus.data());
EXPECT_NEAR((c_minus_ref - c_minus).norm(), 0.0, kTolerance)
- << "c += A * b \n"
+ << "c -= A * b \n"
<< "c_ref : \n" << c_minus_ref << "\n"
<< "c: \n" << c_minus;
@@ -422,7 +555,7 @@
b.data(),
c_assign.data());
EXPECT_NEAR((c_assign_ref - c_assign).norm(), 0.0, kTolerance)
- << "c += A * b \n"
+ << "c = A * b \n"
<< "c_ref : \n" << c_assign_ref << "\n"
<< "c: \n" << c_assign;
// clang-format on
@@ -467,7 +600,7 @@
b.data(),
c_minus.data());
EXPECT_NEAR((c_minus_ref - c_minus).norm(), 0.0, kTolerance)
- << "c += A' * b \n"
+ << "c -= A' * b \n"
<< "c_ref : \n" << c_minus_ref << "\n"
<< "c: \n" << c_minus;
@@ -477,7 +610,7 @@
b.data(),
c_assign.data());
EXPECT_NEAR((c_assign_ref - c_assign).norm(), 0.0, kTolerance)
- << "c += A' * b \n"
+ << "c = A' * b \n"
<< "c_ref : \n" << c_assign_ref << "\n"
<< "c: \n" << c_assign;
// clang-format on
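The rewritten tests select between fixed-size and Eigen::Dynamic instantiations of the same kernel through the DimType template parameter. A small sketch of that static-versus-dynamic dispatch idea, assuming only Eigen and C++17 (PrintSum is illustrative and not part of the test):

#include <iostream>

#include "Eigen/Core"

enum class DimType { Static, Dynamic };

// Maps the same buffer either with compile-time or runtime dimensions.
template <int kRows, int kCols, DimType kDimType>
void PrintSum(const double* values) {
  if constexpr (kDimType == DimType::Static) {
    Eigen::Map<const Eigen::Matrix<double, kRows, kCols>> m(values);
    std::cout << "static sum: " << m.sum() << "\n";
  } else {
    Eigen::Map<const Eigen::MatrixXd> m(values, kRows, kCols);
    std::cout << "dynamic sum: " << m.sum() << "\n";
  }
}

int main() {
  const double values[6] = {1, 2, 3, 4, 5, 6};
  PrintSum<2, 3, DimType::Static>(values);   // Fixed-size instantiation.
  PrintSum<2, 3, DimType::Dynamic>(values);  // Eigen::Dynamic instantiation.
  return 0;
}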
diff --git a/internal/ceres/solver.cc b/internal/ceres/solver.cc
index dfde122..611e465 100644
--- a/internal/ceres/solver.cc
+++ b/internal/ceres/solver.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,16 +32,19 @@
#include "ceres/solver.h"
#include <algorithm>
+#include <map>
#include <memory>
#include <sstream> // NOLINT
+#include <string>
#include <vector>
#include "ceres/casts.h"
#include "ceres/context.h"
#include "ceres/context_impl.h"
#include "ceres/detect_structure.h"
+#include "ceres/eigensparse.h"
#include "ceres/gradient_checking_cost_function.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/parameter_block_ordering.h"
#include "ceres/preprocessor.h"
#include "ceres/problem.h"
@@ -50,6 +53,7 @@
#include "ceres/schur_templates.h"
#include "ceres/solver_utils.h"
#include "ceres/stringprintf.h"
+#include "ceres/suitesparse.h"
#include "ceres/types.h"
#include "ceres/wall_time.h"
@@ -58,32 +62,29 @@
using internal::StringAppendF;
using internal::StringPrintf;
-using std::map;
-using std::string;
-using std::vector;
-#define OPTION_OP(x, y, OP) \
- if (!(options.x OP y)) { \
- std::stringstream ss; \
- ss << "Invalid configuration. "; \
- ss << string("Solver::Options::" #x " = ") << options.x << ". "; \
- ss << "Violated constraint: "; \
- ss << string("Solver::Options::" #x " " #OP " " #y); \
- *error = ss.str(); \
- return false; \
+#define OPTION_OP(x, y, OP) \
+ if (!(options.x OP y)) { \
+ std::stringstream ss; \
+ ss << "Invalid configuration. "; \
+ ss << std::string("Solver::Options::" #x " = ") << options.x << ". "; \
+ ss << "Violated constraint: "; \
+ ss << std::string("Solver::Options::" #x " " #OP " " #y); \
+ *error = ss.str(); \
+ return false; \
}
-#define OPTION_OP_OPTION(x, y, OP) \
- if (!(options.x OP options.y)) { \
- std::stringstream ss; \
- ss << "Invalid configuration. "; \
- ss << string("Solver::Options::" #x " = ") << options.x << ". "; \
- ss << string("Solver::Options::" #y " = ") << options.y << ". "; \
- ss << "Violated constraint: "; \
- ss << string("Solver::Options::" #x); \
- ss << string(#OP " Solver::Options::" #y "."); \
- *error = ss.str(); \
- return false; \
+#define OPTION_OP_OPTION(x, y, OP) \
+ if (!(options.x OP options.y)) { \
+ std::stringstream ss; \
+ ss << "Invalid configuration. "; \
+ ss << std::string("Solver::Options::" #x " = ") << options.x << ". "; \
+ ss << std::string("Solver::Options::" #y " = ") << options.y << ". "; \
+ ss << "Violated constraint: "; \
+ ss << std::string("Solver::Options::" #x); \
+ ss << std::string(#OP " Solver::Options::" #y "."); \
+ *error = ss.str(); \
+ return false; \
}
#define OPTION_GE(x, y) OPTION_OP(x, y, >=);
@@ -93,7 +94,7 @@
#define OPTION_LE_OPTION(x, y) OPTION_OP_OPTION(x, y, <=)
#define OPTION_LT_OPTION(x, y) OPTION_OP_OPTION(x, y, <)
-bool CommonOptionsAreValid(const Solver::Options& options, string* error) {
+bool CommonOptionsAreValid(const Solver::Options& options, std::string* error) {
OPTION_GE(max_num_iterations, 0);
OPTION_GE(max_solver_time_in_seconds, 0.0);
OPTION_GE(function_tolerance, 0.0);
@@ -107,7 +108,286 @@
return true;
}
-bool TrustRegionOptionsAreValid(const Solver::Options& options, string* error) {
+bool IsNestedDissectionAvailable(SparseLinearAlgebraLibraryType type) {
+ return (((type == SUITE_SPARSE) &&
+ internal::SuiteSparse::IsNestedDissectionAvailable()) ||
+ (type == ACCELERATE_SPARSE) ||
+ ((type == EIGEN_SPARSE) &&
+ internal::EigenSparse::IsNestedDissectionAvailable()));
+}
+
+bool IsIterativeSolver(LinearSolverType type) {
+ return (type == CGNR || type == ITERATIVE_SCHUR);
+}
+
+bool OptionsAreValidForDenseSolver(const Solver::Options& options,
+ std::string* error) {
+ const char* library_name = DenseLinearAlgebraLibraryTypeToString(
+ options.dense_linear_algebra_library_type);
+ const char* solver_name =
+ LinearSolverTypeToString(options.linear_solver_type);
+ constexpr char kFormat[] =
+ "Can't use %s with dense_linear_algebra_library_type = %s "
+ "because support not enabled when Ceres was built.";
+
+ if (!IsDenseLinearAlgebraLibraryTypeAvailable(
+ options.dense_linear_algebra_library_type)) {
+ *error = StringPrintf(kFormat, solver_name, library_name);
+ return false;
+ }
+ return true;
+}
+
+bool OptionsAreValidForSparseCholeskyBasedSolver(const Solver::Options& options,
+ std::string* error) {
+ const char* library_name = SparseLinearAlgebraLibraryTypeToString(
+ options.sparse_linear_algebra_library_type);
+ // Sparse factorization based solvers and some preconditioners require a
+ // sparse Cholesky factorization.
+ const char* solver_name =
+ IsIterativeSolver(options.linear_solver_type)
+ ? PreconditionerTypeToString(options.preconditioner_type)
+ : LinearSolverTypeToString(options.linear_solver_type);
+
+ constexpr char kNoSparseFormat[] =
+ "Can't use %s with sparse_linear_algebra_library_type = %s.";
+ constexpr char kNoLibraryFormat[] =
+ "Can't use %s sparse_linear_algebra_library_type = %s, because support "
+ "was not enabled when Ceres Solver was built.";
+ constexpr char kNoNesdisFormat[] =
+ "NESDIS is not available with sparse_linear_algebra_library_type = %s.";
+ constexpr char kMixedFormat[] =
+ "use_mixed_precision_solves with %s is not supported with "
+ "sparse_linear_algebra_library_type = %s";
+ constexpr char kDynamicSparsityFormat[] =
+ "dynamic sparsity is not supported with "
+ "sparse_linear_algebra_library_type = %s";
+
+ if (options.sparse_linear_algebra_library_type == NO_SPARSE) {
+ *error = StringPrintf(kNoSparseFormat, solver_name, library_name);
+ return false;
+ }
+
+ if (!IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ *error = StringPrintf(kNoLibraryFormat, solver_name, library_name);
+ return false;
+ }
+
+ if (options.linear_solver_ordering_type == ceres::NESDIS &&
+ !IsNestedDissectionAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ *error = StringPrintf(kNoNesdisFormat, library_name);
+ return false;
+ }
+
+ if (options.use_mixed_precision_solves &&
+ options.sparse_linear_algebra_library_type == SUITE_SPARSE) {
+ *error = StringPrintf(kMixedFormat, solver_name, library_name);
+ return false;
+ }
+
+ if (options.dynamic_sparsity &&
+ options.sparse_linear_algebra_library_type == ACCELERATE_SPARSE) {
+ *error = StringPrintf(kDynamicSparsityFormat, library_name);
+ return false;
+ }
+
+ return true;
+}
+
+bool OptionsAreValidForDenseNormalCholesky(const Solver::Options& options,
+ std::string* error) {
+ CHECK_EQ(options.linear_solver_type, DENSE_NORMAL_CHOLESKY);
+ return OptionsAreValidForDenseSolver(options, error);
+}
+
+bool OptionsAreValidForDenseQr(const Solver::Options& options,
+ std::string* error) {
+ CHECK_EQ(options.linear_solver_type, DENSE_QR);
+
+ if (!OptionsAreValidForDenseSolver(options, error)) {
+ return false;
+ }
+
+ if (options.use_mixed_precision_solves) {
+ *error = "Can't use use_mixed_precision_solves with DENSE_QR.";
+ return false;
+ }
+
+ return true;
+}
+
+bool OptionsAreValidForSparseNormalCholesky(const Solver::Options& options,
+ std::string* error) {
+ CHECK_EQ(options.linear_solver_type, SPARSE_NORMAL_CHOLESKY);
+ return OptionsAreValidForSparseCholeskyBasedSolver(options, error);
+}
+
+bool OptionsAreValidForDenseSchur(const Solver::Options& options,
+ std::string* error) {
+ CHECK_EQ(options.linear_solver_type, DENSE_SCHUR);
+
+ if (options.dynamic_sparsity) {
+ *error = "dynamic sparsity is only supported with SPARSE_NORMAL_CHOLESKY";
+ return false;
+ }
+
+ if (!OptionsAreValidForDenseSolver(options, error)) {
+ return false;
+ }
+
+ return true;
+}
+
+bool OptionsAreValidForSparseSchur(const Solver::Options& options,
+ std::string* error) {
+ CHECK_EQ(options.linear_solver_type, SPARSE_SCHUR);
+ if (options.dynamic_sparsity) {
+ *error = "Dynamic sparsity is only supported with SPARSE_NORMAL_CHOLESKY.";
+ return false;
+ }
+ return OptionsAreValidForSparseCholeskyBasedSolver(options, error);
+}
+
+bool OptionsAreValidForIterativeSchur(const Solver::Options& options,
+ std::string* error) {
+ CHECK_EQ(options.linear_solver_type, ITERATIVE_SCHUR);
+ if (options.dynamic_sparsity) {
+ *error = "Dynamic sparsity is only supported with SPARSE_NORMAL_CHOLESKY.";
+ return false;
+ }
+
+ if (options.use_explicit_schur_complement) {
+ if (options.preconditioner_type != SCHUR_JACOBI) {
+ *error =
+ "use_explicit_schur_complement only supports "
+ "SCHUR_JACOBI as the preconditioner.";
+ return false;
+ }
+ if (options.use_spse_initialization) {
+ *error =
+ "use_explicit_schur_complement does not support "
+ "use_spse_initialization.";
+ return false;
+ }
+ }
+
+ if (options.use_spse_initialization ||
+ options.preconditioner_type == SCHUR_POWER_SERIES_EXPANSION) {
+ OPTION_GE(max_num_spse_iterations, 1)
+ OPTION_GE(spse_tolerance, 0.0)
+ }
+
+ if (options.use_mixed_precision_solves) {
+ *error = "Can't use use_mixed_precision_solves with ITERATIVE_SCHUR";
+ return false;
+ }
+
+ if (options.dynamic_sparsity) {
+ *error = "Dynamic sparsity is only supported with SPARSE_NORMAL_CHOLESKY.";
+ return false;
+ }
+
+ if (options.preconditioner_type == SUBSET) {
+ *error = "Can't use SUBSET preconditioner with ITERATIVE_SCHUR";
+ return false;
+ }
+
+ // CLUSTER_JACOBI and CLUSTER_TRIDIAGONAL require sparse Cholesky
+ // factorization.
+ if (options.preconditioner_type == CLUSTER_JACOBI ||
+ options.preconditioner_type == CLUSTER_TRIDIAGONAL) {
+ return OptionsAreValidForSparseCholeskyBasedSolver(options, error);
+ }
+
+ return true;
+}
+
+bool OptionsAreValidForCgnr(const Solver::Options& options,
+ std::string* error) {
+ CHECK_EQ(options.linear_solver_type, CGNR);
+
+ if (options.preconditioner_type != IDENTITY &&
+ options.preconditioner_type != JACOBI &&
+ options.preconditioner_type != SUBSET) {
+ *error =
+ StringPrintf("Can't use CGNR with preconditioner_type = %s.",
+ PreconditionerTypeToString(options.preconditioner_type));
+ return false;
+ }
+
+ if (options.use_mixed_precision_solves) {
+ *error = "use_mixed_precision_solves cannot be used with CGNR";
+ return false;
+ }
+
+ if (options.dynamic_sparsity) {
+ *error = "Dynamic sparsity is only supported with SPARSE_NORMAL_CHOLESKY.";
+ return false;
+ }
+
+ if (options.preconditioner_type == SUBSET) {
+ if (options.sparse_linear_algebra_library_type == CUDA_SPARSE) {
+ *error =
+ "Can't use CGNR with preconditioner_type = SUBSET when "
+ "sparse_linear_algebra_library_type = CUDA_SPARSE.";
+ return false;
+ }
+
+ if (options.residual_blocks_for_subset_preconditioner.empty()) {
+ *error =
+ "When using SUBSET preconditioner, "
+ "residual_blocks_for_subset_preconditioner cannot be empty";
+ return false;
+ }
+
+ // SUBSET preconditioner requires sparse Cholesky factorization.
+ if (!OptionsAreValidForSparseCholeskyBasedSolver(options, error)) {
+ return false;
+ }
+ }
+
+ // Check options for CGNR with CUDA_SPARSE.
+ if (options.sparse_linear_algebra_library_type == CUDA_SPARSE) {
+ if (!IsSparseLinearAlgebraLibraryTypeAvailable(CUDA_SPARSE)) {
+ *error =
+ "Can't use CGNR with sparse_linear_algebra_library_type = "
+ "CUDA_SPARSE because support was not enabled when Ceres was built.";
+ return false;
+ }
+ }
+ return true;
+}
+
+bool OptionsAreValidForLinearSolver(const Solver::Options& options,
+ std::string* error) {
+ switch (options.linear_solver_type) {
+ case DENSE_NORMAL_CHOLESKY:
+ return OptionsAreValidForDenseNormalCholesky(options, error);
+ case DENSE_QR:
+ return OptionsAreValidForDenseQr(options, error);
+ case SPARSE_NORMAL_CHOLESKY:
+ return OptionsAreValidForSparseNormalCholesky(options, error);
+ case DENSE_SCHUR:
+ return OptionsAreValidForDenseSchur(options, error);
+ case SPARSE_SCHUR:
+ return OptionsAreValidForSparseSchur(options, error);
+ case ITERATIVE_SCHUR:
+ return OptionsAreValidForIterativeSchur(options, error);
+ case CGNR:
+ return OptionsAreValidForCgnr(options, error);
+ default:
+ LOG(FATAL) << "Congratulations you have found a bug. Please report "
+ "this to the "
+ "Ceres Solver developers. Unknown linear solver type: "
+ << LinearSolverTypeToString(options.linear_solver_type);
+ }
+ return false;
+}
+
+bool TrustRegionOptionsAreValid(const Solver::Options& options,
+ std::string* error) {
OPTION_GT(initial_trust_region_radius, 0.0);
OPTION_GT(min_trust_region_radius, 0.0);
OPTION_GT(max_trust_region_radius, 0.0);
@@ -121,7 +401,7 @@
OPTION_GE(max_num_consecutive_invalid_steps, 0);
OPTION_GT(eta, 0.0);
OPTION_GE(min_linear_solver_iterations, 0);
- OPTION_GE(max_linear_solver_iterations, 1);
+ OPTION_GE(max_linear_solver_iterations, 0);
OPTION_LE_OPTION(min_linear_solver_iterations, max_linear_solver_iterations);
if (options.use_inner_iterations) {
@@ -132,76 +412,19 @@
OPTION_GT(max_consecutive_nonmonotonic_steps, 0);
}
- if (options.linear_solver_type == ITERATIVE_SCHUR &&
- options.use_explicit_schur_complement &&
- options.preconditioner_type != SCHUR_JACOBI) {
+ if ((options.trust_region_strategy_type == DOGLEG) &&
+ IsIterativeSolver(options.linear_solver_type)) {
*error =
- "use_explicit_schur_complement only supports "
- "SCHUR_JACOBI as the preconditioner.";
+ "DOGLEG only supports exact factorization based linear "
+ "solvers. If you want to use an iterative solver please "
+ "use LEVENBERG_MARQUARDT as the trust_region_strategy_type";
return false;
}
- if (options.dense_linear_algebra_library_type == LAPACK &&
- !IsDenseLinearAlgebraLibraryTypeAvailable(LAPACK) &&
- (options.linear_solver_type == DENSE_NORMAL_CHOLESKY ||
- options.linear_solver_type == DENSE_QR ||
- options.linear_solver_type == DENSE_SCHUR)) {
- *error = StringPrintf(
- "Can't use %s with "
- "Solver::Options::dense_linear_algebra_library_type = LAPACK "
- "because LAPACK was not enabled when Ceres was built.",
- LinearSolverTypeToString(options.linear_solver_type));
+ if (!OptionsAreValidForLinearSolver(options, error)) {
return false;
}
- {
- const char* sparse_linear_algebra_library_name =
- SparseLinearAlgebraLibraryTypeToString(
- options.sparse_linear_algebra_library_type);
- const char* name = nullptr;
- if (options.linear_solver_type == SPARSE_NORMAL_CHOLESKY ||
- options.linear_solver_type == SPARSE_SCHUR) {
- name = LinearSolverTypeToString(options.linear_solver_type);
- } else if ((options.linear_solver_type == ITERATIVE_SCHUR &&
- (options.preconditioner_type == CLUSTER_JACOBI ||
- options.preconditioner_type == CLUSTER_TRIDIAGONAL)) ||
- (options.linear_solver_type == CGNR &&
- options.preconditioner_type == SUBSET)) {
- name = PreconditionerTypeToString(options.preconditioner_type);
- }
-
- if (name) {
- if (options.sparse_linear_algebra_library_type == NO_SPARSE) {
- *error = StringPrintf(
- "Can't use %s with "
- "Solver::Options::sparse_linear_algebra_library_type = %s.",
- name,
- sparse_linear_algebra_library_name);
- return false;
- } else if (!IsSparseLinearAlgebraLibraryTypeAvailable(
- options.sparse_linear_algebra_library_type)) {
- *error = StringPrintf(
- "Can't use %s with "
- "Solver::Options::sparse_linear_algebra_library_type = %s, "
- "because support was not enabled when Ceres Solver was built.",
- name,
- sparse_linear_algebra_library_name);
- return false;
- }
- }
- }
-
- if (options.trust_region_strategy_type == DOGLEG) {
- if (options.linear_solver_type == ITERATIVE_SCHUR ||
- options.linear_solver_type == CGNR) {
- *error =
- "DOGLEG only supports exact factorization based linear "
- "solvers. If you want to use an iterative solver please "
- "use LEVENBERG_MARQUARDT as the trust_region_strategy_type";
- return false;
- }
- }
-
if (!options.trust_region_minimizer_iterations_to_dump.empty() &&
options.trust_region_problem_dump_format_type != CONSOLE &&
options.trust_region_problem_dump_directory.empty()) {
@@ -209,33 +432,11 @@
return false;
}
- if (options.dynamic_sparsity) {
- if (options.linear_solver_type != SPARSE_NORMAL_CHOLESKY) {
- *error =
- "Dynamic sparsity is only supported with SPARSE_NORMAL_CHOLESKY.";
- return false;
- }
- if (options.sparse_linear_algebra_library_type == ACCELERATE_SPARSE) {
- *error =
- "ACCELERATE_SPARSE is not currently supported with dynamic sparsity.";
- return false;
- }
- }
-
- if (options.linear_solver_type == CGNR &&
- options.preconditioner_type == SUBSET &&
- options.residual_blocks_for_subset_preconditioner.empty()) {
- *error =
- "When using SUBSET preconditioner, "
- "Solver::Options::residual_blocks_for_subset_preconditioner cannot be "
- "empty";
- return false;
- }
-
return true;
}
-bool LineSearchOptionsAreValid(const Solver::Options& options, string* error) {
+bool LineSearchOptionsAreValid(const Solver::Options& options,
+ std::string* error) {
OPTION_GT(max_lbfgs_rank, 0);
OPTION_GT(min_line_search_step_size, 0.0);
OPTION_GT(max_line_search_step_contraction, 0.0);
@@ -255,9 +456,10 @@
options.line_search_direction_type == ceres::LBFGS) &&
options.line_search_type != ceres::WOLFE) {
*error =
- string("Invalid configuration: Solver::Options::line_search_type = ") +
- string(LineSearchTypeToString(options.line_search_type)) +
- string(
+ std::string(
+ "Invalid configuration: Solver::Options::line_search_type = ") +
+ std::string(LineSearchTypeToString(options.line_search_type)) +
+ std::string(
". When using (L)BFGS, "
"Solver::Options::line_search_type must be set to WOLFE.");
return false;
@@ -265,8 +467,8 @@
// Warn user if they have requested BISECTION interpolation, but constraints
// on max/min step size change during line search prevent bisection scaling
- // from occurring. Warn only, as this is likely a user mistake, but one which
- // does not prevent us from continuing.
+ // from occurring. Warn only, as this is likely a user mistake, but one
+ // which does not prevent us from continuing.
if (options.line_search_interpolation_type == ceres::BISECTION &&
(options.max_line_search_step_contraction > 0.5 ||
options.min_line_search_step_contraction < 0.5)) {
@@ -291,7 +493,7 @@
#undef OPTION_LE_OPTION
#undef OPTION_LT_OPTION
-void StringifyOrdering(const vector<int>& ordering, string* report) {
+void StringifyOrdering(const std::vector<int>& ordering, std::string* report) {
if (ordering.empty()) {
internal::StringAppendF(report, "AUTOMATIC");
return;
@@ -335,7 +537,7 @@
&(summary->inner_iteration_ordering_given));
// clang-format off
- summary->dense_linear_algebra_library_type = options.dense_linear_algebra_library_type; // NOLINT
+ summary->dense_linear_algebra_library_type = options.dense_linear_algebra_library_type;
summary->dogleg_type = options.dogleg_type;
summary->inner_iteration_time_in_seconds = 0.0;
summary->num_line_search_steps = 0;
@@ -344,18 +546,19 @@
summary->line_search_polynomial_minimization_time_in_seconds = 0.0;
summary->line_search_total_time_in_seconds = 0.0;
summary->inner_iterations_given = options.use_inner_iterations;
- summary->line_search_direction_type = options.line_search_direction_type; // NOLINT
- summary->line_search_interpolation_type = options.line_search_interpolation_type; // NOLINT
+ summary->line_search_direction_type = options.line_search_direction_type;
+ summary->line_search_interpolation_type = options.line_search_interpolation_type;
summary->line_search_type = options.line_search_type;
summary->linear_solver_type_given = options.linear_solver_type;
summary->max_lbfgs_rank = options.max_lbfgs_rank;
summary->minimizer_type = options.minimizer_type;
- summary->nonlinear_conjugate_gradient_type = options.nonlinear_conjugate_gradient_type; // NOLINT
+ summary->nonlinear_conjugate_gradient_type = options.nonlinear_conjugate_gradient_type;
summary->num_threads_given = options.num_threads;
summary->preconditioner_type_given = options.preconditioner_type;
- summary->sparse_linear_algebra_library_type = options.sparse_linear_algebra_library_type; // NOLINT
- summary->trust_region_strategy_type = options.trust_region_strategy_type; // NOLINT
- summary->visibility_clustering_type = options.visibility_clustering_type; // NOLINT
+ summary->sparse_linear_algebra_library_type = options.sparse_linear_algebra_library_type;
+ summary->linear_solver_ordering_type = options.linear_solver_ordering_type;
+ summary->trust_region_strategy_type = options.trust_region_strategy_type;
+ summary->visibility_clustering_type = options.visibility_clustering_type;
// clang-format on
}
@@ -363,19 +566,23 @@
Solver::Summary* summary) {
internal::OrderingToGroupSizes(pp.options.linear_solver_ordering.get(),
&(summary->linear_solver_ordering_used));
+ // TODO(sameeragarwal): Update the preprocessor to collapse the
+ // second and higher groups into one group when nested dissection is
+ // used.
internal::OrderingToGroupSizes(pp.options.inner_iteration_ordering.get(),
&(summary->inner_iteration_ordering_used));
// clang-format off
- summary->inner_iterations_used = pp.inner_iteration_minimizer.get() != NULL; // NOLINT
+ summary->inner_iterations_used = pp.inner_iteration_minimizer != nullptr;
summary->linear_solver_type_used = pp.linear_solver_options.type;
+ summary->mixed_precision_solves_used = pp.options.use_mixed_precision_solves;
summary->num_threads_used = pp.options.num_threads;
summary->preconditioner_type_used = pp.options.preconditioner_type;
// clang-format on
internal::SetSummaryFinalCost(summary);
- if (pp.reduced_program.get() != NULL) {
+ if (pp.reduced_program != nullptr) {
SummarizeReducedProgram(*pp.reduced_program, summary);
}
@@ -385,8 +592,8 @@
// case if the preprocessor failed, or if the reduced problem did
// not contain any parameter blocks. Thus, only extract the
// evaluator statistics if one exists.
- if (pp.evaluator.get() != NULL) {
- const map<string, CallStatistics>& evaluator_statistics =
+ if (pp.evaluator != nullptr) {
+ const std::map<std::string, CallStatistics>& evaluator_statistics =
pp.evaluator->Statistics();
{
const CallStatistics& call_stats = FindWithDefault(
@@ -407,8 +614,8 @@
// Again, like the evaluator, there may or may not be a linear
// solver from which we can extract run time statistics. In
// particular the line search solver does not use a linear solver.
- if (pp.linear_solver.get() != NULL) {
- const map<string, CallStatistics>& linear_solver_statistics =
+ if (pp.linear_solver != nullptr) {
+ const std::map<std::string, CallStatistics>& linear_solver_statistics =
pp.linear_solver->Statistics();
const CallStatistics& call_stats = FindWithDefault(
linear_solver_statistics, "LinearSolver::Solve", CallStatistics());
@@ -436,8 +643,7 @@
}
const Vector original_reduced_parameters = pp->reduced_parameters;
- std::unique_ptr<Minimizer> minimizer(
- Minimizer::Create(pp->options.minimizer_type));
+ auto minimizer = Minimizer::Create(pp->options.minimizer_type);
minimizer->Minimize(
pp->minimizer_options, pp->reduced_parameters.data(), summary);
@@ -465,9 +671,23 @@
return internal::StringPrintf("%s,%s,%s", row.c_str(), e.c_str(), f.c_str());
}
+#ifndef CERES_NO_CUDA
+bool IsCudaRequired(const Solver::Options& options) {
+ if (options.linear_solver_type == DENSE_NORMAL_CHOLESKY ||
+ options.linear_solver_type == DENSE_SCHUR ||
+ options.linear_solver_type == DENSE_QR) {
+ return (options.dense_linear_algebra_library_type == CUDA);
+ }
+ if (options.linear_solver_type == CGNR) {
+ return (options.sparse_linear_algebra_library_type == CUDA_SPARSE);
+ }
+ return false;
+}
+#endif
+
} // namespace
-bool Solver::Options::IsValid(string* error) const {
+bool Solver::Options::IsValid(std::string* error) const {
if (!CommonOptionsAreValid(*this, error)) {
return false;
}
@@ -485,7 +705,7 @@
return LineSearchOptionsAreValid(*this, error);
}
-Solver::~Solver() {}
+Solver::~Solver() = default;
void Solver::Solve(const Solver::Options& options,
Problem* problem,
@@ -506,10 +726,19 @@
return;
}
- ProblemImpl* problem_impl = problem->impl_.get();
+ ProblemImpl* problem_impl = problem->mutable_impl();
Program* program = problem_impl->mutable_program();
PreSolveSummarize(options, problem_impl, summary);
+#ifndef CERES_NO_CUDA
+ if (IsCudaRequired(options)) {
+ if (!problem_impl->context()->InitCuda(&summary->message)) {
+ LOG(ERROR) << "Terminating: " << summary->message;
+ return;
+ }
+ }
+#endif // CERES_NO_CUDA
+
// If gradient_checking is enabled, wrap all cost functions in a
// gradient checker and install a callback that terminates if any gradient
// error is detected.
@@ -518,11 +747,11 @@
Solver::Options modified_options = options;
if (options.check_gradients) {
modified_options.callbacks.push_back(&gradient_checking_callback);
- gradient_checking_problem.reset(CreateGradientCheckingProblemImpl(
+ gradient_checking_problem = CreateGradientCheckingProblemImpl(
problem_impl,
options.gradient_check_numeric_derivative_relative_step_size,
options.gradient_check_relative_precision,
- &gradient_checking_callback));
+ &gradient_checking_callback);
problem_impl = gradient_checking_problem.get();
program = problem_impl->mutable_program();
}
@@ -534,8 +763,7 @@
// The main thread also does work so we only need to launch num_threads - 1.
problem_impl->context()->EnsureMinimumThreads(options.num_threads - 1);
- std::unique_ptr<Preprocessor> preprocessor(
- Preprocessor::Create(modified_options.minimizer_type));
+ auto preprocessor = Preprocessor::Create(modified_options.minimizer_type);
PreprocessedProblem pp;
const bool status =
@@ -545,7 +773,7 @@
// modified_options.linear_solver_type because, depending on the
// lack of a Schur structure, the preprocessor may change the linear
// solver type.
- if (IsSchurType(pp.linear_solver_options.type)) {
+ if (status && IsSchurType(pp.linear_solver_options.type)) {
// TODO(sameeragarwal): We can likely eliminate the duplicate call
// to DetectStructure here and inside the linear solver, by
// calling this in the preprocessor.
@@ -580,7 +808,7 @@
}
const double postprocessor_start_time = WallTimeInSeconds();
- problem_impl = problem->impl_.get();
+ problem_impl = problem->mutable_impl();
program = problem_impl->mutable_program();
// On exit, ensure that the parameter blocks again point at the user
// provided values and the parameter blocks are numbered according
@@ -608,7 +836,7 @@
solver.Solve(options, problem, summary);
}
-string Solver::Summary::BriefReport() const {
+std::string Solver::Summary::BriefReport() const {
return StringPrintf(
"Ceres Solver Report: "
"Iterations: %d, "
@@ -621,10 +849,12 @@
TerminationTypeToString(termination_type));
}
-string Solver::Summary::FullReport() const {
+std::string Solver::Summary::FullReport() const {
using internal::VersionString;
- string report = string("\nSolver Summary (v " + VersionString() + ")\n\n");
+ // NOTE operator+ is not usable for concatenating a string and a string_view.
+ std::string report =
+ std::string{"\nSolver Summary (v "}.append(VersionString()) + ")\n\n";
StringAppendF(&report, "%45s %21s\n", "Original", "Reduced");
StringAppendF(&report,
@@ -658,21 +888,13 @@
if (linear_solver_type_used == DENSE_NORMAL_CHOLESKY ||
linear_solver_type_used == DENSE_SCHUR ||
linear_solver_type_used == DENSE_QR) {
+ const char* mixed_precision_suffix =
+ (mixed_precision_solves_used ? "(Mixed Precision)" : "");
StringAppendF(&report,
- "\nDense linear algebra library %15s\n",
+ "\nDense linear algebra library %15s %s\n",
DenseLinearAlgebraLibraryTypeToString(
- dense_linear_algebra_library_type));
- }
-
- if (linear_solver_type_used == SPARSE_NORMAL_CHOLESKY ||
- linear_solver_type_used == SPARSE_SCHUR ||
- (linear_solver_type_used == ITERATIVE_SCHUR &&
- (preconditioner_type_used == CLUSTER_JACOBI ||
- preconditioner_type_used == CLUSTER_TRIDIAGONAL))) {
- StringAppendF(&report,
- "\nSparse linear algebra library %15s\n",
- SparseLinearAlgebraLibraryTypeToString(
- sparse_linear_algebra_library_type));
+ dense_linear_algebra_library_type),
+ mixed_precision_suffix);
}
StringAppendF(&report,
@@ -685,17 +907,50 @@
StringAppendF(&report, " (SUBSPACE)");
}
}
- StringAppendF(&report, "\n");
- StringAppendF(&report, "\n");
+ const bool used_sparse_linear_algebra_library =
+ linear_solver_type_used == SPARSE_NORMAL_CHOLESKY ||
+ linear_solver_type_used == SPARSE_SCHUR ||
+ linear_solver_type_used == CGNR ||
+ (linear_solver_type_used == ITERATIVE_SCHUR &&
+ (preconditioner_type_used == CLUSTER_JACOBI ||
+ preconditioner_type_used == CLUSTER_TRIDIAGONAL));
+
+ const bool linear_solver_ordering_required =
+ linear_solver_type_used == SPARSE_SCHUR ||
+ (linear_solver_type_used == ITERATIVE_SCHUR &&
+ (preconditioner_type_used == CLUSTER_JACOBI ||
+ preconditioner_type_used == CLUSTER_TRIDIAGONAL)) ||
+ (linear_solver_type_used == CGNR && preconditioner_type_used == SUBSET);
+
+ if (used_sparse_linear_algebra_library) {
+ const char* mixed_precision_suffix =
+ (mixed_precision_solves_used ? "(Mixed Precision)" : "");
+ if (linear_solver_ordering_required) {
+ StringAppendF(
+ &report,
+ "\nSparse linear algebra library %15s + %s %s\n",
+ SparseLinearAlgebraLibraryTypeToString(
+ sparse_linear_algebra_library_type),
+ LinearSolverOrderingTypeToString(linear_solver_ordering_type),
+ mixed_precision_suffix);
+ } else {
+ StringAppendF(&report,
+ "\nSparse linear algebra library %15s %s\n",
+ SparseLinearAlgebraLibraryTypeToString(
+ sparse_linear_algebra_library_type),
+ mixed_precision_suffix);
+ }
+ }
+
+ StringAppendF(&report, "\n");
StringAppendF(&report, "%45s %21s\n", "Given", "Used");
StringAppendF(&report,
"Linear solver %25s%25s\n",
LinearSolverTypeToString(linear_solver_type_given),
LinearSolverTypeToString(linear_solver_type_used));
- if (linear_solver_type_given == CGNR ||
- linear_solver_type_given == ITERATIVE_SCHUR) {
+ if (IsIterativeSolver(linear_solver_type_given)) {
StringAppendF(&report,
"Preconditioner %25s%25s\n",
PreconditionerTypeToString(preconditioner_type_given),
@@ -715,9 +970,9 @@
num_threads_given,
num_threads_used);
- string given;
+ std::string given;
StringifyOrdering(linear_solver_ordering_given, &given);
- string used;
+ std::string used;
StringifyOrdering(linear_solver_ordering_used, &used);
StringAppendF(&report,
"Linear solver ordering %22s %24s\n",
@@ -738,9 +993,9 @@
}
if (inner_iterations_used) {
- string given;
+ std::string given;
StringifyOrdering(inner_iteration_ordering_given, &given);
- string used;
+ std::string used;
StringifyOrdering(inner_iteration_ordering_used, &used);
StringAppendF(&report,
"Inner iteration ordering %20s %24s\n",
@@ -751,7 +1006,7 @@
// LINE_SEARCH HEADER
StringAppendF(&report, "\nMinimizer %19s\n", "LINE_SEARCH");
- string line_search_direction_string;
+ std::string line_search_direction_string;
if (line_search_direction_type == LBFGS) {
line_search_direction_string = StringPrintf("LBFGS (%d)", max_lbfgs_rank);
} else if (line_search_direction_type == NONLINEAR_CONJUGATE_GRADIENT) {
@@ -766,7 +1021,7 @@
"Line search direction %19s\n",
line_search_direction_string.c_str());
- const string line_search_type_string = StringPrintf(
+ const std::string line_search_type_string = StringPrintf(
"%s %s",
LineSearchInterpolationTypeToString(line_search_interpolation_type),
LineSearchTypeToString(line_search_type));
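Since VersionString() now returns a std::string_view, FullReport() above builds its header with std::string::append rather than operator+, which has no overload mixing std::string and std::string_view before C++26. A minimal self-contained sketch of the same pattern; the literal version value here is illustrative only:

#include <string>
#include <string_view>

// Illustrative stand-in for ceres::internal::VersionString(); the real
// function returns a view of a compile-time assembled constant.
std::string_view VersionString() noexcept { return "2.2.0-eigen-(3.4.0)"; }

std::string MakeReportHeader() {
  // No operator+ exists for std::string + std::string_view (until C++26),
  // so append the view onto an owning std::string instead.
  std::string report =
      std::string{"\nSolver Summary (v "}.append(VersionString()) + ")\n\n";
  return report;
}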
diff --git a/internal/ceres/solver_test.cc b/internal/ceres/solver_test.cc
index c4823be..52bd594 100644
--- a/internal/ceres/solver_test.cc
+++ b/internal/ceres/solver_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2019 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,32 +33,30 @@
#include <cmath>
#include <limits>
#include <memory>
+#include <string>
#include <vector>
#include "ceres/autodiff_cost_function.h"
#include "ceres/evaluation_callback.h"
-#include "ceres/local_parameterization.h"
+#include "ceres/manifold.h"
#include "ceres/problem.h"
#include "ceres/problem_impl.h"
#include "ceres/sized_cost_function.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::string;
+namespace ceres::internal {
TEST(SolverOptions, DefaultTrustRegionOptionsAreValid) {
Solver::Options options;
options.minimizer_type = TRUST_REGION;
- string error;
+ std::string error;
EXPECT_TRUE(options.IsValid(&error)) << error;
}
TEST(SolverOptions, DefaultLineSearchOptionsAreValid) {
Solver::Options options;
options.minimizer_type = LINE_SEARCH;
- string error;
+ std::string error;
EXPECT_TRUE(options.IsValid(&error)) << error;
}
@@ -77,7 +75,6 @@
struct RememberingCallback : public IterationCallback {
explicit RememberingCallback(double* x) : calls(0), x(x) {}
- virtual ~RememberingCallback() {}
CallbackReturnType operator()(const IterationSummary& summary) final {
x_values.push_back(*x);
return SOLVER_CONTINUE;
@@ -88,7 +85,6 @@
};
struct NoOpEvaluationCallback : EvaluationCallback {
- virtual ~NoOpEvaluationCallback() {}
void PrepareForEvaluation(bool evaluate_jacobians,
bool new_evaluation_point) final {
(void)evaluate_jacobians;
@@ -119,8 +115,8 @@
num_iterations =
summary.num_successful_steps + summary.num_unsuccessful_steps;
EXPECT_GT(num_iterations, 1);
- for (int i = 0; i < callback.x_values.size(); ++i) {
- EXPECT_EQ(50.0, callback.x_values[i]);
+ for (double value : callback.x_values) {
+ EXPECT_EQ(50.0, value);
}
// Second: update_state_every_iteration=true, evaluation_callback=nullptr.
@@ -315,166 +311,12 @@
EXPECT_EQ(summary.final_cost, 1.0 / 2.0);
}
-#if defined(CERES_NO_SUITESPARSE)
-TEST(Solver, SparseNormalCholeskyNoSuiteSparse) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = SUITE_SPARSE;
- options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-
-TEST(Solver, SparseSchurNoSuiteSparse) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = SUITE_SPARSE;
- options.linear_solver_type = SPARSE_SCHUR;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-#endif
-
-#if defined(CERES_NO_CXSPARSE)
-TEST(Solver, SparseNormalCholeskyNoCXSparse) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = CX_SPARSE;
- options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-
-TEST(Solver, SparseSchurNoCXSparse) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = CX_SPARSE;
- options.linear_solver_type = SPARSE_SCHUR;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-#endif
-
-#if defined(CERES_NO_ACCELERATE_SPARSE)
-TEST(Solver, SparseNormalCholeskyNoAccelerateSparse) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
- options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-
-TEST(Solver, SparseSchurNoAccelerateSparse) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
- options.linear_solver_type = SPARSE_SCHUR;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-#else
-TEST(Solver, DynamicSparseNormalCholeskyUnsupportedWithAccelerateSparse) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
- options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- options.dynamic_sparsity = true;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-#endif
-
-#if !defined(CERES_USE_EIGEN_SPARSE)
-TEST(Solver, SparseNormalCholeskyNoEigenSparse) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
- options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-
-TEST(Solver, SparseSchurNoEigenSparse) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
- options.linear_solver_type = SPARSE_SCHUR;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-#endif
-
-TEST(Solver, SparseNormalCholeskyNoSparseLibrary) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = NO_SPARSE;
- options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-
-TEST(Solver, SparseSchurNoSparseLibrary) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = NO_SPARSE;
- options.linear_solver_type = SPARSE_SCHUR;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-
-TEST(Solver, IterativeSchurWithClusterJacobiPerconditionerNoSparseLibrary) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = NO_SPARSE;
- options.linear_solver_type = ITERATIVE_SCHUR;
- // Requires SuiteSparse.
- options.preconditioner_type = CLUSTER_JACOBI;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-
-TEST(Solver,
- IterativeSchurWithClusterTridiagonalPerconditionerNoSparseLibrary) {
- Solver::Options options;
- options.sparse_linear_algebra_library_type = NO_SPARSE;
- options.linear_solver_type = ITERATIVE_SCHUR;
- // Requires SuiteSparse.
- options.preconditioner_type = CLUSTER_TRIDIAGONAL;
- string message;
- EXPECT_FALSE(options.IsValid(&message));
-}
-
-TEST(Solver, IterativeLinearSolverForDogleg) {
- Solver::Options options;
- options.trust_region_strategy_type = DOGLEG;
- string message;
- options.linear_solver_type = ITERATIVE_SCHUR;
- EXPECT_FALSE(options.IsValid(&message));
-
- options.linear_solver_type = CGNR;
- EXPECT_FALSE(options.IsValid(&message));
-}
-
-TEST(Solver, LinearSolverTypeNormalOperation) {
- Solver::Options options;
- options.linear_solver_type = DENSE_QR;
-
- string message;
- EXPECT_TRUE(options.IsValid(&message));
-
- options.linear_solver_type = DENSE_NORMAL_CHOLESKY;
- EXPECT_TRUE(options.IsValid(&message));
-
- options.linear_solver_type = DENSE_SCHUR;
- EXPECT_TRUE(options.IsValid(&message));
-
- options.linear_solver_type = SPARSE_SCHUR;
-#if defined(CERES_NO_SUITESPARSE) && defined(CERES_NO_CXSPARSE) && \
- !defined(CERES_USE_EIGEN_SPARSE)
- EXPECT_FALSE(options.IsValid(&message));
-#else
- EXPECT_TRUE(options.IsValid(&message));
-#endif
-
- options.linear_solver_type = ITERATIVE_SCHUR;
- EXPECT_TRUE(options.IsValid(&message));
-}
-
template <int kNumResiduals, int... Ns>
class DummyCostFunction : public SizedCostFunction<kNumResiduals, Ns...> {
public:
bool Evaluate(double const* const* parameters,
double* residuals,
- double** jacobians) const {
+ double** jacobians) const override {
for (int i = 0; i < kNumResiduals; ++i) {
residuals[i] = kNumResiduals * kNumResiduals + i;
}
@@ -512,12 +354,12 @@
}
};
-TEST(Solver, ZeroSizedLocalParameterizationHoldsParameterBlockConstant) {
+TEST(Solver, ZeroSizedManifoldHoldsParameterBlockConstant) {
double x = 0.0;
double y = 1.0;
Problem problem;
problem.AddResidualBlock(LinearCostFunction::Create(), nullptr, &x, &y);
- problem.SetParameterization(&y, new SubsetParameterization(1, {0}));
+ problem.SetManifold(&y, new SubsetManifold(1, {0}));
EXPECT_TRUE(problem.IsParameterBlockConstant(&y));
Solver::Options options;
@@ -532,5 +374,856 @@
EXPECT_EQ(y, 1.0);
}
-} // namespace internal
-} // namespace ceres
+TEST(Solver, DenseNormalCholeskyOptions) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = DENSE_NORMAL_CHOLESKY;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dense_linear_algebra_library_type = EIGEN;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ if (IsDenseLinearAlgebraLibraryTypeAvailable(LAPACK)) {
+ options.use_mixed_precision_solves = false;
+ options.dense_linear_algebra_library_type = LAPACK;
+
+ EXPECT_TRUE(options.IsValid(&message));
+ options.use_mixed_precision_solves = true;
+ EXPECT_TRUE(options.IsValid(&message));
+ } else {
+ options.use_mixed_precision_solves = false;
+ options.dense_linear_algebra_library_type = LAPACK;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+}
+
+TEST(Solver, DenseQrOptions) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = DENSE_QR;
+
+ options.use_mixed_precision_solves = false;
+ options.dense_linear_algebra_library_type = EIGEN;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ if (IsDenseLinearAlgebraLibraryTypeAvailable(LAPACK)) {
+ options.use_mixed_precision_solves = false;
+ options.dense_linear_algebra_library_type = LAPACK;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ } else {
+ options.use_mixed_precision_solves = false;
+ options.dense_linear_algebra_library_type = LAPACK;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+}
+
+TEST(Solver, SparseNormalCholeskyOptionsNoSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options.sparse_linear_algebra_library_type = NO_SPARSE;
+ EXPECT_FALSE(options.IsValid(&message));
+}
+
+TEST(Solver, SparseNormalCholeskyOptionsEigenSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
+ options.linear_solver_ordering_type = AMD;
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(EIGEN_SPARSE)) {
+ EXPECT_TRUE(options.IsValid(&message));
+ } else {
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(EIGEN_SPARSE)) {
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_TRUE(options.IsValid(&message));
+ }
+
+#ifndef CERES_NO_EIGEN_METIS
+ options.linear_solver_ordering_type = NESDIS;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(EIGEN_SPARSE)) {
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_TRUE(options.IsValid(&message));
+ }
+#else
+ options.linear_solver_ordering_type = NESDIS;
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_FALSE(options.IsValid(&message));
+#endif
+}
+
+TEST(Solver, SparseNormalCholeskyOptionsSuiteSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+ options.linear_solver_ordering_type = AMD;
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ EXPECT_TRUE(options.IsValid(&message));
+ } else {
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+#ifndef CERES_NO_CHOLMOD_PARTITION
+ options.linear_solver_ordering_type = NESDIS;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+#else
+ options.linear_solver_ordering_type = NESDIS;
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_FALSE(options.IsValid(&message));
+#endif
+}
+
+TEST(Solver, SparseNormalCholeskyOptionsAccelerateSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = SPARSE_NORMAL_CHOLESKY;
+ options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
+ options.linear_solver_ordering_type = AMD;
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ EXPECT_TRUE(options.IsValid(&message));
+ } else {
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ options.linear_solver_ordering_type = NESDIS;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+}
+
+TEST(Solver, DenseSchurOptions) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = DENSE_SCHUR;
+ options.dense_linear_algebra_library_type = EIGEN;
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dense_linear_algebra_library_type = LAPACK;
+ if (IsDenseLinearAlgebraLibraryTypeAvailable(
+ options.dense_linear_algebra_library_type)) {
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+}
+
+TEST(Solver, SparseSchurOptionsNoSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = SPARSE_SCHUR;
+ options.sparse_linear_algebra_library_type = NO_SPARSE;
+ EXPECT_FALSE(options.IsValid(&message));
+}
+
+TEST(Solver, SparseSchurOptionsEigenSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = SPARSE_SCHUR;
+ options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
+ options.linear_solver_ordering_type = AMD;
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(EIGEN_SPARSE)) {
+ EXPECT_TRUE(options.IsValid(&message));
+ } else {
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(EIGEN_SPARSE)) {
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+#ifndef CERES_NO_EIGEN_METIS
+ options.linear_solver_ordering_type = NESDIS;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(EIGEN_SPARSE)) {
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+#else
+ options.linear_solver_ordering_type = NESDIS;
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_FALSE(options.IsValid(&message));
+#endif
+}
+
+TEST(Solver, SparseSchurOptionsSuiteSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = SPARSE_SCHUR;
+ options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+ options.linear_solver_ordering_type = AMD;
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ EXPECT_TRUE(options.IsValid(&message));
+ } else {
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+#ifndef CERES_NO_CHOLMOD_PARTITION
+ options.linear_solver_ordering_type = NESDIS;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+#else
+ options.linear_solver_ordering_type = NESDIS;
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_FALSE(options.IsValid(&message));
+#endif
+}
+
+TEST(Solver, SparseSchurOptionsAccelerateSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = SPARSE_SCHUR;
+ options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
+ options.linear_solver_ordering_type = AMD;
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ EXPECT_TRUE(options.IsValid(&message));
+ } else {
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ options.linear_solver_ordering_type = NESDIS;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = false;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_mixed_precision_solves = true;
+ options.dynamic_sparsity = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+}
+
+TEST(Solver, CgnrOptionsIdentityPreconditioner) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = CGNR;
+ options.preconditioner_type = IDENTITY;
+ options.sparse_linear_algebra_library_type = NO_SPARSE;
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.sparse_linear_algebra_library_type = CUDA_SPARSE;
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_EQ(options.IsValid(&message),
+ IsSparseLinearAlgebraLibraryTypeAvailable(CUDA_SPARSE));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+}
+
+TEST(Solver, CgnrOptionsJacobiPreconditioner) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = CGNR;
+ options.preconditioner_type = JACOBI;
+ options.sparse_linear_algebra_library_type = NO_SPARSE;
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.sparse_linear_algebra_library_type = CUDA_SPARSE;
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_EQ(options.IsValid(&message),
+ IsSparseLinearAlgebraLibraryTypeAvailable(CUDA_SPARSE));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+}
+
+TEST(Solver, CgnrOptionsSubsetPreconditioner) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = CGNR;
+ options.preconditioner_type = SUBSET;
+
+ options.sparse_linear_algebra_library_type = NO_SPARSE;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.residual_blocks_for_subset_preconditioner.insert(nullptr);
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
+ if (IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type)) {
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_TRUE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+ }
+
+ options.sparse_linear_algebra_library_type = CUDA_SPARSE;
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = true;
+ options.use_mixed_precision_solves = false;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.dynamic_sparsity = false;
+ options.use_mixed_precision_solves = true;
+ EXPECT_FALSE(options.IsValid(&message));
+}
+
+TEST(Solver, CgnrOptionsSchurPreconditioners) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = CGNR;
+ options.preconditioner_type = SCHUR_JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_TRIDIAGONAL;
+ EXPECT_FALSE(options.IsValid(&message));
+}
+
+TEST(Solver, IterativeSchurOptionsNoSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = ITERATIVE_SCHUR;
+ options.sparse_linear_algebra_library_type = NO_SPARSE;
+ options.preconditioner_type = IDENTITY;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = SCHUR_JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_TRIDIAGONAL;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = SUBSET;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_explicit_schur_complement = true;
+ options.preconditioner_type = IDENTITY;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = SCHUR_JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_TRIDIAGONAL;
+ EXPECT_FALSE(options.IsValid(&message));
+}
+
+TEST(Solver, IterativeSchurOptionsEigenSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = ITERATIVE_SCHUR;
+ options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
+ options.preconditioner_type = IDENTITY;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = SCHUR_JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_JACOBI;
+ EXPECT_EQ(options.IsValid(&message),
+ IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type));
+ options.preconditioner_type = CLUSTER_TRIDIAGONAL;
+ EXPECT_EQ(options.IsValid(&message),
+ IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type));
+ options.preconditioner_type = SUBSET;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_explicit_schur_complement = true;
+ options.preconditioner_type = IDENTITY;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = SCHUR_JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_TRIDIAGONAL;
+ EXPECT_FALSE(options.IsValid(&message));
+}
+
+TEST(Solver, IterativeSchurOptionsSuiteSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = ITERATIVE_SCHUR;
+ options.sparse_linear_algebra_library_type = SUITE_SPARSE;
+ options.preconditioner_type = IDENTITY;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = SCHUR_JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_JACOBI;
+ EXPECT_EQ(options.IsValid(&message),
+ IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type));
+ options.preconditioner_type = CLUSTER_TRIDIAGONAL;
+ EXPECT_EQ(options.IsValid(&message),
+ IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type));
+ options.preconditioner_type = SUBSET;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_explicit_schur_complement = true;
+ options.preconditioner_type = IDENTITY;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = SCHUR_JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_TRIDIAGONAL;
+ EXPECT_FALSE(options.IsValid(&message));
+}
+
+TEST(Solver, IterativeSchurOptionsAccelerateSparse) {
+ std::string message;
+ Solver::Options options;
+ options.linear_solver_type = ITERATIVE_SCHUR;
+ options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
+ options.preconditioner_type = IDENTITY;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = SCHUR_JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_JACOBI;
+ EXPECT_EQ(options.IsValid(&message),
+ IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type));
+ options.preconditioner_type = CLUSTER_TRIDIAGONAL;
+ EXPECT_EQ(options.IsValid(&message),
+ IsSparseLinearAlgebraLibraryTypeAvailable(
+ options.sparse_linear_algebra_library_type));
+ options.preconditioner_type = SUBSET;
+ EXPECT_FALSE(options.IsValid(&message));
+
+ options.use_explicit_schur_complement = true;
+ options.preconditioner_type = IDENTITY;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = SCHUR_JACOBI;
+ EXPECT_TRUE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_JACOBI;
+ EXPECT_FALSE(options.IsValid(&message));
+ options.preconditioner_type = CLUSTER_TRIDIAGONAL;
+ EXPECT_FALSE(options.IsValid(&message));
+}
+
+class LargeCostCostFunction : public SizedCostFunction<1, 1> {
+ public:
+ bool Evaluate(double const* const* parameters,
+ double* residuals,
+ double** jacobians) const override {
+ residuals[0] = 1e300;
+ if (jacobians && jacobians[0]) {
+ jacobians[0][0] = 1.0;
+ }
+ return true;
+ }
+};
+
+TEST(Solver, LargeCostProblem) {
+ double x = 1;
+ Problem problem;
+ problem.AddResidualBlock(new LargeCostCostFunction, nullptr, &x);
+ Solver::Options options;
+ Solver::Summary summary;
+ Solve(options, &problem, &summary);
+ LOG(INFO) << summary.FullReport();
+ EXPECT_EQ(summary.termination_type, FAILURE);
+}
+
+} // namespace ceres::internal
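The new tests above exercise Solver::Options::IsValid() across combinations of linear solver, sparse backend, ordering type, mixed precision, and dynamic sparsity. A minimal sketch (not part of the patch) of how calling code can use the same entry point to fall back when a requested backend was not compiled in:

#include <string>

#include "ceres/solver.h"

ceres::Solver::Options MakeOptions() {
  ceres::Solver::Options options;
  options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
  options.sparse_linear_algebra_library_type = ceres::SUITE_SPARSE;

  std::string error;
  if (!options.IsValid(&error)) {
    // e.g. Ceres was built without SuiteSparse; fall back to a dense solver.
    options.linear_solver_type = ceres::DENSE_QR;
  }
  return options;
}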
diff --git a/internal/ceres/solver_utils.cc b/internal/ceres/solver_utils.cc
index eb5aafa..f5fbf05 100644
--- a/internal/ceres/solver_utils.cc
+++ b/internal/ceres/solver_utils.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,65 +30,59 @@
#include "ceres/solver_utils.h"
-#include <string>
-
#include "Eigen/Core"
#include "ceres/internal/config.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/version.h"
+#ifndef CERES_NO_CUDA
+#include "cuda_runtime.h"
+#endif // CERES_NO_CUDA
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-// clang-format off
-#define CERES_EIGEN_VERSION \
- CERES_TO_STRING(EIGEN_WORLD_VERSION) "." \
- CERES_TO_STRING(EIGEN_MAJOR_VERSION) "." \
- CERES_TO_STRING(EIGEN_MINOR_VERSION)
-// clang-format on
-
-std::string VersionString() {
- std::string value = std::string(CERES_VERSION_STRING);
- value += "-eigen-(" + std::string(CERES_EIGEN_VERSION) + ")";
+constexpr char kVersion[] =
+ // clang-format off
+ CERES_VERSION_STRING "-eigen-("
+ CERES_SEMVER_VERSION(EIGEN_WORLD_VERSION,
+ EIGEN_MAJOR_VERSION,
+ EIGEN_MINOR_VERSION) ")"
#ifdef CERES_NO_LAPACK
- value += "-no_lapack";
+ "-no_lapack"
#else
- value += "-lapack";
+ "-lapack"
#endif
#ifndef CERES_NO_SUITESPARSE
- value += "-suitesparse-(" + std::string(CERES_SUITESPARSE_VERSION) + ")";
+ "-suitesparse-(" CERES_SUITESPARSE_VERSION ")"
#endif
-#ifndef CERES_NO_CXSPARSE
- value += "-cxsparse-(" + std::string(CERES_CXSPARSE_VERSION) + ")";
+#if !defined(CERES_NO_EIGEN_METIS) || !defined(CERES_NO_CHOLMOD_PARTITION)
+ "-metis-(" CERES_METIS_VERSION ")"
#endif
#ifndef CERES_NO_ACCELERATE_SPARSE
- value += "-acceleratesparse";
+ "-acceleratesparse"
#endif
#ifdef CERES_USE_EIGEN_SPARSE
- value += "-eigensparse";
+ "-eigensparse"
#endif
#ifdef CERES_RESTRUCT_SCHUR_SPECIALIZATIONS
- value += "-no_schur_specializations";
-#endif
-
-#ifdef CERES_USE_OPENMP
- value += "-openmp";
-#else
- value += "-no_openmp";
+ "-no_schur_specializations"
#endif
#ifdef CERES_NO_CUSTOM_BLAS
- value += "-no_custom_blas";
+ "-no_custom_blas"
#endif
- return value;
-}
+#ifndef CERES_NO_CUDA
+ "-cuda-(" CERES_TO_STRING(CUDART_VERSION) ")"
+#endif
+ ;
+// clang-format on
-} // namespace internal
-} // namespace ceres
+std::string_view VersionString() noexcept { return kVersion; }
+
+} // namespace ceres::internal
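The rewritten VersionString() above assembles the feature string at compile time: adjacent string literals produced by the preprocessor are merged into a single constant, so nothing is allocated at run time. A minimal sketch of the technique using locally defined demo macros (the DEMO_* names are illustrative, not part of Ceres):

#include <string_view>

#define DEMO_TO_STRING_HELPER(x) #x
#define DEMO_TO_STRING(x) DEMO_TO_STRING_HELPER(x)
#define DEMO_SEMVER(major, minor, patch) \
  DEMO_TO_STRING(major) "." DEMO_TO_STRING(minor) "." DEMO_TO_STRING(patch)

// Adjacent literals are concatenated by the compiler into one constant array.
constexpr char kDemoVersion[] =
    "2.2.0" "-eigen-(" DEMO_SEMVER(3, 4, 0) ")"
#ifdef DEMO_HAVE_LAPACK
    "-lapack"
#else
    "-no_lapack"
#endif
    ;

std::string_view DemoVersionString() noexcept { return kDemoVersion; }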
diff --git a/internal/ceres/solver_utils.h b/internal/ceres/solver_utils.h
index 85fbf37..ff5e280 100644
--- a/internal/ceres/solver_utils.h
+++ b/internal/ceres/solver_utils.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,14 +28,18 @@
//
// Author: sameeragarwal@google.com (Sameer Agarwal)
-#include <algorithm>
-#include <string>
+#ifndef CERES_INTERNAL_SOLVER_UTILS_H_
+#define CERES_INTERNAL_SOLVER_UTILS_H_
+#include <algorithm>
+#include <string_view>
+
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/iteration_callback.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
template <typename SummaryType>
bool IsSolutionUsable(const SummaryType& summary) {
@@ -55,7 +59,11 @@
}
}
-std::string VersionString();
+CERES_NO_EXPORT
+std::string_view VersionString() noexcept;
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
+
+#endif // CERES_INTERNAL_SOLVER_UTILS_H_
diff --git a/internal/ceres/sparse_cholesky.cc b/internal/ceres/sparse_cholesky.cc
index 91cdf67..4f1bf87 100644
--- a/internal/ceres/sparse_cholesky.cc
+++ b/internal/ceres/sparse_cholesky.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,29 +30,29 @@
#include "ceres/sparse_cholesky.h"
+#include <memory>
+#include <utility>
+
#include "ceres/accelerate_sparse.h"
-#include "ceres/cxsparse.h"
#include "ceres/eigensparse.h"
-#include "ceres/float_cxsparse.h"
#include "ceres/float_suitesparse.h"
#include "ceres/iterative_refiner.h"
#include "ceres/suitesparse.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
std::unique_ptr<SparseCholesky> SparseCholesky::Create(
const LinearSolver::Options& options) {
- const OrderingType ordering_type = options.use_postordering ? AMD : NATURAL;
std::unique_ptr<SparseCholesky> sparse_cholesky;
switch (options.sparse_linear_algebra_library_type) {
case SUITE_SPARSE:
#ifndef CERES_NO_SUITESPARSE
if (options.use_mixed_precision_solves) {
- sparse_cholesky = FloatSuiteSparseCholesky::Create(ordering_type);
+ sparse_cholesky =
+ FloatSuiteSparseCholesky::Create(options.ordering_type);
} else {
- sparse_cholesky = SuiteSparseCholesky::Create(ordering_type);
+ sparse_cholesky = SuiteSparseCholesky::Create(options.ordering_type);
}
break;
#else
@@ -62,9 +62,10 @@
case EIGEN_SPARSE:
#ifdef CERES_USE_EIGEN_SPARSE
if (options.use_mixed_precision_solves) {
- sparse_cholesky = FloatEigenSparseCholesky::Create(ordering_type);
+ sparse_cholesky =
+ FloatEigenSparseCholesky::Create(options.ordering_type);
} else {
- sparse_cholesky = EigenSparseCholesky::Create(ordering_type);
+ sparse_cholesky = EigenSparseCholesky::Create(options.ordering_type);
}
break;
#else
@@ -72,25 +73,14 @@
<< "Eigen's sparse Cholesky factorization routines.";
#endif
- case CX_SPARSE:
-#ifndef CERES_NO_CXSPARSE
- if (options.use_mixed_precision_solves) {
- sparse_cholesky = FloatCXSparseCholesky::Create(ordering_type);
- } else {
- sparse_cholesky = CXSparseCholesky::Create(ordering_type);
- }
- break;
-#else
- LOG(FATAL) << "Ceres was compiled without support for CXSparse.";
-#endif
-
case ACCELERATE_SPARSE:
#ifndef CERES_NO_ACCELERATE_SPARSE
if (options.use_mixed_precision_solves) {
- sparse_cholesky = AppleAccelerateCholesky<float>::Create(ordering_type);
+ sparse_cholesky =
+ AppleAccelerateCholesky<float>::Create(options.ordering_type);
} else {
sparse_cholesky =
- AppleAccelerateCholesky<double>::Create(ordering_type);
+ AppleAccelerateCholesky<double>::Create(options.ordering_type);
}
break;
#else
@@ -105,15 +95,15 @@
}
if (options.max_num_refinement_iterations > 0) {
- std::unique_ptr<IterativeRefiner> refiner(
- new IterativeRefiner(options.max_num_refinement_iterations));
- sparse_cholesky = std::unique_ptr<SparseCholesky>(new RefinedSparseCholesky(
- std::move(sparse_cholesky), std::move(refiner)));
+ auto refiner = std::make_unique<SparseIterativeRefiner>(
+ options.max_num_refinement_iterations);
+ sparse_cholesky = std::make_unique<RefinedSparseCholesky>(
+ std::move(sparse_cholesky), std::move(refiner));
}
return sparse_cholesky;
}
-SparseCholesky::~SparseCholesky() {}
+SparseCholesky::~SparseCholesky() = default;
LinearSolverTerminationType SparseCholesky::FactorAndSolve(
CompressedRowSparseMatrix* lhs,
@@ -121,7 +111,7 @@
double* solution,
std::string* message) {
LinearSolverTerminationType termination_type = Factorize(lhs, message);
- if (termination_type == LINEAR_SOLVER_SUCCESS) {
+ if (termination_type == LinearSolverTerminationType::SUCCESS) {
termination_type = Solve(rhs, solution, message);
}
return termination_type;
@@ -129,11 +119,11 @@
RefinedSparseCholesky::RefinedSparseCholesky(
std::unique_ptr<SparseCholesky> sparse_cholesky,
- std::unique_ptr<IterativeRefiner> iterative_refiner)
+ std::unique_ptr<SparseIterativeRefiner> iterative_refiner)
: sparse_cholesky_(std::move(sparse_cholesky)),
iterative_refiner_(std::move(iterative_refiner)) {}
-RefinedSparseCholesky::~RefinedSparseCholesky() {}
+RefinedSparseCholesky::~RefinedSparseCholesky() = default;
CompressedRowSparseMatrix::StorageType RefinedSparseCholesky::StorageType()
const {
@@ -151,13 +141,12 @@
std::string* message) {
CHECK(lhs_ != nullptr);
auto termination_type = sparse_cholesky_->Solve(rhs, solution, message);
- if (termination_type != LINEAR_SOLVER_SUCCESS) {
+ if (termination_type != LinearSolverTerminationType::SUCCESS) {
return termination_type;
}
iterative_refiner_->Refine(*lhs_, rhs, sparse_cholesky_.get(), solution);
- return LINEAR_SOLVER_SUCCESS;
+ return LinearSolverTerminationType::SUCCESS;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
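
Editor's note: a minimal sketch (not part of the patch) of how the reworked factory is now driven. It assumes Ceres was built with Eigen's sparse support and mirrors the options used by the tests below; the ordering is taken directly from LinearSolver::Options::ordering_type instead of being derived from the removed use_postordering flag.

  #include <memory>
  #include "ceres/linear_solver.h"
  #include "ceres/sparse_cholesky.h"

  namespace ceres::internal {
  std::unique_ptr<SparseCholesky> MakeCholeskySketch() {
    LinearSolver::Options options;
    options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
    // Ordering is now an enum (NATURAL, AMD, NESDIS) rather than a bool.
    options.ordering_type = OrderingType::AMD;
    // A positive value wraps the factorization in RefinedSparseCholesky,
    // which applies SparseIterativeRefiner after each solve.
    options.max_num_refinement_iterations = 2;
    return SparseCholesky::Create(options);
  }
  }  // namespace ceres::internal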
diff --git a/internal/ceres/sparse_cholesky.h b/internal/ceres/sparse_cholesky.h
index a6af6b2..53f475a 100644
--- a/internal/ceres/sparse_cholesky.h
+++ b/internal/ceres/sparse_cholesky.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,16 +33,17 @@
// This include must come before any #ifndef check on Ceres compile options.
// clang-format off
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
// clang-format on
#include <memory>
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// An interface that abstracts away the internal details of various
// sparse linear algebra libraries and offers a simple API for solving
@@ -61,13 +62,14 @@
//
// CompressedRowSparseMatrix lhs = ...;
// std::string message;
-// CHECK_EQ(sparse_cholesky->Factorize(&lhs, &message), LINEAR_SOLVER_SUCCESS);
+// CHECK_EQ(sparse_cholesky->Factorize(&lhs, &message),
+// LinearSolverTerminationType::SUCCESS);
// Vector rhs = ...;
// Vector solution = ...;
// CHECK_EQ(sparse_cholesky->Solve(rhs.data(), solution.data(), &message),
-// LINEAR_SOLVER_SUCCESS);
+// LinearSolverTerminationType::SUCCESS);
-class CERES_EXPORT_INTERNAL SparseCholesky {
+class CERES_NO_EXPORT SparseCholesky {
public:
static std::unique_ptr<SparseCholesky> Create(
const LinearSolver::Options& options);
@@ -103,38 +105,39 @@
// Convenience method which combines a call to Factorize and
// Solve. Solve is only called if Factorize returns
- // LINEAR_SOLVER_SUCCESS.
- virtual LinearSolverTerminationType FactorAndSolve(
- CompressedRowSparseMatrix* lhs,
- const double* rhs,
- double* solution,
- std::string* message);
+ // LinearSolverTerminationType::SUCCESS.
+ LinearSolverTerminationType FactorAndSolve(CompressedRowSparseMatrix* lhs,
+ const double* rhs,
+ double* solution,
+ std::string* message);
};
-class IterativeRefiner;
+class SparseIterativeRefiner;
// Computes an initial solution using the given instance of
-// SparseCholesky, and then refines it using the IterativeRefiner.
-class CERES_EXPORT_INTERNAL RefinedSparseCholesky : public SparseCholesky {
+// SparseCholesky, and then refines it using the SparseIterativeRefiner.
+class CERES_NO_EXPORT RefinedSparseCholesky final : public SparseCholesky {
public:
- RefinedSparseCholesky(std::unique_ptr<SparseCholesky> sparse_cholesky,
- std::unique_ptr<IterativeRefiner> iterative_refiner);
- virtual ~RefinedSparseCholesky();
+ RefinedSparseCholesky(
+ std::unique_ptr<SparseCholesky> sparse_cholesky,
+ std::unique_ptr<SparseIterativeRefiner> iterative_refiner);
+ ~RefinedSparseCholesky() override;
- virtual CompressedRowSparseMatrix::StorageType StorageType() const;
- virtual LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
- std::string* message);
- virtual LinearSolverTerminationType Solve(const double* rhs,
- double* solution,
- std::string* message);
+ CompressedRowSparseMatrix::StorageType StorageType() const override;
+ LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
+ std::string* message) override;
+ LinearSolverTerminationType Solve(const double* rhs,
+ double* solution,
+ std::string* message) override;
private:
std::unique_ptr<SparseCholesky> sparse_cholesky_;
- std::unique_ptr<IterativeRefiner> iterative_refiner_;
+ std::unique_ptr<SparseIterativeRefiner> iterative_refiner_;
CompressedRowSparseMatrix* lhs_ = nullptr;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_SPARSE_CHOLESKY_H_
diff --git a/internal/ceres/sparse_cholesky_test.cc b/internal/ceres/sparse_cholesky_test.cc
index 2ef24e3..d0d962e 100644
--- a/internal/ceres/sparse_cholesky_test.cc
+++ b/internal/ceres/sparse_cholesky_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,6 +32,7 @@
#include <memory>
#include <numeric>
+#include <random>
#include <vector>
#include "Eigen/Dense"
@@ -39,22 +40,23 @@
#include "ceres/block_sparse_matrix.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/inner_product_computer.h"
+#include "ceres/internal/config.h"
#include "ceres/internal/eigen.h"
#include "ceres/iterative_refiner.h"
-#include "ceres/random.h"
#include "glog/logging.h"
#include "gmock/gmock.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
namespace {
-BlockSparseMatrix* CreateRandomFullRankMatrix(const int num_col_blocks,
- const int min_col_block_size,
- const int max_col_block_size,
- const double block_density) {
+std::unique_ptr<BlockSparseMatrix> CreateRandomFullRankMatrix(
+ const int num_col_blocks,
+ const int min_col_block_size,
+ const int max_col_block_size,
+ const double block_density,
+ std::mt19937& prng) {
// Create a random matrix
BlockSparseMatrix::RandomMatrixOptions options;
options.num_col_blocks = num_col_blocks;
@@ -65,24 +67,23 @@
options.min_row_block_size = 1;
options.max_row_block_size = max_col_block_size;
options.block_density = block_density;
- std::unique_ptr<BlockSparseMatrix> random_matrix(
- BlockSparseMatrix::CreateRandomMatrix(options));
+ auto random_matrix = BlockSparseMatrix::CreateRandomMatrix(options, prng);
// Add a diagonal block sparse matrix to make it full rank.
Vector diagonal = Vector::Ones(random_matrix->num_cols());
- std::unique_ptr<BlockSparseMatrix> block_diagonal(
- BlockSparseMatrix::CreateDiagonalMatrix(
- diagonal.data(), random_matrix->block_structure()->cols));
+ auto block_diagonal = BlockSparseMatrix::CreateDiagonalMatrix(
+ diagonal.data(), random_matrix->block_structure()->cols);
random_matrix->AppendRows(*block_diagonal);
- return random_matrix.release();
+ return random_matrix;
}
-static bool ComputeExpectedSolution(const CompressedRowSparseMatrix& lhs,
- const Vector& rhs,
- Vector* solution) {
+bool ComputeExpectedSolution(const CompressedRowSparseMatrix& lhs,
+ const Vector& rhs,
+ Vector* solution) {
Matrix eigen_lhs;
lhs.ToDenseMatrix(&eigen_lhs);
- if (lhs.storage_type() == CompressedRowSparseMatrix::UPPER_TRIANGULAR) {
+ if (lhs.storage_type() ==
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR) {
Matrix full_lhs = eigen_lhs.selfadjointView<Eigen::Upper>();
Eigen::LLT<Matrix, Eigen::Upper> llt =
eigen_lhs.selfadjointView<Eigen::Upper>().llt();
@@ -110,20 +111,19 @@
const int num_blocks,
const int min_block_size,
const int max_block_size,
- const double block_density) {
+ const double block_density,
+ std::mt19937& prng) {
LinearSolver::Options sparse_cholesky_options;
sparse_cholesky_options.sparse_linear_algebra_library_type =
sparse_linear_algebra_library_type;
- sparse_cholesky_options.use_postordering = (ordering_type == AMD);
- std::unique_ptr<SparseCholesky> sparse_cholesky =
- SparseCholesky::Create(sparse_cholesky_options);
+ sparse_cholesky_options.ordering_type = ordering_type;
+ auto sparse_cholesky = SparseCholesky::Create(sparse_cholesky_options);
const CompressedRowSparseMatrix::StorageType storage_type =
sparse_cholesky->StorageType();
- std::unique_ptr<BlockSparseMatrix> m(CreateRandomFullRankMatrix(
- num_blocks, min_block_size, max_block_size, block_density));
- std::unique_ptr<InnerProductComputer> inner_product_computer(
- InnerProductComputer::Create(*m, storage_type));
+ auto m = CreateRandomFullRankMatrix(
+ num_blocks, min_block_size, max_block_size, block_density, prng);
+ auto inner_product_computer = InnerProductComputer::Create(*m, storage_type);
inner_product_computer->Compute();
CompressedRowSparseMatrix* lhs = inner_product_computer->mutable_result();
@@ -140,7 +140,7 @@
std::string message;
EXPECT_EQ(
sparse_cholesky->FactorAndSolve(lhs, rhs.data(), actual.data(), &message),
- LINEAR_SOLVER_SUCCESS);
+ LinearSolverTerminationType::SUCCESS);
Matrix eigen_lhs;
lhs->ToDenseMatrix(&eigen_lhs);
EXPECT_NEAR((actual - expected).norm() / actual.norm(),
@@ -150,14 +150,14 @@
<< eigen_lhs;
}
-typedef ::testing::tuple<SparseLinearAlgebraLibraryType, OrderingType, bool>
- Param;
+using Param =
+ ::testing::tuple<SparseLinearAlgebraLibraryType, OrderingType, bool>;
std::string ParamInfoToString(testing::TestParamInfo<Param> info) {
Param param = info.param;
std::stringstream ss;
ss << SparseLinearAlgebraLibraryTypeToString(::testing::get<0>(param)) << "_"
- << (::testing::get<1>(param) == AMD ? "AMD" : "NATURAL") << "_"
+ << ::testing::get<1>(param) << "_"
<< (::testing::get<2>(param) ? "UseBlockStructure" : "NoBlockStructure");
return ss.str();
}
@@ -167,25 +167,29 @@
class SparseCholeskyTest : public ::testing::TestWithParam<Param> {};
TEST_P(SparseCholeskyTest, FactorAndSolve) {
- SetRandomState(2982);
- const int kMinNumBlocks = 1;
- const int kMaxNumBlocks = 10;
- const int kNumTrials = 10;
- const int kMinBlockSize = 1;
- const int kMaxBlockSize = 5;
+ constexpr int kMinNumBlocks = 1;
+ constexpr int kMaxNumBlocks = 10;
+ constexpr int kNumTrials = 10;
+ constexpr int kMinBlockSize = 1;
+ constexpr int kMaxBlockSize = 5;
+
+ Param param = GetParam();
+
+ std::mt19937 prng;
+ std::uniform_real_distribution<double> distribution(0.1, 1.0);
for (int num_blocks = kMinNumBlocks; num_blocks < kMaxNumBlocks;
++num_blocks) {
for (int trial = 0; trial < kNumTrials; ++trial) {
- const double block_density = std::max(0.1, RandDouble());
- Param param = GetParam();
+ const double block_density = distribution(prng);
SparseCholeskySolverUnitTest(::testing::get<0>(param),
::testing::get<1>(param),
::testing::get<2>(param),
num_blocks,
kMinBlockSize,
kMaxBlockSize,
- block_density);
+ block_density,
+ prng);
}
}
}
@@ -193,29 +197,35 @@
namespace {
#ifndef CERES_NO_SUITESPARSE
-INSTANTIATE_TEST_SUITE_P(SuiteSparseCholesky,
- SparseCholeskyTest,
- ::testing::Combine(::testing::Values(SUITE_SPARSE),
- ::testing::Values(AMD, NATURAL),
- ::testing::Values(true, false)),
- ParamInfoToString);
+INSTANTIATE_TEST_SUITE_P(
+ SuiteSparseCholesky,
+ SparseCholeskyTest,
+ ::testing::Combine(::testing::Values(SUITE_SPARSE),
+ ::testing::Values(OrderingType::AMD,
+ OrderingType::NATURAL),
+ ::testing::Values(true, false)),
+ ParamInfoToString);
#endif
-#ifndef CERES_NO_CXSPARSE
-INSTANTIATE_TEST_SUITE_P(CXSparseCholesky,
- SparseCholeskyTest,
- ::testing::Combine(::testing::Values(CX_SPARSE),
- ::testing::Values(AMD, NATURAL),
- ::testing::Values(true, false)),
- ParamInfoToString);
-#endif
+#if !defined(CERES_NO_SUITESPARSE) && !defined(CERES_NO_CHOLMOD_PARTITION)
+INSTANTIATE_TEST_SUITE_P(
+ SuiteSparseCholeskyMETIS,
+ SparseCholeskyTest,
+ ::testing::Combine(::testing::Values(SUITE_SPARSE),
+ ::testing::Values(OrderingType::NESDIS),
+ ::testing::Values(true, false)),
+ ParamInfoToString);
+#endif // !defined(CERES_NO_SUITESPARSE) &&
+ // !defined(CERES_NO_CHOLMOD_PARTITION)
#ifndef CERES_NO_ACCELERATE_SPARSE
INSTANTIATE_TEST_SUITE_P(
AccelerateSparseCholesky,
SparseCholeskyTest,
::testing::Combine(::testing::Values(ACCELERATE_SPARSE),
- ::testing::Values(AMD, NATURAL),
+ ::testing::Values(OrderingType::AMD,
+ OrderingType::NESDIS,
+ OrderingType::NATURAL),
::testing::Values(true, false)),
ParamInfoToString);
@@ -223,26 +233,50 @@
AccelerateSparseCholeskySingle,
SparseCholeskyTest,
::testing::Combine(::testing::Values(ACCELERATE_SPARSE),
- ::testing::Values(AMD, NATURAL),
+ ::testing::Values(OrderingType::AMD,
+ OrderingType::NESDIS,
+ OrderingType::NATURAL),
::testing::Values(true, false)),
ParamInfoToString);
#endif
#ifdef CERES_USE_EIGEN_SPARSE
-INSTANTIATE_TEST_SUITE_P(EigenSparseCholesky,
- SparseCholeskyTest,
- ::testing::Combine(::testing::Values(EIGEN_SPARSE),
- ::testing::Values(AMD, NATURAL),
- ::testing::Values(true, false)),
- ParamInfoToString);
+INSTANTIATE_TEST_SUITE_P(
+ EigenSparseCholesky,
+ SparseCholeskyTest,
+ ::testing::Combine(::testing::Values(EIGEN_SPARSE),
+ ::testing::Values(OrderingType::AMD,
+ OrderingType::NATURAL),
+ ::testing::Values(true, false)),
+ ParamInfoToString);
-INSTANTIATE_TEST_SUITE_P(EigenSparseCholeskySingle,
- SparseCholeskyTest,
- ::testing::Combine(::testing::Values(EIGEN_SPARSE),
- ::testing::Values(AMD, NATURAL),
- ::testing::Values(true, false)),
- ParamInfoToString);
-#endif
+INSTANTIATE_TEST_SUITE_P(
+ EigenSparseCholeskySingle,
+ SparseCholeskyTest,
+ ::testing::Combine(::testing::Values(EIGEN_SPARSE),
+ ::testing::Values(OrderingType::AMD,
+ OrderingType::NATURAL),
+ ::testing::Values(true, false)),
+ ParamInfoToString);
+#endif // CERES_USE_EIGEN_SPARSE
+
+#if defined(CERES_USE_EIGEN_SPARSE) && !defined(CERES_NO_EIGEN_METIS)
+INSTANTIATE_TEST_SUITE_P(
+ EigenSparseCholeskyMETIS,
+ SparseCholeskyTest,
+ ::testing::Combine(::testing::Values(EIGEN_SPARSE),
+ ::testing::Values(OrderingType::NESDIS),
+ ::testing::Values(true, false)),
+ ParamInfoToString);
+
+INSTANTIATE_TEST_SUITE_P(
+ EigenSparseCholeskySingleMETIS,
+ SparseCholeskyTest,
+ ::testing::Combine(::testing::Values(EIGEN_SPARSE),
+ ::testing::Values(OrderingType::NESDIS),
+ ::testing::Values(true, false)),
+ ParamInfoToString);
+#endif // defined(CERES_USE_EIGEN_SPARSE) && !defined(CERES_NO_EIGEN_METIS)
class MockSparseCholesky : public SparseCholesky {
public:
@@ -256,9 +290,9 @@
std::string* message));
};
-class MockIterativeRefiner : public IterativeRefiner {
+class MockSparseIterativeRefiner : public SparseIterativeRefiner {
public:
- MockIterativeRefiner() : IterativeRefiner(1) {}
+ MockSparseIterativeRefiner() : SparseIterativeRefiner(1) {}
MOCK_METHOD4(Refine,
void(const SparseMatrix& lhs,
const double* rhs,
@@ -270,47 +304,48 @@
using testing::Return;
TEST(RefinedSparseCholesky, StorageType) {
- MockSparseCholesky* mock_sparse_cholesky = new MockSparseCholesky;
- MockIterativeRefiner* mock_iterative_refiner = new MockIterativeRefiner;
- EXPECT_CALL(*mock_sparse_cholesky, StorageType())
+ auto sparse_cholesky = std::make_unique<MockSparseCholesky>();
+ auto iterative_refiner = std::make_unique<MockSparseIterativeRefiner>();
+ EXPECT_CALL(*sparse_cholesky, StorageType())
.Times(1)
- .WillRepeatedly(Return(CompressedRowSparseMatrix::UPPER_TRIANGULAR));
- EXPECT_CALL(*mock_iterative_refiner, Refine(_, _, _, _)).Times(0);
- std::unique_ptr<SparseCholesky> sparse_cholesky(mock_sparse_cholesky);
- std::unique_ptr<IterativeRefiner> iterative_refiner(mock_iterative_refiner);
+ .WillRepeatedly(
+ Return(CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR));
+ EXPECT_CALL(*iterative_refiner, Refine(_, _, _, _)).Times(0);
RefinedSparseCholesky refined_sparse_cholesky(std::move(sparse_cholesky),
std::move(iterative_refiner));
EXPECT_EQ(refined_sparse_cholesky.StorageType(),
- CompressedRowSparseMatrix::UPPER_TRIANGULAR);
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR);
};
TEST(RefinedSparseCholesky, Factorize) {
- MockSparseCholesky* mock_sparse_cholesky = new MockSparseCholesky;
- MockIterativeRefiner* mock_iterative_refiner = new MockIterativeRefiner;
+ auto* mock_sparse_cholesky = new MockSparseCholesky;
+ auto* mock_iterative_refiner = new MockSparseIterativeRefiner;
EXPECT_CALL(*mock_sparse_cholesky, Factorize(_, _))
.Times(1)
- .WillRepeatedly(Return(LINEAR_SOLVER_SUCCESS));
+ .WillRepeatedly(Return(LinearSolverTerminationType::SUCCESS));
EXPECT_CALL(*mock_iterative_refiner, Refine(_, _, _, _)).Times(0);
std::unique_ptr<SparseCholesky> sparse_cholesky(mock_sparse_cholesky);
- std::unique_ptr<IterativeRefiner> iterative_refiner(mock_iterative_refiner);
+ std::unique_ptr<SparseIterativeRefiner> iterative_refiner(
+ mock_iterative_refiner);
RefinedSparseCholesky refined_sparse_cholesky(std::move(sparse_cholesky),
std::move(iterative_refiner));
CompressedRowSparseMatrix m(1, 1, 1);
std::string message;
EXPECT_EQ(refined_sparse_cholesky.Factorize(&m, &message),
- LINEAR_SOLVER_SUCCESS);
+ LinearSolverTerminationType::SUCCESS);
};
TEST(RefinedSparseCholesky, FactorAndSolveWithUnsuccessfulFactorization) {
- MockSparseCholesky* mock_sparse_cholesky = new MockSparseCholesky;
- MockIterativeRefiner* mock_iterative_refiner = new MockIterativeRefiner;
+ auto* mock_sparse_cholesky = new MockSparseCholesky;
+ auto* mock_iterative_refiner = new MockSparseIterativeRefiner;
EXPECT_CALL(*mock_sparse_cholesky, Factorize(_, _))
.Times(1)
- .WillRepeatedly(Return(LINEAR_SOLVER_FAILURE));
+ .WillRepeatedly(Return(LinearSolverTerminationType::FAILURE));
EXPECT_CALL(*mock_sparse_cholesky, Solve(_, _, _)).Times(0);
EXPECT_CALL(*mock_iterative_refiner, Refine(_, _, _, _)).Times(0);
std::unique_ptr<SparseCholesky> sparse_cholesky(mock_sparse_cholesky);
- std::unique_ptr<IterativeRefiner> iterative_refiner(mock_iterative_refiner);
+ std::unique_ptr<SparseIterativeRefiner> iterative_refiner(
+ mock_iterative_refiner);
RefinedSparseCholesky refined_sparse_cholesky(std::move(sparse_cholesky),
std::move(iterative_refiner));
CompressedRowSparseMatrix m(1, 1, 1);
@@ -319,23 +354,23 @@
double solution;
EXPECT_EQ(
refined_sparse_cholesky.FactorAndSolve(&m, &rhs, &solution, &message),
- LINEAR_SOLVER_FAILURE);
+ LinearSolverTerminationType::FAILURE);
};
TEST(RefinedSparseCholesky, FactorAndSolveWithSuccess) {
- MockSparseCholesky* mock_sparse_cholesky = new MockSparseCholesky;
- std::unique_ptr<MockIterativeRefiner> mock_iterative_refiner(
- new MockIterativeRefiner);
+ auto* mock_sparse_cholesky = new MockSparseCholesky;
+ std::unique_ptr<MockSparseIterativeRefiner> mock_iterative_refiner(
+ new MockSparseIterativeRefiner);
EXPECT_CALL(*mock_sparse_cholesky, Factorize(_, _))
.Times(1)
- .WillRepeatedly(Return(LINEAR_SOLVER_SUCCESS));
+ .WillRepeatedly(Return(LinearSolverTerminationType::SUCCESS));
EXPECT_CALL(*mock_sparse_cholesky, Solve(_, _, _))
.Times(1)
- .WillRepeatedly(Return(LINEAR_SOLVER_SUCCESS));
+ .WillRepeatedly(Return(LinearSolverTerminationType::SUCCESS));
EXPECT_CALL(*mock_iterative_refiner, Refine(_, _, _, _)).Times(1);
std::unique_ptr<SparseCholesky> sparse_cholesky(mock_sparse_cholesky);
- std::unique_ptr<IterativeRefiner> iterative_refiner(
+ std::unique_ptr<SparseIterativeRefiner> iterative_refiner(
std::move(mock_iterative_refiner));
RefinedSparseCholesky refined_sparse_cholesky(std::move(sparse_cholesky),
std::move(iterative_refiner));
@@ -345,10 +380,9 @@
double solution;
EXPECT_EQ(
refined_sparse_cholesky.FactorAndSolve(&m, &rhs, &solution, &message),
- LINEAR_SOLVER_SUCCESS);
+ LinearSolverTerminationType::SUCCESS);
};
} // namespace
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
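
Editor's note: the test above replaces the removed SetRandomState/RandDouble globals with an explicit std::mt19937 threaded through the random-matrix helpers. A minimal sketch of the pattern, using illustrative (hypothetical) block sizes:

  #include <random>
  #include "ceres/block_sparse_matrix.h"

  namespace ceres::internal {
  void RandomMatrixSketch() {
    BlockSparseMatrix::RandomMatrixOptions options;
    options.num_row_blocks = 8;
    options.num_col_blocks = 4;
    options.min_row_block_size = 1;
    options.max_row_block_size = 3;
    options.min_col_block_size = 1;
    options.max_col_block_size = 3;
    // Deterministic seed, playing the role the old SetRandomState(2982) did.
    std::mt19937 prng(2982);
    std::uniform_real_distribution<double> density(0.1, 1.0);
    options.block_density = density(prng);
    auto matrix = BlockSparseMatrix::CreateRandomMatrix(options, prng);
    (void)matrix;
  }
  }  // namespace ceres::internal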
diff --git a/internal/ceres/sparse_matrix.cc b/internal/ceres/sparse_matrix.cc
index 32388f5..cdc77fc 100644
--- a/internal/ceres/sparse_matrix.cc
+++ b/internal/ceres/sparse_matrix.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,10 +30,24 @@
#include "ceres/sparse_matrix.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-SparseMatrix::~SparseMatrix() {}
+SparseMatrix::~SparseMatrix() = default;
-} // namespace internal
-} // namespace ceres
+void SparseMatrix::SquaredColumnNorm(double* x,
+ ContextImpl* context,
+ int num_threads) const {
+ (void)context;
+ (void)num_threads;
+ SquaredColumnNorm(x);
+}
+
+void SparseMatrix::ScaleColumns(const double* scale,
+ ContextImpl* context,
+ int num_threads) {
+ (void)context;
+ (void)num_threads;
+ ScaleColumns(scale);
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/sparse_matrix.h b/internal/ceres/sparse_matrix.h
index b57f108..9c79417 100644
--- a/internal/ceres/sparse_matrix.h
+++ b/internal/ceres/sparse_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,12 +36,12 @@
#include <cstdio>
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_operator.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
+class ContextImpl;
// This class defines the interface for storing and manipulating
// sparse matrices. The key property that differentiates different
@@ -64,23 +64,35 @@
// matrix type dependent and we are at this stage unable to come up
// with an efficient high level interface that spans multiple sparse
// matrix types.
-class CERES_EXPORT_INTERNAL SparseMatrix : public LinearOperator {
+class CERES_NO_EXPORT SparseMatrix : public LinearOperator {
public:
- virtual ~SparseMatrix();
+ ~SparseMatrix() override;
// y += Ax;
- virtual void RightMultiply(const double* x, double* y) const = 0;
+ using LinearOperator::RightMultiplyAndAccumulate;
+ void RightMultiplyAndAccumulate(const double* x,
+ double* y) const override = 0;
+
// y += A'x;
- virtual void LeftMultiply(const double* x, double* y) const = 0;
+ void LeftMultiplyAndAccumulate(const double* x, double* y) const override = 0;
// In MATLAB notation sum(A.*A, 1)
virtual void SquaredColumnNorm(double* x) const = 0;
+ virtual void SquaredColumnNorm(double* x,
+ ContextImpl* context,
+ int num_threads) const;
// A = A * diag(scale)
virtual void ScaleColumns(const double* scale) = 0;
+ virtual void ScaleColumns(const double* scale,
+ ContextImpl* context,
+ int num_threads);
// A = 0. A->num_nonzeros() == 0 is true after this call. The
// sparsity pattern is preserved.
virtual void SetZero() = 0;
+ virtual void SetZero(ContextImpl* /*context*/, int /*num_threads*/) {
+ SetZero();
+ }
// Resize and populate dense_matrix with a dense version of the
// sparse matrix.
@@ -98,12 +110,11 @@
virtual double* mutable_values() = 0;
virtual const double* values() const = 0;
- virtual int num_rows() const = 0;
- virtual int num_cols() const = 0;
+ int num_rows() const override = 0;
+ int num_cols() const override = 0;
virtual int num_nonzeros() const = 0;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_SPARSE_MATRIX_H_
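
Editor's note: the new ContextImpl/num_threads overloads on SparseMatrix forward to the single-threaded virtuals by default (see the sparse_matrix.cc hunk above), so existing subclasses keep working while callers can opt into threading uniformly. A hedged call-site sketch, assuming any concrete SparseMatrix and an initialized ContextImpl:

  #include "ceres/context_impl.h"
  #include "ceres/internal/eigen.h"
  #include "ceres/sparse_matrix.h"

  namespace ceres::internal {
  Vector ColumnNormsSketch(const SparseMatrix& A, ContextImpl* context) {
    Vector squared_norms = Vector::Zero(A.num_cols());
    // Falls back to the serial SquaredColumnNorm(double*) unless the concrete
    // matrix type overrides the threaded variant.
    A.SquaredColumnNorm(squared_norms.data(), context, /*num_threads=*/4);
    return squared_norms;
  }
  }  // namespace ceres::internal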
diff --git a/internal/ceres/sparse_normal_cholesky_solver.cc b/internal/ceres/sparse_normal_cholesky_solver.cc
index 0f2e589..5746509 100644
--- a/internal/ceres/sparse_normal_cholesky_solver.cc
+++ b/internal/ceres/sparse_normal_cholesky_solver.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -45,8 +45,7 @@
#include "ceres/types.h"
#include "ceres/wall_time.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
SparseNormalCholeskySolver::SparseNormalCholeskySolver(
const LinearSolver::Options& options)
@@ -54,7 +53,7 @@
sparse_cholesky_ = SparseCholesky::Create(options);
}
-SparseNormalCholeskySolver::~SparseNormalCholeskySolver() {}
+SparseNormalCholeskySolver::~SparseNormalCholeskySolver() = default;
LinearSolver::Summary SparseNormalCholeskySolver::SolveImpl(
BlockSparseMatrix* A,
@@ -64,7 +63,7 @@
EventLogger event_logger("SparseNormalCholeskySolver::Solve");
LinearSolver::Summary summary;
summary.num_iterations = 1;
- summary.termination_type = LINEAR_SOLVER_SUCCESS;
+ summary.termination_type = LinearSolverTerminationType::SUCCESS;
summary.message = "Success.";
const int num_cols = A->num_cols();
@@ -72,24 +71,24 @@
xref.setZero();
rhs_.resize(num_cols);
rhs_.setZero();
- A->LeftMultiply(b, rhs_.data());
+ A->LeftMultiplyAndAccumulate(b, rhs_.data());
event_logger.AddEvent("Compute RHS");
- if (per_solve_options.D != NULL) {
+ if (per_solve_options.D != nullptr) {
// Temporarily append a diagonal block to the A matrix, but undo
// it before returning the matrix to the user.
- std::unique_ptr<BlockSparseMatrix> regularizer;
- regularizer.reset(BlockSparseMatrix::CreateDiagonalMatrix(
- per_solve_options.D, A->block_structure()->cols));
+ std::unique_ptr<BlockSparseMatrix> regularizer =
+ BlockSparseMatrix::CreateDiagonalMatrix(per_solve_options.D,
+ A->block_structure()->cols);
event_logger.AddEvent("Diagonal");
A->AppendRows(*regularizer);
event_logger.AddEvent("Append");
}
event_logger.AddEvent("Append Rows");
- if (inner_product_computer_.get() == NULL) {
- inner_product_computer_.reset(
- InnerProductComputer::Create(*A, sparse_cholesky_->StorageType()));
+ if (inner_product_computer_.get() == nullptr) {
+ inner_product_computer_ =
+ InnerProductComputer::Create(*A, sparse_cholesky_->StorageType());
event_logger.AddEvent("InnerProductComputer::Create");
}
@@ -97,7 +96,7 @@
inner_product_computer_->Compute();
event_logger.AddEvent("InnerProductComputer::Compute");
- if (per_solve_options.D != NULL) {
+ if (per_solve_options.D != nullptr) {
A->DeleteRowBlocks(A->block_structure()->cols.size());
}
@@ -110,5 +109,4 @@
return summary;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/sparse_normal_cholesky_solver.h b/internal/ceres/sparse_normal_cholesky_solver.h
index ef32743..585d1c1 100644
--- a/internal/ceres/sparse_normal_cholesky_solver.h
+++ b/internal/ceres/sparse_normal_cholesky_solver.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,15 +36,16 @@
// This include must come before any #ifndef check on Ceres compile options.
// clang-format off
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
// clang-format on
+#include <memory>
#include <vector>
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class CompressedRowSparseMatrix;
class InnerProductComputer;
@@ -52,13 +53,14 @@
// Solves the normal equations (A'A + D'D) x = A'b, using the sparse
// linear algebra library of the user's choice.
-class SparseNormalCholeskySolver : public BlockSparseMatrixSolver {
+class CERES_NO_EXPORT SparseNormalCholeskySolver
+ : public BlockSparseMatrixSolver {
public:
explicit SparseNormalCholeskySolver(const LinearSolver::Options& options);
SparseNormalCholeskySolver(const SparseNormalCholeskySolver&) = delete;
void operator=(const SparseNormalCholeskySolver&) = delete;
- virtual ~SparseNormalCholeskySolver();
+ ~SparseNormalCholeskySolver() override;
private:
LinearSolver::Summary SolveImpl(BlockSparseMatrix* A,
@@ -72,7 +74,6 @@
std::unique_ptr<InnerProductComputer> inner_product_computer_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_SPARSE_NORMAL_CHOLESKY_SOLVER_H_
diff --git a/internal/ceres/sparse_normal_cholesky_solver_test.cc b/internal/ceres/sparse_normal_cholesky_solver_test.cc
index 8acb98e..3396e34 100644
--- a/internal/ceres/sparse_normal_cholesky_solver_test.cc
+++ b/internal/ceres/sparse_normal_cholesky_solver_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -41,8 +41,7 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// TODO(sameeragarwal): These tests need to be re-written, since
// SparseNormalCholeskySolver is a composition of two classes now,
@@ -54,20 +53,20 @@
class SparseNormalCholeskySolverTest : public ::testing::Test {
protected:
void SetUp() final {
- std::unique_ptr<LinearLeastSquaresProblem> problem(
- CreateLinearLeastSquaresProblemFromId(2));
+ std::unique_ptr<LinearLeastSquaresProblem> problem =
+ CreateLinearLeastSquaresProblemFromId(2);
CHECK(problem != nullptr);
A_.reset(down_cast<BlockSparseMatrix*>(problem->A.release()));
- b_.reset(problem->b.release());
- D_.reset(problem->D.release());
+ b_ = std::move(problem->b);
+ D_ = std::move(problem->D);
}
void TestSolver(const LinearSolver::Options& options, double* D) {
Matrix dense_A;
A_->ToDenseMatrix(&dense_A);
Matrix lhs = dense_A.transpose() * dense_A;
- if (D != NULL) {
+ if (D != nullptr) {
lhs += (ConstVectorRef(D, A_->num_cols()).array() *
ConstVectorRef(D, A_->num_cols()).array())
.matrix()
@@ -76,7 +75,7 @@
Vector rhs(A_->num_cols());
rhs.setZero();
- A_->LeftMultiply(b_.get(), rhs.data());
+ A_->LeftMultiplyAndAccumulate(b_.get(), rhs.data());
Vector expected_solution = lhs.llt().solve(rhs);
std::unique_ptr<LinearSolver> solver(LinearSolver::Create(options));
@@ -87,7 +86,7 @@
summary = solver->Solve(
A_.get(), b_.get(), per_solve_options, actual_solution.data());
- EXPECT_EQ(summary.termination_type, LINEAR_SOLVER_SUCCESS);
+ EXPECT_EQ(summary.termination_type, LinearSolverTerminationType::SUCCESS);
for (int i = 0; i < A_->num_cols(); ++i) {
EXPECT_NEAR(expected_solution(i), actual_solution(i), 1e-8)
@@ -97,7 +96,7 @@
}
void TestSolver(const LinearSolver::Options& options) {
- TestSolver(options, NULL);
+ TestSolver(options, nullptr);
TestSolver(options, D_.get());
}
@@ -112,7 +111,7 @@
LinearSolver::Options options;
options.sparse_linear_algebra_library_type = SUITE_SPARSE;
options.type = SPARSE_NORMAL_CHOLESKY;
- options.use_postordering = false;
+ options.ordering_type = OrderingType::NATURAL;
ContextImpl context;
options.context = &context;
TestSolver(options);
@@ -123,31 +122,7 @@
LinearSolver::Options options;
options.sparse_linear_algebra_library_type = SUITE_SPARSE;
options.type = SPARSE_NORMAL_CHOLESKY;
- options.use_postordering = true;
- ContextImpl context;
- options.context = &context;
- TestSolver(options);
-}
-#endif
-
-#ifndef CERES_NO_CXSPARSE
-TEST_F(SparseNormalCholeskySolverTest,
- SparseNormalCholeskyUsingCXSparsePreOrdering) {
- LinearSolver::Options options;
- options.sparse_linear_algebra_library_type = CX_SPARSE;
- options.type = SPARSE_NORMAL_CHOLESKY;
- options.use_postordering = false;
- ContextImpl context;
- options.context = &context;
- TestSolver(options);
-}
-
-TEST_F(SparseNormalCholeskySolverTest,
- SparseNormalCholeskyUsingCXSparsePostOrdering) {
- LinearSolver::Options options;
- options.sparse_linear_algebra_library_type = CX_SPARSE;
- options.type = SPARSE_NORMAL_CHOLESKY;
- options.use_postordering = true;
+ options.ordering_type = OrderingType::AMD;
ContextImpl context;
options.context = &context;
TestSolver(options);
@@ -160,7 +135,7 @@
LinearSolver::Options options;
options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options.type = SPARSE_NORMAL_CHOLESKY;
- options.use_postordering = false;
+ options.ordering_type = OrderingType::NATURAL;
ContextImpl context;
options.context = &context;
TestSolver(options);
@@ -171,7 +146,7 @@
LinearSolver::Options options;
options.sparse_linear_algebra_library_type = ACCELERATE_SPARSE;
options.type = SPARSE_NORMAL_CHOLESKY;
- options.use_postordering = true;
+ options.ordering_type = OrderingType::AMD;
ContextImpl context;
options.context = &context;
TestSolver(options);
@@ -184,7 +159,7 @@
LinearSolver::Options options;
options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
options.type = SPARSE_NORMAL_CHOLESKY;
- options.use_postordering = false;
+ options.ordering_type = OrderingType::NATURAL;
ContextImpl context;
options.context = &context;
TestSolver(options);
@@ -195,12 +170,11 @@
LinearSolver::Options options;
options.sparse_linear_algebra_library_type = EIGEN_SPARSE;
options.type = SPARSE_NORMAL_CHOLESKY;
- options.use_postordering = true;
+ options.ordering_type = OrderingType::AMD;
ContextImpl context;
options.context = &context;
TestSolver(options);
}
#endif // CERES_USE_EIGEN_SPARSE
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
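
Editor's note: the test updates above follow a mechanical migration from the boolean use_postordering option to the OrderingType enum, with NESDIS additionally available where METIS support is compiled in. The mapping applied throughout, shown as a sketch:

  // Removed API                            Replacement
  // options.use_postordering = false;  ->  options.ordering_type = OrderingType::NATURAL;
  // options.use_postordering = true;   ->  options.ordering_type = OrderingType::AMD;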
diff --git a/internal/ceres/split.cc b/internal/ceres/split.cc
deleted file mode 100644
index 804f441..0000000
--- a/internal/ceres/split.cc
+++ /dev/null
@@ -1,122 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: keir@google.com (Keir Mierle)
-
-#include "ceres/split.h"
-
-#include <iterator>
-#include <string>
-#include <vector>
-
-#include "ceres/internal/port.h"
-
-namespace ceres {
-namespace internal {
-
-using std::string;
-using std::vector;
-
-// If we know how much to allocate for a vector of strings, we can allocate the
-// vector<string> only once and directly to the right size. This saves in
-// between 33-66 % of memory space needed for the result, and runs faster in the
-// microbenchmarks.
-//
-// The reserve is only implemented for the single character delim.
-//
-// The implementation for counting is cut-and-pasted from
-// SplitStringToIteratorUsing. I could have written my own counting iterator,
-// and use the existing template function, but probably this is more clear and
-// more sure to get optimized to reasonable code.
-static int CalculateReserveForVector(const string& full, const char* delim) {
- int count = 0;
- if (delim[0] != '\0' && delim[1] == '\0') {
- // Optimize the common case where delim is a single character.
- char c = delim[0];
- const char* p = full.data();
- const char* end = p + full.size();
- while (p != end) {
- if (*p == c) { // This could be optimized with hasless(v,1) trick.
- ++p;
- } else {
- while (++p != end && *p != c) {
- // Skip to the next occurence of the delimiter.
- }
- ++count;
- }
- }
- }
- return count;
-}
-
-template <typename StringType, typename ITR>
-static inline void SplitStringToIteratorUsing(const StringType& full,
- const char* delim,
- ITR& result) {
- // Optimize the common case where delim is a single character.
- if (delim[0] != '\0' && delim[1] == '\0') {
- char c = delim[0];
- const char* p = full.data();
- const char* end = p + full.size();
- while (p != end) {
- if (*p == c) {
- ++p;
- } else {
- const char* start = p;
- while (++p != end && *p != c) {
- // Skip to the next occurence of the delimiter.
- }
- *result++ = StringType(start, p - start);
- }
- }
- return;
- }
-
- string::size_type begin_index, end_index;
- begin_index = full.find_first_not_of(delim);
- while (begin_index != string::npos) {
- end_index = full.find_first_of(delim, begin_index);
- if (end_index == string::npos) {
- *result++ = full.substr(begin_index);
- return;
- }
- *result++ = full.substr(begin_index, (end_index - begin_index));
- begin_index = full.find_first_not_of(delim, end_index);
- }
-}
-
-void SplitStringUsing(const string& full,
- const char* delim,
- vector<string>* result) {
- result->reserve(result->size() + CalculateReserveForVector(full, delim));
- std::back_insert_iterator<vector<string>> it(*result);
- SplitStringToIteratorUsing(full, delim, it);
-}
-
-} // namespace internal
-} // namespace ceres
diff --git a/internal/ceres/split.h b/internal/ceres/split.h
deleted file mode 100644
index f513023..0000000
--- a/internal/ceres/split.h
+++ /dev/null
@@ -1,52 +0,0 @@
-// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
-// http://ceres-solver.org/
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of Google Inc. nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-// POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: keir@google.com (Keir Mierle)
-
-#ifndef CERES_INTERNAL_SPLIT_H_
-#define CERES_INTERNAL_SPLIT_H_
-
-#include <string>
-#include <vector>
-
-#include "ceres/internal/port.h"
-
-namespace ceres {
-namespace internal {
-
-// Split a string using one or more character delimiters, presented as a
-// nul-terminated c string. Append the components to 'result'. If there are
-// consecutive delimiters, this function skips over all of them.
-void SplitStringUsing(const std::string& full,
- const char* delim,
- std::vector<std::string>* res);
-
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_INTERNAL_SPLIT_H_
diff --git a/internal/ceres/spmv_benchmark.cc b/internal/ceres/spmv_benchmark.cc
new file mode 100644
index 0000000..6a4efa7
--- /dev/null
+++ b/internal/ceres/spmv_benchmark.cc
@@ -0,0 +1,445 @@
+// Ceres Solver - A fast non-linear least squares minimizer
+// Copyright 2023 Google Inc. All rights reserved.
+// http://ceres-solver.org/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// * Neither the name of Google Inc. nor the names of its contributors may be
+// used to endorse or promote products derived from this software without
+// specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+// POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: joydeepb@cs.utexas.edu (Joydeep Biswas)
+
+#include <memory>
+#include <random>
+#include <string>
+
+#include "Eigen/Dense"
+#include "benchmark/benchmark.h"
+#include "ceres/block_jacobi_preconditioner.h"
+#include "ceres/block_sparse_matrix.h"
+#include "ceres/context_impl.h"
+#include "ceres/cuda_sparse_matrix.h"
+#include "ceres/cuda_vector.h"
+#include "ceres/fake_bundle_adjustment_jacobian.h"
+#include "ceres/internal/config.h"
+#include "ceres/internal/eigen.h"
+#include "ceres/linear_solver.h"
+
+#ifndef CERES_NO_CUDA
+#include "cuda_runtime.h"
+#endif
+
+namespace ceres::internal {
+
+constexpr int kNumCameras = 1000;
+constexpr int kNumPoints = 10000;
+constexpr int kCameraSize = 6;
+constexpr int kPointSize = 3;
+constexpr double kVisibility = 0.1;
+
+constexpr int kNumRowBlocks = 100000;
+constexpr int kNumColBlocks = 10000;
+constexpr int kMinRowBlockSize = 1;
+constexpr int kMaxRowBlockSize = 5;
+constexpr int kMinColBlockSize = 1;
+constexpr int kMaxColBlockSize = 15;
+constexpr double kBlockDensity = 5.0 / kNumColBlocks;
+
+static void BM_BlockSparseRightMultiplyAndAccumulateBA(
+ benchmark::State& state) {
+ const int num_threads = static_cast<int>(state.range(0));
+ std::mt19937 prng;
+ auto jacobian = CreateFakeBundleAdjustmentJacobian(
+ kNumCameras, kNumPoints, kCameraSize, kPointSize, kVisibility, prng);
+
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ Vector x(jacobian->num_cols());
+ Vector y(jacobian->num_rows());
+ x.setRandom();
+ y.setRandom();
+ double sum = 0;
+ for (auto _ : state) {
+ jacobian->RightMultiplyAndAccumulate(
+ x.data(), y.data(), &context, num_threads);
+ sum += y.norm();
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_BlockSparseRightMultiplyAndAccumulateBA)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+static void BM_BlockSparseRightMultiplyAndAccumulateUnstructured(
+ benchmark::State& state) {
+ const int num_threads = static_cast<int>(state.range(0));
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = kNumRowBlocks;
+ options.num_col_blocks = kNumColBlocks;
+ options.min_row_block_size = kMinRowBlockSize;
+ options.min_col_block_size = kMinColBlockSize;
+ options.max_row_block_size = kMaxRowBlockSize;
+ options.max_col_block_size = kMaxColBlockSize;
+ options.block_density = kBlockDensity;
+ std::mt19937 prng;
+
+ auto jacobian = BlockSparseMatrix::CreateRandomMatrix(options, prng);
+
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ Vector x(jacobian->num_cols());
+ Vector y(jacobian->num_rows());
+ x.setRandom();
+ y.setRandom();
+ double sum = 0;
+ for (auto _ : state) {
+ jacobian->RightMultiplyAndAccumulate(
+ x.data(), y.data(), &context, num_threads);
+ sum += y.norm();
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_BlockSparseRightMultiplyAndAccumulateUnstructured)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+static void BM_BlockSparseLeftMultiplyAndAccumulateBA(benchmark::State& state) {
+ std::mt19937 prng;
+ auto jacobian = CreateFakeBundleAdjustmentJacobian(
+ kNumCameras, kNumPoints, kCameraSize, kPointSize, kVisibility, prng);
+ Vector x(jacobian->num_rows());
+ Vector y(jacobian->num_cols());
+ x.setRandom();
+ y.setRandom();
+ double sum = 0;
+ for (auto _ : state) {
+ jacobian->LeftMultiplyAndAccumulate(x.data(), y.data());
+ sum += y.norm();
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_BlockSparseLeftMultiplyAndAccumulateBA);
+
+static void BM_BlockSparseLeftMultiplyAndAccumulateUnstructured(
+ benchmark::State& state) {
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = 100000;
+ options.num_col_blocks = 10000;
+ options.min_row_block_size = 1;
+ options.min_col_block_size = 1;
+ options.max_row_block_size = 10;
+ options.max_col_block_size = 15;
+ options.block_density = 5.0 / options.num_col_blocks;
+ std::mt19937 prng;
+
+ auto jacobian = BlockSparseMatrix::CreateRandomMatrix(options, prng);
+ Vector x(jacobian->num_rows());
+ Vector y(jacobian->num_cols());
+ x.setRandom();
+ y.setRandom();
+ double sum = 0;
+ for (auto _ : state) {
+ jacobian->LeftMultiplyAndAccumulate(x.data(), y.data());
+ sum += y.norm();
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_BlockSparseLeftMultiplyAndAccumulateUnstructured);
+
+static void BM_CRSRightMultiplyAndAccumulateBA(benchmark::State& state) {
+ const int num_threads = static_cast<int>(state.range(0));
+ std::mt19937 prng;
+ auto bsm_jacobian = CreateFakeBundleAdjustmentJacobian(
+ kNumCameras, kNumPoints, kCameraSize, kPointSize, kVisibility, prng);
+
+ auto jacobian = bsm_jacobian->ToCompressedRowSparseMatrix();
+
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ Vector x(jacobian->num_cols());
+ Vector y(jacobian->num_rows());
+ x.setRandom();
+ y.setRandom();
+ double sum = 0;
+ for (auto _ : state) {
+ jacobian->RightMultiplyAndAccumulate(
+ x.data(), y.data(), &context, num_threads);
+ sum += y.norm();
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_CRSRightMultiplyAndAccumulateBA)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+static void BM_CRSRightMultiplyAndAccumulateUnstructured(
+ benchmark::State& state) {
+ const int num_threads = static_cast<int>(state.range(0));
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = kNumRowBlocks;
+ options.num_col_blocks = kNumColBlocks;
+ options.min_row_block_size = kMinRowBlockSize;
+ options.min_col_block_size = kMinColBlockSize;
+ options.max_row_block_size = kMaxRowBlockSize;
+ options.max_col_block_size = kMaxColBlockSize;
+ options.block_density = kBlockDensity;
+ std::mt19937 prng;
+
+ auto bsm_jacobian = BlockSparseMatrix::CreateRandomMatrix(options, prng);
+ auto jacobian = bsm_jacobian->ToCompressedRowSparseMatrix();
+
+ ContextImpl context;
+ context.EnsureMinimumThreads(num_threads);
+
+ Vector x(jacobian->num_cols());
+ Vector y(jacobian->num_rows());
+ x.setRandom();
+ y.setRandom();
+ double sum = 0;
+ for (auto _ : state) {
+ jacobian->RightMultiplyAndAccumulate(
+ x.data(), y.data(), &context, num_threads);
+ sum += y.norm();
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_CRSRightMultiplyAndAccumulateUnstructured)
+ ->Arg(1)
+ ->Arg(2)
+ ->Arg(4)
+ ->Arg(8)
+ ->Arg(16);
+
+static void BM_CRSLeftMultiplyAndAccumulateBA(benchmark::State& state) {
+ std::mt19937 prng;
+ // Perform setup here
+ auto bsm_jacobian = CreateFakeBundleAdjustmentJacobian(
+ kNumCameras, kNumPoints, kCameraSize, kPointSize, kVisibility, prng);
+ auto jacobian = bsm_jacobian->ToCompressedRowSparseMatrix();
+
+ Vector x(jacobian->num_rows());
+ Vector y(jacobian->num_cols());
+ x.setRandom();
+ y.setRandom();
+ double sum = 0;
+ for (auto _ : state) {
+ // This code gets timed
+ jacobian->LeftMultiplyAndAccumulate(x.data(), y.data());
+ sum += y.norm();
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_CRSLeftMultiplyAndAccumulateBA);
+
+static void BM_CRSLeftMultiplyAndAccumulateUnstructured(
+ benchmark::State& state) {
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = kNumRowBlocks;
+ options.num_col_blocks = kNumColBlocks;
+ options.min_row_block_size = kMinRowBlockSize;
+ options.min_col_block_size = kMinColBlockSize;
+ options.max_row_block_size = kMaxRowBlockSize;
+ options.max_col_block_size = kMaxColBlockSize;
+ options.block_density = kBlockDensity;
+ std::mt19937 prng;
+
+ auto bsm_jacobian = BlockSparseMatrix::CreateRandomMatrix(options, prng);
+ auto jacobian = bsm_jacobian->ToCompressedRowSparseMatrix();
+
+ Vector x(jacobian->num_rows());
+ Vector y(jacobian->num_cols());
+ x.setRandom();
+ y.setRandom();
+ double sum = 0;
+ for (auto _ : state) {
+ // This code gets timed
+ jacobian->LeftMultiplyAndAccumulate(x.data(), y.data());
+ sum += y.norm();
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_CRSLeftMultiplyAndAccumulateUnstructured);
+
+#ifndef CERES_NO_CUDA
+static void BM_CudaRightMultiplyAndAccumulateBA(benchmark::State& state) {
+ std::mt19937 prng;
+ auto jacobian = CreateFakeBundleAdjustmentJacobian(
+ kNumCameras, kNumPoints, kCameraSize, kPointSize, kVisibility, prng);
+ ContextImpl context;
+ std::string message;
+ context.InitCuda(&message);
+ auto jacobian_crs = jacobian->ToCompressedRowSparseMatrix();
+ CudaSparseMatrix cuda_jacobian(&context, *jacobian_crs);
+ CudaVector cuda_x(&context, 0);
+ CudaVector cuda_y(&context, 0);
+
+ Vector x(jacobian->num_cols());
+ Vector y(jacobian->num_rows());
+ x.setRandom();
+ y.setRandom();
+
+ cuda_x.CopyFromCpu(x);
+ cuda_y.CopyFromCpu(y);
+ double sum = 0;
+ for (auto _ : state) {
+ cuda_jacobian.RightMultiplyAndAccumulate(cuda_x, &cuda_y);
+ sum += cuda_y.Norm();
+ CHECK_EQ(cudaDeviceSynchronize(), cudaSuccess);
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_CudaRightMultiplyAndAccumulateBA);
+
+static void BM_CudaRightMultiplyAndAccumulateUnstructured(
+ benchmark::State& state) {
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = kNumRowBlocks;
+ options.num_col_blocks = kNumColBlocks;
+ options.min_row_block_size = kMinRowBlockSize;
+ options.min_col_block_size = kMinColBlockSize;
+ options.max_row_block_size = kMaxRowBlockSize;
+ options.max_col_block_size = kMaxColBlockSize;
+ options.block_density = kBlockDensity;
+ std::mt19937 prng;
+
+ auto jacobian = BlockSparseMatrix::CreateRandomMatrix(options, prng);
+ ContextImpl context;
+ std::string message;
+ context.InitCuda(&message);
+ auto jacobian_crs = jacobian->ToCompressedRowSparseMatrix();
+ CudaSparseMatrix cuda_jacobian(&context, *jacobian_crs);
+ CudaVector cuda_x(&context, 0);
+ CudaVector cuda_y(&context, 0);
+
+ Vector x(jacobian->num_cols());
+ Vector y(jacobian->num_rows());
+ x.setRandom();
+ y.setRandom();
+
+ cuda_x.CopyFromCpu(x);
+ cuda_y.CopyFromCpu(y);
+ double sum = 0;
+ for (auto _ : state) {
+ cuda_jacobian.RightMultiplyAndAccumulate(cuda_x, &cuda_y);
+ sum += cuda_y.Norm();
+ CHECK_EQ(cudaDeviceSynchronize(), cudaSuccess);
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_CudaRightMultiplyAndAccumulateUnstructured);
+
+static void BM_CudaLeftMultiplyAndAccumulateBA(benchmark::State& state) {
+ std::mt19937 prng;
+ auto jacobian = CreateFakeBundleAdjustmentJacobian(
+ kNumCameras, kNumPoints, kCameraSize, kPointSize, kVisibility, prng);
+ ContextImpl context;
+ std::string message;
+ context.InitCuda(&message);
+ auto jacobian_crs = jacobian->ToCompressedRowSparseMatrix();
+ CudaSparseMatrix cuda_jacobian(&context, *jacobian_crs);
+ CudaVector cuda_x(&context, 0);
+ CudaVector cuda_y(&context, 0);
+
+ Vector x(jacobian->num_rows());
+ Vector y(jacobian->num_cols());
+ x.setRandom();
+ y.setRandom();
+
+ cuda_x.CopyFromCpu(x);
+ cuda_y.CopyFromCpu(y);
+ double sum = 0;
+ for (auto _ : state) {
+ cuda_jacobian.LeftMultiplyAndAccumulate(cuda_x, &cuda_y);
+ sum += cuda_y.Norm();
+ CHECK_EQ(cudaDeviceSynchronize(), cudaSuccess);
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_CudaLeftMultiplyAndAccumulateBA);
+
+static void BM_CudaLeftMultiplyAndAccumulateUnstructured(
+ benchmark::State& state) {
+ BlockSparseMatrix::RandomMatrixOptions options;
+ options.num_row_blocks = kNumRowBlocks;
+ options.num_col_blocks = kNumColBlocks;
+ options.min_row_block_size = kMinRowBlockSize;
+ options.min_col_block_size = kMinColBlockSize;
+ options.max_row_block_size = kMaxRowBlockSize;
+ options.max_col_block_size = kMaxColBlockSize;
+ options.block_density = kBlockDensity;
+ std::mt19937 prng;
+
+ auto jacobian = BlockSparseMatrix::CreateRandomMatrix(options, prng);
+ ContextImpl context;
+ std::string message;
+ context.InitCuda(&message);
+ auto jacobian_crs = jacobian->ToCompressedRowSparseMatrix();
+ CudaSparseMatrix cuda_jacobian(&context, *jacobian_crs);
+ CudaVector cuda_x(&context, 0);
+ CudaVector cuda_y(&context, 0);
+
+ Vector x(jacobian->num_rows());
+ Vector y(jacobian->num_cols());
+ x.setRandom();
+ y.setRandom();
+
+ cuda_x.CopyFromCpu(x);
+ cuda_y.CopyFromCpu(y);
+ double sum = 0;
+ for (auto _ : state) {
+ cuda_jacobian.LeftMultiplyAndAccumulate(cuda_x, &cuda_y);
+ sum += cuda_y.Norm();
+ CHECK_EQ(cudaDeviceSynchronize(), cudaSuccess);
+ }
+ CHECK_NE(sum, 0.0);
+}
+
+BENCHMARK(BM_CudaLeftMultiplyAndAccumulateUnstructured);
+
+#endif
+
+} // namespace ceres::internal
+
+BENCHMARK_MAIN();
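
Editor's note: the new spmv_benchmark source exercises block-sparse, CRS, and (when built with CUDA) GPU SpMV paths across thread counts using Google Benchmark. As a usage hint, a typical invocation looks like the following; the binary name and path are assumptions that depend on the build setup, while --benchmark_filter and --benchmark_repetitions are standard Google Benchmark flags:

  ./bin/spmv_benchmark --benchmark_filter=BM_CRSRightMultiplyAndAccumulate.* --benchmark_repetitions=3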
diff --git a/internal/ceres/stl_util.h b/internal/ceres/stl_util.h
index d3411b7..d206279 100644
--- a/internal/ceres/stl_util.h
+++ b/internal/ceres/stl_util.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -59,8 +59,8 @@
template <class ForwardIterator>
void STLDeleteUniqueContainerPointers(ForwardIterator begin,
ForwardIterator end) {
- sort(begin, end);
- ForwardIterator new_end = unique(begin, end);
+ std::sort(begin, end);
+ ForwardIterator new_end = std::unique(begin, end);
while (begin != new_end) {
ForwardIterator temp = begin;
++begin;
@@ -73,7 +73,7 @@
// hash_set, or any other STL container which defines sensible begin(), end(),
// and clear() methods.
//
-// If container is NULL, this function is a no-op.
+// If container is nullptr, this function is a no-op.
//
// As an alternative to calling STLDeleteElements() directly, consider
// ElementDeleter (defined below), which ensures that your container's elements
diff --git a/internal/ceres/stringprintf.cc b/internal/ceres/stringprintf.cc
index b0e2acc..100bbff 100644
--- a/internal/ceres/stringprintf.cc
+++ b/internal/ceres/stringprintf.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,14 +36,11 @@
#include <string>
#include <vector>
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-using std::string;
-
-void StringAppendV(string* dst, const char* format, va_list ap) {
+void StringAppendV(std::string* dst, const char* format, va_list ap) {
// First try with a small fixed size buffer
char space[1024];
@@ -66,7 +63,7 @@
// Error or MSVC running out of space. MSVC 8.0 and higher
// can be asked about space needed with the special idiom below:
va_copy(backup_ap, ap);
- result = vsnprintf(NULL, 0, format, backup_ap);
+ result = vsnprintf(nullptr, 0, format, backup_ap);
va_end(backup_ap);
#endif
@@ -93,16 +90,16 @@
delete[] buf;
}
-string StringPrintf(const char* format, ...) {
+std::string StringPrintf(const char* format, ...) {
va_list ap;
va_start(ap, format);
- string result;
+ std::string result;
StringAppendV(&result, format, ap);
va_end(ap);
return result;
}
-const string& SStringPrintf(string* dst, const char* format, ...) {
+const std::string& SStringPrintf(std::string* dst, const char* format, ...) {
va_list ap;
va_start(ap, format);
dst->clear();
@@ -111,12 +108,11 @@
return *dst;
}
-void StringAppendF(string* dst, const char* format, ...) {
+void StringAppendF(std::string* dst, const char* format, ...) {
va_list ap;
va_start(ap, format);
StringAppendV(dst, format, ap);
va_end(ap);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
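
StringAppendV above first formats into a fixed 1024-byte stack buffer and only heap-allocates when vsnprintf reports that the output did not fit, using the return value (the length the full output would have had) to size the second attempt. A compressed sketch of the same technique, standalone and not the Ceres code (AppendFormatted is an illustrative name):

    #include <cstdarg>
    #include <cstdio>
    #include <string>
    #include <vector>

    void AppendFormatted(std::string* dst, const char* format, ...) {
      va_list ap;
      va_start(ap, format);
      char stack_buf[1024];
      va_list copy;
      va_copy(copy, ap);
      // vsnprintf returns the length the full output would have had.
      const int needed = vsnprintf(stack_buf, sizeof(stack_buf), format, copy);
      va_end(copy);
      if (needed >= 0 && needed < static_cast<int>(sizeof(stack_buf))) {
        dst->append(stack_buf, needed);
      } else if (needed >= 0) {
        // The stack buffer was too small: allocate exactly what is required.
        std::vector<char> heap_buf(needed + 1);
        vsnprintf(heap_buf.data(), heap_buf.size(), format, ap);
        dst->append(heap_buf.data(), needed);
      }
      va_end(ap);
    }
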
diff --git a/internal/ceres/stringprintf.h b/internal/ceres/stringprintf.h
index 4d51278..f761770 100644
--- a/internal/ceres/stringprintf.h
+++ b/internal/ceres/stringprintf.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -41,10 +41,10 @@
#include <cstdarg>
#include <string>
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
#if (defined(__GNUC__) || defined(__clang__))
// Tell the compiler to do printf format string checking if the compiler
@@ -63,32 +63,34 @@
#endif
// Return a C++ string.
-CERES_EXPORT_INTERNAL extern std::string StringPrintf(const char* format, ...)
+CERES_NO_EXPORT extern std::string StringPrintf(const char* format, ...)
// Tell the compiler to do printf format string checking.
CERES_PRINTF_ATTRIBUTE(1, 2);
// Store result into a supplied string and return it.
-CERES_EXPORT_INTERNAL extern const std::string& SStringPrintf(
- std::string* dst, const char* format, ...)
+CERES_NO_EXPORT extern const std::string& SStringPrintf(std::string* dst,
+ const char* format,
+ ...)
// Tell the compiler to do printf format string checking.
CERES_PRINTF_ATTRIBUTE(2, 3);
// Append result to a supplied string.
-CERES_EXPORT_INTERNAL extern void StringAppendF(std::string* dst,
- const char* format,
- ...)
+CERES_NO_EXPORT extern void StringAppendF(std::string* dst,
+ const char* format,
+ ...)
// Tell the compiler to do printf format string checking.
CERES_PRINTF_ATTRIBUTE(2, 3);
// Lower-level routine that takes a va_list and appends to a specified string.
// All other routines are just convenience wrappers around it.
-CERES_EXPORT_INTERNAL extern void StringAppendV(std::string* dst,
- const char* format,
- va_list ap);
+CERES_NO_EXPORT extern void StringAppendV(std::string* dst,
+ const char* format,
+ va_list ap);
#undef CERES_PRINTF_ATTRIBUTE
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_STRINGPRINTF_H_
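
CERES_PRINTF_ATTRIBUTE(string_index, first_to_check) exists so GCC and Clang can check the variadic arguments against the format string at compile time. A minimal sketch of how such a macro is typically defined and applied, assuming a GCC-compatible compiler (the MY_ names are illustrative, not the Ceres macro):

    #include <string>

    #if defined(__GNUC__) || defined(__clang__)
    #define MY_PRINTF_ATTRIBUTE(string_index, first_to_check) \
      __attribute__((__format__(__printf__, string_index, first_to_check)))
    #else
    #define MY_PRINTF_ATTRIBUTE(string_index, first_to_check)
    #endif

    // Argument 1 is the format string; checking starts at argument 2.
    std::string Format(const char* format, ...) MY_PRINTF_ATTRIBUTE(1, 2);

With the attribute in place, a call such as Format("%s", 42) produces a -Wformat diagnostic at compile time instead of undefined behaviour at run time.
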
diff --git a/internal/ceres/subset_preconditioner.cc b/internal/ceres/subset_preconditioner.cc
index 779a34a..068f6ce 100644
--- a/internal/ceres/subset_preconditioner.cc
+++ b/internal/ceres/subset_preconditioner.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,6 +32,7 @@
#include <memory>
#include <string>
+#include <utility>
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/inner_product_computer.h"
@@ -39,25 +40,25 @@
#include "ceres/sparse_cholesky.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-SubsetPreconditioner::SubsetPreconditioner(
- const Preconditioner::Options& options, const BlockSparseMatrix& A)
- : options_(options), num_cols_(A.num_cols()) {
+SubsetPreconditioner::SubsetPreconditioner(Preconditioner::Options options,
+ const BlockSparseMatrix& A)
+ : options_(std::move(options)), num_cols_(A.num_cols()) {
CHECK_GE(options_.subset_preconditioner_start_row_block, 0)
<< "Congratulations, you found a bug in Ceres. Please report it.";
LinearSolver::Options sparse_cholesky_options;
sparse_cholesky_options.sparse_linear_algebra_library_type =
options_.sparse_linear_algebra_library_type;
- sparse_cholesky_options.use_postordering = options_.use_postordering;
+ sparse_cholesky_options.ordering_type = options_.ordering_type;
sparse_cholesky_ = SparseCholesky::Create(sparse_cholesky_options);
}
-SubsetPreconditioner::~SubsetPreconditioner() {}
+SubsetPreconditioner::~SubsetPreconditioner() = default;
-void SubsetPreconditioner::RightMultiply(const double* x, double* y) const {
+void SubsetPreconditioner::RightMultiplyAndAccumulate(const double* x,
+ double* y) const {
CHECK(x != nullptr);
CHECK(y != nullptr);
std::string message;
@@ -66,14 +67,14 @@
bool SubsetPreconditioner::UpdateImpl(const BlockSparseMatrix& A,
const double* D) {
- BlockSparseMatrix* m = const_cast<BlockSparseMatrix*>(&A);
+ auto* m = const_cast<BlockSparseMatrix*>(&A);
const CompressedRowBlockStructure* bs = m->block_structure();
// A = [P]
// [Q]
// Now add D to A if needed.
- if (D != NULL) {
+ if (D != nullptr) {
// A = [P]
// [Q]
// [D]
@@ -82,19 +83,19 @@
m->AppendRows(*regularizer);
}
- if (inner_product_computer_.get() == NULL) {
- inner_product_computer_.reset(InnerProductComputer::Create(
+ if (inner_product_computer_ == nullptr) {
+ inner_product_computer_ = InnerProductComputer::Create(
*m,
options_.subset_preconditioner_start_row_block,
bs->rows.size(),
- sparse_cholesky_->StorageType()));
+ sparse_cholesky_->StorageType());
}
// Compute inner_product = [Q'*Q + D'*D]
inner_product_computer_->Compute();
// Unappend D if needed.
- if (D != NULL) {
+ if (D != nullptr) {
// A = [P]
// [Q]
m->DeleteRowBlocks(bs->cols.size());
@@ -105,7 +106,7 @@
const LinearSolverTerminationType termination_type =
sparse_cholesky_->Factorize(inner_product_computer_->mutable_result(),
&message);
- if (termination_type != LINEAR_SOLVER_SUCCESS) {
+ if (termination_type != LinearSolverTerminationType::SUCCESS) {
LOG(ERROR) << "Preconditioner factorization failed: " << message;
return false;
}
@@ -113,5 +114,4 @@
return true;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
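
Conceptually the preconditioner assembled above is M = Q'Q + D'D, where Q is the subset of Jacobian rows starting at subset_preconditioner_start_row_block and D is the optional diagonal; applying it amounts to solving M z = r with the cached Cholesky factorization. A dense Eigen sketch of that operation (illustrative only; the real code works on sparse block matrices and reuses the factorization across calls):

    #include <Eigen/Dense>

    // Dense stand-ins: Q is the chosen row subset of the Jacobian, d holds the
    // diagonal of D, and r is the vector the preconditioner is applied to.
    Eigen::VectorXd ApplySubsetPreconditioner(const Eigen::MatrixXd& Q,
                                              const Eigen::VectorXd& d,
                                              const Eigen::VectorXd& r) {
      Eigen::MatrixXd m = Q.transpose() * Q;
      m.diagonal() += d.array().square().matrix();  // adds D'D
      // Factorize once and reuse for every application; this mirrors the
      // Update/RightMultiplyAndAccumulate split above.
      const Eigen::LLT<Eigen::MatrixXd> llt(m);
      return llt.solve(r);
    }
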
diff --git a/internal/ceres/subset_preconditioner.h b/internal/ceres/subset_preconditioner.h
index 9844a66..e179e99 100644
--- a/internal/ceres/subset_preconditioner.h
+++ b/internal/ceres/subset_preconditioner.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,11 +33,11 @@
#include <memory>
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/preconditioner.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class BlockSparseMatrix;
class SparseCholesky;
@@ -67,15 +67,15 @@
// computationally expensive this preconditioner will be.
//
// See the tests for example usage.
-class CERES_EXPORT_INTERNAL SubsetPreconditioner
+class CERES_NO_EXPORT SubsetPreconditioner
: public BlockSparseMatrixPreconditioner {
public:
- SubsetPreconditioner(const Preconditioner::Options& options,
+ SubsetPreconditioner(Preconditioner::Options options,
const BlockSparseMatrix& A);
- virtual ~SubsetPreconditioner();
+ ~SubsetPreconditioner() override;
// Preconditioner interface
- void RightMultiply(const double* x, double* y) const final;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final;
int num_rows() const final { return num_cols_; }
int num_cols() const final { return num_cols_; }
@@ -88,7 +88,8 @@
std::unique_ptr<InnerProductComputer> inner_product_computer_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_SUBSET_PRECONDITIONER_H_
diff --git a/internal/ceres/subset_preconditioner_test.cc b/internal/ceres/subset_preconditioner_test.cc
index 202110b..b73274c 100644
--- a/internal/ceres/subset_preconditioner_test.cc
+++ b/internal/ceres/subset_preconditioner_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,18 +31,19 @@
#include "ceres/subset_preconditioner.h"
#include <memory>
+#include <random>
#include "Eigen/Dense"
#include "Eigen/SparseCore"
#include "ceres/block_sparse_matrix.h"
#include "ceres/compressed_row_sparse_matrix.h"
#include "ceres/inner_product_computer.h"
+#include "ceres/internal/config.h"
#include "ceres/internal/eigen.h"
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
namespace {
@@ -67,7 +68,8 @@
Vector* solution) {
Matrix dense_triangular_lhs;
lhs.ToDenseMatrix(&dense_triangular_lhs);
- if (lhs.storage_type() == CompressedRowSparseMatrix::UPPER_TRIANGULAR) {
+ if (lhs.storage_type() ==
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR) {
Matrix full_lhs = dense_triangular_lhs.selfadjointView<Eigen::Upper>();
return SolveLinearSystemUsingEigen<Eigen::Upper>(full_lhs, rhs, solution);
}
@@ -75,7 +77,7 @@
dense_triangular_lhs, rhs, solution);
}
-typedef ::testing::tuple<SparseLinearAlgebraLibraryType, bool> Param;
+using Param = ::testing::tuple<SparseLinearAlgebraLibraryType, bool>;
std::string ParamInfoToString(testing::TestParamInfo<Param> info) {
Param param = info.param;
@@ -99,28 +101,28 @@
options.max_row_block_size = 4;
options.block_density = 0.9;
- m_.reset(BlockSparseMatrix::CreateRandomMatrix(options));
+ m_ = BlockSparseMatrix::CreateRandomMatrix(options, prng_);
start_row_block_ = m_->block_structure()->rows.size();
// Ensure that the bottom part of the matrix has the same column
// block structure.
options.col_blocks = m_->block_structure()->cols;
- b_.reset(BlockSparseMatrix::CreateRandomMatrix(options));
+ b_ = BlockSparseMatrix::CreateRandomMatrix(options, prng_);
m_->AppendRows(*b_);
// Create a Identity block diagonal matrix with the same column
// block structure.
diagonal_ = Vector::Ones(m_->num_cols());
- block_diagonal_.reset(BlockSparseMatrix::CreateDiagonalMatrix(
- diagonal_.data(), b_->block_structure()->cols));
+ block_diagonal_ = BlockSparseMatrix::CreateDiagonalMatrix(
+ diagonal_.data(), b_->block_structure()->cols);
// Unconditionally add the block diagonal to the matrix b_,
// because it is either part of b_ to make it full rank, or
// we pass the same diagonal matrix later as the parameter D. In
// either case the preconditioner matrix is b_' b + D'D.
b_->AppendRows(*block_diagonal_);
- inner_product_computer_.reset(InnerProductComputer::Create(
- *b_, CompressedRowSparseMatrix::UPPER_TRIANGULAR));
+ inner_product_computer_ = InnerProductComputer::Create(
+ *b_, CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR);
inner_product_computer_->Compute();
}
@@ -131,6 +133,7 @@
std::unique_ptr<Preconditioner> preconditioner_;
Vector diagonal_;
int start_row_block_;
+ std::mt19937 prng_;
};
TEST_P(SubsetPreconditionerTest, foo) {
@@ -138,7 +141,7 @@
Preconditioner::Options options;
options.subset_preconditioner_start_row_block = start_row_block_;
options.sparse_linear_algebra_library_type = ::testing::get<0>(param);
- preconditioner_.reset(new SubsetPreconditioner(options, *m_));
+ preconditioner_ = std::make_unique<SubsetPreconditioner>(options, *m_);
const bool with_diagonal = ::testing::get<1>(param);
if (!with_diagonal) {
@@ -146,7 +149,7 @@
}
EXPECT_TRUE(
- preconditioner_->Update(*m_, with_diagonal ? diagonal_.data() : NULL));
+ preconditioner_->Update(*m_, with_diagonal ? diagonal_.data() : nullptr));
// Repeatedly apply the preconditioner to random vectors and check
// that the preconditioned value is the same as one obtained by
@@ -158,7 +161,7 @@
EXPECT_TRUE(ComputeExpectedSolution(*lhs, rhs, &expected));
Vector actual(lhs->num_rows());
- preconditioner_->RightMultiply(rhs.data(), actual.data());
+ preconditioner_->RightMultiplyAndAccumulate(rhs.data(), actual.data());
Matrix eigen_lhs;
lhs->ToDenseMatrix(&eigen_lhs);
@@ -180,14 +183,6 @@
ParamInfoToString);
#endif
-#ifndef CERES_NO_CXSPARSE
-INSTANTIATE_TEST_SUITE_P(SubsetPreconditionerWithCXSparse,
- SubsetPreconditionerTest,
- ::testing::Combine(::testing::Values(CX_SPARSE),
- ::testing::Values(true, false)),
- ParamInfoToString);
-#endif
-
#ifndef CERES_NO_ACCELERATE_SPARSE
INSTANTIATE_TEST_SUITE_P(
SubsetPreconditionerWithAccelerateSparse,
@@ -205,5 +200,4 @@
ParamInfoToString);
#endif
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
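
The test above relies on gtest value-parameterized tests so that a single fixture covers every available sparse backend, with and without the extra diagonal. The registration machinery, reduced to its essentials (all names here are illustrative, not the Ceres test types):

    #include <tuple>
    #include "gtest/gtest.h"

    using Param = std::tuple<int, bool>;  // e.g. (backend, with_diagonal)

    class BackendTest : public ::testing::TestWithParam<Param> {};

    TEST_P(BackendTest, Works) {
      const auto [backend, with_diagonal] = GetParam();
      EXPECT_GE(backend, 0);
      (void)with_diagonal;
    }

    INSTANTIATE_TEST_SUITE_P(AllBackends,
                             BackendTest,
                             ::testing::Combine(::testing::Values(0, 1),
                                                ::testing::Values(true, false)));

Each guarded INSTANTIATE_TEST_SUITE_P block above does exactly this, once per sparse linear algebra library compiled into the build.
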
diff --git a/internal/ceres/suitesparse.cc b/internal/ceres/suitesparse.cc
index 0d6f6bd..d93dd8d 100644
--- a/internal/ceres/suitesparse.cc
+++ b/internal/ceres/suitesparse.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -29,9 +29,12 @@
// Author: sameeragarwal@google.com (Sameer Agarwal)
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_NO_SUITESPARSE
+
+#include <memory>
+#include <string>
#include <vector>
#include "ceres/compressed_col_sparse_matrix_utils.h"
@@ -41,11 +44,24 @@
#include "ceres/triplet_sparse_matrix.h"
#include "cholmod.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
+namespace {
+int OrderingTypeToCHOLMODEnum(OrderingType ordering_type) {
+ if (ordering_type == OrderingType::AMD) {
+ return CHOLMOD_AMD;
+ }
+ if (ordering_type == OrderingType::NESDIS) {
+ return CHOLMOD_NESDIS;
+ }
-using std::string;
-using std::vector;
+ if (ordering_type == OrderingType::NATURAL) {
+ return CHOLMOD_NATURAL;
+ }
+ LOG(FATAL) << "Congratulations you have discovered a bug in Ceres Solver."
+ << "Please report it to the developers. " << ordering_type;
+ return -1;
+}
+} // namespace
SuiteSparse::SuiteSparse() { cholmod_start(&cc_); }
@@ -102,9 +118,11 @@
m.x = reinterpret_cast<void*>(A->mutable_values());
m.z = nullptr;
- if (A->storage_type() == CompressedRowSparseMatrix::LOWER_TRIANGULAR) {
+ if (A->storage_type() ==
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR) {
m.stype = 1;
- } else if (A->storage_type() == CompressedRowSparseMatrix::UPPER_TRIANGULAR) {
+ } else if (A->storage_type() ==
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR) {
m.stype = -1;
} else {
m.stype = 0;
@@ -143,19 +161,18 @@
}
cholmod_factor* SuiteSparse::AnalyzeCholesky(cholmod_sparse* A,
- string* message) {
- // Cholmod can try multiple re-ordering strategies to find a fill
- // reducing ordering. Here we just tell it use AMD with automatic
- // matrix dependence choice of supernodal versus simplicial
- // factorization.
+ OrderingType ordering_type,
+ std::string* message) {
cc_.nmethods = 1;
- cc_.method[0].ordering = CHOLMOD_AMD;
- cc_.supernodal = CHOLMOD_AUTO;
+ cc_.method[0].ordering = OrderingTypeToCHOLMODEnum(ordering_type);
+
+ // postordering with a NATURAL ordering leads to a significant regression in
+ // performance. See https://github.com/ceres-solver/ceres-solver/issues/905
+ if (ordering_type == OrderingType::NATURAL) {
+ cc_.postorder = 0;
+ }
cholmod_factor* factor = cholmod_analyze(A, &cc_);
- if (VLOG_IS_ON(2)) {
- cholmod_print_common(const_cast<char*>("Symbolic Analysis"), &cc_);
- }
if (cc_.status != CHOLMOD_OK) {
*message =
@@ -164,32 +181,22 @@
}
CHECK(factor != nullptr);
+ if (VLOG_IS_ON(2)) {
+ cholmod_print_common(const_cast<char*>("Symbolic Analysis"), &cc_);
+ }
+
return factor;
}
-cholmod_factor* SuiteSparse::BlockAnalyzeCholesky(cholmod_sparse* A,
- const vector<int>& row_blocks,
- const vector<int>& col_blocks,
- string* message) {
- vector<int> ordering;
- if (!BlockAMDOrdering(A, row_blocks, col_blocks, &ordering)) {
- return nullptr;
- }
- return AnalyzeCholeskyWithUserOrdering(A, ordering, message);
-}
-
-cholmod_factor* SuiteSparse::AnalyzeCholeskyWithUserOrdering(
- cholmod_sparse* A, const vector<int>& ordering, string* message) {
+cholmod_factor* SuiteSparse::AnalyzeCholeskyWithGivenOrdering(
+ cholmod_sparse* A, const std::vector<int>& ordering, std::string* message) {
CHECK_EQ(ordering.size(), A->nrow);
cc_.nmethods = 1;
cc_.method[0].ordering = CHOLMOD_GIVEN;
-
cholmod_factor* factor =
- cholmod_analyze_p(A, const_cast<int*>(&ordering[0]), nullptr, 0, &cc_);
- if (VLOG_IS_ON(2)) {
- cholmod_print_common(const_cast<char*>("Symbolic Analysis"), &cc_);
- }
+ cholmod_analyze_p(A, const_cast<int*>(ordering.data()), nullptr, 0, &cc_);
+
if (cc_.status != CHOLMOD_OK) {
*message =
StringPrintf("cholmod_analyze failed. error code: %d", cc_.status);
@@ -197,40 +204,33 @@
}
CHECK(factor != nullptr);
- return factor;
-}
-
-cholmod_factor* SuiteSparse::AnalyzeCholeskyWithNaturalOrdering(
- cholmod_sparse* A, string* message) {
- cc_.nmethods = 1;
- cc_.method[0].ordering = CHOLMOD_NATURAL;
- cc_.postorder = 0;
-
- cholmod_factor* factor = cholmod_analyze(A, &cc_);
if (VLOG_IS_ON(2)) {
cholmod_print_common(const_cast<char*>("Symbolic Analysis"), &cc_);
}
- if (cc_.status != CHOLMOD_OK) {
- *message =
- StringPrintf("cholmod_analyze failed. error code: %d", cc_.status);
- return nullptr;
- }
- CHECK(factor != nullptr);
return factor;
}
-bool SuiteSparse::BlockAMDOrdering(const cholmod_sparse* A,
- const vector<int>& row_blocks,
- const vector<int>& col_blocks,
- vector<int>* ordering) {
+bool SuiteSparse::BlockOrdering(const cholmod_sparse* A,
+ OrderingType ordering_type,
+ const std::vector<Block>& row_blocks,
+ const std::vector<Block>& col_blocks,
+ std::vector<int>* ordering) {
+ if (ordering_type == OrderingType::NATURAL) {
+ ordering->resize(A->nrow);
+ for (int i = 0; i < A->nrow; ++i) {
+ (*ordering)[i] = i;
+ }
+ return true;
+ }
+
const int num_row_blocks = row_blocks.size();
const int num_col_blocks = col_blocks.size();
// Arrays storing the compressed column structure of the matrix
- // incoding the block sparsity of A.
- vector<int> block_cols;
- vector<int> block_rows;
+ // encoding the block sparsity of A.
+ std::vector<int> block_cols;
+ std::vector<int> block_rows;
CompressedColumnScalarMatrixToBlockMatrix(reinterpret_cast<const int*>(A->i),
reinterpret_cast<const int*>(A->p),
@@ -242,8 +242,8 @@
block_matrix.nrow = num_row_blocks;
block_matrix.ncol = num_col_blocks;
block_matrix.nzmax = block_rows.size();
- block_matrix.p = reinterpret_cast<void*>(&block_cols[0]);
- block_matrix.i = reinterpret_cast<void*>(&block_rows[0]);
+ block_matrix.p = reinterpret_cast<void*>(block_cols.data());
+ block_matrix.i = reinterpret_cast<void*>(block_rows.data());
block_matrix.x = nullptr;
block_matrix.stype = A->stype;
block_matrix.itype = CHOLMOD_INT;
@@ -252,8 +252,8 @@
block_matrix.sorted = 1;
block_matrix.packed = 1;
- vector<int> block_ordering(num_row_blocks);
- if (!cholmod_amd(&block_matrix, nullptr, 0, &block_ordering[0], &cc_)) {
+ std::vector<int> block_ordering(num_row_blocks);
+ if (!Ordering(&block_matrix, ordering_type, block_ordering.data())) {
return false;
}
@@ -261,9 +261,22 @@
return true;
}
+cholmod_factor* SuiteSparse::BlockAnalyzeCholesky(
+ cholmod_sparse* A,
+ OrderingType ordering_type,
+ const std::vector<Block>& row_blocks,
+ const std::vector<Block>& col_blocks,
+ std::string* message) {
+ std::vector<int> ordering;
+ if (!BlockOrdering(A, ordering_type, row_blocks, col_blocks, &ordering)) {
+ return nullptr;
+ }
+ return AnalyzeCholeskyWithGivenOrdering(A, ordering, message);
+}
+
LinearSolverTerminationType SuiteSparse::Cholesky(cholmod_sparse* A,
cholmod_factor* L,
- string* message) {
+ std::string* message) {
CHECK(A != nullptr);
CHECK(L != nullptr);
@@ -281,48 +294,48 @@
switch (cc_.status) {
case CHOLMOD_NOT_INSTALLED:
*message = "CHOLMOD failure: Method not installed.";
- return LINEAR_SOLVER_FATAL_ERROR;
+ return LinearSolverTerminationType::FATAL_ERROR;
case CHOLMOD_OUT_OF_MEMORY:
*message = "CHOLMOD failure: Out of memory.";
- return LINEAR_SOLVER_FATAL_ERROR;
+ return LinearSolverTerminationType::FATAL_ERROR;
case CHOLMOD_TOO_LARGE:
*message = "CHOLMOD failure: Integer overflow occurred.";
- return LINEAR_SOLVER_FATAL_ERROR;
+ return LinearSolverTerminationType::FATAL_ERROR;
case CHOLMOD_INVALID:
*message = "CHOLMOD failure: Invalid input.";
- return LINEAR_SOLVER_FATAL_ERROR;
+ return LinearSolverTerminationType::FATAL_ERROR;
case CHOLMOD_NOT_POSDEF:
*message = "CHOLMOD warning: Matrix not positive definite.";
- return LINEAR_SOLVER_FAILURE;
+ return LinearSolverTerminationType::FAILURE;
case CHOLMOD_DSMALL:
*message =
"CHOLMOD warning: D for LDL' or diag(L) or "
"LL' has tiny absolute value.";
- return LINEAR_SOLVER_FAILURE;
+ return LinearSolverTerminationType::FAILURE;
case CHOLMOD_OK:
if (cholmod_status != 0) {
- return LINEAR_SOLVER_SUCCESS;
+ return LinearSolverTerminationType::SUCCESS;
}
*message =
"CHOLMOD failure: cholmod_factorize returned false "
"but cholmod_common::status is CHOLMOD_OK."
"Please report this to ceres-solver@googlegroups.com.";
- return LINEAR_SOLVER_FATAL_ERROR;
+ return LinearSolverTerminationType::FATAL_ERROR;
default:
*message = StringPrintf(
"Unknown cholmod return code: %d. "
"Please report this to ceres-solver@googlegroups.com.",
cc_.status);
- return LINEAR_SOLVER_FATAL_ERROR;
+ return LinearSolverTerminationType::FATAL_ERROR;
}
- return LINEAR_SOLVER_FATAL_ERROR;
+ return LinearSolverTerminationType::FATAL_ERROR;
}
cholmod_dense* SuiteSparse::Solve(cholmod_factor* L,
cholmod_dense* b,
- string* message) {
+ std::string* message) {
if (cc_.status != CHOLMOD_OK) {
*message = "cholmod_solve failed. CHOLMOD status is not CHOLMOD_OK";
return nullptr;
@@ -331,22 +344,34 @@
return cholmod_solve(CHOLMOD_A, L, b, &cc_);
}
-bool SuiteSparse::ApproximateMinimumDegreeOrdering(cholmod_sparse* matrix,
- int* ordering) {
- return cholmod_amd(matrix, nullptr, 0, ordering, &cc_);
+bool SuiteSparse::Ordering(cholmod_sparse* matrix,
+ OrderingType ordering_type,
+ int* ordering) {
+ CHECK_NE(ordering_type, OrderingType::NATURAL);
+ if (ordering_type == OrderingType::AMD) {
+ return cholmod_amd(matrix, nullptr, 0, ordering, &cc_);
+ }
+
+#ifdef CERES_NO_CHOLMOD_PARTITION
+ return false;
+#else
+ std::vector<int> CParent(matrix->nrow, 0);
+ std::vector<int> CMember(matrix->nrow, 0);
+ return cholmod_nested_dissection(
+ matrix, nullptr, 0, ordering, CParent.data(), CMember.data(), &cc_);
+#endif
}
bool SuiteSparse::ConstrainedApproximateMinimumDegreeOrdering(
cholmod_sparse* matrix, int* constraints, int* ordering) {
-#ifndef CERES_NO_CAMD
return cholmod_camd(matrix, nullptr, 0, constraints, ordering, &cc_);
-#else
- LOG(FATAL) << "Congratulations you have found a bug in Ceres."
- << "Ceres Solver was compiled with SuiteSparse "
- << "version 4.1.0 or less. Calling this function "
- << "in that case is a bug. Please contact the"
- << "the Ceres Solver developers.";
+}
+
+bool SuiteSparse::IsNestedDissectionAvailable() {
+#ifdef CERES_NO_CHOLMOD_PARTITION
return false;
+#else
+ return true;
#endif
}
@@ -366,48 +391,61 @@
}
LinearSolverTerminationType SuiteSparseCholesky::Factorize(
- CompressedRowSparseMatrix* lhs, string* message) {
+ CompressedRowSparseMatrix* lhs, std::string* message) {
if (lhs == nullptr) {
- *message = "Failure: Input lhs is NULL.";
- return LINEAR_SOLVER_FATAL_ERROR;
+ *message = "Failure: Input lhs is nullptr.";
+ return LinearSolverTerminationType::FATAL_ERROR;
}
cholmod_sparse cholmod_lhs = ss_.CreateSparseMatrixTransposeView(lhs);
+ // If a factorization does not exist, compute the symbolic
+ // factorization first.
+ //
+ // If the ordering type is NATURAL, then there is no fill reducing
+ // ordering to be computed, regardless of block structure, so we can
+ // just call the scalar version of symbolic factorization. For
+ // SuiteSparse this is the common case since we have already
+ // pre-ordered the columns of the Jacobian.
+ //
+ // Similarly regardless of ordering type, if there is no block
+ // structure in the matrix we call the scalar version of symbolic
+ // factorization.
if (factor_ == nullptr) {
- if (ordering_type_ == NATURAL) {
- factor_ = ss_.AnalyzeCholeskyWithNaturalOrdering(&cholmod_lhs, message);
+ if (ordering_type_ == OrderingType::NATURAL ||
+ (lhs->col_blocks().empty() || lhs->row_blocks().empty())) {
+ factor_ = ss_.AnalyzeCholesky(&cholmod_lhs, ordering_type_, message);
} else {
- if (!lhs->col_blocks().empty() && !(lhs->row_blocks().empty())) {
- factor_ = ss_.BlockAnalyzeCholesky(
- &cholmod_lhs, lhs->col_blocks(), lhs->row_blocks(), message);
- } else {
- factor_ = ss_.AnalyzeCholesky(&cholmod_lhs, message);
- }
- }
-
- if (factor_ == nullptr) {
- return LINEAR_SOLVER_FATAL_ERROR;
+ factor_ = ss_.BlockAnalyzeCholesky(&cholmod_lhs,
+ ordering_type_,
+ lhs->col_blocks(),
+ lhs->row_blocks(),
+ message);
}
}
+ if (factor_ == nullptr) {
+ return LinearSolverTerminationType::FATAL_ERROR;
+ }
+
+ // Compute and return the numeric factorization.
return ss_.Cholesky(&cholmod_lhs, factor_, message);
}
CompressedRowSparseMatrix::StorageType SuiteSparseCholesky::StorageType()
const {
- return ((ordering_type_ == NATURAL)
- ? CompressedRowSparseMatrix::UPPER_TRIANGULAR
- : CompressedRowSparseMatrix::LOWER_TRIANGULAR);
+ return ((ordering_type_ == OrderingType::NATURAL)
+ ? CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR
+ : CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR);
}
LinearSolverTerminationType SuiteSparseCholesky::Solve(const double* rhs,
double* solution,
- string* message) {
+ std::string* message) {
// Error checking
if (factor_ == nullptr) {
*message = "Solve called without a call to Factorize first.";
- return LINEAR_SOLVER_FATAL_ERROR;
+ return LinearSolverTerminationType::FATAL_ERROR;
}
const int num_cols = factor_->n;
@@ -416,15 +454,14 @@
ss_.Solve(factor_, &cholmod_rhs, message);
if (cholmod_dense_solution == nullptr) {
- return LINEAR_SOLVER_FAILURE;
+ return LinearSolverTerminationType::FAILURE;
}
memcpy(solution, cholmod_dense_solution->x, num_cols * sizeof(*solution));
ss_.Free(cholmod_dense_solution);
- return LINEAR_SOLVER_SUCCESS;
+ return LinearSolverTerminationType::SUCCESS;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_NO_SUITESPARSE
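
The SuiteSparse wrapper above follows the usual CHOLMOD calling sequence: start a cholmod_common, run a symbolic analysis with the chosen ordering (nmethods = 1 plus method[0].ordering), compute the numeric factorization, solve as many right-hand sides as needed, and free everything. A skeletal sketch of that sequence, assuming a cholmod_sparse matrix A and a cholmod_dense right-hand side b already exist (error handling omitted; this is not the Ceres wrapper itself):

    #include "cholmod.h"

    void SolveWithCholmod(cholmod_sparse* A, cholmod_dense* b) {
      cholmod_common cc;
      cholmod_start(&cc);
      cc.nmethods = 1;
      cc.method[0].ordering = CHOLMOD_AMD;  // or CHOLMOD_NESDIS / CHOLMOD_NATURAL
      cholmod_factor* L = cholmod_analyze(A, &cc);   // symbolic factorization
      cholmod_factorize(A, L, &cc);                  // numeric factorization
      cholmod_dense* x = cholmod_solve(CHOLMOD_A, L, b, &cc);
      cholmod_free_dense(&x, &cc);
      cholmod_free_factor(&L, &cc);
      cholmod_finish(&cc);
    }
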
diff --git a/internal/ceres/suitesparse.h b/internal/ceres/suitesparse.h
index 5dcc53f..703ee87 100644
--- a/internal/ceres/suitesparse.h
+++ b/internal/ceres/suitesparse.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,44 +34,24 @@
#define CERES_INTERNAL_SUITESPARSE_H_
// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
+#include "ceres/internal/config.h"
#ifndef CERES_NO_SUITESPARSE
#include <cstring>
+#include <memory>
#include <string>
#include <vector>
#include "SuiteSparseQR.hpp"
+#include "ceres/block_structure.h"
+#include "ceres/internal/disable_warnings.h"
#include "ceres/linear_solver.h"
#include "ceres/sparse_cholesky.h"
#include "cholmod.h"
#include "glog/logging.h"
-// Before SuiteSparse version 4.2.0, cholmod_camd was only enabled
-// if SuiteSparse was compiled with Metis support. This makes
-// calling and linking into cholmod_camd problematic even though it
-// has nothing to do with Metis. This has been fixed reliably in
-// 4.2.0.
-//
-// The fix was actually committed in 4.1.0, but there is
-// some confusion about a silent update to the tar ball, so we are
-// being conservative and choosing the next minor version where
-// things are stable.
-#if (SUITESPARSE_VERSION < 4002)
-#define CERES_NO_CAMD
-#endif
-
-// UF_long is deprecated but SuiteSparse_long is only available in
-// newer versions of SuiteSparse. So for older versions of
-// SuiteSparse, we define SuiteSparse_long to be the same as UF_long,
-// which is what recent versions of SuiteSparse do anyways.
-#ifndef SuiteSparse_long
-#define SuiteSparse_long UF_long
-#endif
-
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class CompressedRowSparseMatrix;
class TripletSparseMatrix;
@@ -81,14 +61,14 @@
// provides the user with a simpler interface. The methods here cannot
// be static as a cholmod_common object serves as a global variable
// for all cholmod function calls.
-class SuiteSparse {
+class CERES_NO_EXPORT SuiteSparse {
public:
SuiteSparse();
~SuiteSparse();
// Functions for building cholmod_sparse objects from sparse
// matrices stored in triplet form. The matrix A is not
- // modifed. Called owns the result.
+ // modified. Caller owns the result.
cholmod_sparse* CreateSparseMatrix(TripletSparseMatrix* A);
// This function works like CreateSparseMatrix, except that the
@@ -106,7 +86,7 @@
cholmod_dense CreateDenseVectorView(const double* x, int size);
// Given a vector x, build a cholmod_dense vector of size out_size
- // with the first in_size entries copied from x. If x is NULL, then
+ // with the first in_size entries copied from x. If x is nullptr, then
// an all zeros vector is returned. Caller owns the result.
cholmod_dense* CreateDenseVector(const double* x, int in_size, int out_size);
@@ -123,7 +103,7 @@
// Create and return a matrix m = A * A'. Caller owns the
// result. The matrix A is not modified.
cholmod_sparse* AATranspose(cholmod_sparse* A) {
- cholmod_sparse* m = cholmod_aat(A, NULL, A->nrow, 1, &cc_);
+ cholmod_sparse* m = cholmod_aat(A, nullptr, A->nrow, 1, &cc_);
m->stype = 1; // Pay attention to the upper triangular part.
return m;
}
@@ -139,12 +119,11 @@
cholmod_sdmult(A, 0, alpha_, beta_, x, y, &cc_);
}
- // Find an ordering of A or AA' (if A is unsymmetric) that minimizes
- // the fill-in in the Cholesky factorization of the corresponding
- // matrix. This is done by using the AMD algorithm.
- //
- // Using this ordering, the symbolic Cholesky factorization of A (or
- // AA') is computed and returned.
+ // Compute a symbolic factorization for A or AA' (if A is
+ // unsymmetric). If ordering_type is NATURAL, then no fill reducing
+ // ordering is computed, otherwise depending on the value of
+ // ordering_type AMD or Nested Dissection is used to compute a fill
+ // reducing ordering before the symbolic factorization is computed.
//
// A is not modified, only the pattern of non-zeros of A is used,
// the actual numerical values in A are of no consequence.
@@ -152,11 +131,15 @@
// message contains an explanation of the failures if any.
//
// Caller owns the result.
- cholmod_factor* AnalyzeCholesky(cholmod_sparse* A, std::string* message);
+ cholmod_factor* AnalyzeCholesky(cholmod_sparse* A,
+ OrderingType ordering_type,
+ std::string* message);
+ // Block oriented version of AnalyzeCholesky.
cholmod_factor* BlockAnalyzeCholesky(cholmod_sparse* A,
- const std::vector<int>& row_blocks,
- const std::vector<int>& col_blocks,
+ OrderingType ordering_type,
+ const std::vector<Block>& row_blocks,
+ const std::vector<Block>& col_blocks,
std::string* message);
// If A is symmetric, then compute the symbolic Cholesky
@@ -170,20 +153,11 @@
// message contains an explanation of the failures if any.
//
// Caller owns the result.
- cholmod_factor* AnalyzeCholeskyWithUserOrdering(
+ cholmod_factor* AnalyzeCholeskyWithGivenOrdering(
cholmod_sparse* A,
const std::vector<int>& ordering,
std::string* message);
- // Perform a symbolic factorization of A without re-ordering A. No
- // postordering of the elimination tree is performed. This ensures
- // that the symbolic factor does not introduce an extra permutation
- // on the matrix. See the documentation for CHOLMOD for more details.
- //
- // message contains an explanation of the failures if any.
- cholmod_factor* AnalyzeCholeskyWithNaturalOrdering(cholmod_sparse* A,
- std::string* message);
-
// Use the symbolic factorization in L, to find the numerical
// factorization for the matrix A or AA^T. Return true if
// successful, false otherwise. L contains the numeric factorization
@@ -196,58 +170,46 @@
// Given a Cholesky factorization of a matrix A = LL^T, solve the
// linear system Ax = b, and return the result. If the Solve fails
- // NULL is returned. Caller owns the result.
+ // nullptr is returned. Caller owns the result.
//
// message contains an explanation of the failures if any.
cholmod_dense* Solve(cholmod_factor* L,
cholmod_dense* b,
std::string* message);
+ // Find a fill reducing ordering. ordering is expected to be large
+ // enough to hold the ordering. ordering_type must be AMD or NESDIS.
+ bool Ordering(cholmod_sparse* matrix,
+ OrderingType ordering_type,
+ int* ordering);
+
+ // Find the block oriented fill reducing ordering of a matrix A,
+ // whose row and column blocks are given by row_blocks, and
+ // col_blocks respectively. The matrix may or may not be
+ // symmetric. The entries of col_blocks do not need to sum to the
+ // number of columns in A. If this is the case, only the first
+ // sum(col_blocks) columns are used to compute the ordering.
+ //
// By virtue of the modeling layer in Ceres being block oriented,
// all the matrices used by Ceres are also block oriented. When
// doing sparse direct factorization of these matrices the
- // fill-reducing ordering algorithms (in particular AMD) can either
- // be run on the block or the scalar form of these matrices. The two
- // SuiteSparse::AnalyzeCholesky methods allows the client to
- // compute the symbolic factorization of a matrix by either using
- // AMD on the matrix or a user provided ordering of the rows.
- //
- // But since the underlying matrices are block oriented, it is worth
- // running AMD on just the block structure of these matrices and then
- // lifting these block orderings to a full scalar ordering. This
- // preserves the block structure of the permuted matrix, and exposes
- // more of the super-nodal structure of the matrix to the numerical
- // factorization routines.
- //
- // Find the block oriented AMD ordering of a matrix A, whose row and
- // column blocks are given by row_blocks, and col_blocks
- // respectively. The matrix may or may not be symmetric. The entries
- // of col_blocks do not need to sum to the number of columns in
- // A. If this is the case, only the first sum(col_blocks) are used
- // to compute the ordering.
- bool BlockAMDOrdering(const cholmod_sparse* A,
- const std::vector<int>& row_blocks,
- const std::vector<int>& col_blocks,
- std::vector<int>* ordering);
+ // fill-reducing ordering algorithms can either be run on the block
+ // or the scalar form of these matrices. But since the underlying
+ // matrices are block oriented, it is worth running the fill
+ // reducing ordering on just the block structure of these matrices
+ // and then lifting these block orderings to a full scalar
+ // ordering. This preserves the block structure of the permuted
+ // matrix, and exposes more of the super-nodal structure of the
+ // matrix to the numerical factorization routines.
+ bool BlockOrdering(const cholmod_sparse* A,
+ OrderingType ordering_type,
+ const std::vector<Block>& row_blocks,
+ const std::vector<Block>& col_blocks,
+ std::vector<int>* ordering);
- // Find a fill reducing approximate minimum degree
- // ordering. ordering is expected to be large enough to hold the
- // ordering.
- bool ApproximateMinimumDegreeOrdering(cholmod_sparse* matrix, int* ordering);
-
- // Before SuiteSparse version 4.2.0, cholmod_camd was only enabled
- // if SuiteSparse was compiled with Metis support. This makes
- // calling and linking into cholmod_camd problematic even though it
- // has nothing to do with Metis. This has been fixed reliably in
- // 4.2.0.
- //
- // The fix was actually committed in 4.1.0, but there is
- // some confusion about a silent update to the tar ball, so we are
- // being conservative and choosing the next minor version where
- // things are stable.
- static bool IsConstrainedApproximateMinimumDegreeOrderingAvailable() {
- return (SUITESPARSE_VERSION > 4001);
- }
+ // Nested dissection is only available if SuiteSparse is compiled
+ // with Metis support.
+ static bool IsNestedDissectionAvailable();
// Find a fill reducing approximate minimum degree
// ordering. constraints is an array which associates with each
@@ -259,9 +221,6 @@
// Calling ApproximateMinimumDegreeOrdering is equivalent to calling
// ConstrainedApproximateMinimumDegreeOrdering with a constraint
// array that puts all columns in the same elimination group.
- //
- // If CERES_NO_CAMD is defined then calling this function will
- // result in a crash.
bool ConstrainedApproximateMinimumDegreeOrdering(cholmod_sparse* matrix,
int* constraints,
int* ordering);
@@ -288,12 +247,12 @@
cholmod_common cc_;
};
-class SuiteSparseCholesky : public SparseCholesky {
+class CERES_NO_EXPORT SuiteSparseCholesky final : public SparseCholesky {
public:
static std::unique_ptr<SparseCholesky> Create(OrderingType ordering_type);
// SparseCholesky interface.
- virtual ~SuiteSparseCholesky();
+ ~SuiteSparseCholesky() override;
CompressedRowSparseMatrix::StorageType StorageType() const final;
LinearSolverTerminationType Factorize(CompressedRowSparseMatrix* lhs,
std::string* message) final;
@@ -302,42 +261,39 @@
std::string* message) final;
private:
- SuiteSparseCholesky(const OrderingType ordering_type);
+ explicit SuiteSparseCholesky(const OrderingType ordering_type);
const OrderingType ordering_type_;
SuiteSparse ss_;
cholmod_factor* factor_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#else // CERES_NO_SUITESPARSE
-typedef void cholmod_factor;
+using cholmod_factor = void;
+
+#include "ceres/internal/disable_warnings.h"
namespace ceres {
namespace internal {
-class SuiteSparse {
+class CERES_NO_EXPORT SuiteSparse {
public:
- // Defining this static function even when SuiteSparse is not
- // available, allows client code to check for the presence of CAMD
- // without checking for the absence of the CERES_NO_CAMD symbol.
- //
- // This is safer because the symbol maybe missing due to a user
- // accidentally not including suitesparse.h in their code when
- // checking for the symbol.
- static bool IsConstrainedApproximateMinimumDegreeOrderingAvailable() {
- return false;
- }
-
- void Free(void* arg) {}
+ // Nested dissection is only available if SuiteSparse is compiled
+ // with Metis support.
+ static bool IsNestedDissectionAvailable() { return false; }
+ void Free(void* /*arg*/) {}
};
} // namespace internal
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_NO_SUITESPARSE
#endif // CERES_INTERNAL_SUITESPARSE_H_
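
BlockOrdering computes the fill-reducing permutation on the block sparsity pattern and then lifts it to a scalar permutation, which is what preserves the block structure of the permuted matrix. The lifting step can be sketched generically as follows (a standalone sketch, not the Ceres helper; blocks are represented only by their sizes):

    #include <vector>

    // Given the size of each block and a permutation of block indices, produce
    // the corresponding permutation of scalar row/column indices.
    std::vector<int> BlockToScalarOrdering(const std::vector<int>& block_sizes,
                                           const std::vector<int>& block_order) {
      // Scalar index at which each block starts in the original ordering.
      std::vector<int> block_start(block_sizes.size(), 0);
      for (int i = 1; i < static_cast<int>(block_sizes.size()); ++i) {
        block_start[i] = block_start[i - 1] + block_sizes[i - 1];
      }
      std::vector<int> scalar_order;
      for (const int block : block_order) {
        for (int j = 0; j < block_sizes[block]; ++j) {
          scalar_order.push_back(block_start[block] + j);
        }
      }
      return scalar_order;
    }
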
diff --git a/internal/ceres/system_test.cc b/internal/ceres/system_test.cc
index 429973f..6134995 100644
--- a/internal/ceres/system_test.cc
+++ b/internal/ceres/system_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,6 +35,7 @@
#include <cstdlib>
#include "ceres/autodiff_cost_function.h"
+#include "ceres/internal/config.h"
#include "ceres/problem.h"
#include "ceres/solver.h"
#include "ceres/test_util.h"
@@ -42,8 +43,7 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// This class implements the SystemTestProblem interface and provides
// access to an implementation of Powell's singular function.
@@ -70,13 +70,13 @@
x_[3] = 1.0;
problem_.AddResidualBlock(
- new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), NULL, &x_[0], &x_[1]);
+ new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), nullptr, &x_[0], &x_[1]);
problem_.AddResidualBlock(
- new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), NULL, &x_[2], &x_[3]);
+ new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), nullptr, &x_[2], &x_[3]);
problem_.AddResidualBlock(
- new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), NULL, &x_[1], &x_[2]);
+ new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), nullptr, &x_[1], &x_[2]);
problem_.AddResidualBlock(
- new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), NULL, &x_[0], &x_[3]);
+ new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), nullptr, &x_[0], &x_[3]);
// Settings for the reference solution.
options_.linear_solver_type = ceres::DENSE_QR;
@@ -97,7 +97,7 @@
template <typename T>
bool operator()(const T* const x1, const T* const x2, T* residual) const {
// f1 = x1 + 10 * x2;
- *residual = *x1 + 10.0 * *x2;
+ *residual = x1[0] + 10.0 * x2[0];
return true;
}
};
@@ -107,7 +107,7 @@
template <typename T>
bool operator()(const T* const x3, const T* const x4, T* residual) const {
// f2 = sqrt(5) (x3 - x4)
- *residual = sqrt(5.0) * (*x3 - *x4);
+ *residual = sqrt(5.0) * (x3[0] - x4[0]);
return true;
}
};
@@ -115,9 +115,9 @@
class F3 {
public:
template <typename T>
- bool operator()(const T* const x2, const T* const x4, T* residual) const {
+ bool operator()(const T* const x2, const T* const x3, T* residual) const {
// f3 = (x2 - 2 x3)^2
- residual[0] = (x2[0] - 2.0 * x4[0]) * (x2[0] - 2.0 * x4[0]);
+ residual[0] = (x2[0] - 2.0 * x3[0]) * (x2[0] - 2.0 * x3[0]);
return true;
}
};
@@ -139,7 +139,7 @@
double PowellsFunction::kResidualTolerance = 1e-8;
-typedef SystemTest<PowellsFunction> PowellTest;
+using PowellTest = SystemTest<PowellsFunction>;
TEST_F(PowellTest, DenseQR) {
PowellsFunction powells_function;
@@ -186,17 +186,6 @@
}
#endif // CERES_NO_SUITESPARSE
-#ifndef CERES_NO_CXSPARSE
-TEST_F(PowellTest, SparseNormalCholeskyUsingCXSparse) {
- PowellsFunction powells_function;
- Solver::Options* options = powells_function.mutable_solver_options();
- options->linear_solver_type = SPARSE_NORMAL_CHOLESKY;
- options->sparse_linear_algebra_library_type = CX_SPARSE;
- RunSolverForConfigAndExpectResidualsMatch(*options,
- powells_function.mutable_problem());
-}
-#endif // CERES_NO_CXSPARSE
-
#ifndef CERES_NO_ACCELERATE_SPARSE
TEST_F(PowellTest, SparseNormalCholeskyUsingAccelerateSparse) {
PowellsFunction powells_function;
@@ -219,5 +208,4 @@
}
#endif // CERES_USE_EIGEN_SPARSE
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
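
Each backend-specific TEST_F above configures Solver::Options for one linear solver and checks the resulting residuals against the DENSE_QR reference configuration. The driver pattern, stripped to its core and assuming an already-populated ceres::Problem (SolveWithLinearSolver is an illustrative helper, not part of the test):

    #include "ceres/ceres.h"

    // Solve an existing problem with a specific linear solver and report the
    // final cost, which the test then compares against the reference solve.
    double SolveWithLinearSolver(ceres::Problem* problem,
                                 ceres::LinearSolverType type) {
      ceres::Solver::Options options;
      options.linear_solver_type = type;
      ceres::Solver::Summary summary;
      ceres::Solve(options, problem, &summary);
      return summary.final_cost;
    }
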
diff --git a/internal/ceres/test_util.cc b/internal/ceres/test_util.cc
index a131b79..25888a9 100644
--- a/internal/ceres/test_util.cc
+++ b/internal/ceres/test_util.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,6 +36,7 @@
#include <cmath>
#include "ceres/file.h"
+#include "ceres/internal/port.h"
#include "ceres/stringprintf.h"
#include "ceres/types.h"
#include "gflags/gflags.h"
@@ -134,7 +135,8 @@
}
std::string TestFileAbsolutePath(const std::string& filename) {
- return JoinPath(FLAGS_test_srcdir + CERES_TEST_SRCDIR_SUFFIX, filename);
+ return JoinPath(CERES_GET_FLAG(FLAGS_test_srcdir) + CERES_TEST_SRCDIR_SUFFIX,
+ filename);
}
std::string ToString(const Solver::Options& options) {
diff --git a/internal/ceres/test_util.h b/internal/ceres/test_util.h
index c33c69c..95aaa55 100644
--- a/internal/ceres/test_util.h
+++ b/internal/ceres/test_util.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,7 +33,8 @@
#include <string>
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/problem.h"
#include "ceres/solver.h"
#include "ceres/stringprintf.h"
@@ -45,20 +46,19 @@
// Expects that x and y have a relative difference of no more than
// max_abs_relative_difference. If either x or y is zero, then the relative
// difference is interpreted as an absolute difference.
-//
// If x and y have the same non-finite value (inf or nan) we treat them as being
// close. In such a case no error is thrown and true is returned.
-CERES_EXPORT_INTERNAL bool ExpectClose(double x,
- double y,
- double max_abs_relative_difference);
+CERES_NO_EXPORT bool ExpectClose(double x,
+ double y,
+ double max_abs_relative_difference);
// Expects that for all i = 1,.., n - 1
//
// |p[i] - q[i]| / max(|p[i]|, |q[i]|) < tolerance
-CERES_EXPORT_INTERNAL void ExpectArraysClose(int n,
- const double* p,
- const double* q,
- double tolerance);
+CERES_NO_EXPORT void ExpectArraysClose(int n,
+ const double* p,
+ const double* q,
+ double tolerance);
// Expects that for all i = 1,.., n - 1
//
@@ -66,17 +66,16 @@
//
// where max_norm_p and max_norm_q are the max norms of the arrays p
// and q respectively.
-CERES_EXPORT_INTERNAL void ExpectArraysCloseUptoScale(int n,
- const double* p,
- const double* q,
- double tolerance);
+CERES_NO_EXPORT void ExpectArraysCloseUptoScale(int n,
+ const double* p,
+ const double* q,
+ double tolerance);
// Construct a fully qualified path for the test file depending on the
// local build/testing environment.
-CERES_EXPORT_INTERNAL std::string TestFileAbsolutePath(
- const std::string& filename);
+CERES_NO_EXPORT std::string TestFileAbsolutePath(const std::string& filename);
-CERES_EXPORT_INTERNAL std::string ToString(const Solver::Options& options);
+CERES_NO_EXPORT std::string ToString(const Solver::Options& options);
// A templated test fixture, that is used for testing Ceres end to end
// by computing a solution to the problem for a given solver
@@ -85,7 +84,7 @@
// It is assumed that the SystemTestProblem has an Solver::Options
// struct that contains the reference Solver configuration.
template <typename SystemTestProblem>
-class SystemTest : public ::testing::Test {
+class CERES_NO_EXPORT SystemTest : public ::testing::Test {
protected:
void SetUp() final {
SystemTestProblem system_test_problem;
@@ -130,4 +129,6 @@
} // namespace internal
} // namespace ceres
+#include "ceres/internal/reenable_warnings.h"
+
#endif // CERES_INTERNAL_TEST_UTIL_H_
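
ExpectClose treats the tolerance as a relative bound unless one operand is zero, in which case it falls back to an absolute comparison. The core of that check can be sketched as follows (a standalone sketch, not the Ceres implementation, and ignoring the non-finite special case described above):

    #include <algorithm>
    #include <cmath>

    bool Close(double x, double y, double max_relative_difference) {
      const double absolute_difference = std::abs(x - y);
      if (x == 0.0 || y == 0.0) {
        // With a zero operand the relative difference is meaningless, so the
        // tolerance is interpreted as an absolute bound.
        return absolute_difference <= max_relative_difference;
      }
      const double relative_difference =
          absolute_difference / std::max(std::abs(x), std::abs(y));
      return relative_difference <= max_relative_difference;
    }
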
diff --git a/internal/ceres/thread_pool.cc b/internal/ceres/thread_pool.cc
index 821431c..1ce9ac8 100644
--- a/internal/ceres/thread_pool.cc
+++ b/internal/ceres/thread_pool.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,18 +28,14 @@
//
// Author: vitus@google.com (Michael Vitus)
-// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
-
-#ifdef CERES_USE_CXX_THREADS
+#include "ceres/thread_pool.h"
#include <cmath>
#include <limits>
-#include "ceres/thread_pool.h"
+#include "ceres/internal/config.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
namespace {
// Constrain the total number of threads to the amount the hardware can support.
@@ -57,7 +53,7 @@
: num_hardware_threads;
}
-ThreadPool::ThreadPool() {}
+ThreadPool::ThreadPool() = default;
ThreadPool::ThreadPool(int num_threads) { Resize(num_threads); }
@@ -83,7 +79,7 @@
GetNumAllowedThreads(num_threads) - num_current_threads;
for (int i = 0; i < create_num_threads; ++i) {
- thread_pool_.push_back(std::thread(&ThreadPool::ThreadMainLoop, this));
+ thread_pool_.emplace_back(&ThreadPool::ThreadMainLoop, this);
}
}
@@ -105,7 +101,4 @@
void ThreadPool::Stop() { task_queue_.StopWaiters(); }
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_USE_CXX_THREADS
+} // namespace ceres::internal
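
Resize above caps the requested worker count at what the hardware reports, and the workers then block on the concurrent task queue until Stop() wakes them. The clamping step amounts to the following (a sketch assuming hardware_concurrency() may report zero; this is not the exact GetNumAllowedThreads implementation):

    #include <algorithm>
    #include <thread>

    // Clamp a requested thread count to what the hardware can actually run.
    int NumAllowedThreads(int requested_num_threads) {
      const int num_hardware_threads =
          static_cast<int>(std::thread::hardware_concurrency());
      // hardware_concurrency() returns 0 when the value is not computable.
      if (num_hardware_threads == 0) {
        return requested_num_threads;
      }
      return std::min(requested_num_threads, num_hardware_threads);
    }
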
diff --git a/internal/ceres/thread_pool.h b/internal/ceres/thread_pool.h
index cdf6625..8c8f06f 100644
--- a/internal/ceres/thread_pool.h
+++ b/internal/ceres/thread_pool.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -37,10 +37,9 @@
#include <vector>
#include "ceres/concurrent_queue.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// A thread-safe thread pool with an unbounded task queue and a resizable number
// of workers. The size of the thread pool can be increased but never decreased
@@ -58,7 +57,7 @@
// workers to stop. The workers will finish all of the tasks that have already
// been added to the thread pool.
//
-class CERES_EXPORT_INTERNAL ThreadPool {
+class CERES_NO_EXPORT ThreadPool {
public:
// Returns the maximum number of hardware threads.
static int MaxNumThreadsAvailable();
@@ -115,7 +114,6 @@
std::mutex thread_pool_mutex_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_THREAD_POOL_H_
diff --git a/internal/ceres/thread_pool_test.cc b/internal/ceres/thread_pool_test.cc
index e39f673..fa321b0 100644
--- a/internal/ceres/thread_pool_test.cc
+++ b/internal/ceres/thread_pool_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2018 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -28,23 +28,19 @@
//
// Author: vitus@google.com (Michael Vitus)
-// This include must come before any #ifndef check on Ceres compile options.
-#include "ceres/internal/port.h"
-
-#ifdef CERES_USE_CXX_THREADS
+#include "ceres/thread_pool.h"
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>
-#include "ceres/thread_pool.h"
+#include "ceres/internal/config.h"
#include "glog/logging.h"
#include "gmock/gmock.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Adds a number of tasks to the thread pool and ensures they all run.
TEST(ThreadPool, AddTask) {
@@ -193,7 +189,4 @@
EXPECT_EQ(2, thread_pool.Size());
}
-} // namespace internal
-} // namespace ceres
-
-#endif // CERES_USE_CXX_THREADS
+} // namespace ceres::internal
diff --git a/internal/ceres/thread_token_provider.cc b/internal/ceres/thread_token_provider.cc
index c7ec67f..6217e2b 100644
--- a/internal/ceres/thread_token_provider.cc
+++ b/internal/ceres/thread_token_provider.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,44 +30,20 @@
#include "ceres/thread_token_provider.h"
-#ifdef CERES_USE_OPENMP
-#include <omp.h>
-#endif
-
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
ThreadTokenProvider::ThreadTokenProvider(int num_threads) {
- (void)num_threads;
-#ifdef CERES_USE_CXX_THREADS
for (int i = 0; i < num_threads; i++) {
pool_.Push(i);
}
-#endif
}
int ThreadTokenProvider::Acquire() {
-#ifdef CERES_USE_OPENMP
- return omp_get_thread_num();
-#endif
-
-#ifdef CERES_NO_THREADS
- return 0;
-#endif
-
-#ifdef CERES_USE_CXX_THREADS
int thread_id;
CHECK(pool_.Wait(&thread_id));
return thread_id;
-#endif
}
-void ThreadTokenProvider::Release(int thread_id) {
- (void)thread_id;
-#ifdef CERES_USE_CXX_THREADS
- pool_.Push(thread_id);
-#endif
-}
+void ThreadTokenProvider::Release(int thread_id) { pool_.Push(thread_id); }
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/thread_token_provider.h b/internal/ceres/thread_token_provider.h
index 06dc043..5d375d1 100644
--- a/internal/ceres/thread_token_provider.h
+++ b/internal/ceres/thread_token_provider.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,15 +31,11 @@
#ifndef CERES_INTERNAL_THREAD_TOKEN_PROVIDER_H_
#define CERES_INTERNAL_THREAD_TOKEN_PROVIDER_H_
-#include "ceres/internal/config.h"
-#include "ceres/internal/port.h"
-
-#ifdef CERES_USE_CXX_THREADS
#include "ceres/concurrent_queue.h"
-#endif
+#include "ceres/internal/config.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Helper for C++ thread number identification that is similar to
// omp_get_thread_num() behaviour. This is necessary to support C++
@@ -48,12 +44,6 @@
// 0 to num_threads - 1 that can be acquired to identify the thread in a thread
// pool.
//
-// If CERES_NO_THREADS is defined, Acquire() always returns 0 and Release()
-// takes no action.
-//
-// If CERES_USE_OPENMP, omp_get_thread_num() is used to Acquire() with no action
-// in Release()
-//
//
// Example usage pseudocode:
//
@@ -66,9 +56,9 @@
// ttp.Release(token); // return token to the pool
// }
//
-class ThreadTokenProvider {
+class CERES_NO_EXPORT ThreadTokenProvider {
public:
- ThreadTokenProvider(int num_threads);
+ explicit ThreadTokenProvider(int num_threads);
// Returns the first token from the queue. The acquired value must be
// given back by Release().
@@ -78,20 +68,16 @@
void Release(int thread_id);
private:
-#ifdef CERES_USE_CXX_THREADS
// This queue initially holds a sequence from 0..num_threads-1. Every
// Acquire() call the first number is removed from here. When the token is not
// needed anymore it shall be given back with corresponding Release()
// call. This concurrent queue is more expensive than TBB's version, so you
// should not acquire the thread ID on every for loop iteration.
ConcurrentQueue<int> pool_;
-#endif
-
- ThreadTokenProvider(ThreadTokenProvider&);
- ThreadTokenProvider& operator=(ThreadTokenProvider&);
+ ThreadTokenProvider(ThreadTokenProvider&) = delete;
+ ThreadTokenProvider& operator=(ThreadTokenProvider&) = delete;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_THREAD_TOKEN_PROVIDER_H_
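
In practice a token acquired from the provider indexes per-thread scratch storage, exactly as in the pseudocode above. A concrete single-threaded sketch of that usage (the surrounding parallel loop is elided; in real use the body runs concurrently on pool threads):

    #include <vector>
    #include "ceres/thread_token_provider.h"

    // worker_scratch[i] is private to whichever thread currently holds token i.
    void ProcessItems(int num_threads, int num_items) {
      ceres::internal::ThreadTokenProvider provider(num_threads);
      std::vector<double> worker_scratch(num_threads, 0.0);
      for (int item = 0; item < num_items; ++item) {
        const int token = provider.Acquire();
        worker_scratch[token] += 1.0;  // thread-private accumulation
        provider.Release(token);
      }
    }
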
diff --git a/internal/ceres/tiny_solver_autodiff_function_test.cc b/internal/ceres/tiny_solver_autodiff_function_test.cc
index 2598188..c192cf3 100644
--- a/internal/ceres/tiny_solver_autodiff_function_test.cc
+++ b/internal/ceres/tiny_solver_autodiff_function_test.cc
@@ -1,6 +1,6 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -60,8 +60,8 @@
static double const kTolerance = std::numeric_limits<double>::epsilon() * 10;
TEST(TinySolverAutoDiffFunction, SimpleFunction) {
- typedef TinySolverAutoDiffFunction<AutoDiffTestFunctor, 2, 3>
- AutoDiffTestFunction;
+ using AutoDiffTestFunction =
+ TinySolverAutoDiffFunction<AutoDiffTestFunctor, 2, 3>;
AutoDiffTestFunctor autodiff_test_functor;
AutoDiffTestFunction f(autodiff_test_functor);
@@ -97,7 +97,7 @@
class DynamicResidualsFunctor {
public:
- typedef double Scalar;
+ using Scalar = double;
enum {
NUM_RESIDUALS = Eigen::Dynamic,
NUM_PARAMETERS = 3,
@@ -140,7 +140,7 @@
EXPECT_GT(residuals.squaredNorm() / 2.0, 1e-10);
TinySolver<AutoDiffCostFunctor> solver;
- solver.Solve(f, &x0);
+ solver.Solve(f_autodiff, &x0);
EXPECT_NEAR(0.0, solver.summary.final_cost, 1e-10);
}
diff --git a/internal/ceres/tiny_solver_cost_function_adapter_test.cc b/internal/ceres/tiny_solver_cost_function_adapter_test.cc
index 6f57193..638d873 100644
--- a/internal/ceres/tiny_solver_cost_function_adapter_test.cc
+++ b/internal/ceres/tiny_solver_cost_function_adapter_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -68,8 +68,8 @@
template <int kNumResiduals, int kNumParameters>
void TestHelper() {
std::unique_ptr<CostFunction> cost_function(new CostFunction2x3);
- typedef TinySolverCostFunctionAdapter<kNumResiduals, kNumParameters>
- CostFunctionAdapter;
+ using CostFunctionAdapter =
+ TinySolverCostFunctionAdapter<kNumResiduals, kNumParameters>;
CostFunctionAdapter cfa(*cost_function);
EXPECT_EQ(CostFunctionAdapter::NUM_RESIDUALS, kNumResiduals);
EXPECT_EQ(CostFunctionAdapter::NUM_PARAMETERS, kNumParameters);
@@ -85,8 +85,8 @@
double* parameters[1] = {xyz};
// Check that residual only evaluation works.
- cost_function->Evaluate(parameters, expected_residuals.data(), NULL);
- cfa(xyz, actual_residuals.data(), NULL);
+ cost_function->Evaluate(parameters, expected_residuals.data(), nullptr);
+ cfa(xyz, actual_residuals.data(), nullptr);
EXPECT_NEAR(
(expected_residuals - actual_residuals).norm() / actual_residuals.norm(),
0.0,
diff --git a/internal/ceres/tiny_solver_test.cc b/internal/ceres/tiny_solver_test.cc
index 2e70694..645ddc5 100644
--- a/internal/ceres/tiny_solver_test.cc
+++ b/internal/ceres/tiny_solver_test.cc
@@ -1,6 +1,6 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -39,13 +39,13 @@
namespace ceres {
-typedef Eigen::Matrix<double, 2, 1> Vec2;
-typedef Eigen::Matrix<double, 3, 1> Vec3;
-typedef Eigen::VectorXd VecX;
+using Vec2 = Eigen::Matrix<double, 2, 1>;
+using Vec3 = Eigen::Matrix<double, 3, 1>;
+using VecX = Eigen::VectorXd;
class ExampleStatic {
public:
- typedef double Scalar;
+ using Scalar = double;
enum {
// Can also be Eigen::Dynamic.
NUM_RESIDUALS = 2,
@@ -60,7 +60,7 @@
class ExampleParametersDynamic {
public:
- typedef double Scalar;
+ using Scalar = double;
enum {
NUM_RESIDUALS = 2,
NUM_PARAMETERS = Eigen::Dynamic,
@@ -77,7 +77,7 @@
class ExampleResidualsDynamic {
public:
- typedef double Scalar;
+ using Scalar = double;
enum {
NUM_RESIDUALS = Eigen::Dynamic,
NUM_PARAMETERS = 3,
@@ -94,7 +94,7 @@
class ExampleAllDynamic {
public:
- typedef double Scalar;
+ using Scalar = double;
enum {
NUM_RESIDUALS = Eigen::Dynamic,
NUM_PARAMETERS = Eigen::Dynamic,
@@ -115,7 +115,7 @@
void TestHelper(const Function& f, const Vector& x0) {
Vector x = x0;
Vec2 residuals;
- f(x.data(), residuals.data(), NULL);
+ f(x.data(), residuals.data(), nullptr);
EXPECT_GT(residuals.squaredNorm() / 2.0, 1e-10);
TinySolver<Function> solver;
diff --git a/internal/ceres/tiny_solver_test_util.h b/internal/ceres/tiny_solver_test_util.h
index 310bb35..003df2f 100644
--- a/internal/ceres/tiny_solver_test_util.h
+++ b/internal/ceres/tiny_solver_test_util.h
@@ -1,6 +1,6 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
diff --git a/internal/ceres/triplet_sparse_matrix.cc b/internal/ceres/triplet_sparse_matrix.cc
index 5dbf0e7..4bb6685 100644
--- a/internal/ceres/triplet_sparse_matrix.cc
+++ b/internal/ceres/triplet_sparse_matrix.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,21 +31,22 @@
#include "ceres/triplet_sparse_matrix.h"
#include <algorithm>
-#include <cstddef>
+#include <memory>
+#include <random>
+#include "ceres/compressed_row_sparse_matrix.h"
+#include "ceres/crs_matrix.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
-#include "ceres/random.h"
+#include "ceres/internal/export.h"
#include "ceres/types.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TripletSparseMatrix::TripletSparseMatrix()
: num_rows_(0), num_cols_(0), max_num_nonzeros_(0), num_nonzeros_(0) {}
-TripletSparseMatrix::~TripletSparseMatrix() {}
+TripletSparseMatrix::~TripletSparseMatrix() = default;
TripletSparseMatrix::TripletSparseMatrix(int num_rows,
int num_cols,
@@ -109,8 +110,9 @@
for (int i = 0; i < num_nonzeros_; ++i) {
// clang-format off
if ((rows_[i] < 0) || (rows_[i] >= num_rows_) ||
- (cols_[i] < 0) || (cols_[i] >= num_cols_))
+ (cols_[i] < 0) || (cols_[i] >= num_cols_)) {
return false;
+ }
// clang-format on
}
return true;
@@ -123,9 +125,12 @@
// Nothing to do if we have enough space already.
if (new_max_num_nonzeros <= max_num_nonzeros_) return;
- int* new_rows = new int[new_max_num_nonzeros];
- int* new_cols = new int[new_max_num_nonzeros];
- double* new_values = new double[new_max_num_nonzeros];
+ std::unique_ptr<int[]> new_rows =
+ std::make_unique<int[]>(new_max_num_nonzeros);
+ std::unique_ptr<int[]> new_cols =
+ std::make_unique<int[]>(new_max_num_nonzeros);
+ std::unique_ptr<double[]> new_values =
+ std::make_unique<double[]>(new_max_num_nonzeros);
for (int i = 0; i < num_nonzeros_; ++i) {
new_rows[i] = rows_[i];
@@ -133,10 +138,9 @@
new_values[i] = values_[i];
}
- rows_.reset(new_rows);
- cols_.reset(new_cols);
- values_.reset(new_values);
-
+ rows_ = std::move(new_rows);
+ cols_ = std::move(new_cols);
+ values_ = std::move(new_values);
max_num_nonzeros_ = new_max_num_nonzeros;
}
@@ -152,9 +156,9 @@
}
void TripletSparseMatrix::AllocateMemory() {
- rows_.reset(new int[max_num_nonzeros_]);
- cols_.reset(new int[max_num_nonzeros_]);
- values_.reset(new double[max_num_nonzeros_]);
+ rows_ = std::make_unique<int[]>(max_num_nonzeros_);
+ cols_ = std::make_unique<int[]>(max_num_nonzeros_);
+ values_ = std::make_unique<double[]>(max_num_nonzeros_);
}
void TripletSparseMatrix::CopyData(const TripletSparseMatrix& orig) {
@@ -165,13 +169,15 @@
}
}
-void TripletSparseMatrix::RightMultiply(const double* x, double* y) const {
+void TripletSparseMatrix::RightMultiplyAndAccumulate(const double* x,
+ double* y) const {
for (int i = 0; i < num_nonzeros_; ++i) {
y[rows_[i]] += values_[i] * x[cols_[i]];
}
}
-void TripletSparseMatrix::LeftMultiply(const double* x, double* y) const {
+void TripletSparseMatrix::LeftMultiplyAndAccumulate(const double* x,
+ double* y) const {
for (int i = 0; i < num_nonzeros_; ++i) {
y[cols_[i]] += values_[i] * x[rows_[i]];
}
@@ -192,6 +198,11 @@
}
}
+void TripletSparseMatrix::ToCRSMatrix(CRSMatrix* crs_matrix) const {
+ CompressedRowSparseMatrix::FromTripletSparseMatrix(*this)->ToCRSMatrix(
+ crs_matrix);
+}
+
void TripletSparseMatrix::ToDenseMatrix(Matrix* dense_matrix) const {
dense_matrix->resize(num_rows_, num_cols_);
dense_matrix->setZero();
@@ -252,10 +263,11 @@
num_nonzeros_ -= dropped_terms;
}
-TripletSparseMatrix* TripletSparseMatrix::CreateSparseDiagonalMatrix(
- const double* values, int num_rows) {
- TripletSparseMatrix* m =
- new TripletSparseMatrix(num_rows, num_rows, num_rows);
+std::unique_ptr<TripletSparseMatrix>
+TripletSparseMatrix::CreateSparseDiagonalMatrix(const double* values,
+ int num_rows) {
+ std::unique_ptr<TripletSparseMatrix> m =
+ std::make_unique<TripletSparseMatrix>(num_rows, num_rows, num_rows);
for (int i = 0; i < num_rows; ++i) {
m->mutable_rows()[i] = i;
m->mutable_cols()[i] = i;
@@ -272,8 +284,34 @@
}
}
-TripletSparseMatrix* TripletSparseMatrix::CreateRandomMatrix(
- const TripletSparseMatrix::RandomMatrixOptions& options) {
+std::unique_ptr<TripletSparseMatrix> TripletSparseMatrix::CreateFromTextFile(
+ FILE* file) {
+ CHECK(file != nullptr);
+ int num_rows = 0;
+ int num_cols = 0;
+ std::vector<int> rows;
+ std::vector<int> cols;
+ std::vector<double> values;
+ while (true) {
+ int row, col;
+ double value;
+ if (fscanf(file, "%d %d %lf", &row, &col, &value) != 3) {
+ break;
+ }
+ rows.push_back(row);
+ cols.push_back(col);
+ values.push_back(value);
+ num_rows = std::max(num_rows, row + 1);
+ num_cols = std::max(num_cols, col + 1);
+ }
+ VLOG(1) << "Read " << rows.size() << " nonzeros from file.";
+ return std::make_unique<TripletSparseMatrix>(
+ num_rows, num_cols, rows, cols, values);
+}
+
+std::unique_ptr<TripletSparseMatrix> TripletSparseMatrix::CreateRandomMatrix(
+ const TripletSparseMatrix::RandomMatrixOptions& options,
+ std::mt19937& prng) {
CHECK_GT(options.num_rows, 0);
CHECK_GT(options.num_cols, 0);
CHECK_GT(options.density, 0.0);
@@ -282,24 +320,25 @@
std::vector<int> rows;
std::vector<int> cols;
std::vector<double> values;
+ std::uniform_real_distribution<double> uniform01(0.0, 1.0);
+ std::normal_distribution<double> standard_normal;
while (rows.empty()) {
rows.clear();
cols.clear();
values.clear();
for (int r = 0; r < options.num_rows; ++r) {
for (int c = 0; c < options.num_cols; ++c) {
- if (RandDouble() <= options.density) {
+ if (uniform01(prng) <= options.density) {
rows.push_back(r);
cols.push_back(c);
- values.push_back(RandNormal());
+ values.push_back(standard_normal(prng));
}
}
}
}
- return new TripletSparseMatrix(
+ return std::make_unique<TripletSparseMatrix>(
options.num_rows, options.num_cols, rows, cols, values);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/triplet_sparse_matrix.h b/internal/ceres/triplet_sparse_matrix.h
index cc9fee5..bcb3d2b 100644
--- a/internal/ceres/triplet_sparse_matrix.h
+++ b/internal/ceres/triplet_sparse_matrix.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,21 +32,23 @@
#define CERES_INTERNAL_TRIPLET_SPARSE_MATRIX_H_
#include <memory>
+#include <random>
#include <vector>
+#include "ceres/crs_matrix.h"
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/sparse_matrix.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// An implementation of the SparseMatrix interface to store and
// manipulate sparse matrices in triplet (i,j,s) form. This object is
// inspired by the design of the cholmod_triplet struct used in the
// SuiteSparse package and is memory layout compatible with it.
-class CERES_EXPORT_INTERNAL TripletSparseMatrix : public SparseMatrix {
+class CERES_NO_EXPORT TripletSparseMatrix final : public SparseMatrix {
public:
TripletSparseMatrix();
TripletSparseMatrix(int num_rows, int num_cols, int max_num_nonzeros);
@@ -56,18 +58,19 @@
const std::vector<int>& cols,
const std::vector<double>& values);
- explicit TripletSparseMatrix(const TripletSparseMatrix& orig);
+ TripletSparseMatrix(const TripletSparseMatrix& orig);
TripletSparseMatrix& operator=(const TripletSparseMatrix& rhs);
- virtual ~TripletSparseMatrix();
+ ~TripletSparseMatrix() override;
// Implementation of the SparseMatrix interface.
void SetZero() final;
- void RightMultiply(const double* x, double* y) const final;
- void LeftMultiply(const double* x, double* y) const final;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final;
+ void LeftMultiplyAndAccumulate(const double* x, double* y) const final;
void SquaredColumnNorm(double* x) const final;
void ScaleColumns(const double* scale) final;
+ void ToCRSMatrix(CRSMatrix* matrix) const;
void ToDenseMatrix(Matrix* dense_matrix) const final;
void ToTextFile(FILE* file) const final;
// clang-format off
@@ -115,8 +118,8 @@
// Build a sparse diagonal matrix of size num_rows x num_rows from
// the array values. Entries of the values array are copied into the
// sparse matrix.
- static TripletSparseMatrix* CreateSparseDiagonalMatrix(const double* values,
- int num_rows);
+ static std::unique_ptr<TripletSparseMatrix> CreateSparseDiagonalMatrix(
+ const double* values, int num_rows);
// Options struct to control the generation of random
// TripletSparseMatrix objects.
@@ -132,10 +135,12 @@
// Create a random CompressedRowSparseMatrix whose entries are
// normally distributed and whose structure is determined by
// RandomMatrixOptions.
- //
- // Caller owns the result.
- static TripletSparseMatrix* CreateRandomMatrix(
- const TripletSparseMatrix::RandomMatrixOptions& options);
+ static std::unique_ptr<TripletSparseMatrix> CreateRandomMatrix(
+ const TripletSparseMatrix::RandomMatrixOptions& options,
+ std::mt19937& prng);
+
+ // Load a triplet sparse matrix from a text file.
+ static std::unique_ptr<TripletSparseMatrix> CreateFromTextFile(FILE* file);
private:
void AllocateMemory();
@@ -155,7 +160,8 @@
std::unique_ptr<double[]> values_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_TRIPLET_SPARSE_MATRIX_H__
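The factory changes above replace owning raw pointers with std::unique_ptr and make the random number generator an explicit argument. A minimal caller-side sketch, with illustrative sizes and seed:

#include <memory>
#include <random>

#include "ceres/triplet_sparse_matrix.h"

void Example() {
  using ceres::internal::TripletSparseMatrix;

  // Diagonal matrix: the values array is copied into the triplet storage.
  const double d[3] = {1.0, 2.0, 3.0};
  std::unique_ptr<TripletSparseMatrix> diag =
      TripletSparseMatrix::CreateSparseDiagonalMatrix(d, 3);

  // Random matrix: the caller now owns and seeds the PRNG, which makes the
  // generated structure reproducible across runs.
  TripletSparseMatrix::RandomMatrixOptions options;
  options.num_rows = 4;
  options.num_cols = 5;
  options.density = 0.25;
  std::mt19937 prng(42);
  std::unique_ptr<TripletSparseMatrix> random =
      TripletSparseMatrix::CreateRandomMatrix(options, prng);
}  // Both matrices are released automatically; no explicit delete needed.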
diff --git a/internal/ceres/triplet_sparse_matrix_test.cc b/internal/ceres/triplet_sparse_matrix_test.cc
index 3af634f..e145c1a 100644
--- a/internal/ceres/triplet_sparse_matrix_test.cc
+++ b/internal/ceres/triplet_sparse_matrix_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,10 +32,10 @@
#include <memory>
+#include "ceres/crs_matrix.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TEST(TripletSparseMatrix, DefaultConstructorReturnsEmptyObject) {
TripletSparseMatrix m;
@@ -344,5 +344,42 @@
}
}
-} // namespace internal
-} // namespace ceres
+TEST(TripletSparseMatrix, ToCRSMatrix) {
+ // Test matrix:
+ // [1, 2, 0, 5, 6, 0,
+ // 3, 4, 0, 7, 8, 0,
+ // 0, 0, 9, 0, 0, 0]
+ TripletSparseMatrix m(3,
+ 6,
+ {0, 0, 0, 0, 1, 1, 1, 1, 2},
+ {0, 1, 3, 4, 0, 1, 3, 4, 2},
+ {1, 2, 3, 4, 5, 6, 7, 8, 9});
+ CRSMatrix m_crs;
+ m.ToCRSMatrix(&m_crs);
+ EXPECT_EQ(m_crs.num_rows, 3);
+ EXPECT_EQ(m_crs.num_cols, 6);
+
+ EXPECT_EQ(m_crs.rows.size(), 4);
+ EXPECT_EQ(m_crs.rows[0], 0);
+ EXPECT_EQ(m_crs.rows[1], 4);
+ EXPECT_EQ(m_crs.rows[2], 8);
+ EXPECT_EQ(m_crs.rows[3], 9);
+
+ EXPECT_EQ(m_crs.cols.size(), 9);
+ EXPECT_EQ(m_crs.cols[0], 0);
+ EXPECT_EQ(m_crs.cols[1], 1);
+ EXPECT_EQ(m_crs.cols[2], 3);
+ EXPECT_EQ(m_crs.cols[3], 4);
+ EXPECT_EQ(m_crs.cols[4], 0);
+ EXPECT_EQ(m_crs.cols[5], 1);
+ EXPECT_EQ(m_crs.cols[6], 3);
+ EXPECT_EQ(m_crs.cols[7], 4);
+ EXPECT_EQ(m_crs.cols[8], 2);
+
+ EXPECT_EQ(m_crs.values.size(), 9);
+ for (int i = 0; i < 9; ++i) {
+ EXPECT_EQ(m_crs.values[i], i + 1);
+ }
+}
+
+} // namespace ceres::internal
diff --git a/internal/ceres/trust_region_minimizer.cc b/internal/ceres/trust_region_minimizer.cc
index bcf05b3..d76f677 100644
--- a/internal/ceres/trust_region_minimizer.cc
+++ b/internal/ceres/trust_region_minimizer.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -42,9 +42,11 @@
#include "Eigen/Core"
#include "ceres/array_utils.h"
#include "ceres/coordinate_descent_minimizer.h"
+#include "ceres/eigen_vector_ops.h"
#include "ceres/evaluator.h"
#include "ceres/file.h"
#include "ceres/line_search.h"
+#include "ceres/parallel_for.h"
#include "ceres/stringprintf.h"
#include "ceres/types.h"
#include "ceres/wall_time.h"
@@ -59,10 +61,7 @@
} \
} while (0)
-namespace ceres {
-namespace internal {
-
-TrustRegionMinimizer::~TrustRegionMinimizer() {}
+namespace ceres::internal {
void TrustRegionMinimizer::Minimize(const Minimizer::Options& options,
double* parameters,
@@ -75,12 +74,13 @@
// Create the TrustRegionStepEvaluator. The construction needs to be
// delayed to this point because we need the cost for the starting
// point to initialize the step evaluator.
- step_evaluator_.reset(new TrustRegionStepEvaluator(
+ step_evaluator_ = std::make_unique<TrustRegionStepEvaluator>(
x_cost_,
options_.use_nonmonotonic_steps
? options_.max_consecutive_nonmonotonic_steps
- : 0));
+ : 0);
+ bool atleast_one_successful_step = false;
while (FinalizeIterationAndCheckIfMinimizerCanContinue()) {
iteration_start_time_in_secs_ = WallTimeInSeconds();
@@ -108,7 +108,7 @@
ComputeCandidatePointAndEvaluateCost();
DoInnerIterationsIfNeeded();
- if (ParameterToleranceReached()) {
+ if (atleast_one_successful_step && ParameterToleranceReached()) {
return;
}
@@ -117,6 +117,7 @@
}
if (IsStepSuccessful()) {
+ atleast_one_successful_step = true;
RETURN_IF_ERROR_AND_LOG(HandleSuccessfulStep());
} else {
// Declare the step unsuccessful and inform the trust region strategy.
@@ -139,8 +140,8 @@
double* parameters,
Solver::Summary* solver_summary) {
options_ = options;
- sort(options_.trust_region_minimizer_iterations_to_dump.begin(),
- options_.trust_region_minimizer_iterations_to_dump.end());
+ std::sort(options_.trust_region_minimizer_iterations_to_dump.begin(),
+ options_.trust_region_minimizer_iterations_to_dump.end());
parameters_ = parameters;
@@ -168,7 +169,6 @@
num_consecutive_invalid_steps_ = 0;
x_ = ConstVectorRef(parameters_, num_parameters_);
- x_norm_ = x_.norm();
residuals_.resize(num_residuals_);
trust_region_step_.resize(num_effective_parameters_);
delta_.resize(num_effective_parameters_);
@@ -182,7 +182,6 @@
// the Jacobian, we will compute and overwrite this vector.
jacobian_scaling_ = Vector::Ones(num_effective_parameters_);
- x_norm_ = -1; // Invalid value
x_cost_ = std::numeric_limits<double>::max();
minimum_cost_ = x_cost_;
model_cost_change_ = 0.0;
@@ -216,10 +215,11 @@
}
x_ = candidate_x_;
- x_norm_ = x_.norm();
}
if (!EvaluateGradientAndJacobian(/*new_evaluation_point=*/true)) {
+ solver_summary_->message =
+ "Initial residual and Jacobian evaluation failed.";
return false;
}
@@ -272,7 +272,8 @@
}
// jacobian = jacobian * diag(J'J) ^{-1}
- jacobian_->ScaleColumns(jacobian_scaling_.data());
+ jacobian_->ScaleColumns(
+ jacobian_scaling_.data(), options_.context, options_.num_threads);
}
// The gradient exists in the local tangent space. To account for
@@ -359,13 +360,13 @@
// Compute the trust region step using the TrustRegionStrategy chosen
// by the user.
//
-// If the strategy returns with LINEAR_SOLVER_FATAL_ERROR, which
+// If the strategy returns with LinearSolverTerminationType::FATAL_ERROR, which
// indicates an unrecoverable error, return false. This is the only
// condition that returns false.
//
-// If the strategy returns with LINEAR_SOLVER_FAILURE, which indicates
-// a numerical failure that could be recovered from by retrying
-// (e.g. by increasing the strength of the regularization), we set
+// If the strategy returns with LinearSolverTerminationType::FAILURE, which
+// indicates a numerical failure that could be recovered from by retrying (e.g.
+// by increasing the strength of the regularization), we set
// iteration_summary_.step_is_valid to false and return true.
//
// In all other cases, we compute the decrease in the trust region
@@ -379,9 +380,9 @@
iteration_summary_.step_is_valid = false;
TrustRegionStrategy::PerSolveOptions per_solve_options;
per_solve_options.eta = options_.eta;
- if (find(options_.trust_region_minimizer_iterations_to_dump.begin(),
- options_.trust_region_minimizer_iterations_to_dump.end(),
- iteration_summary_.iteration) !=
+ if (std::find(options_.trust_region_minimizer_iterations_to_dump.begin(),
+ options_.trust_region_minimizer_iterations_to_dump.end(),
+ iteration_summary_.iteration) !=
options_.trust_region_minimizer_iterations_to_dump.end()) {
per_solve_options.dump_format_type =
options_.trust_region_problem_dump_format_type;
@@ -397,7 +398,8 @@
residuals_.data(),
trust_region_step_.data());
- if (strategy_summary.termination_type == LINEAR_SOLVER_FATAL_ERROR) {
+ if (strategy_summary.termination_type ==
+ LinearSolverTerminationType::FATAL_ERROR) {
solver_summary_->message =
"Linear solver failed due to unrecoverable "
"non-numeric causes. Please see the error log for clues. ";
@@ -409,7 +411,8 @@
WallTimeInSeconds() - strategy_start_time;
iteration_summary_.linear_solver_iterations = strategy_summary.num_iterations;
- if (strategy_summary.termination_type == LINEAR_SOLVER_FAILURE) {
+ if (strategy_summary.termination_type ==
+ LinearSolverTerminationType::FAILURE) {
return true;
}
@@ -421,10 +424,15 @@
// = f'f/2 - 1/2 [ f'f + 2f'J * step + step' * J' * J * step]
// = -f'J * step - step' * J' * J * step / 2
// = -(J * step)'(f + J * step / 2)
- model_residuals_.setZero();
- jacobian_->RightMultiply(trust_region_step_.data(), model_residuals_.data());
- model_cost_change_ =
- -model_residuals_.dot(residuals_ + model_residuals_ / 2.0);
+ ParallelSetZero(options_.context, options_.num_threads, model_residuals_);
+ jacobian_->RightMultiplyAndAccumulate(trust_region_step_.data(),
+ model_residuals_.data(),
+ options_.context,
+ options_.num_threads);
+ model_cost_change_ = -Dot(model_residuals_,
+ residuals_ + model_residuals_ / 2.0,
+ options_.context,
+ options_.num_threads);
// TODO(sameeragarwal)
//
@@ -434,7 +442,10 @@
iteration_summary_.step_is_valid = (model_cost_change_ > 0.0);
if (iteration_summary_.step_is_valid) {
// Undo the Jacobian column scaling.
- delta_ = (trust_region_step_.array() * jacobian_scaling_.array()).matrix();
+ ParallelAssign(options_.context,
+ options_.num_threads,
+ delta_,
+ (trust_region_step_.array() * jacobian_scaling_.array()));
num_consecutive_invalid_steps_ = 0;
}
@@ -704,10 +715,12 @@
// Solver::Options::parameter_tolerance based convergence check.
bool TrustRegionMinimizer::ParameterToleranceReached() {
+ const double x_norm = x_.norm();
+
// Compute the norm of the step in the ambient space.
iteration_summary_.step_norm = (x_ - candidate_x_).norm();
const double step_size_tolerance =
- options_.parameter_tolerance * (x_norm_ + options_.parameter_tolerance);
+ options_.parameter_tolerance * (x_norm + options_.parameter_tolerance);
if (iteration_summary_.step_norm > step_size_tolerance) {
return false;
@@ -716,7 +729,7 @@
solver_summary_->message = StringPrintf(
"Parameter tolerance reached. "
"Relative step_norm: %e <= %e.",
- (iteration_summary_.step_norm / (x_norm_ + options_.parameter_tolerance)),
+ (iteration_summary_.step_norm / (x_norm + options_.parameter_tolerance)),
options_.parameter_tolerance);
solver_summary_->termination_type = CONVERGENCE;
if (is_not_silent_) {
@@ -750,14 +763,12 @@
// Compute candidate_x_ = Plus(x_, delta_)
// Evaluate the cost of candidate_x_ as candidate_cost_.
//
-// Failure to compute the step or the cost mean that candidate_cost_
-// is set to std::numeric_limits<double>::max(). Unlike
-// EvaluateGradientAndJacobian, failure in this function is not fatal
-// as we are only computing and evaluating a candidate point, and if
-// for some reason we are unable to evaluate it, we consider it to be
-// a point with very high cost. This allows the user to deal with edge
-// cases/constraints as part of the LocalParameterization and
-// CostFunction objects.
+// Failure to compute the step or the cost means that candidate_cost_ is set to
+// std::numeric_limits<double>::max(). Unlike EvaluateGradientAndJacobian,
+// failure in this function is not fatal as we are only computing and evaluating
+// a candidate point, and if for some reason we are unable to evaluate it, we
+// consider it to be a point with very high cost. This allows the user to deal
+// with edge cases/constraints as part of the Manifold and CostFunction objects.
void TrustRegionMinimizer::ComputeCandidatePointAndEvaluateCost() {
if (!evaluator_->Plus(x_.data(), delta_.data(), candidate_x_.data())) {
if (is_not_silent_) {
@@ -811,7 +822,6 @@
// evaluator know that the step has been accepted.
bool TrustRegionMinimizer::HandleSuccessfulStep() {
x_ = candidate_x_;
- x_norm_ = x_.norm();
// Since the step was successful, this point has already had the residual
// evaluated (but not the jacobian). So indicate that to the evaluator.
@@ -825,5 +835,4 @@
return true;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
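For reference, the model_cost_change computed above is the decrease predicted by the quadratic model, i.e. the same derivation as the inline comment:

\Delta m = \tfrac{1}{2}\lVert f \rVert^2 - \tfrac{1}{2}\lVert f + J\,\Delta x \rVert^2
         = -f^{\top} J\,\Delta x - \tfrac{1}{2}\,\Delta x^{\top} J^{\top} J\,\Delta x
         = -(J\,\Delta x)^{\top} \bigl( f + \tfrac{1}{2}\,J\,\Delta x \bigr),

which is why a single RightMultiplyAndAccumulate followed by one dot product suffices, and why both now take the context and thread count so they can run in parallel.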
diff --git a/internal/ceres/trust_region_minimizer.h b/internal/ceres/trust_region_minimizer.h
index be4d406..c9cdac7 100644
--- a/internal/ceres/trust_region_minimizer.h
+++ b/internal/ceres/trust_region_minimizer.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,8 +33,9 @@
#include <memory>
+#include "ceres/internal/disable_warnings.h"
#include "ceres/internal/eigen.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/minimizer.h"
#include "ceres/solver.h"
#include "ceres/sparse_matrix.h"
@@ -42,16 +43,13 @@
#include "ceres/trust_region_strategy.h"
#include "ceres/types.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Generic trust region minimization algorithm.
//
// For example usage, see SolverImpl::Minimize.
-class CERES_EXPORT_INTERNAL TrustRegionMinimizer : public Minimizer {
+class CERES_NO_EXPORT TrustRegionMinimizer final : public Minimizer {
public:
- ~TrustRegionMinimizer();
-
// This method is not thread safe.
void Minimize(const Minimizer::Options& options,
double* parameters,
@@ -140,8 +138,6 @@
// Scaling vector to scale the columns of the Jacobian.
Vector jacobian_scaling_;
- // Euclidean norm of x_.
- double x_norm_;
// Cost at x_.
double x_cost_;
// Minimum cost encountered up till now.
@@ -161,7 +157,8 @@
int num_consecutive_invalid_steps_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_TRUST_REGION_MINIMIZER_H_
diff --git a/internal/ceres/trust_region_minimizer_test.cc b/internal/ceres/trust_region_minimizer_test.cc
index 8993273..94c7162 100644
--- a/internal/ceres/trust_region_minimizer_test.cc
+++ b/internal/ceres/trust_region_minimizer_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -36,26 +36,26 @@
#include "ceres/trust_region_minimizer.h"
#include <cmath>
+#include <memory>
#include "ceres/autodiff_cost_function.h"
#include "ceres/cost_function.h"
#include "ceres/dense_qr_solver.h"
#include "ceres/dense_sparse_matrix.h"
#include "ceres/evaluator.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
#include "ceres/minimizer.h"
#include "ceres/problem.h"
#include "ceres/trust_region_strategy.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// Templated Evaluator for Powell's function. The template parameters
// indicate which of the four variables/columns of the jacobian are
// active. This is equivalent to constructing a problem and using the
-// SubsetLocalParameterization. This allows us to test the support for
+// SubsetManifold. This allows us to test the support for
// the Evaluator::Plus operation besides checking for the basic
// performance of the trust region algorithm.
template <bool col1, bool col2, bool col3, bool col4>
@@ -76,13 +76,11 @@
}
// clang-format on
- virtual ~PowellEvaluator2() {}
-
// Implementation of Evaluator interface.
- SparseMatrix* CreateJacobian() const final {
+ std::unique_ptr<SparseMatrix> CreateJacobian() const final {
CHECK(col1 || col2 || col3 || col4);
- DenseSparseMatrix* dense_jacobian =
- new DenseSparseMatrix(NumResiduals(), NumEffectiveParameters());
+ auto dense_jacobian = std::make_unique<DenseSparseMatrix>(
+ NumResiduals(), NumEffectiveParameters());
dense_jacobian->SetZero();
return dense_jacobian;
}
@@ -119,19 +117,19 @@
VLOG(1) << "Cost: " << *cost;
- if (residuals != NULL) {
+ if (residuals != nullptr) {
residuals[0] = f1;
residuals[1] = f2;
residuals[2] = f3;
residuals[3] = f4;
}
- if (jacobian != NULL) {
+ if (jacobian != nullptr) {
DenseSparseMatrix* dense_jacobian;
dense_jacobian = down_cast<DenseSparseMatrix*>(jacobian);
dense_jacobian->SetZero();
- ColMajorMatrixRef jacobian_matrix = dense_jacobian->mutable_matrix();
+ Matrix& jacobian_matrix = *(dense_jacobian->mutable_matrix());
CHECK_EQ(jacobian_matrix.cols(), num_active_cols_);
int column_index = 0;
@@ -141,7 +139,7 @@
1.0,
0.0,
0.0,
- sqrt(10.0) * 2.0 * (x1 - x4) * (1.0 - x4);
+ sqrt(10.0) * 2.0 * (x1 - x4);
// clang-format on
}
if (col2) {
@@ -149,7 +147,7 @@
jacobian_matrix.col(column_index++) <<
10.0,
0.0,
- 2.0*(x2 - 2.0*x3)*(1.0 - 2.0*x3),
+ 2.0*(x2 - 2.0*x3),
0.0;
// clang-format on
}
@@ -159,7 +157,7 @@
jacobian_matrix.col(column_index++) <<
0.0,
sqrt(5.0),
- 2.0*(x2 - 2.0*x3)*(x2 - 2.0),
+ 4.0*(2.0*x3 - x2),
0.0;
// clang-format on
}
@@ -170,13 +168,13 @@
0.0,
-sqrt(5.0),
0.0,
- sqrt(10.0) * 2.0 * (x1 - x4) * (x1 - 1.0);
+ sqrt(10.0) * 2.0 * (x4 - x1);
// clang-format on
}
VLOG(1) << "\n" << jacobian_matrix;
}
- if (gradient != NULL) {
+ if (gradient != nullptr) {
int column_index = 0;
if (col1) {
gradient[column_index++] = f1 + f4 * sqrt(10.0) * 2.0 * (x1 - x4);
@@ -188,7 +186,7 @@
if (col3) {
gradient[column_index++] =
- f2 * sqrt(5.0) + f3 * (2.0 * 2.0 * (2.0 * x3 - x2));
+ f2 * sqrt(5.0) + f3 * (4.0 * (2.0 * x3 - x2));
}
if (col4) {
@@ -240,10 +238,9 @@
minimizer_options.gradient_tolerance = 1e-26;
minimizer_options.function_tolerance = 1e-26;
minimizer_options.parameter_tolerance = 1e-26;
- minimizer_options.evaluator.reset(
- new PowellEvaluator2<col1, col2, col3, col4>);
- minimizer_options.jacobian.reset(
- minimizer_options.evaluator->CreateJacobian());
+ minimizer_options.evaluator =
+ std::make_unique<PowellEvaluator2<col1, col2, col3, col4>>();
+ minimizer_options.jacobian = minimizer_options.evaluator->CreateJacobian();
TrustRegionStrategy::Options trust_region_strategy_options;
trust_region_strategy_options.trust_region_strategy_type = strategy_type;
@@ -252,8 +249,8 @@
trust_region_strategy_options.max_radius = 1e20;
trust_region_strategy_options.min_lm_diagonal = 1e-6;
trust_region_strategy_options.max_lm_diagonal = 1e32;
- minimizer_options.trust_region_strategy.reset(
- TrustRegionStrategy::Create(trust_region_strategy_options));
+ minimizer_options.trust_region_strategy =
+ TrustRegionStrategy::Create(trust_region_strategy_options);
TrustRegionMinimizer minimizer;
Solver::Summary summary;
@@ -330,7 +327,7 @@
bool Evaluate(double const* const* parameters,
double* residuals,
- double** jacobians) const {
+ double** jacobians) const override {
residuals[0] = target_length_;
for (int i = 0; i < num_vertices_; ++i) {
@@ -343,12 +340,12 @@
residuals[0] -= sqrt(length);
}
- if (jacobians == NULL) {
+ if (jacobians == nullptr) {
return true;
}
for (int i = 0; i < num_vertices_; ++i) {
- if (jacobians[i] != NULL) {
+ if (jacobians[i] != nullptr) {
int prev = (num_vertices_ + i - 1) % num_vertices_;
int next = (i + 1) % num_vertices_;
@@ -398,7 +395,7 @@
}
Problem problem;
- problem.AddResidualBlock(new CurveCostFunction(N, 10.), NULL, y);
+ problem.AddResidualBlock(new CurveCostFunction(N, 10.), nullptr, y);
Solver::Options options;
options.linear_solver_type = ceres::DENSE_QR;
Solver::Summary summary;
@@ -425,7 +422,7 @@
TEST(TrustRegionMinimizer, GradientToleranceConvergenceUpdatesStep) {
double x = 5;
Problem problem;
- problem.AddResidualBlock(ExpCostFunctor::Create(), NULL, &x);
+ problem.AddResidualBlock(ExpCostFunctor::Create(), nullptr, &x);
problem.SetParameterLowerBound(&x, 0, 3.0);
Solver::Options options;
Solver::Summary summary;
@@ -435,5 +432,4 @@
EXPECT_NEAR(expected_final_cost, summary.final_cost, 1e-12);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
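The Jacobian and gradient fixes in PowellEvaluator2 above follow from differentiating the two nonlinear Powell residuals; assuming the test defines them in the usual form

f_3 = (x_2 - 2x_3)^2, \qquad f_4 = \sqrt{10}\,(x_1 - x_4)^2,

the corrected entries are

\frac{\partial f_3}{\partial x_2} = 2\,(x_2 - 2x_3), \qquad
\frac{\partial f_3}{\partial x_3} = 4\,(2x_3 - x_2), \qquad
\frac{\partial f_4}{\partial x_1} = 2\sqrt{10}\,(x_1 - x_4), \qquad
\frac{\partial f_4}{\partial x_4} = 2\sqrt{10}\,(x_4 - x_1),

matching the new column and gradient expressions; the old entries carried spurious factors such as (1 - x_4) and (x_2 - 2).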
diff --git a/internal/ceres/trust_region_preprocessor.cc b/internal/ceres/trust_region_preprocessor.cc
index 0943edb..e07e369 100644
--- a/internal/ceres/trust_region_preprocessor.cc
+++ b/internal/ceres/trust_region_preprocessor.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,6 +32,7 @@
#include <numeric>
#include <string>
+#include <vector>
#include "ceres/callbacks.h"
#include "ceres/context_impl.h"
@@ -48,20 +49,19 @@
#include "ceres/trust_region_strategy.h"
#include "ceres/wall_time.h"
-namespace ceres {
-namespace internal {
-
-using std::vector;
+namespace ceres::internal {
namespace {
-ParameterBlockOrdering* CreateDefaultLinearSolverOrdering(
+std::shared_ptr<ParameterBlockOrdering> CreateDefaultLinearSolverOrdering(
const Program& program) {
- ParameterBlockOrdering* ordering = new ParameterBlockOrdering;
- const vector<ParameterBlock*>& parameter_blocks = program.parameter_blocks();
- for (int i = 0; i < parameter_blocks.size(); ++i) {
+ std::shared_ptr<ParameterBlockOrdering> ordering =
+ std::make_shared<ParameterBlockOrdering>();
+ const std::vector<ParameterBlock*>& parameter_blocks =
+ program.parameter_blocks();
+ for (auto* parameter_block : parameter_blocks) {
ordering->AddElementToGroup(
- const_cast<double*>(parameter_blocks[i]->user_state()), 0);
+ const_cast<double*>(parameter_block->user_state()), 0);
}
return ordering;
}
@@ -113,6 +113,7 @@
return ReorderProgramForSchurTypeLinearSolver(
options.linear_solver_type,
options.sparse_linear_algebra_library_type,
+ options.linear_solver_ordering_type,
pp->problem->parameter_map(),
options.linear_solver_ordering.get(),
pp->reduced_program.get(),
@@ -123,6 +124,7 @@
!options.dynamic_sparsity) {
return ReorderProgramForSparseCholesky(
options.sparse_linear_algebra_library_type,
+ options.linear_solver_ordering_type,
*options.linear_solver_ordering,
0, /* use all the rows of the jacobian */
pp->reduced_program.get(),
@@ -138,6 +140,7 @@
return ReorderProgramForSparseCholesky(
options.sparse_linear_algebra_library_type,
+ options.linear_solver_ordering_type,
*options.linear_solver_ordering,
pp->linear_solver_options.subset_preconditioner_start_row_block,
pp->reduced_program.get(),
@@ -160,8 +163,8 @@
// assume that they are giving all the freedom to us in choosing
// the best possible ordering. This intent can be indicated by
// putting all the parameter blocks in the same elimination group.
- options.linear_solver_ordering.reset(
- CreateDefaultLinearSolverOrdering(*pp->reduced_program));
+ options.linear_solver_ordering =
+ CreateDefaultLinearSolverOrdering(*pp->reduced_program);
} else {
// If the user supplied an ordering, then check if the first
// elimination group is still non-empty after the reduced problem
@@ -196,10 +199,16 @@
options.max_linear_solver_iterations;
pp->linear_solver_options.type = options.linear_solver_type;
pp->linear_solver_options.preconditioner_type = options.preconditioner_type;
+ pp->linear_solver_options.use_spse_initialization =
+ options.use_spse_initialization;
+ pp->linear_solver_options.spse_tolerance = options.spse_tolerance;
+ pp->linear_solver_options.max_num_spse_iterations =
+ options.max_num_spse_iterations;
pp->linear_solver_options.visibility_clustering_type =
options.visibility_clustering_type;
pp->linear_solver_options.sparse_linear_algebra_library_type =
options.sparse_linear_algebra_library_type;
+
pp->linear_solver_options.dense_linear_algebra_library_type =
options.dense_linear_algebra_library_type;
pp->linear_solver_options.use_explicit_schur_complement =
@@ -210,7 +219,6 @@
pp->linear_solver_options.max_num_refinement_iterations =
options.max_num_refinement_iterations;
pp->linear_solver_options.num_threads = options.num_threads;
- pp->linear_solver_options.use_postordering = options.use_postordering;
pp->linear_solver_options.context = pp->problem->context();
if (IsSchurType(pp->linear_solver_options.type)) {
@@ -224,30 +232,27 @@
if (pp->linear_solver_options.elimination_groups.size() == 1) {
pp->linear_solver_options.elimination_groups.push_back(0);
}
+ }
- if (options.linear_solver_type == SPARSE_SCHUR) {
- // When using SPARSE_SCHUR, we ignore the user's postordering
- // preferences in certain cases.
- //
- // 1. SUITE_SPARSE is the sparse linear algebra library requested
- // but cholmod_camd is not available.
- // 2. CX_SPARSE is the sparse linear algebra library requested.
- //
- // This ensures that the linear solver does not assume that a
- // fill-reducing pre-ordering has been done.
- //
- // TODO(sameeragarwal): Implement the reordering of parameter
- // blocks for CX_SPARSE.
- if ((options.sparse_linear_algebra_library_type == SUITE_SPARSE &&
- !SuiteSparse::
- IsConstrainedApproximateMinimumDegreeOrderingAvailable()) ||
- (options.sparse_linear_algebra_library_type == CX_SPARSE)) {
- pp->linear_solver_options.use_postordering = true;
- }
+ if (!options.dynamic_sparsity &&
+ AreJacobianColumnsOrdered(options.linear_solver_type,
+ options.preconditioner_type,
+ options.sparse_linear_algebra_library_type,
+ options.linear_solver_ordering_type)) {
+ pp->linear_solver_options.ordering_type = OrderingType::NATURAL;
+ } else {
+ if (options.linear_solver_ordering_type == ceres::AMD) {
+ pp->linear_solver_options.ordering_type = OrderingType::AMD;
+ } else if (options.linear_solver_ordering_type == ceres::NESDIS) {
+ pp->linear_solver_options.ordering_type = OrderingType::NESDIS;
+ } else {
+      LOG(FATAL) << "Congratulations, you have found a bug in Ceres Solver."
+ << " Please report this to the maintainers. : "
+ << options.linear_solver_ordering_type;
}
}
- pp->linear_solver.reset(LinearSolver::Create(pp->linear_solver_options));
+ pp->linear_solver = LinearSolver::Create(pp->linear_solver_options);
return (pp->linear_solver != nullptr);
}
@@ -256,6 +261,8 @@
const Solver::Options& options = pp->options;
pp->evaluator_options = Evaluator::Options();
pp->evaluator_options.linear_solver_type = options.linear_solver_type;
+ pp->evaluator_options.sparse_linear_algebra_library_type =
+ options.sparse_linear_algebra_library_type;
pp->evaluator_options.num_eliminate_blocks = 0;
if (IsSchurType(options.linear_solver_type)) {
pp->evaluator_options.num_eliminate_blocks =
@@ -269,8 +276,8 @@
pp->evaluator_options.context = pp->problem->context();
pp->evaluator_options.evaluation_callback =
pp->reduced_program->mutable_evaluation_callback();
- pp->evaluator.reset(Evaluator::Create(
- pp->evaluator_options, pp->reduced_program.get(), &pp->error));
+ pp->evaluator = Evaluator::Create(
+ pp->evaluator_options, pp->reduced_program.get(), &pp->error);
return (pp->evaluator != nullptr);
}
@@ -316,12 +323,12 @@
}
} else {
// The user did not supply an ordering, so create one.
- options.inner_iteration_ordering.reset(
- CoordinateDescentMinimizer::CreateOrdering(*pp->reduced_program));
+ options.inner_iteration_ordering =
+ CoordinateDescentMinimizer::CreateOrdering(*pp->reduced_program);
}
- pp->inner_iteration_minimizer.reset(
- new CoordinateDescentMinimizer(pp->problem->context()));
+ pp->inner_iteration_minimizer =
+ std::make_unique<CoordinateDescentMinimizer>(pp->problem->context());
return pp->inner_iteration_minimizer->Init(*pp->reduced_program,
pp->problem->parameter_map(),
*options.inner_iteration_ordering,
@@ -329,13 +336,19 @@
}
// Configure and create a TrustRegionMinimizer object.
-void SetupMinimizerOptions(PreprocessedProblem* pp) {
+bool SetupMinimizerOptions(PreprocessedProblem* pp) {
const Solver::Options& options = pp->options;
SetupCommonMinimizerOptions(pp);
pp->minimizer_options.is_constrained =
pp->reduced_program->IsBoundsConstrained();
- pp->minimizer_options.jacobian.reset(pp->evaluator->CreateJacobian());
+ pp->minimizer_options.jacobian = pp->evaluator->CreateJacobian();
+ if (pp->minimizer_options.jacobian == nullptr) {
+ pp->error =
+ "Unable to create Jacobian matrix. Likely because it is too large.";
+ return false;
+ }
+
pp->minimizer_options.inner_iteration_minimizer =
pp->inner_iteration_minimizer;
@@ -348,15 +361,16 @@
strategy_options.trust_region_strategy_type =
options.trust_region_strategy_type;
strategy_options.dogleg_type = options.dogleg_type;
- pp->minimizer_options.trust_region_strategy.reset(
- TrustRegionStrategy::Create(strategy_options));
+ strategy_options.context = pp->problem->context();
+ strategy_options.num_threads = options.num_threads;
+ pp->minimizer_options.trust_region_strategy =
+ TrustRegionStrategy::Create(strategy_options);
CHECK(pp->minimizer_options.trust_region_strategy != nullptr);
+ return true;
}
} // namespace
-TrustRegionPreprocessor::~TrustRegionPreprocessor() {}
-
bool TrustRegionPreprocessor::Preprocess(const Solver::Options& options,
ProblemImpl* problem,
PreprocessedProblem* pp) {
@@ -370,10 +384,10 @@
return false;
}
- pp->reduced_program.reset(program->CreateReducedProgram(
- &pp->removed_parameter_blocks, &pp->fixed_cost, &pp->error));
+ pp->reduced_program = program->CreateReducedProgram(
+ &pp->removed_parameter_blocks, &pp->fixed_cost, &pp->error);
- if (pp->reduced_program.get() == NULL) {
+ if (pp->reduced_program.get() == nullptr) {
return false;
}
@@ -388,9 +402,7 @@
return false;
}
- SetupMinimizerOptions(pp);
- return true;
+ return SetupMinimizerOptions(pp);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
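The NATURAL/AMD/NESDIS dispatch above is driven by a public solver option. A hedged usage sketch; the solver and ordering values are illustrative, and NESDIS is only available in builds whose sparse backend provides nested dissection (typically via METIS):

ceres::Solver::Options options;
options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
// AMD is the default fill-reducing ordering. The preprocessor switches to
// NATURAL on its own when the Jacobian columns are already ordered.
options.linear_solver_ordering_type = ceres::NESDIS;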
diff --git a/internal/ceres/trust_region_preprocessor.h b/internal/ceres/trust_region_preprocessor.h
index 2655abe..14febda 100644
--- a/internal/ceres/trust_region_preprocessor.h
+++ b/internal/ceres/trust_region_preprocessor.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,21 +31,21 @@
#ifndef CERES_INTERNAL_TRUST_REGION_PREPROCESSOR_H_
#define CERES_INTERNAL_TRUST_REGION_PREPROCESSOR_H_
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/preprocessor.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-class CERES_EXPORT_INTERNAL TrustRegionPreprocessor : public Preprocessor {
+class CERES_NO_EXPORT TrustRegionPreprocessor final : public Preprocessor {
public:
- virtual ~TrustRegionPreprocessor();
bool Preprocess(const Solver::Options& options,
ProblemImpl* problem,
PreprocessedProblem* preprocessed_problem) override;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_TRUST_REGION_PREPROCESSOR_H_
diff --git a/internal/ceres/trust_region_preprocessor_test.cc b/internal/ceres/trust_region_preprocessor_test.cc
index a2a9523..2579361 100644
--- a/internal/ceres/trust_region_preprocessor_test.cc
+++ b/internal/ceres/trust_region_preprocessor_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,6 +33,7 @@
#include <array>
#include <map>
+#include "ceres/internal/config.h"
#include "ceres/ordered_groups.h"
#include "ceres/problem_impl.h"
#include "ceres/sized_cost_function.h"
@@ -72,7 +73,7 @@
EXPECT_FALSE(preprocessor.Preprocess(options, &problem, &pp));
}
-TEST(TrustRegionPreprocessor, ParamterBlockIsInfeasible) {
+TEST(TrustRegionPreprocessor, ParameterBlockIsInfeasible) {
ProblemImpl problem;
double x = 3.0;
problem.AddParameterBlock(&x, 1);
@@ -89,7 +90,7 @@
public:
bool Evaluate(double const* const* parameters,
double* residuals,
- double** jacobians) const {
+ double** jacobians) const override {
return false;
}
};
@@ -120,7 +121,7 @@
public:
bool Evaluate(double const* const* parameters,
double* residuals,
- double** jacobians) const {
+ double** jacobians) const override {
for (int i = 0; i < kNumResiduals; ++i) {
residuals[i] = kNumResiduals * kNumResiduals + i;
}
@@ -225,7 +226,7 @@
TEST_F(LinearSolverAndEvaluatorCreationTest, SchurTypeSolverWithBadOrdering) {
Solver::Options options;
options.linear_solver_type = DENSE_SCHUR;
- options.linear_solver_ordering.reset(new ParameterBlockOrdering);
+ options.linear_solver_ordering = std::make_shared<ParameterBlockOrdering>();
options.linear_solver_ordering->AddElementToGroup(&x_, 0);
options.linear_solver_ordering->AddElementToGroup(&y_, 0);
options.linear_solver_ordering->AddElementToGroup(&z_, 1);
@@ -238,7 +239,7 @@
TEST_F(LinearSolverAndEvaluatorCreationTest, SchurTypeSolverWithGoodOrdering) {
Solver::Options options;
options.linear_solver_type = DENSE_SCHUR;
- options.linear_solver_ordering.reset(new ParameterBlockOrdering);
+ options.linear_solver_ordering = std::make_shared<ParameterBlockOrdering>();
options.linear_solver_ordering->AddElementToGroup(&x_, 0);
options.linear_solver_ordering->AddElementToGroup(&z_, 0);
options.linear_solver_ordering->AddElementToGroup(&y_, 1);
@@ -260,7 +261,7 @@
Solver::Options options;
options.linear_solver_type = DENSE_SCHUR;
- options.linear_solver_ordering.reset(new ParameterBlockOrdering);
+ options.linear_solver_ordering = std::make_shared<ParameterBlockOrdering>();
options.linear_solver_ordering->AddElementToGroup(&x_, 0);
options.linear_solver_ordering->AddElementToGroup(&z_, 0);
options.linear_solver_ordering->AddElementToGroup(&y_, 1);
@@ -281,7 +282,7 @@
Solver::Options options;
options.linear_solver_type = DENSE_SCHUR;
- options.linear_solver_ordering.reset(new ParameterBlockOrdering);
+ options.linear_solver_ordering = std::make_shared<ParameterBlockOrdering>();
options.linear_solver_ordering->AddElementToGroup(&x_, 0);
options.linear_solver_ordering->AddElementToGroup(&z_, 0);
options.linear_solver_ordering->AddElementToGroup(&y_, 1);
@@ -328,7 +329,7 @@
TEST_F(LinearSolverAndEvaluatorCreationTest, InvalidInnerIterationsOrdering) {
Solver::Options options;
options.use_inner_iterations = true;
- options.inner_iteration_ordering.reset(new ParameterBlockOrdering);
+ options.inner_iteration_ordering = std::make_shared<ParameterBlockOrdering>();
options.inner_iteration_ordering->AddElementToGroup(&x_, 0);
options.inner_iteration_ordering->AddElementToGroup(&z_, 0);
options.inner_iteration_ordering->AddElementToGroup(&y_, 0);
@@ -341,7 +342,7 @@
TEST_F(LinearSolverAndEvaluatorCreationTest, ValidInnerIterationsOrdering) {
Solver::Options options;
options.use_inner_iterations = true;
- options.inner_iteration_ordering.reset(new ParameterBlockOrdering);
+ options.inner_iteration_ordering = std::make_shared<ParameterBlockOrdering>();
options.inner_iteration_ordering->AddElementToGroup(&x_, 0);
options.inner_iteration_ordering->AddElementToGroup(&z_, 0);
options.inner_iteration_ordering->AddElementToGroup(&y_, 1);
diff --git a/internal/ceres/trust_region_step_evaluator.cc b/internal/ceres/trust_region_step_evaluator.cc
index 19045ae..a2333a0 100644
--- a/internal/ceres/trust_region_step_evaluator.cc
+++ b/internal/ceres/trust_region_step_evaluator.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,8 +35,7 @@
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
TrustRegionStepEvaluator::TrustRegionStepEvaluator(
const double initial_cost, const int max_consecutive_nonmonotonic_steps)
@@ -111,5 +110,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/trust_region_step_evaluator.h b/internal/ceres/trust_region_step_evaluator.h
index 03c0036..6df0427 100644
--- a/internal/ceres/trust_region_step_evaluator.h
+++ b/internal/ceres/trust_region_step_evaluator.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2016 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,8 +31,9 @@
#ifndef CERES_INTERNAL_TRUST_REGION_STEP_EVALUATOR_H_
#define CERES_INTERNAL_TRUST_REGION_STEP_EVALUATOR_H_
-namespace ceres {
-namespace internal {
+#include "ceres/internal/export.h"
+
+namespace ceres::internal {
// The job of the TrustRegionStepEvaluator is to evaluate the quality
// of a step, i.e., how the cost of a step compares with the reduction
@@ -74,7 +75,7 @@
// x = x + delta;
// step_evaluator->StepAccepted(cost, model_cost_change);
// }
-class TrustRegionStepEvaluator {
+class CERES_NO_EXPORT TrustRegionStepEvaluator {
public:
// initial_cost is as the name implies the cost of the starting
// state of the trust region minimizer.
@@ -116,7 +117,6 @@
int num_consecutive_nonmonotonic_steps_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_TRUST_REGION_STEP_EVALUATOR_H_
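The usage comment above compresses the driver loop; a minimal sketch of the accepted-step path, where the loop, cost, and model_cost_change come from the surrounding minimizer (the function name and signature are illustrative):

#include "Eigen/Core"

#include "ceres/trust_region_step_evaluator.h"

void OnStepAccepted(ceres::internal::TrustRegionStepEvaluator& step_evaluator,
                    Eigen::VectorXd& x,
                    const Eigen::VectorXd& delta,
                    double cost,
                    double model_cost_change) {
  x += delta;  // accept the candidate point
  step_evaluator.StepAccepted(cost, model_cost_change);  // update reference cost
}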
diff --git a/internal/ceres/trust_region_strategy.cc b/internal/ceres/trust_region_strategy.cc
index 7e429d5..da5a337 100644
--- a/internal/ceres/trust_region_strategy.cc
+++ b/internal/ceres/trust_region_strategy.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -32,20 +32,22 @@
#include "ceres/trust_region_strategy.h"
+#include <memory>
+
#include "ceres/dogleg_strategy.h"
#include "ceres/levenberg_marquardt_strategy.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-TrustRegionStrategy::~TrustRegionStrategy() {}
+TrustRegionStrategy::~TrustRegionStrategy() = default;
-TrustRegionStrategy* TrustRegionStrategy::Create(const Options& options) {
+std::unique_ptr<TrustRegionStrategy> TrustRegionStrategy::Create(
+ const Options& options) {
switch (options.trust_region_strategy_type) {
case LEVENBERG_MARQUARDT:
- return new LevenbergMarquardtStrategy(options);
+ return std::make_unique<LevenbergMarquardtStrategy>(options);
case DOGLEG:
- return new DoglegStrategy(options);
+ return std::make_unique<DoglegStrategy>(options);
default:
LOG(FATAL) << "Unknown trust region strategy: "
<< options.trust_region_strategy_type;
@@ -53,8 +55,7 @@
LOG(FATAL) << "Unknown trust region strategy: "
<< options.trust_region_strategy_type;
- return NULL;
+ return nullptr;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/trust_region_strategy.h b/internal/ceres/trust_region_strategy.h
index 176f73a..0e0a301 100644
--- a/internal/ceres/trust_region_strategy.h
+++ b/internal/ceres/trust_region_strategy.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -31,13 +31,14 @@
#ifndef CERES_INTERNAL_TRUST_REGION_STRATEGY_H_
#define CERES_INTERNAL_TRUST_REGION_STRATEGY_H_
+#include <memory>
#include <string>
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/linear_solver.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class LinearSolver;
class SparseMatrix;
@@ -54,7 +55,7 @@
// the LevenbergMarquardtStrategy uses the inverse of the trust region
// radius to scale the damping term, which controls the step size, but
// does not set a hard limit on its size.
-class CERES_EXPORT_INTERNAL TrustRegionStrategy {
+class CERES_NO_EXPORT TrustRegionStrategy {
public:
struct Options {
TrustRegionStrategyType trust_region_strategy_type = LEVENBERG_MARQUARDT;
@@ -72,10 +73,13 @@
// Further specify which dogleg method to use
DoglegType dogleg_type = TRADITIONAL_DOGLEG;
+
+ ContextImpl* context = nullptr;
+ int num_threads = 1;
};
// Factory.
- static TrustRegionStrategy* Create(const Options& options);
+ static std::unique_ptr<TrustRegionStrategy> Create(const Options& options);
virtual ~TrustRegionStrategy();
@@ -110,7 +114,8 @@
int num_iterations = -1;
// Status of the linear solver used to solve the Newton system.
- LinearSolverTerminationType termination_type = LINEAR_SOLVER_FAILURE;
+ LinearSolverTerminationType termination_type =
+ LinearSolverTerminationType::FAILURE;
};
// Use the current radius to solve for the trust region step.
@@ -139,7 +144,8 @@
virtual double Radius() const = 0;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_TRUST_REGION_STRATEGY_H_
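With the factory now returning std::unique_ptr and Options carrying a context and thread count, construction looks roughly as follows; the strategy type, thread count, and helper function are illustrative:

#include <memory>

#include "ceres/context_impl.h"
#include "ceres/trust_region_strategy.h"

std::unique_ptr<ceres::internal::TrustRegionStrategy> MakeStrategy(
    ceres::internal::ContextImpl* context) {
  ceres::internal::TrustRegionStrategy::Options options;
  options.trust_region_strategy_type = ceres::LEVENBERG_MARQUARDT;
  options.context = context;  // evaluation context shared with the solver
  options.num_threads = 4;
  return ceres::internal::TrustRegionStrategy::Create(options);
}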
diff --git a/internal/ceres/types.cc b/internal/ceres/types.cc
index 39bb2d8..e000560 100644
--- a/internal/ceres/types.cc
+++ b/internal/ceres/types.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,18 +34,17 @@
#include <cctype>
#include <string>
+#include "ceres/internal/config.h"
#include "glog/logging.h"
namespace ceres {
-using std::string;
-
// clang-format off
#define CASESTR(x) case x: return #x
#define STRENUM(x) if (value == #x) { *type = x; return true; }
// clang-format on
-static void UpperCase(string* input) {
+static void UpperCase(std::string* input) {
std::transform(input->begin(), input->end(), input->begin(), ::toupper);
}
@@ -63,7 +62,7 @@
}
}
-bool StringToLinearSolverType(string value, LinearSolverType* type) {
+bool StringToLinearSolverType(std::string value, LinearSolverType* type) {
UpperCase(&value);
STRENUM(DENSE_NORMAL_CHOLESKY);
STRENUM(DENSE_QR);
@@ -80,6 +79,7 @@
CASESTR(IDENTITY);
CASESTR(JACOBI);
CASESTR(SCHUR_JACOBI);
+ CASESTR(SCHUR_POWER_SERIES_EXPANSION);
CASESTR(CLUSTER_JACOBI);
CASESTR(CLUSTER_TRIDIAGONAL);
CASESTR(SUBSET);
@@ -88,11 +88,12 @@
}
}
-bool StringToPreconditionerType(string value, PreconditionerType* type) {
+bool StringToPreconditionerType(std::string value, PreconditionerType* type) {
UpperCase(&value);
STRENUM(IDENTITY);
STRENUM(JACOBI);
STRENUM(SCHUR_JACOBI);
+ STRENUM(SCHUR_POWER_SERIES_EXPANSION);
STRENUM(CLUSTER_JACOBI);
STRENUM(CLUSTER_TRIDIAGONAL);
STRENUM(SUBSET);
@@ -103,9 +104,9 @@
SparseLinearAlgebraLibraryType type) {
switch (type) {
CASESTR(SUITE_SPARSE);
- CASESTR(CX_SPARSE);
CASESTR(EIGEN_SPARSE);
CASESTR(ACCELERATE_SPARSE);
+ CASESTR(CUDA_SPARSE);
CASESTR(NO_SPARSE);
default:
return "UNKNOWN";
@@ -113,31 +114,50 @@
}
bool StringToSparseLinearAlgebraLibraryType(
- string value, SparseLinearAlgebraLibraryType* type) {
+ std::string value, SparseLinearAlgebraLibraryType* type) {
UpperCase(&value);
STRENUM(SUITE_SPARSE);
- STRENUM(CX_SPARSE);
STRENUM(EIGEN_SPARSE);
STRENUM(ACCELERATE_SPARSE);
+ STRENUM(CUDA_SPARSE);
STRENUM(NO_SPARSE);
return false;
}
+const char* LinearSolverOrderingTypeToString(LinearSolverOrderingType type) {
+ switch (type) {
+ CASESTR(AMD);
+ CASESTR(NESDIS);
+ default:
+ return "UNKNOWN";
+ }
+}
+
+bool StringToLinearSolverOrderingType(std::string value,
+ LinearSolverOrderingType* type) {
+ UpperCase(&value);
+ STRENUM(AMD);
+ STRENUM(NESDIS);
+ return false;
+}
+
const char* DenseLinearAlgebraLibraryTypeToString(
DenseLinearAlgebraLibraryType type) {
switch (type) {
CASESTR(EIGEN);
CASESTR(LAPACK);
+ CASESTR(CUDA);
default:
return "UNKNOWN";
}
}
bool StringToDenseLinearAlgebraLibraryType(
- string value, DenseLinearAlgebraLibraryType* type) {
+ std::string value, DenseLinearAlgebraLibraryType* type) {
UpperCase(&value);
STRENUM(EIGEN);
STRENUM(LAPACK);
+ STRENUM(CUDA);
return false;
}
@@ -150,7 +170,7 @@
}
}
-bool StringToTrustRegionStrategyType(string value,
+bool StringToTrustRegionStrategyType(std::string value,
TrustRegionStrategyType* type) {
UpperCase(&value);
STRENUM(LEVENBERG_MARQUARDT);
@@ -167,7 +187,7 @@
}
}
-bool StringToDoglegType(string value, DoglegType* type) {
+bool StringToDoglegType(std::string value, DoglegType* type) {
UpperCase(&value);
STRENUM(TRADITIONAL_DOGLEG);
STRENUM(SUBSPACE_DOGLEG);
@@ -183,7 +203,7 @@
}
}
-bool StringToMinimizerType(string value, MinimizerType* type) {
+bool StringToMinimizerType(std::string value, MinimizerType* type) {
UpperCase(&value);
STRENUM(TRUST_REGION);
STRENUM(LINE_SEARCH);
@@ -201,7 +221,7 @@
}
}
-bool StringToLineSearchDirectionType(string value,
+bool StringToLineSearchDirectionType(std::string value,
LineSearchDirectionType* type) {
UpperCase(&value);
STRENUM(STEEPEST_DESCENT);
@@ -220,7 +240,7 @@
}
}
-bool StringToLineSearchType(string value, LineSearchType* type) {
+bool StringToLineSearchType(std::string value, LineSearchType* type) {
UpperCase(&value);
STRENUM(ARMIJO);
STRENUM(WOLFE);
@@ -238,7 +258,7 @@
}
}
-bool StringToLineSearchInterpolationType(string value,
+bool StringToLineSearchInterpolationType(std::string value,
LineSearchInterpolationType* type) {
UpperCase(&value);
STRENUM(BISECTION);
@@ -259,7 +279,7 @@
}
bool StringToNonlinearConjugateGradientType(
- string value, NonlinearConjugateGradientType* type) {
+ std::string value, NonlinearConjugateGradientType* type) {
UpperCase(&value);
STRENUM(FLETCHER_REEVES);
STRENUM(POLAK_RIBIERE);
@@ -276,7 +296,7 @@
}
}
-bool StringToCovarianceAlgorithmType(string value,
+bool StringToCovarianceAlgorithmType(std::string value,
CovarianceAlgorithmType* type) {
UpperCase(&value);
STRENUM(DENSE_SVD);
@@ -294,7 +314,8 @@
}
}
-bool StringToNumericDiffMethodType(string value, NumericDiffMethodType* type) {
+bool StringToNumericDiffMethodType(std::string value,
+ NumericDiffMethodType* type) {
UpperCase(&value);
STRENUM(CENTRAL);
STRENUM(FORWARD);
@@ -311,7 +332,7 @@
}
}
-bool StringToVisibilityClusteringType(string value,
+bool StringToVisibilityClusteringType(std::string value,
VisibilityClusteringType* type) {
UpperCase(&value);
STRENUM(CANONICAL_VIEWS);
@@ -384,14 +405,6 @@
#endif
}
- if (type == CX_SPARSE) {
-#ifdef CERES_NO_CXSPARSE
- return false;
-#else
- return true;
-#endif
- }
-
if (type == ACCELERATE_SPARSE) {
#ifdef CERES_NO_ACCELERATE_SPARSE
return false;
@@ -408,6 +421,18 @@
#endif
}
+ if (type == CUDA_SPARSE) {
+#ifdef CERES_NO_CUDA
+ return false;
+#else
+ return true;
+#endif
+ }
+
+ if (type == NO_SPARSE) {
+ return true;
+ }
+
LOG(WARNING) << "Unknown sparse linear algebra library " << type;
return false;
}
@@ -417,6 +442,7 @@
if (type == EIGEN) {
return true;
}
+
if (type == LAPACK) {
#ifdef CERES_NO_LAPACK
return false;
@@ -425,6 +451,14 @@
#endif
}
+ if (type == CUDA) {
+#ifdef CERES_NO_CUDA
+ return false;
+#else
+ return true;
+#endif
+ }
+
LOG(WARNING) << "Unknown dense linear algebra library " << type;
return false;
}
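
For context on the CASESTR/STRENUM pattern this file relies on, here is a minimal sketch of how the two macros provide both directions of enum/string conversion; the Color enum is a toy stand-in, not a Ceres type:

    #include <string>

    enum Color { RED, GREEN };

    #define CASESTR(x) case x: return #x
    #define STRENUM(x) if (value == #x) { *type = x; return true; }

    // Enum -> string: each CASESTR expands to `case RED: return "RED";`.
    const char* ColorToString(Color type) {
      switch (type) {
        CASESTR(RED);
        CASESTR(GREEN);
        default:
          return "UNKNOWN";
      }
    }

    // String -> enum: each STRENUM compares against the stringized name.
    bool StringToColor(std::string value, Color* type) {
      STRENUM(RED);
      STRENUM(GREEN);
      return false;
    }
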
diff --git a/internal/ceres/visibility.cc b/internal/ceres/visibility.cc
index 82bf6f1..6c10fb2 100644
--- a/internal/ceres/visibility.cc
+++ b/internal/ceres/visibility.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -33,6 +33,7 @@
#include <algorithm>
#include <cmath>
#include <ctime>
+#include <memory>
#include <set>
#include <unordered_map>
#include <utility>
@@ -43,18 +44,11 @@
#include "ceres/pair_hash.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::make_pair;
-using std::max;
-using std::pair;
-using std::set;
-using std::vector;
+namespace ceres::internal {
void ComputeVisibility(const CompressedRowBlockStructure& block_structure,
const int num_eliminate_blocks,
- vector<set<int>>* visibility) {
+ std::vector<std::set<int>>* visibility) {
CHECK(visibility != nullptr);
// Clear the visibility vector and resize it to hold a
@@ -62,8 +56,8 @@
visibility->resize(0);
visibility->resize(block_structure.cols.size() - num_eliminate_blocks);
- for (int i = 0; i < block_structure.rows.size(); ++i) {
- const vector<Cell>& cells = block_structure.rows[i].cells;
+ for (const auto& row : block_structure.rows) {
+ const std::vector<Cell>& cells = row.cells;
int block_id = cells[0].block_id;
// If the first block is not an e_block, then skip this row block.
if (block_id >= num_eliminate_blocks) {
@@ -79,16 +73,16 @@
}
}
-WeightedGraph<int>* CreateSchurComplementGraph(
- const vector<set<int>>& visibility) {
- const time_t start_time = time(NULL);
+std::unique_ptr<WeightedGraph<int>> CreateSchurComplementGraph(
+ const std::vector<std::set<int>>& visibility) {
+ const time_t start_time = time(nullptr);
// Compute the number of e_blocks/point blocks. Since the visibility
// set for each e_block/camera contains the set of e_blocks/points
// visible to it, we find the maximum across all visibility sets.
int num_points = 0;
- for (int i = 0; i < visibility.size(); i++) {
- if (visibility[i].size() > 0) {
- num_points = max(num_points, (*visibility[i].rbegin()) + 1);
+ for (const auto& visible : visibility) {
+ if (!visible.empty()) {
+ num_points = std::max(num_points, (*visible.rbegin()) + 1);
}
}
@@ -97,31 +91,31 @@
// cameras. However, to compute the sparsity structure of the Schur
// Complement efficiently, it's better to have the point->camera
// mapping.
- vector<set<int>> inverse_visibility(num_points);
+ std::vector<std::set<int>> inverse_visibility(num_points);
for (int i = 0; i < visibility.size(); i++) {
- const set<int>& visibility_set = visibility[i];
- for (const int v : visibility_set) {
+ const std::set<int>& visibility_set = visibility[i];
+ for (int v : visibility_set) {
inverse_visibility[v].insert(i);
}
}
// Map from camera pairs to number of points visible to both cameras
// in the pair.
- std::unordered_map<pair<int, int>, int, pair_hash> camera_pairs;
+ std::unordered_map<std::pair<int, int>, int, pair_hash> camera_pairs;
// Count the number of points visible to each camera/f_block pair.
for (const auto& inverse_visibility_set : inverse_visibility) {
- for (set<int>::const_iterator camera1 = inverse_visibility_set.begin();
+ for (auto camera1 = inverse_visibility_set.begin();
camera1 != inverse_visibility_set.end();
++camera1) {
- set<int>::const_iterator camera2 = camera1;
+ auto camera2 = camera1;
for (++camera2; camera2 != inverse_visibility_set.end(); ++camera2) {
- ++(camera_pairs[make_pair(*camera1, *camera2)]);
+ ++(camera_pairs[std::make_pair(*camera1, *camera2)]);
}
}
}
- WeightedGraph<int>* graph = new WeightedGraph<int>;
+ auto graph = std::make_unique<WeightedGraph<int>>();
// Add vertices and initialize the pairs for self edges so that self
// edges are guaranteed. This is needed for the Canonical views
@@ -146,9 +140,8 @@
graph->AddEdge(camera1, camera2, weight);
}
- VLOG(2) << "Schur complement graph time: " << (time(NULL) - start_time);
+ VLOG(2) << "Schur complement graph time: " << (time(nullptr) - start_time);
return graph;
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/visibility.h b/internal/ceres/visibility.h
index 68c6723..2e5f4fc 100644
--- a/internal/ceres/visibility.h
+++ b/internal/ceres/visibility.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,14 +35,15 @@
#ifndef CERES_INTERNAL_VISIBILITY_H_
#define CERES_INTERNAL_VISIBILITY_H_
+#include <memory>
#include <set>
#include <vector>
#include "ceres/graph.h"
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
struct CompressedRowBlockStructure;
@@ -54,7 +55,7 @@
//
// In a structure from motion problem, e_blocks correspond to 3D
// points and f_blocks correspond to cameras.
-CERES_EXPORT_INTERNAL void ComputeVisibility(
+CERES_NO_EXPORT void ComputeVisibility(
const CompressedRowBlockStructure& block_structure,
int num_eliminate_blocks,
std::vector<std::set<int>>* visibility);
@@ -72,10 +73,11 @@
//
// Caller acquires ownership of the returned WeightedGraph pointer
// (heap-allocated).
-CERES_EXPORT_INTERNAL WeightedGraph<int>* CreateSchurComplementGraph(
+CERES_NO_EXPORT std::unique_ptr<WeightedGraph<int>> CreateSchurComplementGraph(
const std::vector<std::set<int>>& visibility);
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_VISIBILITY_H_
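
A short usage sketch of the two functions declared in this header, mirroring the fixture shape used in visibility_test.cc later in this patch; the particular block layout below is a made-up toy example, not code from Ceres:

    #include <memory>
    #include <set>
    #include <vector>

    #include "ceres/block_structure.h"
    #include "ceres/visibility.h"

    void VisibilityExample() {
      using namespace ceres::internal;

      // Two e_blocks (points) followed by two f_blocks (cameras).
      CompressedRowBlockStructure bs;
      bs.cols.resize(4);
      const int num_eliminate_blocks = 2;

      // One residual block in which camera block 2 observes point block 0.
      bs.rows.emplace_back();
      CompressedRow& row = bs.rows.back();
      row.block.size = 2;
      row.block.position = 0;
      row.cells.emplace_back(0, 0);  // e_block 0
      row.cells.emplace_back(2, 0);  // f_block 2

      // visibility[i] holds the e_blocks seen by f_block (i + num_eliminate_blocks).
      std::vector<std::set<int>> visibility;
      ComputeVisibility(bs, num_eliminate_blocks, &visibility);

      // Cameras become vertices; edge weights count points visible to both
      // cameras of a pair, and self edges are always present.
      std::unique_ptr<WeightedGraph<int>> graph =
          CreateSchurComplementGraph(visibility);
    }
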
diff --git a/internal/ceres/visibility_based_preconditioner.cc b/internal/ceres/visibility_based_preconditioner.cc
index 0cf4afa..42e8a6e 100644
--- a/internal/ceres/visibility_based_preconditioner.cc
+++ b/internal/ceres/visibility_based_preconditioner.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -35,6 +35,8 @@
#include <iterator>
#include <memory>
#include <set>
+#include <string>
+#include <unordered_set>
#include <utility>
#include <vector>
@@ -50,14 +52,7 @@
#include "ceres/visibility.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
-
-using std::make_pair;
-using std::pair;
-using std::set;
-using std::swap;
-using std::vector;
+namespace ceres::internal {
// TODO(sameeragarwal): Currently these are magic weights for the
// preconditioner construction. Move these higher up into the Options
@@ -70,9 +65,8 @@
static constexpr double kSingleLinkageMinSimilarity = 0.9;
VisibilityBasedPreconditioner::VisibilityBasedPreconditioner(
- const CompressedRowBlockStructure& bs,
- const Preconditioner::Options& options)
- : options_(options), num_blocks_(0), num_clusters_(0) {
+ const CompressedRowBlockStructure& bs, Preconditioner::Options options)
+ : options_(std::move(options)), num_blocks_(0), num_clusters_(0) {
CHECK_GT(options_.elimination_groups.size(), 1);
CHECK_GT(options_.elimination_groups[0], 0);
CHECK(options_.type == CLUSTER_JACOBI || options_.type == CLUSTER_TRIDIAGONAL)
@@ -80,15 +74,12 @@
num_blocks_ = bs.cols.size() - options_.elimination_groups[0];
CHECK_GT(num_blocks_, 0) << "Jacobian should have at least 1 f_block for "
<< "visibility based preconditioning.";
- CHECK(options_.context != NULL);
+ CHECK(options_.context != nullptr);
// Vector of camera block sizes
- block_size_.resize(num_blocks_);
- for (int i = 0; i < num_blocks_; ++i) {
- block_size_[i] = bs.cols[i + options_.elimination_groups[0]].size;
- }
+ blocks_ = Tail(bs.cols, bs.cols.size() - options_.elimination_groups[0]);
- const time_t start_time = time(NULL);
+ const time_t start_time = time(nullptr);
switch (options_.type) {
case CLUSTER_JACOBI:
ComputeClusterJacobiSparsity(bs);
@@ -99,33 +90,26 @@
default:
LOG(FATAL) << "Unknown preconditioner type";
}
- const time_t structure_time = time(NULL);
+ const time_t structure_time = time(nullptr);
InitStorage(bs);
- const time_t storage_time = time(NULL);
+ const time_t storage_time = time(nullptr);
InitEliminator(bs);
- const time_t eliminator_time = time(NULL);
+ const time_t eliminator_time = time(nullptr);
LinearSolver::Options sparse_cholesky_options;
sparse_cholesky_options.sparse_linear_algebra_library_type =
options_.sparse_linear_algebra_library_type;
-
- // The preconditioner's sparsity is not available in the
- // preprocessor, so the columns of the Jacobian have not been
- // reordered to minimize fill in when computing its sparse Cholesky
- // factorization. So we must tell the SparseCholesky object to
- // perform approximate minimum-degree reordering, which is done by
- // setting use_postordering to true.
- sparse_cholesky_options.use_postordering = true;
+ sparse_cholesky_options.ordering_type = options_.ordering_type;
sparse_cholesky_ = SparseCholesky::Create(sparse_cholesky_options);
- const time_t init_time = time(NULL);
+ const time_t init_time = time(nullptr);
VLOG(2) << "init time: " << init_time - start_time
<< " structure time: " << structure_time - start_time
<< " storage time:" << storage_time - structure_time
<< " eliminator time: " << eliminator_time - storage_time;
}
-VisibilityBasedPreconditioner::~VisibilityBasedPreconditioner() {}
+VisibilityBasedPreconditioner::~VisibilityBasedPreconditioner() = default;
// Determine the sparsity structure of the CLUSTER_JACOBI
// preconditioner. It clusters cameras using their scene
@@ -133,13 +117,13 @@
// preconditioner matrix.
void VisibilityBasedPreconditioner::ComputeClusterJacobiSparsity(
const CompressedRowBlockStructure& bs) {
- vector<set<int>> visibility;
+ std::vector<std::set<int>> visibility;
ComputeVisibility(bs, options_.elimination_groups[0], &visibility);
CHECK_EQ(num_blocks_, visibility.size());
ClusterCameras(visibility);
cluster_pairs_.clear();
for (int i = 0; i < num_clusters_; ++i) {
- cluster_pairs_.insert(make_pair(i, i));
+ cluster_pairs_.insert(std::make_pair(i, i));
}
}
@@ -151,7 +135,7 @@
// of edges in this forest are the cluster pairs.
void VisibilityBasedPreconditioner::ComputeClusterTridiagonalSparsity(
const CompressedRowBlockStructure& bs) {
- vector<set<int>> visibility;
+ std::vector<std::set<int>> visibility;
ComputeVisibility(bs, options_.elimination_groups[0], &visibility);
CHECK_EQ(num_blocks_, visibility.size());
ClusterCameras(visibility);
@@ -160,13 +144,11 @@
// edges are the number of 3D points/e_blocks visible in both the
// clusters at the ends of the edge. Return an approximate degree-2
// maximum spanning forest of this graph.
- vector<set<int>> cluster_visibility;
+ std::vector<std::set<int>> cluster_visibility;
ComputeClusterVisibility(visibility, &cluster_visibility);
- std::unique_ptr<WeightedGraph<int>> cluster_graph(
- CreateClusterGraph(cluster_visibility));
+ auto cluster_graph = CreateClusterGraph(cluster_visibility);
CHECK(cluster_graph != nullptr);
- std::unique_ptr<WeightedGraph<int>> forest(
- Degree2MaximumSpanningForest(*cluster_graph));
+ auto forest = Degree2MaximumSpanningForest(*cluster_graph);
CHECK(forest != nullptr);
ForestToClusterPairs(*forest, &cluster_pairs_);
}
@@ -175,7 +157,8 @@
void VisibilityBasedPreconditioner::InitStorage(
const CompressedRowBlockStructure& bs) {
ComputeBlockPairsInPreconditioner(bs);
- m_.reset(new BlockRandomAccessSparseMatrix(block_size_, block_pairs_));
+ m_ = std::make_unique<BlockRandomAccessSparseMatrix>(
+ blocks_, block_pairs_, options_.context, options_.num_threads);
}
// Call the canonical views algorithm and cluster the cameras based on
@@ -185,15 +168,14 @@
// The cluster_membership_ vector is updated to indicate cluster
// memberships for each camera block.
void VisibilityBasedPreconditioner::ClusterCameras(
- const vector<set<int>>& visibility) {
- std::unique_ptr<WeightedGraph<int>> schur_complement_graph(
- CreateSchurComplementGraph(visibility));
+ const std::vector<std::set<int>>& visibility) {
+ auto schur_complement_graph = CreateSchurComplementGraph(visibility);
CHECK(schur_complement_graph != nullptr);
std::unordered_map<int, int> membership;
if (options_.visibility_clustering_type == CANONICAL_VIEWS) {
- vector<int> centers;
+ std::vector<int> centers;
CanonicalViewsClusteringOptions clustering_options;
clustering_options.size_penalty_weight = kCanonicalViewsSizePenaltyWeight;
clustering_options.similarity_penalty_weight =
@@ -239,7 +221,7 @@
const CompressedRowBlockStructure& bs) {
block_pairs_.clear();
for (int i = 0; i < num_blocks_; ++i) {
- block_pairs_.insert(make_pair(i, i));
+ block_pairs_.insert(std::make_pair(i, i));
}
int r = 0;
@@ -267,7 +249,7 @@
break;
}
- set<int> f_blocks;
+ std::set<int> f_blocks;
for (; r < num_row_blocks; ++r) {
const CompressedRow& row = bs.rows[r];
if (row.cells.front().block_id != e_block_id) {
@@ -285,14 +267,12 @@
}
}
- for (set<int>::const_iterator block1 = f_blocks.begin();
- block1 != f_blocks.end();
- ++block1) {
- set<int>::const_iterator block2 = block1;
+ for (auto block1 = f_blocks.begin(); block1 != f_blocks.end(); ++block1) {
+ auto block2 = block1;
++block2;
for (; block2 != f_blocks.end(); ++block2) {
if (IsBlockPairInPreconditioner(*block1, *block2)) {
- block_pairs_.insert(make_pair(*block1, *block2));
+ block_pairs_.emplace(*block1, *block2);
}
}
}
@@ -304,11 +284,11 @@
CHECK_GE(row.cells.front().block_id, num_eliminate_blocks);
for (int i = 0; i < row.cells.size(); ++i) {
const int block1 = row.cells[i].block_id - num_eliminate_blocks;
- for (int j = 0; j < row.cells.size(); ++j) {
- const int block2 = row.cells[j].block_id - num_eliminate_blocks;
+ for (const auto& cell : row.cells) {
+ const int block2 = cell.block_id - num_eliminate_blocks;
if (block1 <= block2) {
if (IsBlockPairInPreconditioner(block1, block2)) {
- block_pairs_.insert(make_pair(block1, block2));
+ block_pairs_.insert(std::make_pair(block1, block2));
}
}
}
@@ -328,7 +308,7 @@
eliminator_options.f_block_size = options_.f_block_size;
eliminator_options.row_block_size = options_.row_block_size;
eliminator_options.context = options_.context;
- eliminator_.reset(SchurEliminatorBase::Create(eliminator_options));
+ eliminator_ = SchurEliminatorBase::Create(eliminator_options);
const bool kFullRankETE = true;
eliminator_->Init(
eliminator_options.elimination_groups[0], kFullRankETE, &bs);
@@ -337,7 +317,7 @@
// Update the values of the preconditioner matrix and factorize it.
bool VisibilityBasedPreconditioner::UpdateImpl(const BlockSparseMatrix& A,
const double* D) {
- const time_t start_time = time(NULL);
+ const time_t start_time = time(nullptr);
const int num_rows = m_->num_rows();
CHECK_GT(num_rows, 0);
@@ -359,7 +339,7 @@
// scaling is not needed, which is quite often in our experience.
LinearSolverTerminationType status = Factorize();
- if (status == LINEAR_SOLVER_FATAL_ERROR) {
+ if (status == LinearSolverTerminationType::FATAL_ERROR) {
return false;
}
@@ -368,15 +348,16 @@
// belong to the edges of the degree-2 forest. In the CLUSTER_JACOBI
// case, the preconditioner is guaranteed to be positive
// semidefinite.
- if (status == LINEAR_SOLVER_FAILURE && options_.type == CLUSTER_TRIDIAGONAL) {
+ if (status == LinearSolverTerminationType::FAILURE &&
+ options_.type == CLUSTER_TRIDIAGONAL) {
VLOG(1) << "Unscaled factorization failed. Retrying with off-diagonal "
<< "scaling";
ScaleOffDiagonalCells();
status = Factorize();
}
- VLOG(2) << "Compute time: " << time(NULL) - start_time;
- return (status == LINEAR_SOLVER_SUCCESS);
+ VLOG(2) << "Compute time: " << time(nullptr) - start_time;
+ return (status == LinearSolverTerminationType::SUCCESS);
}
// Consider the preconditioner matrix as meta-block matrix, whose
@@ -395,7 +376,7 @@
int r, c, row_stride, col_stride;
CellInfo* cell_info =
m_->GetCell(block1, block2, &r, &c, &row_stride, &col_stride);
- CHECK(cell_info != NULL)
+ CHECK(cell_info != nullptr)
<< "Cell missing for block pair (" << block1 << "," << block2 << ")"
<< " cluster pair (" << cluster_membership_[block1] << " "
<< cluster_membership_[block2] << ")";
@@ -404,36 +385,44 @@
// dominance. See Lemma 1 in "Visibility Based Preconditioning
// For Bundle Adjustment".
MatrixRef m(cell_info->values, row_stride, col_stride);
- m.block(r, c, block_size_[block1], block_size_[block2]) *= 0.5;
+ m.block(r, c, blocks_[block1].size, blocks_[block2].size) *= 0.5;
}
}
// Compute the sparse Cholesky factorization of the preconditioner
// matrix.
LinearSolverTerminationType VisibilityBasedPreconditioner::Factorize() {
- // Extract the TripletSparseMatrix that is used for actually storing
+ // Extract the BlockSparseMatrix that is used for actually storing
// S and convert it into a CompressedRowSparseMatrix.
- const TripletSparseMatrix* tsm =
- down_cast<BlockRandomAccessSparseMatrix*>(m_.get())->mutable_matrix();
-
- std::unique_ptr<CompressedRowSparseMatrix> lhs;
+ const BlockSparseMatrix* bsm =
+ down_cast<BlockRandomAccessSparseMatrix*>(m_.get())->matrix();
const CompressedRowSparseMatrix::StorageType storage_type =
sparse_cholesky_->StorageType();
- if (storage_type == CompressedRowSparseMatrix::UPPER_TRIANGULAR) {
- lhs.reset(CompressedRowSparseMatrix::FromTripletSparseMatrix(*tsm));
- lhs->set_storage_type(CompressedRowSparseMatrix::UPPER_TRIANGULAR);
+ if (storage_type ==
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR) {
+ if (!m_crs_) {
+ m_crs_ = bsm->ToCompressedRowSparseMatrix();
+ m_crs_->set_storage_type(
+ CompressedRowSparseMatrix::StorageType::UPPER_TRIANGULAR);
+ } else {
+ bsm->UpdateCompressedRowSparseMatrix(m_crs_.get());
+ }
} else {
- lhs.reset(
- CompressedRowSparseMatrix::FromTripletSparseMatrixTransposed(*tsm));
- lhs->set_storage_type(CompressedRowSparseMatrix::LOWER_TRIANGULAR);
+ if (!m_crs_) {
+ m_crs_ = bsm->ToCompressedRowSparseMatrixTranspose();
+ m_crs_->set_storage_type(
+ CompressedRowSparseMatrix::StorageType::LOWER_TRIANGULAR);
+ } else {
+ bsm->UpdateCompressedRowSparseMatrixTranspose(m_crs_.get());
+ }
}
std::string message;
- return sparse_cholesky_->Factorize(lhs.get(), &message);
+ return sparse_cholesky_->Factorize(m_crs_.get(), &message);
}
-void VisibilityBasedPreconditioner::RightMultiply(const double* x,
- double* y) const {
+void VisibilityBasedPreconditioner::RightMultiplyAndAccumulate(
+ const double* x, double* y) const {
CHECK(x != nullptr);
CHECK(y != nullptr);
CHECK(sparse_cholesky_ != nullptr);
@@ -451,9 +440,9 @@
int cluster1 = cluster_membership_[block1];
int cluster2 = cluster_membership_[block2];
if (cluster1 > cluster2) {
- swap(cluster1, cluster2);
+ std::swap(cluster1, cluster2);
}
- return (cluster_pairs_.count(make_pair(cluster1, cluster2)) > 0);
+ return (cluster_pairs_.count(std::make_pair(cluster1, cluster2)) > 0);
}
bool VisibilityBasedPreconditioner::IsBlockPairOffDiagonal(
@@ -465,7 +454,7 @@
// each vertex.
void VisibilityBasedPreconditioner::ForestToClusterPairs(
const WeightedGraph<int>& forest,
- std::unordered_set<pair<int, int>, pair_hash>* cluster_pairs) const {
+ std::unordered_set<std::pair<int, int>, pair_hash>* cluster_pairs) const {
CHECK(cluster_pairs != nullptr);
cluster_pairs->clear();
const std::unordered_set<int>& vertices = forest.vertices();
@@ -474,11 +463,11 @@
// Add all the cluster pairs corresponding to the edges in the
// forest.
for (const int cluster1 : vertices) {
- cluster_pairs->insert(make_pair(cluster1, cluster1));
+ cluster_pairs->insert(std::make_pair(cluster1, cluster1));
const std::unordered_set<int>& neighbors = forest.Neighbors(cluster1);
for (const int cluster2 : neighbors) {
if (cluster1 < cluster2) {
- cluster_pairs->insert(make_pair(cluster1, cluster2));
+ cluster_pairs->insert(std::make_pair(cluster1, cluster2));
}
}
}
@@ -488,8 +477,8 @@
// of all its cameras. In other words, the set of points visible to
// any camera in the cluster.
void VisibilityBasedPreconditioner::ComputeClusterVisibility(
- const vector<set<int>>& visibility,
- vector<set<int>>* cluster_visibility) const {
+ const std::vector<std::set<int>>& visibility,
+ std::vector<std::set<int>>* cluster_visibility) const {
CHECK(cluster_visibility != nullptr);
cluster_visibility->resize(0);
cluster_visibility->resize(num_clusters_);
@@ -503,24 +492,25 @@
// Construct a graph whose vertices are the clusters, and the edge
// weights are the number of 3D points visible to cameras in both the
// vertices.
-WeightedGraph<int>* VisibilityBasedPreconditioner::CreateClusterGraph(
- const vector<set<int>>& cluster_visibility) const {
- WeightedGraph<int>* cluster_graph = new WeightedGraph<int>;
+std::unique_ptr<WeightedGraph<int>>
+VisibilityBasedPreconditioner::CreateClusterGraph(
+ const std::vector<std::set<int>>& cluster_visibility) const {
+ auto cluster_graph = std::make_unique<WeightedGraph<int>>();
for (int i = 0; i < num_clusters_; ++i) {
cluster_graph->AddVertex(i);
}
for (int i = 0; i < num_clusters_; ++i) {
- const set<int>& cluster_i = cluster_visibility[i];
+ const std::set<int>& cluster_i = cluster_visibility[i];
for (int j = i + 1; j < num_clusters_; ++j) {
- vector<int> intersection;
- const set<int>& cluster_j = cluster_visibility[j];
- set_intersection(cluster_i.begin(),
- cluster_i.end(),
- cluster_j.begin(),
- cluster_j.end(),
- back_inserter(intersection));
+ std::vector<int> intersection;
+ const std::set<int>& cluster_j = cluster_visibility[j];
+ std::set_intersection(cluster_i.begin(),
+ cluster_i.end(),
+ cluster_j.begin(),
+ cluster_j.end(),
+ std::back_inserter(intersection));
if (intersection.size() > 0) {
// Clusters interact strongly when they share a large number
@@ -545,7 +535,7 @@
// of integers so that the cluster ids are in [0, num_clusters_).
void VisibilityBasedPreconditioner::FlattenMembershipMap(
const std::unordered_map<int, int>& membership_map,
- vector<int>* membership_vector) const {
+ std::vector<int>* membership_vector) const {
CHECK(membership_vector != nullptr);
membership_vector->resize(0);
membership_vector->resize(num_blocks_, -1);
@@ -581,5 +571,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
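
The Factorize() hunk above replaces a per-call conversion to a CompressedRowSparseMatrix with a create-once/update-in-place cache (m_crs_). A stripped-down sketch of that caching idiom with hypothetical stand-in types (not the actual Ceres classes):

    #include <memory>

    // Hypothetical stand-ins for BlockSparseMatrix / CompressedRowSparseMatrix.
    struct CrsMatrix { /* values, row and column structure ... */ };

    struct BlockMatrix {
      // First call: allocate and fill both structure and values.
      std::unique_ptr<CrsMatrix> ToCrs() const { return std::make_unique<CrsMatrix>(); }
      // Later calls: the structure is unchanged, so only overwrite the values.
      void UpdateCrsValues(CrsMatrix* crs) const { /* copy values only */ }
    };

    class CachedFactorization {
     public:
      void Factorize(const BlockMatrix& m) {
        if (!crs_) {
          crs_ = m.ToCrs();               // pay the structural conversion once
        } else {
          m.UpdateCrsValues(crs_.get());  // cheap numeric refresh thereafter
        }
        // ... run the sparse Cholesky factorization on crs_ ...
      }

     private:
      std::unique_ptr<CrsMatrix> crs_;  // cached compressed-row copy
    };
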
diff --git a/internal/ceres/visibility_based_preconditioner.h b/internal/ceres/visibility_based_preconditioner.h
index 0457b9a..d2d4aad 100644
--- a/internal/ceres/visibility_based_preconditioner.h
+++ b/internal/ceres/visibility_based_preconditioner.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2017 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -55,14 +55,14 @@
#include <utility>
#include <vector>
+#include "ceres/block_structure.h"
#include "ceres/graph.h"
#include "ceres/linear_solver.h"
#include "ceres/pair_hash.h"
#include "ceres/preconditioner.h"
#include "ceres/sparse_cholesky.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
class BlockRandomAccessSparseMatrix;
class BlockSparseMatrix;
@@ -122,9 +122,10 @@
// options.elimination_groups.push_back(num_cameras);
// VisibilityBasedPreconditioner preconditioner(
// *A.block_structure(), options);
-// preconditioner.Update(A, NULL);
-// preconditioner.RightMultiply(x, y);
-class VisibilityBasedPreconditioner : public BlockSparseMatrixPreconditioner {
+// preconditioner.Update(A, nullptr);
+// preconditioner.RightMultiplyAndAccumulate(x, y);
+class CERES_NO_EXPORT VisibilityBasedPreconditioner
+ : public BlockSparseMatrixPreconditioner {
public:
// Initialize the symbolic structure of the preconditioner. bs is
// the block structure of the linear system to be solved. It is used
@@ -133,14 +134,14 @@
// It has the same structural requirement as other Schur complement
// based solvers. Please see schur_eliminator.h for more details.
VisibilityBasedPreconditioner(const CompressedRowBlockStructure& bs,
- const Preconditioner::Options& options);
+ Preconditioner::Options options);
VisibilityBasedPreconditioner(const VisibilityBasedPreconditioner&) = delete;
void operator=(const VisibilityBasedPreconditioner&) = delete;
- virtual ~VisibilityBasedPreconditioner();
+ ~VisibilityBasedPreconditioner() override;
// Preconditioner interface
- void RightMultiply(const double* x, double* y) const final;
+ void RightMultiplyAndAccumulate(const double* x, double* y) const final;
int num_rows() const final;
friend class VisibilityBasedPreconditionerTest;
@@ -160,7 +161,7 @@
void ComputeClusterVisibility(
const std::vector<std::set<int>>& visibility,
std::vector<std::set<int>>* cluster_visibility) const;
- WeightedGraph<int>* CreateClusterGraph(
+ std::unique_ptr<WeightedGraph<int>> CreateClusterGraph(
const std::vector<std::set<int>>& visibility) const;
void ForestToClusterPairs(
const WeightedGraph<int>& forest,
@@ -176,7 +177,7 @@
int num_clusters_;
// Sizes of the blocks in the schur complement.
- std::vector<int> block_size_;
+ std::vector<Block> blocks_;
// Mapping from cameras to clusters.
std::vector<int> cluster_membership_;
@@ -193,10 +194,10 @@
// Preconditioner matrix.
std::unique_ptr<BlockRandomAccessSparseMatrix> m_;
+ std::unique_ptr<CompressedRowSparseMatrix> m_crs_;
std::unique_ptr<SparseCholesky> sparse_cholesky_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
#endif // CERES_INTERNAL_VISIBILITY_BASED_PRECONDITIONER_H_
diff --git a/internal/ceres/visibility_based_preconditioner_test.cc b/internal/ceres/visibility_based_preconditioner_test.cc
index 10aa619..4d52753 100644
--- a/internal/ceres/visibility_based_preconditioner_test.cc
+++ b/internal/ceres/visibility_based_preconditioner_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -47,8 +47,7 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
// TODO(sameeragarwal): Re-enable this test once serialization is
// working again.
@@ -67,8 +66,8 @@
// void SetUp() {
// string input_file = TestFileAbsolutePath("problem-6-1384-000.lsqp");
-// std::unique_ptr<LinearLeastSquaresProblem> problem(
-// CHECK_NOTNULL(CreateLinearLeastSquaresProblemFromFile(input_file)));
+// std::unique_ptr<LinearLeastSquaresProblem> problem =
+//     CreateLinearLeastSquaresProblemFromFile(input_file);
// A_.reset(down_cast<BlockSparseMatrix*>(problem->A.release()));
// b_.reset(problem->b.release());
// D_.reset(problem->D.release());
@@ -96,7 +95,8 @@
// // conditioned.
// VectorRef(D_.get(), num_cols_).setConstant(10.0);
-// schur_complement_.reset(new BlockRandomAccessDenseMatrix(blocks));
+// schur_complement_ =
+// std::make_unique<BlockRandomAccessDenseMatrix>(blocks);
// Vector rhs(schur_complement_->num_rows());
// std::unique_ptr<SchurEliminatorBase> eliminator;
@@ -104,7 +104,7 @@
// eliminator_options.elimination_groups = options_.elimination_groups;
// eliminator_options.num_threads = options_.num_threads;
-// eliminator.reset(SchurEliminatorBase::Create(eliminator_options));
+// eliminator = SchurEliminatorBase::Create(eliminator_options);
// eliminator->Init(num_eliminate_blocks_, bs);
// eliminator->Eliminate(A_.get(), b_.get(), D_.get(),
// schur_complement_.get(), rhs.data());
@@ -242,8 +242,9 @@
// TEST_F(VisibilityBasedPreconditionerTest, OneClusterClusterJacobi) {
// options_.type = CLUSTER_JACOBI;
-// preconditioner_.reset(
-// new VisibilityBasedPreconditioner(*A_->block_structure(), options_));
+// preconditioner_ =
+// std::make_unique<VisibilityBasedPreconditioner>(
+// *A_->block_structure(), options_);
// // Override the clustering to be a single clustering containing all
// // the cameras.
@@ -275,7 +276,7 @@
// y.setZero();
// z.setZero();
// x[i] = 1.0;
-// preconditioner_->RightMultiply(x.data(), y.data());
+// preconditioner_->RightMultiplyAndAccumulate(x.data(), y.data());
// z = full_schur_complement
// .selfadjointView<Eigen::Upper>()
// .llt().solve(x);
@@ -287,8 +288,9 @@
// TEST_F(VisibilityBasedPreconditionerTest, ClusterJacobi) {
// options_.type = CLUSTER_JACOBI;
-// preconditioner_.reset(
-// new VisibilityBasedPreconditioner(*A_->block_structure(), options_));
+// preconditioner_ =
+// std::make_unique<VisibilityBasedPreconditioner>(*A_->block_structure(),
+// options_);
// // Override the clustering to be equal number of cameras.
// vector<int>& cluster_membership = *get_mutable_cluster_membership();
@@ -312,8 +314,9 @@
// TEST_F(VisibilityBasedPreconditionerTest, ClusterTridiagonal) {
// options_.type = CLUSTER_TRIDIAGONAL;
-// preconditioner_.reset(
-// new VisibilityBasedPreconditioner(*A_->block_structure(), options_));
+// preconditioner_ =
+// std::make_unique<VisibilityBasedPreconditioner>(*A_->block_structure(),
+// options_);
// static const int kNumClusters = 3;
// // Override the clustering to be 3 clusters.
@@ -336,5 +339,4 @@
// EXPECT_TRUE(PreconditionerValuesMatch());
// }
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/visibility_test.cc b/internal/ceres/visibility_test.cc
index a199963..3efc77c 100644
--- a/internal/ceres/visibility_test.cc
+++ b/internal/ceres/visibility_test.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -40,11 +40,7 @@
#include "glog/logging.h"
#include "gtest/gtest.h"
-namespace ceres {
-namespace internal {
-
-using std::set;
-using std::vector;
+namespace ceres::internal {
class VisibilityTest : public ::testing::Test {};
@@ -60,50 +56,50 @@
// Row 1
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 0;
- row.cells.push_back(Cell(0, 0));
- row.cells.push_back(Cell(5, 0));
+ row.cells.emplace_back(0, 0);
+ row.cells.emplace_back(5, 0);
}
// Row 2
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 2;
- row.cells.push_back(Cell(0, 1));
- row.cells.push_back(Cell(3, 1));
+ row.cells.emplace_back(0, 1);
+ row.cells.emplace_back(3, 1);
}
// Row 3
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 4;
- row.cells.push_back(Cell(1, 2));
- row.cells.push_back(Cell(2, 2));
+ row.cells.emplace_back(1, 2);
+ row.cells.emplace_back(2, 2);
}
// Row 4
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 6;
- row.cells.push_back(Cell(1, 3));
- row.cells.push_back(Cell(4, 3));
+ row.cells.emplace_back(1, 3);
+ row.cells.emplace_back(4, 3);
}
bs.cols.resize(num_cols);
- vector<set<int>> visibility;
+ std::vector<std::set<int>> visibility;
ComputeVisibility(bs, num_eliminate_blocks, &visibility);
ASSERT_EQ(visibility.size(), num_cols - num_eliminate_blocks);
- for (int i = 0; i < visibility.size(); ++i) {
- ASSERT_EQ(visibility[i].size(), 1);
+ for (const auto& visible : visibility) {
+ ASSERT_EQ(visible.size(), 1);
}
std::unique_ptr<WeightedGraph<int>> graph(
@@ -139,46 +135,46 @@
// Row 1
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 0;
- row.cells.push_back(Cell(0, 0));
+ row.cells.emplace_back(0, 0);
}
// Row 2
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 2;
- row.cells.push_back(Cell(0, 1));
+ row.cells.emplace_back(0, 1);
}
// Row 3
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 4;
- row.cells.push_back(Cell(1, 2));
+ row.cells.emplace_back(1, 2);
}
// Row 4
{
- bs.rows.push_back(CompressedRow());
+ bs.rows.emplace_back();
CompressedRow& row = bs.rows.back();
row.block.size = 2;
row.block.position = 6;
- row.cells.push_back(Cell(1, 3));
+ row.cells.emplace_back(1, 3);
}
bs.cols.resize(num_cols);
- vector<set<int>> visibility;
+ std::vector<std::set<int>> visibility;
ComputeVisibility(bs, num_eliminate_blocks, &visibility);
ASSERT_EQ(visibility.size(), num_cols - num_eliminate_blocks);
- for (int i = 0; i < visibility.size(); ++i) {
- ASSERT_EQ(visibility[i].size(), 0);
+ for (const auto& visible : visibility) {
+ ASSERT_EQ(visible.size(), 0);
}
std::unique_ptr<WeightedGraph<int>> graph(
@@ -201,5 +197,4 @@
}
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/wall_time.cc b/internal/ceres/wall_time.cc
index 7163927..2f4cf28 100644
--- a/internal/ceres/wall_time.cc
+++ b/internal/ceres/wall_time.cc
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -30,11 +30,9 @@
#include "ceres/wall_time.h"
-#ifdef CERES_USE_OPENMP
-#include <omp.h>
-#else
#include <ctime>
-#endif
+
+#include "ceres/internal/config.h"
#ifdef _WIN32
#include <windows.h>
@@ -42,13 +40,9 @@
#include <sys/time.h>
#endif
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
double WallTimeInSeconds() {
-#ifdef CERES_USE_OPENMP
- return omp_get_wtime();
-#else
#ifdef _WIN32
LARGE_INTEGER count;
LARGE_INTEGER frequency;
@@ -58,10 +52,9 @@
static_cast<double>(frequency.QuadPart);
#else
timeval time_val;
- gettimeofday(&time_val, NULL);
+ gettimeofday(&time_val, nullptr);
return (time_val.tv_sec + time_val.tv_usec * 1e-6);
#endif
-#endif
}
EventLogger::EventLogger(const std::string& logger_name) {
@@ -72,7 +65,7 @@
start_time_ = WallTimeInSeconds();
last_event_time_ = start_time_;
events_ = StringPrintf(
- "\n%s\n Delta Cumulative\n",
+ "\n%s\n Delta Cumulative\n",
logger_name.c_str());
}
@@ -101,5 +94,4 @@
absolute_time_delta);
}
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
diff --git a/internal/ceres/wall_time.h b/internal/ceres/wall_time.h
index 9c92e9e..f99052b 100644
--- a/internal/ceres/wall_time.h
+++ b/internal/ceres/wall_time.h
@@ -1,5 +1,5 @@
// Ceres Solver - A fast non-linear least squares minimizer
-// Copyright 2015 Google Inc. All rights reserved.
+// Copyright 2023 Google Inc. All rights reserved.
// http://ceres-solver.org/
//
// Redistribution and use in source and binary forms, with or without
@@ -34,18 +34,16 @@
#include <map>
#include <string>
-#include "ceres/internal/port.h"
+#include "ceres/internal/disable_warnings.h"
+#include "ceres/internal/export.h"
#include "ceres/stringprintf.h"
#include "glog/logging.h"
-namespace ceres {
-namespace internal {
+namespace ceres::internal {
-// Returns time, in seconds, from some arbitrary starting point. If
-// OpenMP is available then the high precision openmp_get_wtime()
-// function is used. Otherwise on unixes, gettimeofday is used. The
-// granularity is in seconds on windows systems.
-CERES_EXPORT_INTERNAL double WallTimeInSeconds();
+// Returns time, in seconds, from some arbitrary starting point. On unixes,
+// gettimeofday is used. The granularity is microseconds.
+CERES_NO_EXPORT double WallTimeInSeconds();
// Log a series of events, recording for each event the time elapsed
// since the last event and since the creation of the object.
@@ -71,7 +69,7 @@
// Bar1: time1 time1
// Bar2: time2 time1 + time2;
// Total: time3 time1 + time2 + time3;
-class EventLogger {
+class CERES_NO_EXPORT EventLogger {
public:
explicit EventLogger(const std::string& logger_name);
~EventLogger();
@@ -83,7 +81,8 @@
std::string events_;
};
-} // namespace internal
-} // namespace ceres
+} // namespace ceres::internal
+
+#include "ceres/internal/reenable_warnings.h"
#endif // CERES_INTERNAL_WALL_TIME_H_
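
A brief usage sketch of the timing utilities in this header. It assumes EventLogger::AddEvent(const std::string&), which belongs to the class but is not visible in this hunk, and that the accumulated table is logged when the logger is destroyed:

    #include "ceres/wall_time.h"

    void TimedWork() {
      using ceres::internal::EventLogger;
      using ceres::internal::WallTimeInSeconds;

      const double start = WallTimeInSeconds();  // seconds from an arbitrary epoch

      EventLogger logger("TimedWork");
      // ... phase 1 ...
      logger.AddEvent("Phase1");  // records the delta since construction
      // ... phase 2 ...
      logger.AddEvent("Phase2");  // records the delta since "Phase1" plus the cumulative time
      // The accumulated table (ending in a Total row, per the class comment
      // above) is logged when the logger goes out of scope.

      VLOG(2) << "Total wall time: " << WallTimeInSeconds() - start;
    }
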
diff --git a/package.xml b/package.xml
index e7e3e02..9d43e43 100644
--- a/package.xml
+++ b/package.xml
@@ -1,6 +1,6 @@
<?xml version="1.0"?>
<!--
- Copyright 2017 Google Inc. All rights reserved.
+ Copyright 2023 Google Inc. All rights reserved.
http://ceres-solver.org/
Redistribution and use in source and binary forms, with or without
@@ -30,7 +30,7 @@
<package format="2">
<name>ceres-solver</name>
- <version>2.0.0</version>
+ <version>2.2.0</version>
<description>A large scale non-linear optimization library.</description>
<maintainer email="ceres-solver@googlegroups.com">
The Ceres Solver Authors
diff --git a/scripts/make_docs.py b/scripts/make_docs.py
index 7d5c4cd..9eab97e 100644
--- a/scripts/make_docs.py
+++ b/scripts/make_docs.py
@@ -1,8 +1,8 @@
-#!/usr/bin/python
+#!/usr/bin/python3
# encoding: utf-8
#
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -60,7 +60,7 @@
sphinx_exe = sys.argv[3]
# Run Sphinx to build the documentation.
-os.system('%s -b html -d %s %s %s' %(sphinx_exe, cache_dir, src_dir, html_dir))
+os.system('%s -n -a -d %s %s %s' %(sphinx_exe, cache_dir, src_dir, html_dir))
replacements = [
# The title for the homepage is not ideal, so change it.
diff --git a/scripts/make_release b/scripts/make_release
index ce5d5cd..8ec84bf 100755
--- a/scripts/make_release
+++ b/scripts/make_release
@@ -1,7 +1,7 @@
#!/bin/bash
#
# Ceres Solver - A fast non-linear least squares minimizer
-# Copyright 2015 Google Inc. All rights reserved.
+# Copyright 2023 Google Inc. All rights reserved.
# http://ceres-solver.org/
#
# Redistribution and use in source and binary forms, with or without
@@ -63,7 +63,7 @@
echo "$GIT_COMMIT" >> $VERSIONFILE
# Build the documentation.
-python $TMP/scripts/make_docs.py $TMP $DOCS_TMP
+python3 $TMP/scripts/make_docs.py $TMP $DOCS_TMP
cp -pr $DOCS_TMP/html $TMP/docs
# Build the tarball.
diff --git a/travis/install_travis_linux_deps.sh b/travis/install_travis_linux_deps.sh
deleted file mode 100755
index fd7cc78..0000000
--- a/travis/install_travis_linux_deps.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/bin/bash
-
-# Stop processing on any error.
-set -e
-
-# Install default versions of standard dependencies that are new enough in 18.04
-sudo apt-get install -y cmake
-sudo apt-get install -y libatlas-base-dev libsuitesparse-dev
-sudo apt-get install -y libgoogle-glog-dev libgflags-dev
-sudo apt-get install -y libeigen3-dev
diff --git a/travis/install_travis_osx_deps.sh b/travis/install_travis_osx_deps.sh
deleted file mode 100755
index adb949e..0000000
--- a/travis/install_travis_osx_deps.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-
-# Stop processing on any error.
-set -e
-
-function install_if_not_installed() {
- declare -r formula="$1"
- if [[ $(brew list ${formula} &>/dev/null; echo $?) -ne 0 ]]; then
- brew install ${formula}
- else
- echo "$0 - ${formula} is already installed."
- fi
-}
-
-# Manually trigger an update prior to installing packages to avoid Ruby
-# version related errors as per [1].
-#
-# [1]: https://github.com/travis-ci/travis-ci/issues/8552
-brew update
-
-install_if_not_installed cmake
-install_if_not_installed glog
-install_if_not_installed gflags
-install_if_not_installed eigen
-install_if_not_installed suite-sparse