namespace Eigen {

/** \eigenManualPage TutorialLinearAlgebra Linear algebra and decompositions

This page explains how to solve linear systems and how to compute various decompositions such as LU,
QR, and %SVD, as well as eigendecompositions. After reading this page, don't miss our
\link TopicLinearAlgebraDecompositions catalogue \endlink of dense matrix decompositions.

\eigenAutoToc

\section TutorialLinAlgBasicSolve Basic linear solving

\b The \b problem: You have a system of equations that you have written as a single matrix equation
\f[ Ax \: = \: b \f]
where \a A and \a b are matrices (\a b could be a vector, as a special case). You want to find a solution \a x.

\b The \b solution: You can choose between various decompositions, depending on what your matrix \a A looks like,
and depending on whether you favor speed or accuracy. However, let's start with an example that works in all cases,
and is a good compromise:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgExSolveColPivHouseholderQR.cpp </td>
  <td>\verbinclude TutorialLinAlgExSolveColPivHouseholderQR.out </td>
</tr>
</table>

In this example, the colPivHouseholderQr() method returns an object of class ColPivHouseholderQR. Since here the
matrix is of type Matrix3f, this line could have been replaced by:
\code
ColPivHouseholderQR<Matrix3f> dec(A);
Vector3f x = dec.solve(b);
\endcode

Here, ColPivHouseholderQR is a QR decomposition with column pivoting. It's a good compromise for this tutorial, as it
works for all matrices while being quite fast. Here is a table of some other decompositions that you can choose from,
depending on your matrix and the trade-off you want to make:

<table class="manual">
    <tr>
        <th>Decomposition</th>
        <th>Method</th>
        <th>Requirements<br/>on the matrix</th>
        <th>Speed<br/>(small-to-medium)</th>
        <th>Speed<br/>(large)</th>
        <th>Accuracy</th>
    </tr>
    <tr>
        <td>PartialPivLU</td>
        <td>partialPivLu()</td>
        <td>Invertible</td>
        <td>++</td>
        <td>++</td>
        <td>+</td>
    </tr>
    <tr class="alt">
        <td>FullPivLU</td>
        <td>fullPivLu()</td>
        <td>None</td>
        <td>-</td>
        <td>- -</td>
        <td>+++</td>
    </tr>
    <tr>
        <td>HouseholderQR</td>
        <td>householderQr()</td>
        <td>None</td>
        <td>++</td>
        <td>++</td>
        <td>+</td>
    </tr>
    <tr class="alt">
        <td>ColPivHouseholderQR</td>
        <td>colPivHouseholderQr()</td>
        <td>None</td>
        <td>+</td>
        <td>-</td>
        <td>+++</td>
    </tr>
    <tr>
        <td>FullPivHouseholderQR</td>
        <td>fullPivHouseholderQr()</td>
        <td>None</td>
        <td>-</td>
        <td>- -</td>
        <td>+++</td>
    </tr>
    <tr class="alt">
        <td>CompleteOrthogonalDecomposition</td>
        <td>completeOrthogonalDecomposition()</td>
        <td>None</td>
        <td>+</td>
        <td>-</td>
        <td>+++</td>
    </tr>
    <tr>
        <td>LLT</td>
        <td>llt()</td>
        <td>Positive definite</td>
        <td>+++</td>
        <td>+++</td>
        <td>+</td>
    </tr>
    <tr class="alt">
        <td>LDLT</td>
        <td>ldlt()</td>
        <td>Positive or negative<br/>semidefinite</td>
        <td>+++</td>
        <td>+</td>
        <td>++</td>
    </tr>
    <tr>
        <td>BDCSVD</td>
        <td>bdcSvd()</td>
        <td>None</td>
        <td>-</td>
        <td>-</td>
        <td>+++</td>
    </tr>
    <tr class="alt">
        <td>JacobiSVD</td>
        <td>jacobiSvd()</td>
        <td>None</td>
        <td>-</td>
        <td>- - -</td>
        <td>+++</td>
    </tr>
</table>
To get an overview of the true relative speed of the different decompositions, check this \link DenseDecompositionBenchmark benchmark \endlink.

All of these decompositions offer a solve() method that works as in the above example.
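
For instance, the following sketch solves one and the same system with three different decompositions; only the
method name, and hence the speed/accuracy trade-off, changes. The random matrix here is merely illustrative (and
is invertible with overwhelming probability):
\code
MatrixXd A = MatrixXd::Random(100,100);
VectorXd b = VectorXd::Random(100);
VectorXd x1 = A.partialPivLu().solve(b);   // fast, requires A to be invertible
VectorXd x2 = A.householderQr().solve(b);  // no requirement on A
VectorXd x3 = A.fullPivLu().solve(b);      // slower, but the most accurate LU variant
\endcode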

For example, if your matrix is positive definite, the above table says that a very good
choice is the LLT or LDLT decomposition. Here's an example, also demonstrating that using a general
matrix (not a vector) as the right-hand side is possible:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgExSolveLDLT.cpp </td>
  <td>\verbinclude TutorialLinAlgExSolveLDLT.out </td>
</tr>
</table>
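
Condensed to its essentials, the pattern of such an example could look like this; the entries of \a A below are
an arbitrary choice that makes it symmetric positive definite:
\code
Matrix2f A, b;
A << 2, -1, -1, 3;               // a symmetric positive definite matrix
b << 1, 2, 3, 1;                 // two right-hand sides, one per column
Matrix2f x = A.ldlt().solve(b);  // solves A * x = b for both columns at once
\endcode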

Eigen supports many more decompositions than the ones listed above. For a
\ref TopicLinearAlgebraDecompositions "much more complete table" comparing all of them, see our dedicated
page on that topic.

\section TutorialLinAlgSolutionExists Checking if a solution really exists

Only you know what error margin you want to allow for a solution to be considered valid.
So Eigen lets you do this computation yourself, if you want to, as in this example:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgExComputeSolveError.cpp </td>
  <td>\verbinclude TutorialLinAlgExComputeSolveError.out </td>
</tr>
</table>
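
The essential idea is to compute the relative error of a candidate solution yourself; a minimal sketch, with
arbitrary random data:
\code
MatrixXd A = MatrixXd::Random(100,100);
MatrixXd b = MatrixXd::Random(100,50);
MatrixXd x = A.fullPivLu().solve(b);
// norm() is the Frobenius norm here. If relative_error exceeds your
// tolerance, b is (numerically) not in the image of A.
double relative_error = (A*x - b).norm() / b.norm();
\endcode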

\section TutorialLinAlgEigensolving Computing eigenvalues and eigenvectors

You need an eigendecomposition here; see the decompositions available for this on \ref TopicLinearAlgebraDecompositions "this page".
Make sure to check if your matrix is self-adjoint, as is often the case in these problems. Here's an example using
SelfAdjointEigenSolver; it could easily be adapted to general matrices using EigenSolver or ComplexEigenSolver.

The computation of eigenvalues and eigenvectors does not necessarily converge, but such failure to converge is
very rare. The call to info() is there to check for this possibility.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgSelfAdjointEigenSolver.cpp </td>
  <td>\verbinclude TutorialLinAlgSelfAdjointEigenSolver.out </td>
</tr>
</table>
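
Stripped down to its essential pattern, and assuming a small real symmetric (hence self-adjoint) matrix, this
could look like:
\code
Matrix2f A;
A << 1, 2,
     2, 3;                          // real symmetric, so self-adjoint
SelfAdjointEigenSolver<Matrix2f> es(A);
if (es.info() != Success) abort();  // the iteration failed to converge (very rare)
std::cout << "Eigenvalues:\n" << es.eigenvalues() << "\n";
std::cout << "Eigenvectors (as columns):\n" << es.eigenvectors() << "\n";
\endcode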

\section TutorialLinAlgInverse Computing inverse and determinant

First of all, make sure that you really want this. While inverse and determinant are fundamental mathematical concepts,
in \em numerical linear algebra they are not as popular as in pure mathematics. Inverse computations are often
advantageously replaced by solve() operations, and the determinant is often \em not a good way of checking if a matrix
is invertible.

However, for \em very \em small matrices, the above is not true, and inverse and determinant can be very useful.

While certain decompositions, such as PartialPivLU and FullPivLU, offer inverse() and determinant() methods, you can also
call inverse() and determinant() directly on a matrix. If your matrix is of a very small fixed size (at most 4x4), this
allows Eigen to avoid performing an LU decomposition, and instead use formulas that are more efficient on such small matrices.

Here is an example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgInverseDeterminant.cpp </td>
  <td>\verbinclude TutorialLinAlgInverseDeterminant.out </td>
</tr>
</table>
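
If you have computed a suitable decomposition anyway, you can also query these quantities directly on it; here is
a sketch using FullPivLU, with arbitrary matrix values:
\code
Matrix3d A = Matrix3d::Random();
FullPivLU<Matrix3d> lu(A);
if (lu.isInvertible())  // FullPivLU is rank-revealing, so this check is reliable
{
  Matrix3d A_inverse = lu.inverse();       // reuses the already-computed LU factors
  double A_determinant = lu.determinant(); // likewise, no extra decomposition needed
}
\endcode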

\section TutorialLinAlgLeastsquares Least squares solving

The most accurate way to do least squares solving is with an SVD decomposition.
Eigen provides two implementations.
The recommended one is the BDCSVD class, which scales well for large problems
and automatically falls back to the JacobiSVD class for smaller problems.
For both classes, the solve() method does least-squares solving.

Here is an example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgSVDSolve.cpp </td>
  <td>\verbinclude TutorialLinAlgSVDSolve.out </td>
</tr>
</table>

Other methods, potentially faster but less reliable, are to use a Cholesky decomposition of the
normal equations or a QR decomposition. Our page on \link LeastSquares least squares solving \endlink
has more details.
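
For concreteness, here is roughly what these alternatives look like for an overdetermined system; \a A is a
hypothetical tall matrix assumed to have full column rank:
\code
MatrixXf A = MatrixXf::Random(30, 3);
VectorXf b = VectorXf::Random(30);
// Normal equations: fast, but squares the condition number of A.
VectorXf x1 = (A.transpose() * A).ldlt().solve(A.transpose() * b);
// QR decomposition: a common middle ground between speed and reliability.
VectorXf x2 = A.colPivHouseholderQr().solve(b);
\endcode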


\section TutorialLinAlgSeparateComputation Separating the computation from the construction

In the above examples, the decomposition was computed at the same time that the decomposition object was constructed.
There are, however, situations where you might want to separate these two things: for example, if you don't know,
at the time of the construction, the matrix that you will want to decompose, or if you want to reuse an existing
decomposition object.

What makes this possible is that:
\li all decompositions have a default constructor,
\li all decompositions have a compute(matrix) method that does the computation, and that may be called again
    on an already-computed decomposition, reinitializing it.

For example:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgComputeTwice.cpp </td>
  <td>\verbinclude TutorialLinAlgComputeTwice.out </td>
</tr>
</table>
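
The resulting pattern can be sketched as follows, where \a A1, \a A2, \a b1 and \a b2 stand for hypothetical
matrices and vectors of matching sizes:
\code
ColPivHouseholderQR<MatrixXf> dec;  // default constructor: nothing is computed yet
dec.compute(A1);                    // decompose a first matrix
VectorXf x1 = dec.solve(b1);
dec.compute(A2);                    // reuse the same object for another matrix
VectorXf x2 = dec.solve(b2);
\endcode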

Finally, you can tell the decomposition constructor to preallocate storage for decomposing matrices of a given size,
so that when you subsequently decompose such matrices, no dynamic memory allocation is performed (of course, if you
are using fixed-size matrices, no dynamic memory allocation happens at all). This is done by just
passing the size to the decomposition constructor, as in this example:
\code
HouseholderQR<MatrixXf> qr(50,50);
MatrixXf A = MatrixXf::Random(50,50);
qr.compute(A); // no dynamic memory allocation
\endcode

\section TutorialLinAlgRankRevealing Rank-revealing decompositions

Certain decompositions are rank-revealing, i.e. able to compute the rank of a matrix. These are typically
also the decompositions that behave best in the face of a non-full-rank matrix (which, in the square case, means a
singular matrix). In \ref TopicLinearAlgebraDecompositions "this table" you can see, for all our decompositions,
whether they are rank-revealing or not.

Rank-revealing decompositions offer at least a rank() method. They can also offer convenience methods such as isInvertible(),
and some also provide methods to compute the kernel (null-space) and image (column-space) of the matrix, as is the
case with FullPivLU:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgRankRevealing.cpp </td>
  <td>\verbinclude TutorialLinAlgRankRevealing.out </td>
</tr>
</table>
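
In condensed form, the FullPivLU interface used in such an example amounts to the following; the matrix entries
below are an arbitrary choice that makes \a A rank-deficient:
\code
Matrix3d A;
A << 1, 2, 5,
     2, 1, 4,
     3, 0, 3;                       // a singular matrix of rank 2
FullPivLU<Matrix3d> lu(A);
Eigen::Index r = lu.rank();         // 2 for this matrix
MatrixXd null_space = lu.kernel();  // columns span the kernel (null-space) of A
MatrixXd col_space = lu.image(A);   // columns span the image; image() needs the original matrix
\endcode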

Of course, any rank computation depends on the choice of an arbitrary threshold, since practically no
floating-point matrix is \em exactly rank-deficient. Eigen picks a sensible default threshold, which depends
on the decomposition but is typically the diagonal size times machine epsilon. While this is the best default we
could pick, only you know the right threshold for your application. You can set it by calling setThreshold()
on your decomposition object before calling rank() or any other method that needs to use such a threshold.
The decomposition itself, i.e. the compute() method, is independent of the threshold: you don't need to recompute the
decomposition after you've changed the threshold.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgSetThreshold.cpp </td>
  <td>\verbinclude TutorialLinAlgSetThreshold.out </td>
</tr>
</table>
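
A minimal sketch of this interface; the value 1e-5 below is an arbitrary, application-specific choice:
\code
MatrixXd A = MatrixXd::Random(10, 10);
FullPivLU<MatrixXd> lu(A);
lu.setThreshold(1e-5);       // set a custom threshold below which pivots count as zero
Eigen::Index r = lu.rank();  // uses the custom threshold; the decomposition is not recomputed
\endcode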

*/

}