namespace Eigen {

/** \eigenManualPage TutorialReductionsVisitorsBroadcasting Reductions, visitors and broadcasting

This page explains Eigen's reductions, visitors and broadcasting and how they are used with
\link MatrixBase matrices \endlink and \link ArrayBase arrays \endlink.

\eigenAutoToc

\section TutorialReductionsVisitorsBroadcastingReductions Reductions
In Eigen, a reduction is a function taking a matrix or array and returning a single
scalar value. One of the most used reductions is \link DenseBase::sum() .sum() \endlink,
which returns the sum of all the coefficients inside a given matrix or array.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_redux_basic.cpp
</td>
<td>
\verbinclude tut_arithmetic_redux_basic.out
</td></tr></table>

The \em trace of a matrix, as returned by the function \c trace(), is the sum of the diagonal coefficients and can equivalently be computed as <tt>a.diagonal().sum()</tt>.


\subsection TutorialReductionsVisitorsBroadcastingReductionsNorm Norm computations

The (Euclidean a.k.a. \f$\ell^2\f$) squared norm of a vector can be obtained with \link MatrixBase::squaredNorm() squaredNorm() \endlink. It is equal to the dot product of the vector with itself, and equivalently to the sum of the squared absolute values of its coefficients.

Eigen also provides the \link MatrixBase::norm() norm() \endlink method, which returns the square root of \link MatrixBase::squaredNorm() squaredNorm() \endlink.

These operations can also operate on matrices; in that case, an n-by-p matrix is seen as a vector of size (n*p), so for example the \link MatrixBase::norm() norm() \endlink method returns the "Frobenius" or "Hilbert-Schmidt" norm. We refrain from speaking of the \f$\ell^2\f$ norm of a matrix because that can mean different things.

If you want other coefficient-wise \f$\ell^p\f$ norms, use the \link MatrixBase::lpNorm lpNorm<p>() \endlink method. The template parameter \a p can take the special value \a Infinity if you want the \f$\ell^\infty\f$ norm, which is the maximum of the absolute values of the coefficients.

The following example demonstrates these methods.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.out
</td></tr></table>

\b Operator \b norm: The 1-norm and \f$\infty\f$-norm <a href="https://en.wikipedia.org/wiki/Operator_norm">matrix operator norms</a> can easily be computed as follows:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_reductions_operatornorm.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_operatornorm.out
</td></tr></table>
See below for more explanations on the syntax of these expressions.

\subsection TutorialReductionsVisitorsBroadcastingReductionsBool Boolean reductions

The following reductions operate on boolean values:
- \link DenseBase::all() all() \endlink returns \b true if all of the coefficients in a given Matrix or Array evaluate to \b true .
- \link DenseBase::any() any() \endlink returns \b true if at least one of the coefficients in a given Matrix or Array evaluates to \b true .
- \link DenseBase::count() count() \endlink returns the number of coefficients in a given Matrix or Array that evaluate to \b true.

These are typically used in conjunction with the coefficient-wise comparison and equality operators provided by Array. For instance, <tt>array > 0</tt> is an %Array of the same size as \c array , with \b true at those positions where the corresponding coefficient of \c array is positive. Thus, <tt>(array > 0).all()</tt> tests whether all coefficients of \c array are positive. This can be seen in the following example:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.out
</td></tr></table>

\subsection TutorialReductionsVisitorsBroadcastingReductionsUserdefined User defined reductions

TODO

In the meantime you can have a look at the DenseBase::redux() function.

\section TutorialReductionsVisitorsBroadcastingVisitors Visitors
Visitors are useful when one wants to obtain the location of a coefficient inside
a Matrix or Array. The simplest examples are
\link MatrixBase::maxCoeff() maxCoeff(&x,&y) \endlink and
\link MatrixBase::minCoeff() minCoeff(&x,&y) \endlink, which can be used to find
the location of the greatest or smallest coefficient in a Matrix or
Array.

The arguments passed to a visitor are pointers to the variables where the
row and column position are to be stored. These variables should be of type
\link Eigen::Index Index \endlink, as shown below:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_visitors.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_visitors.out
</td></tr></table>

Both functions also return the value of the minimum or maximum coefficient.

\section TutorialReductionsVisitorsBroadcastingPartialReductions Partial reductions
Partial reductions are reductions that can operate column- or row-wise on a Matrix or
Array, applying the reduction operation on each column or row and
returning a column or row vector with the corresponding values. Partial reductions are applied
with \link DenseBase::colwise() colwise() \endlink or \link DenseBase::rowwise() rowwise() \endlink.

A simple example is obtaining the maximum of the elements
in each column in a given matrix, storing the result in a row vector:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_colwise.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_colwise.out
</td></tr></table>

The same operation can be performed row-wise:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_rowwise.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_rowwise.out
</td></tr></table>

<b>Note that column-wise operations return a row vector, while row-wise operations return a column vector.</b>

\subsection TutorialReductionsVisitorsBroadcastingPartialReductionsCombined Combining partial reductions with other operations
It is also possible to use the result of a partial reduction to do further processing.
Here is another example that finds the column whose sum of elements is the maximum
within a matrix. With column-wise partial reductions this can be coded as:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_maxnorm.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_maxnorm.out
</td></tr></table>

The previous example applies the \link DenseBase::sum() sum() \endlink reduction on each column
through the \link DenseBase::colwise() colwise() \endlink visitor, obtaining a new matrix whose
size is 1x4.

Therefore, if
\f[
\mbox{m} = \begin{bmatrix} 1 & 2 & 6 & 9 \\
                           3 & 1 & 7 & 2 \end{bmatrix}
\f]

then

\f[
\mbox{m.colwise().sum()} = \begin{bmatrix} 4 & 3 & 13 & 11 \end{bmatrix}
\f]

The \link DenseBase::maxCoeff() maxCoeff() \endlink reduction is finally applied
to obtain the column index where the maximum sum is found,
which is the column index 2 (third column) in this case.


\section TutorialReductionsVisitorsBroadcastingBroadcasting Broadcasting
The concept behind broadcasting is similar to partial reductions, with the difference that broadcasting
constructs an expression where a vector (column or row) is interpreted as a matrix by replicating it in
one direction.

A simple example is to add a certain column vector to each column in a matrix.
This can be accomplished with:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.out
</td></tr></table>

We can interpret the instruction <tt>mat.colwise() += v</tt> in two equivalent ways. It adds the vector \c v
to every column of the matrix. Alternatively, it can be interpreted as repeating the vector \c v four times to
form a two-by-four matrix which is then added to \c mat:
\f[
\begin{bmatrix} 1 & 2 & 6 & 9 \\ 3 & 1 & 7 & 2 \end{bmatrix}
+ \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 2 & 6 & 9 \\ 4 & 2 & 8 & 3 \end{bmatrix}.
\f]
The operators <tt>-=</tt>, <tt>+</tt> and <tt>-</tt> can also be used column-wise and row-wise. On arrays, we
can also use the operators <tt>*=</tt>, <tt>/=</tt>, <tt>*</tt> and <tt>/</tt> to perform coefficient-wise
multiplication and division column-wise or row-wise. These operators are not available on matrices because it
is not clear what they would do. If you want to multiply column 0 of a matrix \c mat with \c v(0), column 1 with
\c v(1), and so on, then use <tt>mat = mat * v.asDiagonal()</tt>.

It is important to point out that the vector to be added column-wise or row-wise must be of type Vector,
and cannot be a Matrix. If this is not met then you will get a compile-time error. This also means that
broadcasting operations can only be applied with an object of type Vector when operating on a Matrix.
The same applies for the Array class, where the equivalent for VectorXf is ArrayXf. As always, you should
not mix arrays and matrices in the same expression.

To perform the same operation row-wise we can do:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.out
</td></tr></table>

\subsection TutorialReductionsVisitorsBroadcastingBroadcastingCombined Combining broadcasting with other operations
Broadcasting can also be combined with other operations, such as Matrix or Array operations,
reductions and partial reductions.

Now that broadcasting, reductions and partial reductions have been introduced, we can dive into a more advanced example that finds
the nearest neighbour of a vector <tt>v</tt> within the columns of matrix <tt>m</tt>. The Euclidean distance will be used in this example,
computing the squared Euclidean distance with the partial reduction named \link MatrixBase::squaredNorm() squaredNorm() \endlink:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.out
</td></tr></table>

The line that does the job is
\code
(m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
\endcode

We will go step by step to understand what is happening:

- <tt>m.colwise() - v</tt> is a broadcasting operation, subtracting <tt>v</tt> from each column in <tt>m</tt>. The result of this operation
is a new matrix whose size is the same as matrix <tt>m</tt>: \f[
  \mbox{m.colwise() - v} =
  \begin{bmatrix}
    -1 & 21 & 4 & 7 \\
     0 & 8  & 4 & -1
  \end{bmatrix}
\f]

- <tt>(m.colwise() - v).colwise().squaredNorm()</tt> is a partial reduction, computing the squared norm column-wise. The result of
this operation is a row vector where each coefficient is the squared Euclidean distance between each column in <tt>m</tt> and <tt>v</tt>: \f[
  \mbox{(m.colwise() - v).colwise().squaredNorm()} =
  \begin{bmatrix}
    1 & 505 & 32 & 50
  \end{bmatrix}
\f]

- Finally, <tt>minCoeff(&index)</tt> is used to obtain the index of the column in <tt>m</tt> that is closest to <tt>v</tt> in terms of Euclidean
distance.

*/

}