Matrix exponential
(mathcam, 2013-03-21)

The exponential of a real-valued square matrix $A$, denoted by $e^A$, is defined as
\[
  e^A = \sum_{k=0}^{\infty} \frac{1}{k!} A^k = I + A + \frac{1}{2}A^2 + \cdots.
\]

Let us check that $e^A$ is a real-valued square matrix. Suppose $M$ is a real number such that $|A_{ij}| < M$ for all entries $A_{ij}$ of $A$. Then $|(A^2)_{ij}| < nM^2$ for all entries of $A^2$, where $n$ is the order of $A$, and by induction $|(A^k)_{ij}| < n^{k-1}M^k$ for all $k \ge 1$. Since $\sum_{k=1}^{\infty} \frac{n^{k-1}M^k}{k!}$ converges, we see that $e^A$ converges to a real-valued $n \times n$ matrix. (Alternatively, one can argue using matrix norms: we have $\|e^A\| \le e^{\|A\|}$ for the $2$-norm, and hence the entries of $e^A$ are bounded in absolute value by $e^{\|A\|}$.)

Example 1. Suppose $A$ is nilpotent, i.e., $A^r = 0$ for some natural number $r$. Then
\[
  e^A = I + A + \frac{1}{2!}A^2 + \cdots + \frac{1}{(r-1)!}A^{r-1}.
\]

Example 2. If $A$ is diagonalizable, i.e., of the form $A = LDL^{-1}$, where $D$ is a diagonal matrix, then
\[
  e^A = \sum_{k=0}^{\infty} \frac{1}{k!}(LDL^{-1})^k
      = \sum_{k=0}^{\infty} \frac{1}{k!} L D^k L^{-1}
      = L e^D L^{-1}.
\]
Further, if $D = \operatorname{diag}\{a_1, \dots, a_n\}$, then $D^k = \operatorname{diag}\{a_1^k, \dots, a_n^k\}$, whence
\[
  e^A = L \operatorname{diag}\{e^{a_1}, \dots, e^{a_n}\} L^{-1}.
\]

For a diagonalizable matrix $A$, it follows that $\det e^A = e^{\operatorname{tr} A}$. However, this formula is, in fact, valid for all $A$.

Properties

Let $A$ be a square $n \times n$ real-valued matrix. Then the matrix exponential satisfies the following properties:

1. For the $n \times n$ zero matrix $O$, $e^O = I$, where $I$ is the $n \times n$ identity matrix.
2. If $A = L \operatorname{diag}\{a_1, \dots, a_n\} L^{-1}$ for an invertible $n \times n$ matrix $L$, then $e^A = L \operatorname{diag}\{e^{a_1}, \dots, e^{a_n}\} L^{-1}$.
3. If $A$ and $B$ commute, then $e^{A+B} = e^A e^B$.
4. The trace of $A$ and the determinant of $e^A$ are related by the formula $\det e^A = e^{\operatorname{tr} A}$. In particular, $e^A$ is always invertible; the inverse is given by $(e^A)^{-1} = e^{-A}$.
5. If $e^A$ is a rotation matrix, then $\operatorname{tr} A = 0$.

A relevant example of property 3. We give an interesting example where the cited property applies. Over the complex numbers, consider the matrix
\[
  C = A + iB, \qquad (1)
\]
where $C$ is Hermitian, i.e. $C^{\mathsf T} = \bar{C}$ (here ${}^{\mathsf T}$ and the overline stand for transposition and conjugation, respectively), and orthogonal, i.e. $C^{-1} = C^{\mathsf T}$. From (1), $C^{\mathsf T} = A^{\mathsf T} + iB^{\mathsf T}$. Since $C$ is orthogonal, the complex equation $CC^{\mathsf T} = I$ ($I$ the identity matrix) gives
\[
  CC^{\mathsf T} = (A + iB)(A^{\mathsf T} + iB^{\mathsf T}) = (AA^{\mathsf T} - BB^{\mathsf T}) + i(BA^{\mathsf T} + AB^{\mathsf T}) = I,
\]
whence the imaginary part leads to the equation
\[
  BA^{\mathsf T} + AB^{\mathsf T} = 0. \qquad (2)
\]
But $C$ is also Hermitian, so that $C^{\mathsf T} = A^{\mathsf T} + iB^{\mathsf T} = \bar{C} = A - iB$; therefore $A^{\mathsf T} = A$ is symmetric and $B^{\mathsf T} = -B$ is skew-symmetric. From these and (2), $BA = AB$, and this implies that $\exp(A)\exp(B) = \exp(A+B)$. Thus the real and imaginary parts of an orthogonal Hermitian matrix satisfy the property. Likewise, it is easy to show that if a complex matrix is symmetric and unitary, its real and imaginary components also satisfy this property.
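The definition and the two examples above can be checked numerically. The following is a minimal sketch, not part of the original entry, assuming NumPy and SciPy (with its routine scipy.linalg.expm) are available; the particular matrices and the truncation length are illustrative choices.

\begin{verbatim}
# Illustrative sketch: the series definition and Examples 1-2.
import numpy as np
from scipy.linalg import expm

def expm_series(A, terms=30):
    """Truncated power series  I + A + A^2/2! + ...  (truncation chosen ad hoc)."""
    result = np.eye(A.shape[0])
    power = np.eye(A.shape[0])
    for k in range(1, terms):
        power = power @ A / k          # accumulates A^k / k!
        result = result + power
    return result

# Example 1: a nilpotent matrix (A^2 = 0), so e^A = I + A exactly.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(np.allclose(expm(A), np.eye(2) + A))                # True

# Example 2: a diagonalizable matrix A = L D L^{-1}, so e^A = L e^D L^{-1}.
L = np.array([[1.0, 1.0],
              [0.0, 1.0]])
D = np.diag([1.0, 2.0])
A = L @ D @ np.linalg.inv(L)
print(np.allclose(expm(A),
                  L @ np.diag(np.exp([1.0, 2.0])) @ np.linalg.inv(L)))  # True

# The truncated series agrees with expm for these small matrices.
print(np.allclose(expm_series(A), expm(A)))               # True
\end{verbatim}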
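The listed properties can be checked in the same spirit. The sketch below, again assuming NumPy and SciPy, verifies property 4 for an arbitrary real matrix and reproduces the Hermitian-orthogonal construction from the example of property 3; the parameter $t$ and the specific matrices $A = \cosh(t)\,I$ and skew-symmetric $B$ are a concrete instance chosen here for illustration, not taken from the original text.

\begin{verbatim}
# Illustrative sketch: properties 3 and 4, and the Hermitian-orthogonal example.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Property 4: det(e^A) = e^{tr A} for an arbitrary real matrix A.
A = rng.standard_normal((3, 3))
print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))   # True

# Hermitian-orthogonal example: C = A + iB with A symmetric, B skew-symmetric.
t = 0.7
A = np.cosh(t) * np.eye(2)
B = np.array([[0.0,          np.sinh(t)],
              [-np.sinh(t),  0.0]])
C = A + 1j * B
print(np.allclose(C.conj().T, C))            # C is Hermitian
print(np.allclose(C @ C.T, np.eye(2)))       # C is orthogonal: C C^T = I
print(np.allclose(A @ B, B @ A))             # A and B commute
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # property 3 holds
\end{verbatim}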