May 22, 2016
To appear in International Journal of Nonlinear Analysis and Applications, Vol. 00, No. 00, Month 20XX, 1–10

PAPER

Solving the nth Degree Polynomial Matrix Equation

N. Samaras^a∗ and D.A. Petraki^a

^a Department of Applied Informatics, School of Information Sciences, University of Macedonia, 156 Egnatia Str., 54006 Thessaloniki, Greece

(released October 2013)
The algorithm for finding the nth roots of a matrix A is well known. The aim of this paper is to present the general case of the nth degree polynomial matrix equation. The study of the general case helps us to solve any polynomial matrix equation. The main difficulty in solving the polynomial matrix equation is that, in general, the nth degree polynomial function h(x) is not invertible. Even if the function h is invertible, it is difficult to find the form of the inverse function and its derivatives. We designed an algorithm that enables us to bypass everything related to the inverse function of h. In our algorithm we use only the polynomial function h and its derivatives. This is a very effective procedure, and our algorithm can be used for every polynomial function h and any square matrix A. All possible cases concerning the Jordan canonical form of the matrix A are examined. Formulae for calculating the number of different roots of the polynomial matrix equation and their algebraic multiplicities are also presented.
Keywords: Polynomial matrix equation, Simple matrix, Derogatory matrix, Type of roots, Interpolating polynomial

AMS Subject Classification: 11Cxx, 15Axx, 65H04
1. Introduction
Matrix theory was developed by Augustin Cauchy (1789-1857), Arthur Cayley (1821-1895), James Sylvester (1814-1897), Ferdinand Frobenius (1849-1917), Leopold Kronecker (1823-1891), Karl Theodor Weierstrass (1815-1897) and others. In 1858, in his "A Memoir on the Theory of Matrices", Cayley investigated the square root of a matrix. Sylvester proposed definitions of f(A) for general f. There are four equivalent definitions of f(A), based on the Jordan canonical form, polynomial interpolation, the components of a matrix and the Cauchy integral formula.
One of the earliest uses (1938) of matrix theory in practical applications was
by Robert Frazer, William Duncan, and Arthur Collar of the Aerodynamics
Department of the National Physical Laboratory of England, who were developing
matrix methods for analyzing unwanted vibrations in aircraft.
∗ Corresponding author. Email: [email protected]; Tel: +302310891866; Fax: +302310891879
Consider the following nth degree polynomial matrix equation:

h(X) = an X^n + an−1 X^(n−1) + ... + a1 X + a0 Iν = A,     (1)

where A, X ∈ Mν×ν(K), K = ℝ or K = ℂ.
The problem of determining all the nth roots X of a given matrix A has been examined by many mathematicians [1-8]. Bjorck and Hammarling [9] show how to compute a cube root using the complex Schur decomposition. An algebraic formula giving the square roots of 2 × 2 matrices is presented in [10]. A study of the matrix approach to polynomials and its further exploration is presented in [11,12]. In [13], algebraic formulae are determined giving all the solutions of the matrix equation X^n = A, where n > 2 and A is a 2 × 2 matrix with real or complex elements. Some other methods for computing the nth root are described by Bini et al. in [14].
In this paper an algorithm is presented, for the first time, for solving any polynomial matrix equation of the form (1), using only the polynomial function h(x) and its derivatives.
Furthermore, we compute the number and the type of the roots of a polynomial matrix equation, as well as their algebraic multiplicities.
2. Basic Properties
In this section we describe five basic propositions related to matrices and polynomial equations.
Proposition 2.1. If M is any ν × ν matrix, JM is its Jordan canonical form, SM is the transition matrix and f(x) is a polynomial, then f(M) = SM f(JM) SM^−1.

Proof. See Theorems 9.4.1, 9.4.2, 9.4.3 in [15] or Theorem 1.2 in [3] for a complete proof.
Proposition 2.2. If λ ∈ ℂ and

M = ( Re[λ]   Im[λ]
      −Im[λ]  Re[λ] ),

where Re[λ] and Im[λ] are the real and imaginary parts of λ, then

M^k = ( Re[λ^k]   Im[λ^k]
        −Im[λ^k]  Re[λ^k] ),  k ∈ ℕ*.
Proof. The proof follows by induction on k. For k = 1 the statement obviously holds. Assume that

M^k = ( Re[λ^k]   Im[λ^k]
        −Im[λ^k]  Re[λ^k] ).

Then

M^(k+1) = M^k · M^1 = ( Re[λ^k]·Re[λ] − Im[λ^k]·Im[λ]      Re[λ^k]·Im[λ] + Im[λ^k]·Re[λ]
                        −(Re[λ^k]·Im[λ] + Im[λ^k]·Re[λ])   Re[λ^k]·Re[λ] − Im[λ^k]·Im[λ] )

= ( Re[λ^(k+1)]   Im[λ^(k+1)]
    −Im[λ^(k+1)]  Re[λ^(k+1)] ).
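Proposition 2.2 can be checked numerically; below is a minimal numpy sketch, with the value of λ and the power k being our own illustrative choices:

```python
import numpy as np

# For M = [[Re λ, Im λ], [-Im λ, Re λ]], powers of M should reproduce
# the real and imaginary parts of the powers of the complex number λ.
lam = 1.5 + 0.7j  # illustrative choice
M = np.array([[lam.real, lam.imag],
              [-lam.imag, lam.real]])

k = 5
Mk = np.linalg.matrix_power(M, k)
expected = np.array([[(lam**k).real, (lam**k).imag],
                     [-(lam**k).imag, (lam**k).real]])
print(np.allclose(Mk, expected))  # True
```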
Proposition 2.3. If λ ∈ ℂ,

M = ( Re[λ]   Im[λ]
      −Im[λ]  Re[λ] )

and f is a polynomial with real coefficients, then

f(M) = ( Re[f(λ)]   Im[f(λ)]
         −Im[f(λ)]  Re[f(λ)] ).

Proof. It is an obvious conclusion from Proposition 2.2.
Proposition 2.4. If q(x) is a polynomial of degree ν − 1 and h(x) is a polynomial of degree n which satisfies the relations

q^(k)(λ) = ( 1/h^(1)(q(x)) )^(k−1) |x=λ,  k = 1, 2, . . . , ν − 1,

then (h∘q)^(1)(x) = 1 and (h∘q)^(k)(x) = 0, k = 2, 3, . . . , ν − 1.
Proof. Applying the differentiation rules to the function (h∘q)(x), the following results are obtained.

(h∘q)^(1)(x) = (h(q(x)))^(1) = h^(1)(q(x)) q^(1)(x) = h^(1)(q(x)) · 1/h^(1)(q(x)) = 1.

(h∘q)^(2)(x) = (h(q(x)))^(2) = h^(2)(q(x)) (q^(1)(x))^2 + h^(1)(q(x)) q^(2)(x)
= h^(2)(q(x)) / (h^(1)(q(x)))^2 + h^(1)(q(x)) q^(2)(x),

and since q^(2)(x) = ( 1/h^(1)(q(x)) )^(1) = −h^(2)(q(x)) q^(1)(x) / (h^(1)(q(x)))^2, this equals

h^(2)(q(x)) / (h^(1)(q(x)))^2 − h^(1)(q(x)) h^(2)(q(x)) / ( (h^(1)(q(x)))^2 h^(1)(q(x)) )
= h^(2)(q(x)) / (h^(1)(q(x)))^2 − h^(2)(q(x)) / (h^(1)(q(x)))^2 = 0.

Similarly, (h∘q)^(k)(x) = 0, k = 3, 4, . . . , ν − 1.
Proposition 2.5. If q(x) is a polynomial of degree ν − 1 and h(x) is a polynomial of degree n which satisfies the relations

h(ρ) = λ, q(λ) = ρ, q^(k)(λ) = ( 1/h^(1)(q(x)) )^(k−1) |x=λ,  k = 1, 2, . . . , ν − 1,

then (h∘q)(λ) = λ, (h∘q)^(1)(λ) = 1 and (h∘q)^(k)(λ) = 0, k = 2, 3, . . . , ν − 1.

Proof. It is an obvious conclusion from Proposition 2.4.
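Proposition 2.5 can be sanity-checked with sympy. The sketch below uses our own illustrative data (h(x) = x^3 + x, ρ = 1, λ = h(ρ) = 2, ν = 4, none of which come from the paper) and builds q by solving directly for the Taylor coefficients that make (h∘q)(x) = x + O((x − λ)^ν):

```python
import sympy as sp

# Illustrative choice: h(x) = x^3 + x, rho = 1, lam = h(rho) = 2, nu = 4.
x = sp.symbols('x')
h = x**3 + x
rho, lam, nu = sp.Integer(1), sp.Integer(2), 4

# Unknown Taylor coefficients c1, c2, c3 of q at lam, with q(lam) = rho.
cs = sp.symbols('c1:4')
q = rho + sum(cs[k] * (x - lam)**(k + 1) for k in range(nu - 1))
comp = sp.expand(h.subs(x, q) - x)   # (h o q)(x) - x

# Require (h o q)(x) - x to vanish up to order nu - 1 at lam.
eqs = [sp.diff(comp, x, k).subs(x, lam) for k in range(1, nu)]
q = q.subs(sp.solve(eqs, cs, dict=True)[0])

# Proposition 2.5 prescribes q'(lam) = 1/h'(rho) = 1/4:
print(sp.diff(q, x).subs(x, lam))  # 1/4
```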
3. Solving the nth degree polynomial matrix equation

3.1 Calculation of the roots
Let an X^n + an−1 X^(n−1) + ... + a1 X + a0 Iν = A, with A, X ∈ Mν×ν(K), K = ℝ or K = ℂ, be an nth degree polynomial matrix equation and let h(x) = an x^n + an−1 x^(n−1) + ... + a1 x + a0 be the corresponding polynomial. There are two basic cases concerning the algebraic multiplicity of the eigenvalues of the matrix A.

Case 1. The matrix A is simple.
If ρ1, ρ2, . . . , ρν are numbers such that h(ρi) = λi, i = 1, 2, . . . , ν, and q(x) is the interpolating polynomial fitted to the data (λi, ρi), i = 1, 2, . . . , ν, then (h∘q)(A) = A.
Proof. If A = SA JA SA^−1, then

(h∘q)(A) = (h∘q)(SA JA SA^−1) = SA (h∘q)(JA) SA^−1
= SA diag[(h∘q)(λ1), (h∘q)(λ2), . . . , (h∘q)(λν)] SA^−1
= SA diag[h(q(λ1)), h(q(λ2)), . . . , h(q(λν))] SA^−1
= SA diag[h(ρ1), h(ρ2), . . . , h(ρν)] SA^−1
= SA diag[λ1, λ2, . . . , λν] SA^−1 = SA JA SA^−1 = A.
Corollary 3.1. The matrix B = q(A) is a root of the polynomial matrix equation an X^n + an−1 X^(n−1) + ... + a1 X + a0 Iν = A.

Case 2. The matrix A is derogatory.
Let λ1, λ2, . . . , λs be the pairwise different eigenvalues of the matrix A with algebraic multiplicities α1, α2, . . . , αs, geometric multiplicities γ1, γ2, . . . , γs and indices d1, d2, . . . , ds, respectively.
Let ρi, i = 1, 2, . . . , s, be numbers such that h(ρi) = λi, i = 1, 2, . . . , s, and let q(x) be the interpolating polynomial fitted to the data

( λi, ρi, 1/h^(1)(q(x)) |x=λi, ( 1/h^(1)(q(x)) )^(1) |x=λi, ( 1/h^(1)(q(x)) )^(2) |x=λi, . . . , ( 1/h^(1)(q(x)) )^(αi−2) |x=λi ),

i = 1, 2, . . . , s. Then (h∘q)(A) = A.
Proof. The Jordan canonical form of the matrix A is JA = diag[λ1 I(γ1−1), Jd1, . . . , λi I(γi−1), Jdi, . . . , λs I(γs−1), Jds], where Jdi, i = 1, 2, . . . , s, are di × di matrices of the form

Jdi = ( λi  1  . . .  0   0
        0   λi . . .  0   0
        ·   ·   ·     ·   ·
        0   0  . . .  λi  1
        0   0  . . .  0   λi )   [15].

It is

(h∘q)(A) = (h∘q)(SA JA SA^−1) = SA (h∘q)(JA) SA^−1
= SA diag[(h∘q)(λ1) I(γ1−1), (h∘q)(Jd1), . . . , (h∘q)(λi) I(γi−1), (h∘q)(Jdi), . . . , (h∘q)(λs) I(γs−1), (h∘q)(Jds)] SA^−1
= SA diag[λ1 I(γ1−1), Jd1, . . . , λi I(γi−1), Jdi, . . . , λs I(γs−1), Jds] SA^−1 (from Propositions 2.4 and 2.5),

so (h∘q)(A) = A or h(q(A)) = A.
Corollary 3.2. Let B = q(A); then h(B) = A, so the matrix B = q(A) is a root of the polynomial matrix equation an X^n + an−1 X^(n−1) + ... + a1 X + a0 Iν = A.
3.2 Comment

If we work in the real vector space Mν×ν(ℝ) and the matrix A has as eigenvalues the complex conjugate numbers λ and λ̄, then the corresponding Jordan block is

( Re[λ]   Im[λ]
  −Im[λ]  Re[λ] ).

Proof. We can assume, without loss of generality, that

JA = ( Re[λ]   Im[λ]
       −Im[λ]  Re[λ] ).
There is an invertible ν × ν transition matrix SA such that A = SA JA SA^−1.
Let ρ be a complex number such that h(ρ) = λ and let q(x) be the first degree interpolating polynomial, with real coefficients, fitted to the data (λ, ρ).
It is (h∘q)(A) = (h∘q)(SA JA SA^−1) = SA (h∘q)(JA) SA^−1, with

(h∘q)(JA) = ( Re[(h∘q)(λ)]   Im[(h∘q)(λ)]
              −Im[(h∘q)(λ)]  Re[(h∘q)(λ)] )   (from Propositions 2.2 and 2.3),

so

(h∘q)(JA) = ( Re[λ]   Im[λ]
              −Im[λ]  Re[λ] ) = JA.

It follows that (h∘q)(A) = SA JA SA^−1 = A, or h(q(A)) = A. Let B = q(A); then h(B) = A, so the matrix B = q(A) is a root of the polynomial matrix equation an X^n + an−1 X^(n−1) + ... + a1 X + a0 Iν = A.
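The complex-eigenvalue comment can be illustrated numerically. In the sketch below the choices A = [[0, 1], [−1, 0]] (eigenvalues ±i), h(x) = x^2, ρ = √i = (1 + i)/√2, and the real linear interpolating polynomial q(x) = (x + 1)/√2 with q(i) = ρ are our own, not from the paper:

```python
import numpy as np

# A is already in the real Jordan form [[Re λ, Im λ], [-Im λ, Re λ]] for λ = i.
A = np.array([[0., 1.], [-1., 0.]])

# X = q(A) with q(x) = (x + 1)/sqrt(2); then X should be a square root of A.
X = (A + np.eye(2)) / np.sqrt(2)
print(np.allclose(X @ X, A))  # True
```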
4. The number of the roots of the nth degree polynomial matrix equation and their algebraic multiplicities

There are four main cases concerning the algebraic multiplicity of the eigenvalues of the matrix A.
4.1 Case 1.

All the eigenvalues λ1, λ2, . . . , λν of the matrix A have algebraic multiplicity 1 and each of the equations h(x) = λi, i = 1, 2, . . . , ν, has n pairwise different roots. In this case the equation an X^n + an−1 X^(n−1) + ... + a1 X + a0 Iν = A has n^ν different roots.

Proof. There are ν different equations h(x) = λi, i = 1, 2, . . . , ν, each with n pairwise different roots, so by the fundamental rule of counting there are m = n · n · . . . · n = n^ν different combinations, hence the equation has n^ν different roots.
An illustrative example.

Let A = ( 2 1
          1 2 ).

We will solve the polynomial matrix equation X^2 − 5X + 7I2 = A. Let h(x) = x^2 − 5x + 7. It is n = 2 (degree of the polynomial h(x)) and ν = 2 (dimension of the matrix A). The characteristic polynomial of the matrix A is p(x) = x^2 − 4x + 3.
The eigenvalues of the matrix A are λ1 = 1, λ2 = 3 with algebraic multiplicities a1 = 1 and a2 = 1 respectively. The equation h(x) = λ1 has roots ρ11 = 2 and ρ12 = 3 with algebraic multiplicities α11 = 1 and α12 = 1 respectively. The equation h(x) = λ2 has roots ρ21 = 1 and ρ22 = 4 with algebraic multiplicities α21 = 1 and α22 = 1 respectively. So the equation has n^ν = 4 pairwise different roots X[i], each with algebraic multiplicity 1, i = 1, 2, 3, 4.
The matrix of the interpolation data is

( λ1 = 1  ρ11 = 2  ρ12 = 3
  λ2 = 3  ρ21 = 1  ρ22 = 4 ).

The roots of the equation are:

(1) The interpolating polynomial for which q[1](1) = ρ11 = 2, q[1](3) = ρ21 = 1 is q[1](x) = −x/2 + 5/2, and the corresponding matrix root is

X[1] = q[1](A) = ( 3/2   −1/2
                   −1/2  3/2 ).

(2) The interpolating polynomial for which q[2](1) = ρ11 = 2, q[2](3) = ρ22 = 4 is q[2](x) = x + 1, and the corresponding matrix root is

X[2] = q[2](A) = ( 3 1
                   1 3 ).

(3) The interpolating polynomial for which q[3](1) = ρ12 = 3, q[3](3) = ρ21 = 1 is q[3](x) = −x + 4, and the corresponding matrix root is

X[3] = q[3](A) = ( 2   −1
                   −1  2 ).

(4) The interpolating polynomial for which q[4](1) = ρ12 = 3, q[4](3) = ρ22 = 4 is q[4](x) = x/2 + 5/2, and the corresponding matrix root is

X[4] = q[4](A) = ( 7/2  1/2
                   1/2  7/2 ).
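The four roots of this example can be verified numerically; a minimal numpy sketch (matrix values taken from the example above):

```python
import numpy as np

# Each X[i] = q[i](A) should satisfy h(X) = X^2 - 5X + 7I = A.
A = np.array([[2., 1.], [1., 2.]])
I = np.eye(2)

qs = [lambda M: -M / 2 + 5 * I / 2,   # q[1](x) = -x/2 + 5/2
      lambda M: M + I,                # q[2](x) = x + 1
      lambda M: -M + 4 * I,           # q[3](x) = -x + 4
      lambda M: M / 2 + 5 * I / 2]    # q[4](x) = x/2 + 5/2

for q in qs:
    X = q(A)
    print(np.allclose(X @ X - 5 * X + 7 * I, A))  # True (four times)
```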
4.2 Case 2.

All the eigenvalues λ1, λ2, . . . , λν of the matrix A have algebraic multiplicity 1 and each of the equations h(x) = λi has mi different roots ρi1, ρi2, . . . , ρimi with algebraic multiplicities αi1, αi2, . . . , αimi respectively, where i = 1, 2, . . . , ν. In this case the number of different roots of the polynomial matrix equation is m = m1 · m2 · . . . · mν. Each of the above m matrix roots has as algebraic multiplicity the product of the algebraic multiplicities of the ordinates in the corresponding interpolating data. This means that if {(λ1, ρ1j1), (λ2, ρ2j2), . . . , (λν, ρνjν)} is a set of interpolating data, then the algebraic multiplicity of the produced matrix root X is α1j1 · α2j2 · . . . · ανjν.

Proof. There are ν different equations h(x) = λi, i = 1, 2, . . . , ν, with mi pairwise different roots each, so by the fundamental rule of counting there are m = m1 · m2 · . . . · mν different combinations, hence the equation has m different roots.
An illustrative example.

Let A = ( 4 3
          1 2 ).

We will solve the polynomial matrix equation X^3 − 3X^2 + 5I2 = A. Let h(x) = x^3 − 3x^2 + 5; then h^(1)(x) = 3x^2 − 6x and h^(2)(x) = 6x − 6. The characteristic polynomial of the matrix A is p(x) = x^2 − 6x + 5.
The eigenvalues of the matrix A are λ1 = 1, λ2 = 5 with algebraic multiplicities a1 = 1 and a2 = 1 respectively. The equation h(x) = λ1 has m1 = 2 different roots ρ11 = −1 and ρ12 = 2 with algebraic multiplicities α11 = 1 and α12 = 2 respectively. The equation h(x) = λ2 has m2 = 2 different roots ρ21 = 3 and ρ22 = 0 with algebraic multiplicities α21 = 1 and α22 = 2 respectively.
So the equation has n^ν = 3^2 = 9 roots. The number of the different roots is m = m1 · m2 = 2 · 2 = 4.
The matrix of the interpolation data is

( λ1 = 1  ρ11 = −1  ρ12 = 2
  λ2 = 5  ρ21 = 3   ρ22 = 0 ).

The roots of the equation are:

(1) The interpolating polynomial for which q[1](1) = ρ11 = −1, q[1](5) = ρ22 = 0 is q[1](x) = x/4 − 5/4, and the corresponding matrix root is

X[1] = q[1](A) = ( −1/4  3/4
                   1/4   −3/4 ).

(2) The interpolating polynomial for which q[2](1) = ρ11 = −1, q[2](5) = ρ21 = 3 is q[2](x) = x − 2, and the corresponding matrix root is

X[2] = q[2](A) = ( 2 3
                   1 0 ).

(3) The interpolating polynomial for which q[3](1) = ρ12 = 2, q[3](5) = ρ22 = 0 is q[3](x) = −x/2 + 5/2, and the corresponding matrix root is

X[3] = q[3](A) = ( 1/2   −3/2
                   −1/2  3/2 ).

(4) The interpolating polynomial for which q[4](1) = ρ12 = 2, q[4](5) = ρ21 = 3 is q[4](x) = x/4 + 7/4, and the corresponding matrix root is

X[4] = q[4](A) = ( 11/4  3/4
                   1/4   9/4 ).
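As in Case 1, the four roots can be verified numerically; a minimal numpy sketch (values taken from the example above):

```python
import numpy as np

# Each X[i] = q[i](A) should satisfy h(X) = X^3 - 3X^2 + 5I = A.
A = np.array([[4., 3.], [1., 2.]])
I = np.eye(2)

qs = [lambda M: M / 4 - 5 * I / 4,    # q[1](x) = x/4 - 5/4
      lambda M: M - 2 * I,            # q[2](x) = x - 2
      lambda M: -M / 2 + 5 * I / 2,   # q[3](x) = -x/2 + 5/2
      lambda M: M / 4 + 7 * I / 4]    # q[4](x) = x/4 + 7/4

for q in qs:
    X = q(A)
    print(np.allclose(X @ X @ X - 3 * (X @ X) + 5 * I, A))  # True (four times)
```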
4.3 Case 3.

There exists at least one eigenvalue λi of the matrix A with algebraic multiplicity greater than 1. In this case we do not know the exact number of the roots of the equation. Let the matrix A have s different eigenvalues λ1, λ2, . . . , λs, where s < ν, with algebraic multiplicities α1, α2, . . . , αs respectively. If the equation h(x) = λi has mi different roots ρi1, ρi2, . . . , ρimi with algebraic multiplicities αi1, αi2, . . . , αimi respectively, i = 1, 2, . . . , s, then the number of different roots of the polynomial matrix equation is at least m = m1 · m2 · . . . · ms.

Proof. See Case 2.
An illustrative example 1.

Let A = ( 4 0
          0 4 ).

We will solve the polynomial matrix equation X^2 = A. Let h(x) = x^2. It is n = 2 (degree of the polynomial h(x)) and ν = 2 (dimension of the matrix A). The characteristic polynomial of the matrix A is p(x) = (x − 4)^2. The matrix A has only one eigenvalue λ1 = 4 with algebraic multiplicity a1 = 2. The equation h(x) = λ1 has the roots ρ11 = 2 and ρ12 = −2 with algebraic multiplicities α11 = 1 and α12 = 1 respectively, and m1 = 2.
So the given equation has an undefined number of roots. Let h1(x) = 1/h^(1)(x); then the matrix of the interpolation data is

( λ1 = 4  ρ11 = 2   h1(ρ11) = 1/4
  λ1 = 4  ρ12 = −2  h1(ρ12) = −1/4 ).

The roots of the equation are:

(1) The interpolating polynomial for which q[1](4) = ρ11 = 2, q^(1)[1](4) = h1(ρ11) = 1/4 is q[1](x) = x/4 + 1, and the corresponding matrix root is

X[1] = q[1](A) = ( 2 0
                   0 2 ).

(2) The interpolating polynomial for which q[2](4) = ρ12 = −2, q^(1)[2](4) = h1(ρ12) = −1/4 is q[2](x) = −x/4 − 1, and the corresponding matrix root is

X[2] = q[2](A) = ( −2  0
                   0   −2 ).

It can be verified that the matrix equation X^2 = A also has as roots the matrices

( 2  a        ( −2  a
  0  −2 ),     0   2 ),  ∀a ∈ ℝ.

So the equation has an infinite number of roots.
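The one-parameter family of extra roots can be confirmed symbolically; a short sympy sketch for the first family (the second is analogous):

```python
import sympy as sp

# Check that [[2, a], [0, -2]] squares to A = 4*I for every real a.
a = sp.symbols('a', real=True)
A = sp.diag(4, 4)
X = sp.Matrix([[2, a], [0, -2]])
print(sp.simplify(X * X - A))  # Matrix([[0, 0], [0, 0]])
```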
An illustrative example 2.
Let A = ( 20   −17  3   −24
          −5   8    −1  7
          −8   8    1   11
          15   −15  3   −18 ).

We will solve the polynomial matrix equation X^2 + 2X + 3I4 = A.
Let h(x) = x^2 + 2x + 3; then h^(1)(x) = 2x + 2, h^(2)(x) = 2. The characteristic polynomial of the matrix A is p(x) = (x − 2)(x − 3)^3. The eigenvalue λ1 = 2 has algebraic multiplicity a1 = 1. The eigenvalue λ2 = 3 has algebraic multiplicity a2 = 3, so the given equation has an undefined number of roots. The equation h(x) = λ1 has m1 = 1 root, ρ11 = −1, with algebraic multiplicity α11 = 2. The equation h(x) = λ2 has m2 = 2 roots, ρ21 = 0 and ρ22 = −2.
The matrix of the interpolation data (with h1(x) = 1/h^(1)(x)) is

( λ1 = 2  ρ11 = −1
  λ2 = 3  ρ21 = 0   h1(ρ21) = 1/2   h1^(1)(ρ21) = −1/2
  λ2 = 3  ρ22 = −2  h1(ρ22) = −1/2  h1^(1)(ρ22) = −1/2 ).
Two roots of the given equation are:

(1) The interpolating polynomial for which q[1](2) = ρ11 = −1, q[1](3) = ρ21 = 0, q^(1)[1](3) = h1(ρ21) = 1/2, q^(2)[1](3) = h1^(1)(ρ21) = −1/2 is

q[1](x) = x^3/4 − 5x^2/2 + 35x/4 − 21/2,

and the corresponding matrix root is

X[1] = q[1](A) = ( 27/2   −27/2  13/2  −17
                   −4     4      −2    5
                   −13/2  13/2   −7/2  8
                   12     −12    6     −15 ).
(2) The interpolating polynomial for which q[2](2) = ρ11 = −1, q[2](3) = ρ22 = −2, q^(1)[2](3) = h1(ρ22) = −1/2, q^(2)[2](3) = h1^(1)(ρ22) = −1/2 is

q[2](x) = −3x^3/4 + 13x^2/2 − 77x/4 + 35/2,

and the corresponding matrix root is

X[2] = q[2](A) = ( −31/2  27/2   −13/2  17
                   4      −6     2      −5
                   13/2   −13/2  3/2    −8
                   −12    12     −6     13 ).
The given equation may also have other matrix roots.
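The first of these roots can be verified numerically; a minimal numpy sketch (A and q[1] taken from the example above, and q[2] can be checked the same way):

```python
import numpy as np

A = np.array([[20., -17., 3., -24.],
              [-5., 8., -1., 7.],
              [-8., 8., 1., 11.],
              [15., -15., 3., -18.]])
I = np.eye(4)

# q[1](x) = x^3/4 - 5x^2/2 + 35x/4 - 21/2, evaluated at the matrix A.
X1 = A @ A @ A / 4 - 5 * (A @ A) / 2 + 35 * A / 4 - 21 * I / 2

# X1 should satisfy h(X1) = X1^2 + 2*X1 + 3I = A.
print(np.allclose(X1 @ X1 + 2 * X1 + 3 * I, A))  # True
```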
4.4 Case 4.

There exists at least one eigenvalue λ of the matrix A with algebraic multiplicity greater than or equal to two, for which the equation h(x) = λ has a root ρ whose algebraic multiplicity is also greater than or equal to two. In this case the algorithm cannot be applied. The polynomial matrix equation is impossible or has an unknown number of roots.

Proof. It is h^(1)(ρ) = 0, so the corresponding interpolation data do not exist.
An illustrative example 1.

Let A = ( 0 1
          0 0 ).

We want to solve the polynomial matrix equation X^2 = A. Let h(x) = x^2. It is n = 2 (degree of the polynomial h(x)) and ν = 2 (dimension of the matrix A). The characteristic polynomial of the matrix A is p(x) = x^2.
The matrix A has one eigenvalue λ1 = 0 with algebraic multiplicity a1 = 2. The equation h(x) = λ1 has the root ρ11 = 0 with algebraic multiplicity α11 = 2. So our algorithm cannot be applied, and we must examine directly whether the equation has a solution. It is easy to verify that the equation has no solution, therefore it is impossible.
An illustrative example 2.

Let A = ( 4 0 0
          0 0 0
          0 0 0 ).

We want to solve the polynomial matrix equation X^2 = A. Let h(x) = x^2. It is n = 2 (degree of the polynomial h(x)) and ν = 3 (dimension of the matrix A). The characteristic polynomial of the matrix A is p(x) = x^2(x − 4).
The matrix A has as eigenvalues the numbers λ1 = 0 with algebraic multiplicity a1 = 2 and λ2 = 4 with algebraic multiplicity a2 = 1. The equation h(x) = λ1 has the root ρ11 = 0 with algebraic multiplicity α11 = 2. The equation h(x) = λ2 has the roots ρ21 = 2 and ρ22 = −2, each with algebraic multiplicity 1. Hence our algorithm cannot be applied, and we must examine directly whether the equation has a solution. It is easy to verify that the matrices

( ±2  0  0        ( ±2  0  0
  0   0  a          0   0  0
  0   0  0 ),       0   a  0 ),  ∀a ∈ ℝ,

are roots of the polynomial matrix equation X^2 = A.
5. Algorithm

Our paper is completed by presenting the algorithm that arises from the previous examples, in the cases where the equation has a finite number of roots.

Step 1. Calculation of the pairwise different eigenvalues λ1, λ2, . . . , λs and their algebraic multiplicities α1, α2, . . . , αs. Let k be the biggest of the above algebraic multiplicities.

Step 2. The function h1(x) = 1/h^(1)(x) is defined and its derivatives are calculated up to order k.

Step 3. Solution of the equations h(x) = λi, for i = 1, 2, . . . , s; their different roots are denoted by ρij, i = 1, 2, . . . , s, j = 1, 2, . . . , mi.

Step 4. The definition of the interpolating data

d = Table[ ( λi, ρij, h1(ρij), h1^(1)(ρij), . . . , h1^(αi−2)(ρij) ) ], i = 1, 2, . . . , s, j = 1, 2, . . . , mi.

We find the corresponding interpolating polynomials q(x).

Step 5. The matrices X = q(A) are solutions of the given polynomial matrix equation.
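The steps above can be sketched in sympy for the simplest situation, a simple matrix A with pairwise different eigenvalues (Case 1); the function name and interface below are our own, and derogatory matrices (which need derivative data) are not handled:

```python
import sympy as sp
from itertools import product

def polynomial_matrix_roots(h, x, A):
    """All roots X = q(A) of h(X) = A for a simple matrix A with pairwise
    different eigenvalues, via interpolation of h-preimages of the eigenvalues."""
    lams = sorted(A.eigenvals().keys(), key=sp.default_sort_key)  # Step 1
    preimages = [sp.solve(sp.Eq(h, lam), x) for lam in lams]      # Step 3
    roots = []
    for rhos in product(*preimages):                              # Steps 4-5
        q = sp.interpolate(list(zip(lams, rhos)), x)
        Q = sp.zeros(*A.shape)
        for c in sp.Poly(q, x).all_coeffs():  # Horner evaluation of q(A)
            Q = Q * A + c * sp.eye(A.shape[0])
        roots.append(Q)
    return roots

# The Case 1 example: X^2 - 5X + 7I = A has n^nu = 4 roots.
x = sp.symbols('x')
A = sp.Matrix([[2, 1], [1, 2]])
h = x**2 - 5*x + 7
roots = polynomial_matrix_roots(h, x, A)
print(len(roots))  # 4
```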
6. Conclusion
In this paper an algorithm for solving the nth degree polynomial matrix equation is developed. Formulae are derived for calculating the number of the equation's roots. Furthermore, the cases where the equation has no roots or has an infinite number of roots are presented.
The results obtained from this work are the necessary and sufficient tools to solve and study the nth degree polynomial matrix equation.
References

[1] N. J. Higham, Functions of Matrices: Theory and Computation, Society for Industrial and Applied Mathematics, Philadelphia (USA), 2008.
[2] B. Iannazzo, On the Newton method for the matrix n-th root, SIAM J. Matrix Anal. Appl. 28:2 (2006), pp.503-523.
[3] P. Psarrakos, On the n-th roots of a complex matrix, Electron. J. Linear Algebra 9 (2002), pp.32-41.
[4] C. Guo and N. J. Higham, A Schur-Newton method for the matrix pth root and its inverse, SIAM J. Matrix Anal. Appl. 28:3 (2006), pp.788-804.
[5] S. Lakic, On the computation of the matrix k-th root, Z. Angew. Math. Mech. 78:3 (1998), pp.167-172.
[6] N. J. Higham, Computing real square roots of a real matrix, Linear Algebra Appl. 88/89 (1987), pp.405-430.
[7] G. Alefeld and N. Schneider, On square roots of M-matrices, Linear Algebra Appl. 42 (1982), pp.119-132.
[8] G. Cross and P. Lancaster, Square roots of complex matrices, Linear and Multilinear Algebra 1 (1974), pp.289-293.
[9] A. Bjorck and S. Hammarling, A Schur method for the square root of a matrix, Linear Algebra Appl. 52/53 (1983), pp.127-140.
[10] D. Sullivan, The square roots of 2×2 matrices, Math. Mag. 66 (1993), pp.314-316.
[11] T. Arponen, Matrix approach to polynomials 2, Linear Algebra Appl. 394 (2005), pp.257-276.
[12] T. Arponen, Matrix approach to polynomials, Linear Algebra Appl. 359 (2003), pp.181-196.
[13] A. Choudhry, Extraction of nth roots of 2×2 matrices, Linear Algebra Appl. 387 (2004), pp.183-192.
[14] D. Bini, N. J. Higham and B. Meini, Algorithms for the matrix pth root, Numer. Algorithms 39:4 (2005), pp.349-378.
[15] P. Lancaster and M. Tismenetsky, The Theory of Matrices, Academic Press, San Diego (USA), 1985.