Transcript
Vladimir Protasov (Moscow State University)
Perron-Frobenius theory for matrix semigroups
Let $A_1, \dots, A_m$ be nonnegative $d \times d$ matrices. Do they have a product which is strictly positive?

NO: the matrices may have a common invariant subspace:
$$A_i = \begin{pmatrix} B_i & C_i \\ 0 & D_i \end{pmatrix}, \qquad i = 1, \dots, m.$$
Every product $A_{d_1} \cdots A_{d_k}$ of these matrices has the same block-triangular form $\Rightarrow$ there are no positive products.
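One multiplication makes the obstruction explicit:
$$\begin{pmatrix} B_1 & C_1 \\ 0 & D_1 \end{pmatrix} \begin{pmatrix} B_2 & C_2 \\ 0 & D_2 \end{pmatrix} \;=\; \begin{pmatrix} B_1 B_2 & B_1 C_2 + C_1 D_2 \\ 0 & D_1 D_2 \end{pmatrix},$$
so by induction the lower-left block of every product stays zero, and no product can be strictly positive.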
Let us assume that the family $A_1, \dots, A_m$ is irreducible. Does a positive product exist? No!

Example: $A_1, \dots, A_m$ are all permutation matrices, e.g.
$$A_i = \begin{pmatrix} 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ 1 & 0 & \cdots & 0 \end{pmatrix}, \qquad i = 1, \dots, m.$$
Then every product $A_{d_1} \cdots A_{d_k}$ is again a permutation matrix, so it is never positive.
The same happens if $A_1, \dots, A_m$ are only equivalent to permutation matrices, i.e., each $A_i$ has the zero pattern of a permutation matrix with positive entries $a_1, \dots, a_d$:
$$A_i = \begin{pmatrix} 0 & a_1 & 0 & \cdots & 0 \\ 0 & 0 & a_2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & a_{d-1} \\ a_d & 0 & 0 & \cdots & 0 \end{pmatrix}, \qquad i = 1, \dots, m.$$
All products $A_{d_1} \cdots A_{d_k}$ are also equivalent to permutation matrices $\Rightarrow$ there are no positive products.
More generally: $A_1, \dots, A_m$ constitute permutations of the same partition of the set $\Omega = \{1, \dots, d\}$. Let $\Omega = \bigcup_{j=1,\dots,r} \Omega_j$ be a partition of the set $\Omega$ into $r$ disjoint sets, and let each matrix $A_i$ act as a permutation of the sets $\Omega_1, \dots, \Omega_r$. Then every product again permutes the parts, hence keeps zero blocks $\Rightarrow$ there are no positive products.
[Slide figure: the set $\Omega = \{1, \dots, 8\}$ is partitioned into four parts $\Omega_j$, $j = 1, \dots, 4$, and each $A_i$ permutes these parts.]

Example ($d = 3$, partition $\Omega_1 = \{1\}$, $\Omega_2 = \{2, 3\}$):
$$A_1 = \begin{pmatrix} 0 & * & * \\ * & 0 & 0 \\ * & 0 & 0 \end{pmatrix}; \qquad A_2 = \begin{pmatrix} * & 0 & 0 \\ 0 & * & * \\ 0 & * & * \end{pmatrix}$$
($*$ denotes a positive entry). The family $\{A_1, A_2\}$ has no positive products.
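A quick brute-force check of this example (a sketch assuming numpy; each $*$ is replaced by 1, which does not change the zero pattern):

```python
import itertools
from functools import reduce
import numpy as np

# The two matrices from the example, with every "*" set to 1.
A1 = np.array([[0, 1, 1],
               [1, 0, 0],
               [1, 0, 0]])
A2 = np.array([[1, 0, 0],
               [0, 1, 1],
               [0, 1, 1]])

# Check every product of length up to L for strict positivity.
L = 8
found = any(
    (reduce(np.matmul, word) > 0).all()
    for length in range(1, L + 1)
    for word in itertools.product([A1, A2], repeat=length)
)
print("positive product found:", found)   # expected: False
```

The zero pattern of a product depends only on the zero patterns of the factors, so this numeric check is representative of the original starred matrices.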
Question 1. How to characterize all families that have positive products ?
Question 2. Is it possible to decide the existence of a positive product
within polynomial time ?
An ''opposite'' problem: do nonnegative $d \times d$ matrices $A_1, \dots, A_m$ have a zero product?
This problem is NP-complete already for $m = 2$ (Blondel, Tsitsiklis, 1997).
The case of one matrix (m=1)
If a family {A} possesses a positive product, then some power of A is positive.
Definition 1. A matrix is called primitive if it has a strictly positive power.
Primitive matrices share important spectral and dynamical properties with positive matrices
and have been studied extensively.
[Slide illustration: a nonnegative matrix $A$ with a sparse pattern of positive entries $*$ and zeros, alongside a power $A^N$ in which every entry is positive.]
Perron-Frobenius theorem (1912). A matrix A is not primitive if and only if it is either reducible or one of the following equivalent conditions is satisfied:
1) there are $r \ge 2$ eigenvalues of $A$, counting with multiplicities, equal in modulus to its spectral radius $\rho(A)$; they are all different and equal to $\lambda_k = \rho\, e^{2\pi i k / r}$, $k = 0, \dots, r-1$;
2) (Romanovsky, 1933) all cycles of the graph of the matrix $A$ have g.c.d. $= r$;
3) there is a partition of the set $\Omega = \{1, \dots, d\}$ into $r$ sets $\Omega_1, \dots, \Omega_r$, on which $A$ acts as a cyclic permutation;
4) in the basis corresponding to that partition, $A$ has the form
$$A = \begin{pmatrix} 0 & B_1 & 0 & \cdots & 0 \\ 0 & 0 & B_2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & B_{r-1} \\ B_r & 0 & 0 & \cdots & 0 \end{pmatrix}.$$
The number $r$ is the imprimitivity index of the matrix $A$.
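A minimal illustration with $d = 2$: the permutation matrix
$$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
is irreducible but not primitive: it has $r = 2$ eigenvalues $\pm 1$ of modulus $\rho(A) = 1$, every cycle of its graph has length 2, it cyclically permutes the partition $\Omega_1 = \{1\}$, $\Omega_2 = \{2\}$, and it is already in the block form above with $B_1 = B_2 = (1)$.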
- If a matrix $A$ is primitive, then $A^N > 0$ whenever $N \ge (d-1)^2 + 1$ (H. Wielandt, 1950). Proofs: Rosenblatt, Holladay and Varga, Ptak and Dionisio, Seneta.
- The primitivity of a matrix can be decided within polynomial time.
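A minimal sketch of such a polynomial-time test, based directly on the Wielandt bound (Python with numpy is an assumption here, not part of the talk):

```python
import numpy as np

def is_primitive(A: np.ndarray) -> bool:
    """Decide primitivity of a nonnegative matrix A.

    By Wielandt's bound, A is primitive iff A^N > 0 for N = (d-1)^2 + 1.
    Only the zero pattern matters, so we multiply 0/1 patterns to avoid
    overflow; each step costs O(d^3), so the whole test is polynomial.
    """
    d = A.shape[0]
    B = (A > 0).astype(int)          # zero pattern of A
    P = np.eye(d, dtype=int)         # pattern of A^0
    for _ in range((d - 1) ** 2 + 1):
        P = ((P @ B) > 0).astype(int)
    return bool(P.all())

print(is_primitive(np.array([[0, 1], [1, 1]])))   # True: A^2 is already positive
print(is_primitive(np.array([[0, 1], [1, 0]])))   # False: a permutation matrix
```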
Can these results be generalized to families of m matrices
or to multiplicative matrix semigroups ?
One of the ways – strongly primitive families.
Definition 2. A family $\{A_1, \dots, A_m\}$ is called strongly primitive if there is $N$ such that every product $A_{d_1} \cdots A_{d_k}$ of length $k \ge N$ is positive.
Strongly primitive families have been studied in many works.
Applications: inhomogeneous Markov chains, products of random matrices,
probabilistic automata, weak ergodicity in mathematical demography.
There is no generalization of Perron-Frobenius theory to strongly primitive families.
The algorithmic complexity of deciding the strong primitivity of a matrix family is unclear; most likely, it is not polynomial.
Let $N$ be the least integer such that all products of length $N$ are positive. There are families of $d \times d$ matrices for which $N$ is exponential in $d$, of order $2^d$ (Cohen, Sellers, 1982; Wu, Zhu, 2015) (compare with $N = (d-1)^2 + 1$ for one matrix).
Another generalization: the concept of primitive families.
Definition 3. A family of matrices is called primitive if there exists
at least one positive product.
Justification.
If the matrices of the family have neither zero columns nor zero rows,
then almost all long products are positive.
Let $g(k)$ be the number of positive products among all $m^k$ products $A_{d_1} \cdots A_{d_k}$ of length $k$. Then
$$\frac{g(k)}{m^k} \;\to\; 1 \quad \text{as } k \to \infty.$$
If each matrix $A_{d_k}$ in the product $A_{d_1} \cdots A_{d_k}$ is chosen from the set $\{A_1, \dots, A_m\}$ independently with probability $p = 1/m$, then the product is positive with probability 1 (as $k \to \infty$).
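To illustrate the claim that $g(k)/m^k \to 1$, here is a small Monte Carlo sketch; the toy pair of matrices and the sampling code are illustrative assumptions, not taken from the talk:

```python
from functools import reduce
import numpy as np

rng = np.random.default_rng(1)

# A toy primitive pair with no zero rows or columns (illustrative only).
A = [np.array([[1, 1], [1, 0]]),
     np.array([[0, 1], [1, 1]])]

def positive_fraction(matrices, k, samples=2000):
    """Monte Carlo estimate of g(k)/m^k: the share of positive products of length k."""
    hits = 0
    for _ in range(samples):
        word = rng.integers(len(matrices), size=k)
        P = reduce(np.matmul, [matrices[i] for i in word])
        hits += int((P > 0).all())
    return hits / samples

for k in (1, 2, 5, 10):
    print(k, positive_fraction(A, k))   # the fraction grows toward 1 as k increases
```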
Question 1. How to characterize primitive families ? Can the Perron-Frobenius theory be somehow generalized to primitive families ? Is it true that if the family is not primitive, then all $A_1, \dots, A_m$ constitute permutations of the same partition $\Omega = \bigcup_{j=1,\dots,r} \Omega_j$ of the set $\Omega = \{1, \dots, d\}$?
Question 2. Is it possible to decide the existence of a positive product within polynomial time ?
The answers to both questions are affirmative (under some mild assumptions on the matrices).
The main results
Consider a family $\{A_1, \dots, A_m\}$ of nonnegative matrices. We make the following two assumptions:
(a) the family is irreducible, i.e., the matrices $A_1, \dots, A_m$ do not share a common invariant coordinate plane;
(b) all matrices of the family have neither zero rows nor zero columns.
Theorem 1 [P., Voynov, 2012]. If a family of matrices satisfies (a) and (b), then it either possesses a positive product, or there is a partition of the set $\Omega = \{1, \dots, d\}$ into $r \ge 2$ sets $\Omega_1, \dots, \Omega_r$, on which every matrix $A_j$ acts as a permutation.
(Conjectured in 2010.)
Remark 1. This means that there is a permutation of the basis of $\mathbb{R}^d$ after which every matrix $A_j$ gets the block form
$$A_j = \begin{pmatrix} 0 & \cdots & 0 & B_1 \\ 0 & \cdots & B_2 & 0 \\ \vdots & & & \vdots \\ B_r & \cdots & 0 & 0 \end{pmatrix}$$
(exactly one nonzero block in each block row and each block column).
Remark 2. The permutations are not necessarily cyclic ! (In contrast to the P-F theorem.)
Proofs of Theorem 1:
(2012) P., Voynov: by applying the geometry of affine maps of convex polyhedra.
A call for purely combinatorial proofs followed:
(2013) Alpin, Alpina: combinatorial proof.
(2014) Blondel, Jungers, Olshevsky: combinatorial proof.
(2015) P., Voynov: by applying functional difference equations.
Theorem 2. Among all the partitions $\Omega = \bigcup_j \Omega_j$, there is a unique partition with the maximal number of parts $r$; this $r$ is the imprimitivity index of the family. This ``canonical'' partition can be found by an algorithm spending $2md^3$ arithmetic operations.
Remark 3. In particular, the family is primitive $\Leftrightarrow$ $r = 1$. Hence, that algorithm also decides the existence of a positive product.
Remark 4. The size of the instance of the algorithm is $N = md^2$, so its complexity is less than $2N^{3/2}$. Computing the product of two $d \times d$ matrices takes $C\, d^{2.376}$ operations ($C\, d^3$ in practice), so the complexity estimate for deciding primitivity can hardly be improved significantly.
Remark 5. The imprimitivity index $r$ is also related to the multiplicity of the largest-in-modulus eigenvalues of the matrices (as in the P-F theorem).
Theorem 3. For a family of stochastic matrices, the imprimitivity index $r$ equals the minimal total multiplicity of the largest-in-modulus eigenvalues in the matrix semigroup generated by this family. However, now the leading eigenvalues are not necessarily the roots of unity $\lambda_k = e^{2\pi i k / r}$, $k = 0, \dots, r-1$.
Recall that all these results hold under the assumptions that the family $\{A_1, \dots, A_m\}$ satisfies:
(a) the family is irreducible, i.e., the matrices $A_1, \dots, A_m$ do not share a common invariant coordinate plane;
(b) all matrices of the family have neither zero rows nor zero columns.
Remark 6. Both assumptions (a) and (b) are essential. For (a) this is obvious; for (b) there are examples.
Example 1. The set of four matrices
$$A_1 = \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix};\quad A_2 = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix};\quad A_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix};\quad A_4 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{pmatrix}$$
is not primitive; however, it does not have a partition.
Reason: condition (b) is violated (all the matrices have zero rows).
(Blondel, Jungers, Olshevsky, 2014) Without condition (b), the problem of deciding primitivity is NP-hard.
What about the minimal length N of the positive product ?
(Voynov, 2013) If a positive product exists, then $N \le \frac{1}{2} d^3 + O(d^2)$.
(Blondel, Jungers, Olshevsky, 2014) If the length of the shortest synchronizing word in a deterministic automaton is $f(d)$, then $N \le 2 f(d) + d - 1$.
(Pin, 1983) $f(d) \le \frac{d^3 - d}{6}$.
This gives the estimate $N \le \frac{1}{3} d^3 + o(d^2)$.
Problem. Find a sharp estimate for $N$. By now we know only that
$$(d-1)^2 + 1 \;\le\; N \;\le\; \frac{1}{3} d^3 + o(d^2).$$
Another generalization: m-primitive families
Fornasini, Valcher (1997), Olesky, Shader, van den Driessche (2002), etc.
Definition 4. A family $A_1, \dots, A_m$ is called m-primitive if there are numbers $k_1, \dots, k_m$ such that the sum of all products of length $N = \sum_i k_i$ with $k_1$ matrices $A_1$, $k_2$ matrices $A_2$, ..., $k_m$ matrices $A_m$ is positive.
Definition 5. The Hurwitz product $A^{(k_1,\dots,k_m)}$ corresponding to a ''color vector'' $(k_1, \dots, k_m)$ is the sum of all products with $k_1$ matrices $A_1$, $k_2$ matrices $A_2$, ..., $k_m$ matrices $A_m$.
Example. $A^{(2,0,1)} = A_1^2 A_3 + A_1 A_3 A_1 + A_3 A_1^2$.
$$(t_1 A_1 + \dots + t_m A_m)^N \;=\; \sum_{k_1 + \dots + k_m = N} A^{(k_1,\dots,k_m)}\, t_1^{k_1} \cdots t_m^{k_m}.$$
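A brute-force sketch (numpy assumed; the matrices carry illustrative values, not values from the talk) that forms a Hurwitz product by enumerating all orderings of the color vector and checks the example above:

```python
import itertools
from functools import reduce
import numpy as np

def hurwitz_product(matrices, colors):
    """Sum of all products using matrices[i] exactly colors[i] times, over all orderings."""
    d = matrices[0].shape[0]
    letters = [i for i, c in enumerate(colors) for _ in range(c)]
    total = np.zeros((d, d))
    for word in set(itertools.permutations(letters)):   # distinct orderings of the multiset
        total += reduce(np.matmul, [matrices[i] for i in word])
    return total

# Toy nonnegative 2x2 matrices (illustrative values only).
A1 = np.array([[1.0, 1.0], [0.0, 1.0]])
A2 = np.array([[1.0, 0.0], [1.0, 1.0]])
A3 = np.array([[0.0, 1.0], [1.0, 0.0]])

H = hurwitz_product([A1, A2, A3], (2, 0, 1))
assert np.allclose(H, A1 @ A1 @ A3 + A1 @ A3 @ A1 + A3 @ A1 @ A1)
```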
The family is m-primitive if it has at least one positive Hurwitz product
Applications for graphs and for multivariate (2D, 3D, etc.) Markov chains.
The complexity of recognition of m-primitive families was unclear.
There is a criterion, which is highly non-polynomial.
For the minimal length $N$ of products we have $N \le C\, d^{m+1}$ (Olesky, Shader, van den Driessche, 2002).
Our approach can be extended to m-primitive families.
Theorem 5 [P., 2013]. If all matrices $A_j$ of the family have no zero columns or (!) no zero rows, then the family is either m-primitive, or there is a partition of the set $\Omega = \{1, \dots, d\}$ into $r \ge 2$ sets $\Omega_1, \dots, \Omega_r$, on which all matrices $A_j$ act as commuting (!) permutations.
The proof is algebraic; it uses the theory of abelian groups.
Theorem 6. Under the assumptions of Theorem 5, the m-primitivity of a set of matrices is decidable within polynomial time. The complexity is $2md^3 + \frac{1}{2} m^2 d^2$ arithmetic operations.
Applications of primitivity of matrix families
inhomogeneous Markov chains
products of random matrices, Lyapunov exponents,
probabilistic automata,
refinement functional equations
mathematical ecology (succession models for plants)
Products of random matrices, Lyapunov exponents
Let $\{A_1, \dots, A_m\}$ be a given family of arbitrary matrices. Consider an infinite product $A_{d_1} \cdots A_{d_k} \cdots$, where each matrix $A_{d_k}$ is chosen from the family $\{A_1, \dots, A_m\}$ independently with probability $p = 1/m$. Every choice is independent with equal probability $1/m$ (the simplest model).
The ``mean rate of growth'', i.e., the value $\| A_{d_1} \cdots A_{d_k} \|^{1/k}$, converges to a constant $\lambda$ with probability 1 (Furstenberg, Kesten, 1960). This is a matrix analogue of the law of large numbers.
The value $\sigma = \log_2 \lambda$ is called the Lyapunov exponent of the family $\{A_1, \dots, A_m\}$. $\lambda$ is the ``spectral radius'' of the family, and
$$\sigma \;=\; \lim_{k \to \infty} \frac{1}{k}\, \mathbb{E} \log_2 \| A_{d_1} \cdots A_{d_k} \|.$$
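The limit formula above suggests a straightforward Monte Carlo estimate. The following sketch (numpy assumed, toy matrices chosen purely for illustration) renormalizes the running product at every step to avoid overflow:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy primitive family (illustrative choice, not from the talk).
A = [np.array([[1.0, 1.0], [1.0, 0.0]]),
     np.array([[0.0, 1.0], [1.0, 1.0]])]

def lyapunov_estimate(matrices, k=5000, trials=20):
    """Crude Monte Carlo estimate of sigma = lim (1/k) E log2 ||A_{d_1} ... A_{d_k}||."""
    estimates = []
    for _ in range(trials):
        P = np.eye(matrices[0].shape[0])
        log_norm = 0.0
        for _ in range(k):
            P = P @ matrices[rng.integers(len(matrices))]
            n = np.linalg.norm(P)
            log_norm += np.log2(n)   # accumulate log2 of the product norm ...
            P /= n                   # ... while renormalizing to keep entries bounded
        estimates.append(log_norm / k)
    return float(np.mean(estimates))

print(lyapunov_estimate(A))   # prints a Monte Carlo estimate of the Lyapunov exponent
```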
This result was significantly strengthened by V. Oseledec (multiplicative ergodic theorem, 1968).
The main problems:
- to characterize the convergence $\| A_{d_1} \cdots A_{d_k} \|^{1/k} \to \lambda$;
- to compute or estimate $\lambda$ for a given matrix family.
The problem of computing the Lyapunov exponent is algorithmically undecidable (Blondel, Tsitsiklis, 2000).
In the case of nonnegative matrices there are good results on both problems:
1) an analogue of the central limit theorem for matrices (Watkins, 1986; Hennion, 1997; Ishitani, 1997);
2) efficient methods for estimating and computing the Lyapunov exponent (Key, 1990; Gharavi, Anantharam, 2005; Pollicott, 2010; Jungers, P., 2011).
All those results hold only for primitive families.
The existence of at least one positive product is always assumed in the literature ``to avoid pathological cases''.
Our Theorems 1 and 2 extend all those results to general families of nonnegative matrices.
Refinement equations with nonnegative coefficients
A refinement equation is a difference functional equation with a contraction of the argument:
$$\varphi(x) \;=\; \sum_{k=0}^{N} c_k\, \varphi(2x - k),$$
where $c_0, \dots, c_N$ is a sequence of complex numbers satisfying some constraints. Written out,
$$\varphi(x) \;=\; c_0 \varphi(2x) + c_1 \varphi(2x - 1) + \dots + c_N \varphi(2x - N).$$
This is a usual difference equation, but with the double contraction of the argument.
Applications: wavelet theory, approximation theory, subdivision algorithms, power random series, combinatorial number theory.
How to check the existence of a compactly supported solution ?
How to check if $\varphi \in L_p(\mathbb{R})$ in the case when all the coefficients are nonnegative ?
I. Daubechies, D. Lagarias, 1991; A. Cavaretta, W. Dahmen, C. Micchelli, 1991; C. Heil, D. Strang, 1994: criteria for $\varphi \in C(\mathbb{R})$, where the family of matrices $\{T_0, T_1\}$ is strongly primitive.
R.Q. Jia, 1995; K.S. Lau, J. Wang, 1995; Y. Wang, 1996: criteria for $\varphi \in L_p(\mathbb{R})$, where the family $\{T_0, T_1\}$ is primitive.
Here $T_0, T_1$ are $N \times N$ matrices with $(T_i)_{jk} = c_{2j - k - 1 + i}$.
Example. $N = 4$, coefficients $c_0, c_1, c_2, c_3, c_4$:
$$T_0 = \begin{pmatrix} c_0 & 0 & 0 & 0 \\ c_2 & c_1 & c_0 & 0 \\ c_4 & c_3 & c_2 & c_1 \\ 0 & 0 & c_4 & c_3 \end{pmatrix}, \qquad T_1 = \begin{pmatrix} c_1 & c_0 & 0 & 0 \\ c_3 & c_2 & c_1 & c_0 \\ 0 & c_4 & c_3 & c_2 \\ 0 & 0 & 0 & c_4 \end{pmatrix}.$$
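A small sketch (numpy assumed) that builds $T_0$ and $T_1$ from a coefficient sequence via the rule $(T_i)_{jk} = c_{2j-k-1+i}$; the numeric values of $c_k$ below are placeholders, not coefficients from the talk:

```python
import numpy as np

def transition_matrices(c):
    """Build the N x N matrices T0, T1 from coefficients c = [c_0, ..., c_N]
    via (T_i)_{jk} = c_{2j - k - 1 + i}, with out-of-range coefficients set to 0."""
    N = len(c) - 1
    def coef(idx):
        return c[idx] if 0 <= idx <= N else 0
    T = [np.array([[coef(2 * j - k - 1 + i) for k in range(1, N + 1)]
                   for j in range(1, N + 1)])
         for i in (0, 1)]
    return T[0], T[1]

# The N = 4 example above, with placeholder values c_k = k + 1 standing for c_0,...,c_4.
T0, T1 = transition_matrices([1, 2, 3, 4, 5])
print(T0)   # rows: (c0 0 0 0), (c2 c1 c0 0), (c4 c3 c2 c1), (0 0 c4 c3)
print(T1)   # rows: (c1 c0 0 0), (c3 c2 c1 c0), (0 c4 c3 c2), (0 0 0 c4)
```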
Conclusions
Thus, if a family of matrices is not primitive, then all its matrices
constitute permutations of the canonical partition.
The canonical partition can be found by a fast algorithm.
This allows us to extend many results on Lyapunov exponents to general
families of nonnegative matrices.
In particular, it allows us to construct an efficient algorithm for computing the Lyapunov exponents of nonnegative matrices.
Other applications: functional equations, succession models in mathematical ecology, etc.
Thank you!