Globally Optimal Estimates for
Geometric Reconstruction
Problems
Tom Gilat, Adi Lakritz
Advanced Topics in Computer Vision Seminar
Faculty of Mathematics and Computer Science
Weizmann Institute
3 June 2007
Outline
- Motivation and Introduction
- Background
  - Positive SemiDefinite matrices (PSD)
  - Linear Matrix Inequalities (LMI)
  - SemiDefinite Programming (SDP)
- Relaxations
  - Sum Of Squares (SOS) relaxation
  - Linear Matrix Inequalities (LMI) relaxation
- Application in vision
  - Finding optimal structure
  - Partial relaxation and Schur's complement
Motivation
Geometric reconstruction problems as polynomial optimization problems (POPs)

Triangulation problem in the L2 norm
Given: P1, ..., Pn, the 3×4 camera matrices, and x1 <-> x2 <-> ... <-> xn, corresponding image points.
Goal: estimate X, the point in the world that is the source of the image points.
[Figure: a scene point X seen as image points x1, x2, x3 in several views; with 2 views there is an exact solution, with more views it becomes an optimization problem.]
Triangulation problem in the L2 norm
Given: P1, ..., Pn, the 3×4 camera matrices, and x1 <-> x2 <-> ... <-> xn, corresponding image points.
Goal: estimate X, the point in the world that is the source of the image points.
[Figure: perspective camera i with axes x, y, z; the reprojected point x'i, the measured point xi, and the reprojection error err between them.]
Projection equation: λi x'i = Pi X, with depth λi > 0.
Triangulation problem in the L2 norm
Minimize the reprojection error in all cameras.
L2 error function: err(X) = Σ_{i=1..n} d(xi, Pi(X))²   (non-convex)
Domain: all points in front of all cameras, i.e. λi > 0   (convex)
min err(X) subject to λi > 0
This is a non-convex polynomial minimization problem.
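As a concrete illustration of the objective above, here is a small numpy sketch of the reprojection error err(X) (not from the original slides; the camera matrices P1, P2, the point X_true and the image points are made-up illustrative values):

```python
import numpy as np

def reprojection_error(X, cameras, points):
    """err(X) = sum_i d(x_i, P_i(X))^2 for a 3D point X (length-3 array)."""
    Xh = np.append(X, 1.0)                      # homogeneous coordinates
    err = 0.0
    for P, x in zip(cameras, points):
        proj = P @ Xh                           # lambda_i * (u_i, v_i, 1)
        err += np.sum((x - proj[:2] / proj[2]) ** 2)
    return err

# made-up cameras and a made-up scene point, for illustration only
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0])
imgs = [(P @ np.append(X_true, 1.0))[:2] / (P @ np.append(X_true, 1.0))[2] for P in (P1, P2)]
print(reprojection_error(X_true, [P1, P2], imgs))   # ~0 at the true point
```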
More computer vision problems
• Reconstruction problem: known cameras, known corresponding points;
  find the 3D points that minimize the projection error of the given image points
  – similar to triangulation, but for many points and cameras
• Calculating a homography: given 3D points on a plane and corresponding image points, calculate the homography
• Many more problems
Optimization problems
Introduction to optimization problems
minimize   f0(x)                      – objective function
subject to fi(x) ≤ bi, 1 ≤ i ≤ m      – constraints
where x ∈ R^n and fi : R^n → R for 1 ≤ i ≤ m.
The feasible set is the set of all vectors that satisfy the constraints.
x̃ is optimal if x̃ is in the feasible set and f0(x̃) is smallest among all vectors in the feasible set.
optimization problems
Can the sequence a1, ..., an be partitioned into two sets with equal sums?
Equivalently: is there x = (x1, ..., xn) ∈ {±1}^n such that Σ_i ai xi = 0?
Equivalently: is min_x f(x) = (Σ_i ai xi)² + Σ_i (xi² - 1)² equal to 0?
This problem is NP-complete.
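A tiny illustration of this encoding, with assumed values a = (1, 2, 3) that are not from the slides:

```python
a = (1, 2, 3)
x = (1, 1, -1)   # the partition {1, 2} vs {3}
f = sum(ai * xi for ai, xi in zip(a, x)) ** 2 + sum((xi ** 2 - 1) ** 2 for xi in x)
print(f)  # 0, so the sequence can be partitioned
```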
optimization problems
[Diagram: optimization problems split into convex and non-convex.
Convex: solutions exist, e.g. via interior point methods; examples are Linear Programming (LP) and SemiDefinite Programming (SDP).
Non-convex: algorithms find only a local optimum or have high computational cost.]
non convex optimization
With a non-convex feasible set, many algorithms get stuck in local minima.
[Figure: level curves of f over a non-convex feasible set, with an initialization point, a local max and a local min.]
optimization problems
[Diagram as before: convex problems (LP, SDP) are solved by interior point methods; non-convex problems give local optima or have high computational cost.]
A non-convex problem can be relaxed to a convex one (LP or SDP).
Global optimization: algorithms that converge to the optimal solution.
Outline
- Motivation and Introduction
- Background
  - Positive SemiDefinite matrices (PSD)
  - Linear Matrix Inequalities (LMI)
  - SemiDefinite Programming (SDP)
- Relaxations
  - Sum Of Squares (SOS) relaxation
  - Linear Matrix Inequalities (LMI) relaxation
- Application in vision
  - Finding optimal structure
  - Partial relaxation and Schur's complement
positive semidefinite (PSD) matrices
Definition: a matrix M in R^{n×n} is PSD if
1. M is symmetric: M = M^T
2. x^T M x ≥ 0 for all x ∈ R^n
This is denoted by M ⪰ 0.
M ⪰ 0 iff M can be decomposed as M = A A^T (Cholesky).
Proof:
(⇐) M = A A^T implies x^T M x = x^T A A^T x = (A^T x)^T (A^T x) = ||A^T x||² ≥ 0.
(⇒) follows from the spectral theorem and the nonnegative eigenvalues.
positive semidefinite (PSD) matrices
(Definition repeated: M ∈ R^{n×n} is PSD, denoted M ⪰ 0, if M = M^T and x^T M x ≥ 0 for all x ∈ R^n; equivalently M = A A^T.)
If M is of rank 1, then M = v v^T for some v ∈ R^n.
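A small numpy sketch of the two characterizations above (illustrative only; the factor A is a made-up example):

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """PSD test: symmetry plus nonnegative eigenvalues."""
    return np.allclose(M, M.T) and np.all(np.linalg.eigvalsh(M) >= -tol)

A = np.array([[1.0, 0.0],
              [2.0, 3.0]])
M = A @ A.T                       # by construction M = A A^T, hence PSD
print(is_psd(M))                  # True
print(np.linalg.cholesky(M))      # recovers a factor (M is positive definite here)
```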
principal minors
The kth order principal minors of an n×n symmetric matrix M are the determinants of the k×k matrices obtained by deleting n-k rows and the corresponding n-k columns of M.
For a 3×3 matrix M = [M11 M12 M13; M21 M22 M23; M31 M32 M33]:
- first order: the elements on the diagonal, M11, M22, M33
- second order: the three 2×2 determinants obtained by keeping two rows and the same two columns
- third order: det(M)
A matrix M ⪰ 0 iff all the principal minors of M are nonnegative.
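A brute-force numpy check of this criterion (illustrative only; in practice one tests eigenvalues instead; the 3×3 matrix below is a made-up example):

```python
import itertools
import numpy as np

def principal_minors_nonnegative(M, tol=1e-10):
    n = M.shape[0]
    for k in range(1, n + 1):
        for rows in itertools.combinations(range(n), k):
            # keep rows/columns `rows`, i.e. delete the other n-k rows and columns
            if np.linalg.det(M[np.ix_(rows, rows)]) < -tol:
                return False
    return True

M = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
print(principal_minors_nonnegative(M))     # True
print(np.all(np.linalg.eigvalsh(M) >= 0))  # True, agrees with the criterion
```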
Set of PSD matrices in 2D
[x y; y z] ⪰ 0   ⟺   x ≥ 0, z ≥ 0 and xz ≥ y²

Set of PSD matrices
This set is convex.
Proof:
M ⪰ 0, t ≥ 0  ⇒  for all x, x^T (tM) x = t · x^T M x ≥ 0  ⇒  tM ⪰ 0
M, L ⪰ 0  ⇒  for all x, x^T M x ≥ 0 and x^T L x ≥ 0  ⇒  x^T (M + L) x ≥ 0  ⇒  M + L ⪰ 0
so tM + (1-t)L ⪰ 0 for every t ∈ [0, 1].
LMI – linear matrix inequality
A(x) = A0 + Σ_{i=1..n} xi Ai ⪰ 0
where x = (x1, ..., xn) ∈ R^n and the Ai are k×k symmetric matrices.
A is an affine map from R^n to the k×k symmetric matrices.
The feasible set K = {x ∈ R^n | A(x) ⪰ 0} is convex.
Proof:
x, y ∈ K, t ∈ [0, 1]  ⇒  A(x), A(y) ⪰ 0  ⇒  tA(x) + (1-t)A(y) ⪰ 0  ⇒  A(tx + (1-t)y) ⪰ 0  ⇒  tx + (1-t)y ∈ K
LMI
Example: find the feasible set of the 2D LMI A(x) = A0 + x1 A1 + x2 A2 ⪰ 0 with
A0 = [1 0 0; 0 2 0; 0 0 1],   A1 = [-1 1 1; 1 0 0; 1 0 0],   A2 = [0 -1 0; -1 -1 0; 0 0 -1]
so that
A(x) = [ 1-x1    x1-x2   x1   ]
       [ x1-x2   2-x2    0    ]
       [ x1      0       1-x2 ]  ⪰ 0
Reminder: a matrix M ⪰ 0 iff all the principal minors of M are nonnegative.
LMI
Example continued: A(x) = [1-x1  x1-x2  x1; x1-x2  2-x2  0; x1  0  1-x2] ⪰ 0
1st order principal minors:
1 - x1 ≥ 0,   2 - x2 ≥ 0,   1 - x2 ≥ 0
LMI
2nd order principal minors:
(1 - x1)(2 - x2) - (x1 - x2)² ≥ 0,
(2 - x2)(1 - x2) ≥ 0,
(1 - x1)(1 - x2) - x1² ≥ 0
LMI
3rd order principal minor:
det A(x) = (1 - x2)((1 - x1)(2 - x2) - (x1 - x2)²) - x1²(2 - x2) ≥ 0
The feasible set is the intersection of all these inequalities.
[Figure: the resulting convex feasible set in the (x1, x2) plane.]
Semidefinite Programming (SDP)
SDP is an extension of LP:
LP:  minimize c^T x subject to Ax ≤ b
SDP: minimize c^T x subject to Ax ≤ b and B(x) ⪰ 0, where B(x) ⪰ 0 is an LMI
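A minimal cvxpy sketch of an SDP in this form, reusing the LMI example matrices A0, A1, A2 reconstructed above (illustrative and not from the slides; it assumes cvxpy with an SDP-capable solver such as SCS is installed, and the objective vector c is arbitrary):

```python
import numpy as np
import cvxpy as cp

A0 = np.array([[1.0, 0, 0], [0, 2, 0], [0, 0, 1]])
A1 = np.array([[-1.0, 1, 1], [1, 0, 0], [1, 0, 0]])
A2 = np.array([[0.0, -1, 0], [-1, -1, 0], [0, 0, -1]])

x = cp.Variable(2)
c = np.array([1.0, 1.0])                              # arbitrary illustrative objective
prob = cp.Problem(cp.Minimize(c @ x),
                  [A0 + x[0] * A1 + x[1] * A2 >> 0])  # the LMI constraint
prob.solve(solver=cp.SCS)
print(prob.value, x.value)
```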
Outline
- Motivation and Introduction
- Background
  - Positive SemiDefinite matrices (PSD)
  - Linear Matrix Inequalities (LMI)
  - SemiDefinite Programming (SDP)
- Relaxations
  - Sum Of Squares (SOS) relaxation
  - Linear Matrix Inequalities (LMI) relaxation
- Application in vision
  - Finding optimal structure
  - Partial relaxation and Schur's complement
Sum Of Squares relaxation (SOS)
An unconstrained polynomial optimization problem (POP) means the feasible set is all of R^n.
H. Waki, S. Kim, M. Kojima, and M. Muramatsu. SOS and SDP relaxations for POPs with structured sparsity. SIAM J. Optimization, 2006.
Sum Of Squares relaxation (SOS)
Define:
N = { f | f is a polynomial and f(x) ≥ 0 for all x ∈ R^n }
SOS = { f | there exist polynomials g1(x), ..., gm(x) such that f(x) = Σ_i gi(x)² }
SOS ⊆ N and SOS ≠ N, but f ∈ N \ SOS is rare.
SOS relaxation for unconstrained polynomials
P:  find min_x f(x)
P': find max p such that f(x) - p ∈ N
These are equivalent.
Proof: if p = min_x f(x), then f(x) - p ≥ 0 for all x, and for any value larger than p this does not hold;
so max { p : f(x) - p ≥ 0 for all x } equals min_x f(x).
SOS relaxation for unconstrained polynomials
P:   find min_x f(x)
P':  find max p such that f(x) - p ∈ N
P'': find max q such that f(x) - q ∈ SOS   – the relaxation
It guarantees a bound on p: q ≤ p.
P'' can be solved by SDP.
[Figure: the graph of f(x) with the optimal value p and the SOS bound q ≤ p marked.]
monomial basis
v2(x) = (1, x1, x2, x1², x1x2, x2²)^T
dim(vr(x)) = d(r) = C(n + r, r)
If f(x) is of degree r, then f(x) = a^T vr(x) for some a ∈ R^{d(r)}.
Example:
f(x) = 1 + 2x1 - 3x2 + 4x1² - 5x1x2 + 6x2² = (1, 2, -3, -5, 4, 6) · (1, x1, x2, x1x2, x1², x2²)^T
SOS relaxation to SDP
SOS_2r = the set of SOS polynomials of degree 2r
       = { Σ_i gi(x)² | deg(gi(x)) ≤ r }
       = { Σ_i (ai^T vr(x))² | ai ∈ R^{d(r)} }
       = { vr(x)^T (Σ_i ai ai^T) vr(x) | ai ∈ R^{d(r)} }
       = { vr(x)^T V vr(x) | V ⪰ 0 }
SDP: max q subject to f(x) - q = vr(x)^T V vr(x), V ⪰ 0
(the polynomial identity is enforced by matching coefficients, which gives linear constraints on V and q)
SOS relaxation to SDP
Example: f(x) = 2x1 - 3x2 + 4x1²x2²
Write f(x) - q = v2(x)^T V v2(x) with v2(x) = (1, x1, x2, x1², x1x2, x2²)^T and V = (Vij) a symmetric 6×6 matrix, V ⪰ 0.
Matching coefficients gives the SDP:
max q subject to
-q = V11,   2 = 2V12,   -3 = 2V13,   4 = 2V46 + V55,   the coefficients of all other monomials vanish,   V ⪰ 0
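A hedged one-variable sketch of this SOS-to-SDP step (a made-up example different from the slide's, chosen so that the SDP is feasible; it assumes cvxpy with an SDP solver such as SCS). We maximize q such that f(x) - q = v(x)^T V v(x) with v(x) = (1, x, x²) and V ⪰ 0, matching the coefficients of 1, x, x², x³, x⁴:

```python
import cvxpy as cp

# f(x) = x^4 - 3x^2 + 2x + 5, stored as {degree: coefficient} (a made-up example)
f = {0: 5.0, 1: 2.0, 2: -3.0, 3: 0.0, 4: 1.0}

V = cp.Variable((3, 3), symmetric=True)   # Gram matrix for v(x) = (1, x, x^2)
q = cp.Variable()

# v(x)^T V v(x) = V00 + 2*V01*x + (2*V02 + V11)*x^2 + 2*V12*x^3 + V22*x^4
constraints = [V >> 0,
               V[0, 0] == f[0] - q,
               2 * V[0, 1] == f[1],
               2 * V[0, 2] + V[1, 1] == f[2],
               2 * V[1, 2] == f[3],
               V[2, 2] == f[4]]
prob = cp.Problem(cp.Maximize(q), constraints)
prob.solve(solver=cp.SCS)
print("SOS lower bound on min f:", prob.value)
```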
SOS for constrained POPs
It is possible to extend this method to constrained POPs by using the generalized Lagrangian dual.
SOS relaxation summary
POP → (SOS relaxation) → SOS problem → SDP → global estimate
So we know how to solve a POP that is an SOS, and we have a bound on a POP that is not an SOS.
H. Waki, S. Kim, M. Kojima, and M. Muramatsu. SOS and SDP relaxations for POPs with structured sparsity. SIAM J. Optimization, 2006.
relaxations
SOS: POP → (SOS relaxation) → SOS problem → SDP → global estimate
LMI: POP → (LMI relaxation) → linear & LMI problem → SDP + convergence → global estimate
Outline
- Motivation and Introduction
- Background
  - Positive SemiDefinite matrices (PSD)
  - Linear Matrix Inequalities (LMI)
  - SemiDefinite Programming (SDP)
- Relaxations
  - Sum Of Squares (SOS) relaxation
  - Linear Matrix Inequalities (LMI) relaxation
- Application in vision
  - Finding optimal structure
  - Partial relaxation and Schur's complement
LMI relaxations
- Constraints are handled
- Convergence to the optimum is guaranteed
- Applies to all polynomials, not only those that are SOS
A maximization problem
max  g0(x) = x2
s.t. g1(x) = 3 + 2x2 - x1² - x2² ≥ 0
     g2(x) = -x1 - x2 - x1x2 ≥ 0
     g3(x) = 1 + x1x2 ≥ 0
Note that:
a. the feasible set is non-convex
b. the constraints are quadratic
[Figure: the non-convex feasible set in the (x1, x2) plane.]
LMI – linear matrix inequality, a reminder
A(x) = A0 + Σ_{i=1..n} xi Ai ⪰ 0
where x = (x1, ..., xn) ∈ R^n and the Ai are k×k symmetric matrices.
The feasible set K = {x ∈ R^n | A(x) ⪰ 0} is convex.
Motivation
An SDP:
min  c^T x
s.t. A0 + x1 A1 + ... + xn An ⪰ 0
     Mx ≤ b
Goal: turn the polynomial optimization problem into an SDP whose solution is close to the global optimum of the original problem.
What is it good for? SDP problems can be solved much more efficiently than general optimization problems.
LMI relaxation is an iterative process
LMI: POP → (step 1: introduce new variables) → linear + LMI + rank constraints → (step 2: relax constraints) → SDP
Higher order relaxations can then be applied.
LMI relaxation – step 1 (the R² case)
Replace the monomials x1^i x2^j by "lifting variables" yij.
Rule: (x1, x2, x1², x1x2, x2²) → (y10, y01, y20, y11, y02), and in general x1^i x2^j → yij.
Example:
g2(x) = -x1 - x2 - x1x2 ≥ 0   becomes   g2(y) = -y10 - y01 - y11 ≥ 0
Introducing lifting variables
Original problem:
max  g0(x) = x2
s.t. g1(x) = 3 + 2x2 - x1² - x2² ≥ 0
     g2(x) = -x1 - x2 - x1x2 ≥ 0
     g3(x) = 1 + x1x2 ≥ 0
Lifting gives:
max  g0(y) = y01
s.t. g1(y) = 3 + 2y01 - y20 - y02 ≥ 0
     g2(y) = -y10 - y01 - y11 ≥ 0
     g3(y) = 1 + y11 ≥ 0
The new problem is linear, in particular convex.
But it is not equivalent to the original problem: the lifting variables are not independent in the original problem. For example, y10 = x1, y01 = x2 and y11 = x1x2, so we must have y11 = y10 · y01.
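A small sympy sketch of the lifting substitution x1^i x2^j → yij (illustrative only; the helper and variable names are my own):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y = {(i, j): sp.Symbol(f'y{i}{j}') for i in range(3) for j in range(3)}
y[(0, 0)] = sp.Integer(1)          # the constant monomial lifts to 1

def lift(poly):
    """Replace each monomial x1**i * x2**j of poly by the lifting variable y_ij."""
    return sum(coeff * y[(i, j)] for (i, j), coeff in sp.Poly(poly, x1, x2).terms())

print(lift(-x1 - x2 - x1 * x2))    # -y10 - y01 - y11 (up to term ordering)
```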
Goal, more specifically
Linear problem (obtained by lifting):
max  g0(y) = y01
s.t. g1(y) = 3 + 2y01 - y20 - y02 ≥ 0
     g2(y) = -y10 - y01 - y11 ≥ 0
     g3(y) = 1 + y11 ≥ 0
+ "relations constraints" on the lifting variables (for example, we demand y11 = y10 · y01)
→ relaxation → SDP
Question: how do we guarantee that the relations between the lifting variables hold?
y11 = y10 · y01,  y20 = y10 · y10,  and so on.
LMI relaxation – step 2
Take v1(x) = (1, x1, x2)^T, the basis of the degree-1 polynomials.
Note that
v1(x) v1(x)^T = [ 1     x1     x2   ]
                [ x1    x1²    x1x2 ]  ⪰ 0
                [ x2    x1x2   x2²  ]
because any matrix of the form A = vv^T is positive semidefinite.
Applying the lifting gives
M = [ 1     y10   y01 ]
    [ y10   y20   y11 ]
    [ y01   y11   y02 ]
If the relations constraints hold, then
M = [ 1     y10   y01 ]
    [ y10   y20   y11 ]  ⪰ 0
    [ y01   y11   y02 ]
This is because we can then decompose M as
M = (1, y10, y01)^T (1, y10, y01)
(assuming the relations hold: y11 = y10·y01, y20 = y10², and so on), and therefore rank M = 1.
We have seen:
relations constraints hold  ⇒  M ⪰ 0 and rank M = 1
What about the opposite direction:
M ⪰ 0 and rank M = 1  ⇒  relations constraints hold?
This is true as well.
LMI relaxation – step 2, continued
Relations constraints hold  ⇐  M ⪰ 0 and rank M = 1,
by the following: M ⪰ 0 and rank M = 1 imply M = vv^T with v = (1, y10, y01)^T,
and all the relations equalities are among the entrywise equalities [M]ij = [vv^T]ij.
Conclusion of the analysis
max  g0(y) = y01
s.t. g1(y) = 3 + 2y01 - y20 - y02 ≥ 0
     g2(y) = -y10 - y01 - y11 ≥ 0
     g3(y) = 1 + y11 ≥ 0
The "y feasible set": the subset on which, in addition,
M = [1 y10 y01; y10 y20 y11; y01 y11 y02] ⪰ 0 and rank M = 1
is exactly the subset on which the relations constraints hold.
Relaxation, at last
We denote by
M1(y) = [ 1     y10   y01 ]
        [ y10   y20   y11 ]
        [ y01   y11   y02 ]
the moment matrix of order 1.
The original problem is equivalent to:
max  g0(y) = y01
s.t. g1(y) = 3 + 2y01 - y20 - y02 ≥ 0
     g2(y) = -y10 - y01 - y11 ≥ 0
     g3(y) = 1 + y11 ≥ 0
     M1(y) ⪰ 0
together with the additional constraint rank M1(y) = 1.
We relax by dropping the non-convex constraint rank M1(y) = 1.
LMI relaxation of order 1
max  y01
s.t. g1(y) = 3 + 2y01 - y20 - y02 ≥ 0
     g2(y) = -y10 - y01 - y11 ≥ 0
     g3(y) = 1 + y11 ≥ 0
     M1(y) ⪰ 0
[Figure: the feasible set of the relaxation contains the feasible set of the original problem.]
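A minimal cvxpy sketch of this order-1 relaxation (illustrative and not from the slides; it assumes cvxpy with an SDP-capable solver such as SCS, and uses the constraint signs as reconstructed above):

```python
import cvxpy as cp

y10, y01, y20, y11, y02 = (cp.Variable() for _ in range(5))

M1 = cp.bmat([[1,   y10, y01],
              [y10, y20, y11],
              [y01, y11, y02]])                # moment matrix of order 1

constraints = [3 + 2 * y01 - y20 - y02 >= 0,   # lifted g1
               -y10 - y01 - y11 >= 0,          # lifted g2
               1 + y11 >= 0,                   # lifted g3
               M1 >> 0]                        # PSD in place of the rank-1 constraint
prob = cp.Problem(cp.Maximize(y01), constraints)
prob.solve(solver=cp.SCS)
print("order-1 bound on max x2:", prob.value)
```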
Rank constrained LMI vs. unconstrained: for X = [1 x; x y], the constraint X ⪰ 0 together with rank X = 1 gives the curve y = x², while dropping the rank constraint gives the convex region y ≥ x².
LMI relaxations of higher order
It turns out that we can do better: applying LMI relaxations of higher order gives a tighter SDP.
Relaxations of higher order incorporate the inequality constraints into LMIs.
• We show the relaxation of order 2.
• It is possible to continue and apply higher order relaxations.
• Theory guarantees convergence to the global optimum.
LMI relaxations of second order
Let v2(x) = (1, x1, x2, x1², x1x2, x2²)^T be a basis of the polynomials of degree 2.
Again v2(x) v2(x)^T ⪰ 0, and lifting its entries gives the moment matrix of order 2, M2(y), whose entries are the lifting variables yij with i + j ≤ 4.
Again, we will relax by dropping the rank constraint.
Inequality constraints to LMI
Replace our scalar constraints by LMIs to obtain a tighter relaxation.
g3(x) = 1 + x1x2 ≥ 0
Linear constraint (lifting g3 itself): g3(y) = 1 + y11 ≥ 0
LMI constraint (lifting g3(x) · v1(x)v1(x)^T ⪰ 0): M1(g3 y) ⪰ 0
For example, the (1,2) entry is [M1(g3 y)]12 = lifting of g3(x) · x1 = x1 + x1²x2, i.e. y10 + y21.
LMI relaxations of order 2
The order-1 relaxation:
max  y01
s.t. g1(y) ≥ 0, g2(y) ≥ 0, g3(y) ≥ 0, M1(y) ⪰ 0
This procedure gives a new SDP, the order-2 relaxation:
max  y01
s.t. M1(g1 y) ⪰ 0, M1(g2 y) ⪰ 0, M1(g3 y) ⪰ 0, M2(y) ⪰ 0
The feasible set of the second SDP is included in the feasible set of the first SDP.
Similarly, we can continue and apply higher order relaxations.
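A sketch of this order-2 moment relaxation in cvxpy (illustrative only and not from the slides; it assumes cvxpy with an SDP solver such as SCS, uses the constraint signs as reconstructed above, and builds the moment matrix M2(y) and the localizing matrices M1(gi·y) from lifting variables indexed by monomial exponents):

```python
import itertools
import cvxpy as cp

# Lifting variables y[(i, j)] stand for the monomials x1^i * x2^j, up to degree 4.
y = {a: cp.Variable() for a in itertools.product(range(5), repeat=2) if sum(a) <= 4}
y[(0, 0)] = cp.Constant(1.0)                     # the constant monomial

v1 = [(0, 0), (1, 0), (0, 1)]                    # exponents of v1(x)
v2 = v1 + [(2, 0), (1, 1), (0, 2)]               # exponents of v2(x)

def loc_matrix(basis, g):
    """Localizing matrix M(g*y): entry (a, b) = sum_c g_c * y[a + b + c]."""
    return cp.bmat([[sum(coef * y[(a[0] + b[0] + c[0], a[1] + b[1] + c[1])]
                         for c, coef in g.items())
                     for b in basis] for a in basis])

# The constraints g_i(x) >= 0 as {exponent: coefficient} dictionaries.
g1 = {(0, 0): 3, (0, 1): 2, (2, 0): -1, (0, 2): -1}   # 3 + 2*x2 - x1^2 - x2^2
g2 = {(1, 0): -1, (0, 1): -1, (1, 1): -1}             # -x1 - x2 - x1*x2
g3 = {(0, 0): 1, (1, 1): 1}                           # 1 + x1*x2

constraints = [loc_matrix(v2, {(0, 0): 1}) >> 0,      # M2(y)
               loc_matrix(v1, g1) >> 0,               # M1(g1*y)
               loc_matrix(v1, g2) >> 0,               # M1(g2*y)
               loc_matrix(v1, g3) >> 0]               # M1(g3*y)
prob = cp.Problem(cp.Maximize(y[(0, 1)]), constraints)  # max y01, i.e. max x2
prob.solve(solver=cp.SCS)
print("order-2 bound on max x2:", prob.value)
```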
Theoretical basis for the LMI relaxations
If the feasible set defined by the constraints gi(x) ≥ 0 is compact, then under mild additional assumptions Lasserre proved in 2001 that there is an asymptotic convergence guarantee:
lim_{k→∞} p_k = g*
where p_k is the solution of the k'th relaxation and g* is the solution of the original (maximization) problem.
Moreover, convergence is fast: p_k is very close to g* already for small k.
Lasserre J.B. (2001) "Global optimization with polynomials and the problem of moments", SIAM J. Optimization 11, pp. 796-817.
Checking global optimality
The method provides a certificate of global optimality:
rank Mn(y) = 1   ⇒   the SDP solution is the global optimum.
An important experimental observation: minimizing trace(Mn(y)) encourages a low-rank moment matrix.
We add to the objective function the trace of the moment matrix, weighted by a sufficiently small positive scalar ε:
max  y01 - ε · trace(M2(y))
s.t. M1(g1 y) ⪰ 0, M1(g2 y) ⪰ 0, M1(g3 y) ⪰ 0, M2(y) ⪰ 0
LMI relaxations in vision
Application
Outline
- Motivation and Introduction
- Background
  - Positive SemiDefinite matrices (PSD)
  - Linear Matrix Inequalities (LMI)
  - SemiDefinite Programming (SDP)
- Relaxations
  - Sum Of Squares (SOS) relaxation
  - Linear Matrix Inequalities (LMI) relaxation
- Application in vision
  - Finding optimal structure
  - Partial relaxation and Schur's complement
Finding optimal structure
A perspective camera: the relation between a point U in 3D space and the point u in the image plane is given by
λ u = P U
where P is the camera matrix and λ is the depth.
[Figure: camera center, image plane, the measured point ũ and the reprojected point u.]
The measured image points ũi, i = 1..N, are corrupted by independent Gaussian noise.
We want to minimize the least-squares error between the measured and the reprojected points.
We therefore have the following optimization problem:
min  Σ_{i=1..N} d(ũi, ui(x))²
s.t. λi(x) > 0
where d(·,·) is the Euclidean distance and x is the set of unknowns.
Each term in the cost function can be written as
d(ũi, ui(x))² = (fi1(x)² + fi2(x)²) / λi(x)²
where fi1(x), fi2(x), λi(x) are polynomials.
Our objective is therefore to minimize a sum of rational functions.
How can we turn the optimization problem into a polynomial optimization problem?
Suppose that each term in Σ_{i=1..N} d(ũi, ui(x))² has an upper bound γi, that is
(fi1(x)² + fi2(x)²) / λi(x)² ≤ γi.
Then our optimization problem is equivalent to the following:
min  γ1 + γ2 + ... + γN
s.t. fi1(x)² + fi2(x)² ≤ γi λi(x)²
     λi(x) > 0
     i = 1..N
This is a polynomial optimization problem, to which we apply LMI relaxations.
Note that we introduced many new variables – one for each term.
Partial Relaxations
Problem: an SDP with a large number of variables can be computationally demanding.
A large number of variables can arise from:
- LMI relaxations of high order
- the introduction of new variables, as we have seen
This is where partial relaxations come in. For that we introduce Schur's complement.
Schur's complement
For A ≻ 0:
[ A    B ]
[ B^T  C ]  ⪰ 0    ⟺    C - B^T A^{-1} B ⪰ 0
Set:
A = λi(x)² · I (2×2, positive since λi(x) > 0),   B = (fi1(x), fi2(x))^T,   C = γi
Schur's complement – applying it
[ λi(x)²    0         fi1(x) ]
[ 0         λi(x)²    fi2(x) ]  ⪰ 0,   λi(x) > 0
[ fi1(x)    fi2(x)    γi     ]
⟺
fi1(x)² + fi2(x)² ≤ γi λi(x)²,   λi(x) > 0
Derivation of the right-hand side via C - B^T A^{-1} B ⪰ 0:
γi - (fi1(x), fi2(x)) · (1/λi(x)²) · (fi1(x), fi2(x))^T ≥ 0   ⟺   fi1(x)² + fi2(x)² ≤ γi λi(x)².
Partial relaxations
Schur's complement allows us to state our optimization problem as follows:
min  γ1 + γ2 + ... + γN
s.t. [ λi(x)²    0         fi1(x) ]
     [ 0         λi(x)²    fi2(x) ]  ⪰ 0,   i = 1..N
     [ fi1(x)    fi2(x)    γi     ]
     λi(x) > 0
The only non-linearity is due to λi(x)².
We can therefore apply LMI relaxations only to x = (x1, x2, ...)^T and leave γi, i = 1..N, unrelaxed.
If we were to apply full relaxations to all variables, the problem would become intractable already for small N.
Partial relaxations
Disadvantage of partial relaxations: we are not able to ensure asymptotic convergence to the global optimum.
However, we have a numerical certificate of global optimality, just as in the case of full relaxations:
if the moment matrix of the relaxed variables is of rank one, the solution of the partially relaxed problem is the global optimum.
Application: triangulation with 3 cameras
Goal: find the optimal 3D point. The camera matrices are known, and the measured point is assumed to be at the origin of each view.
[The slide lists the camera matrices and compares full relaxation vs. partial relaxation.]
Summary
• Geometric vision problems as POPs
  – triangulation and the reconstruction problem
• Relaxations of POPs
  – Sum Of Squares (SOS) relaxation
    • guarantees a bound on the optimal solution
    • usually the solution is optimal
  – Linear Matrix Inequalities (LMI) relaxation
    • first order LMI relaxation: lifting, dropping the rank constraint
    • higher order LMI relaxation: linear constraints become LMIs
    • guarantee of convergence, reference to Lasserre
    • certificate of global optimality
• Application in vision
  – finding optimal structure
  – partial relaxation and Schur's complement
  – triangulation problem, benefit of partial relaxations
References
- F. Kahl and D. Henrion. Globally Optimal Estimates for Geometric Reconstruction Problems. Accepted, IJCV.
- H. Waki, S. Kim, M. Kojima, and M. Muramatsu. Sums of squares and semidefinite programming relaxations for polynomial optimization problems with structured sparsity. SIAM J. Optimization, 17(1):218–242, 2006.
- J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM J. Optimization, 11:796–817, 2001.
- S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
- R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Second edition, Cambridge University Press, 2004.