Combinatorial Auctions:
A Survey
Sven de Vries & Rakesh Vohra (2000)
Contents
1. Introduction
2. CAP
3. Decentralized Methods
Introduction(1)
• Complementarities between different assets
– Bidders have preferences not just for particular items but
for sets or bundles of items
– Example: traveling to LA
• (restaurants and hotels in the intermediate cities, a car)
or (an airline ticket, a taxi)
• Auctions where bidders submit bids on combinations have
recently attracted interest
– Jackson (1976), Caplice (1996), Rothkopf (1998),
Fujishima (1999), Sandholm (1999)
– Driven in part by increases in computing power
Introduction(2)
• Tools
– ‘SBIDS’ by SAITECH-INC
– ‘OptiBid’ by Logistics.com
• Combinatorial Auction Problem (CAP)
– Selecting the winning set of bids.
– Can be formulated as an Integer Program
1. Introduction
2. CAP
3. Decentralized Methods
CAP
1. CAP
2. SPP
3. Solvable Instances of SPP
4. Exact Methods
5. Approximate Methods
CAP(1)
CAP (Combinatorial Auction Problem)
- Selecting the winning set of bids -
• Difficulty: each bidder must submit a bid for every subset of
objects he is interested in. How can this bidding function be
transmitted to the auctioneer in a succinct way?
Resolution: restrict the kinds of combinations that bidders
may bid on.
• Difficulty: how to decide which collection of bids to accept?
Resolution: solve the CAP.
CAP(3)
• Notation
– N : the set of bidders
– M : the set of m distinct objects
– S : a subset of M
– b_j(S) : the bid that agent j ∈ N has announced he is
willing to pay for S
– b(S) = max_{j∈N} b_j(S)
CAP(4)
• CAP formulation:
max  Σ_{S⊆M} b(S) x(S)
s.t. Σ_{S∋i} x(S) ≤ 1   ∀ i ∈ M
x(S) ∈ {0, 1}   ∀ S ⊆ M
– x(S) = 1 : the highest bid on the set S is accepted
x(S) = 0 : no bid on the set S is accepted
– The constraint Σ_{S∋i} x(S) ≤ 1 ∀ i ∈ M ensures that no object
in M is assigned to more than one bidder.
• Call this formulation CAP1
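Below is a minimal brute-force sketch of CAP1, purely for illustration: it assumes bids are given as a dict mapping frozensets of objects to the highest bid b(S) and enumerates all subfamilies of bid sets, so it is only usable for very small instances (the function name and data layout are hypothetical, not from the survey).

```python
from itertools import combinations

def solve_cap1(bids):
    """Brute-force CAP1. bids: {frozenset(S): b(S)}, the highest bid on each set S.
    Returns (best_revenue, chosen_sets), where the chosen sets are pairwise disjoint."""
    sets = list(bids)
    best_value, best_choice = 0, []
    # Enumerate every subfamily of bid sets and keep the best disjoint one.
    for r in range(1, len(sets) + 1):
        for choice in combinations(sets, r):
            union, disjoint, total = set(), True, 0
            for s in choice:
                if union & s:          # overlap: an object would be sold twice
                    disjoint = False
                    break
                union |= s
                total += bids[s]
            if disjoint and total > best_value:
                best_value, best_choice = total, list(choice)
    return best_value, best_choice

# Example with superadditive bids on three objects
bids = {frozenset({'a'}): 3, frozenset({'b'}): 2, frozenset({'a', 'b'}): 7,
        frozenset({'c'}): 4}
print(solve_cap1(bids))   # revenue 11: accept {a, b} and {c}
```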
CAP(5)
• Superadditive :
– b_j(A) + b_j(B) ≤ b_j(A ∪ B) for all j ∈ N and A, B ⊂ M
such that A ∩ B = ∅
• CAP1 correctly models CAP when the bid functions b_j are all
superadditive
– The goods complement each other.
• When goods are substitutes, CAP1 is incorrect.
– Why?
• The superadditivity inequality fails for some j, A, B.
• An optimal solution to CAP1 may assign both A and B to bidder j
and incorrectly record a revenue of b_j(A) + b_j(B) rather than
b_j(A ∪ B).
CAP(6)
• How can this difficulty be avoided?
– Through the introduction of a dummy good g (a sketch follows
this slide)
• b_j(A) => b_j(A ∪ {g})
b_j(B) => b_j(B ∪ {g})
b_j(A ∪ B) remains the same
M => M ∪ {g}
• If A is assigned to j, then B cannot be assigned to j.
– Through the formulation CAP2
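A tiny sketch of the dummy-good construction just described (function name and data layout are hypothetical): a bidder-specific dummy good g is added to the two substitute bids, so the CAP1 disjointness constraints prevent the bidder from winning both.

```python
def add_dummy_good(bidder_id, bid_a, bid_b):
    """bid_a, bid_b: (frozenset_of_goods, amount) that bidder_id treats as substitutes.
    Returns both bids augmented with a dummy good unique to this bidder, so that
    CAP1 can accept at most one of them."""
    g = ('dummy', bidder_id)      # a fresh good appearing only in this bidder's bids
    (set_a, amt_a), (set_b, amt_b) = bid_a, bid_b
    return (frozenset(set_a | {g}), amt_a), (frozenset(set_b | {g}), amt_b)

# Bidder 7 bids 5 on {a} and 6 on {b} but does not want both.
new_a, new_b = add_dummy_good(7, (frozenset({'a'}), 5), (frozenset({'b'}), 6))
```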
CAP(7)
• CAP2 formulation:
max  Σ_{j∈N} Σ_{S⊆M} b_j(S) y(S, j)
s.t. Σ_{S∋i} Σ_{j∈N} y(S, j) ≤ 1   ∀ i ∈ M
Σ_{S⊆M} y(S, j) ≤ 1   ∀ j ∈ N
y(S, j) ∈ {0, 1}   ∀ S ⊆ M, j ∈ N
• CAP1 formulation (for comparison):
max  Σ_{S⊆M} b(S) x(S)
s.t. Σ_{S∋i} x(S) ≤ 1   ∀ i ∈ M
x(S) ∈ {0, 1}   ∀ S ⊆ M
– The constraint Σ_{S⊆M} y(S, j) ≤ 1 ∀ j ∈ N ensures that no bidder
receives more than one subset.
– The constraint Σ_{S∋i} Σ_{j∈N} y(S, j) ≤ 1 ∀ i ∈ M ensures that
overlapping sets of goods are never assigned.
CAP(10)
• Assumption of CAP1,CAP2
– There is at most one copy of each object.
• Extending the formulation
– The case when there are multiple copies of the same
object and each bidder wants at most one copy of each
object :
• The right hand sides of the constraints in CAP1 and CAP2 take
on values larger than 1.
– The case when there are multiple copies and the bidder
may want more than one copy of the same object :
• Multi-unit combinatorial auctions (Leyton-Brown 2000)
CAP
1. CAP
2. SPP
3. Solvable Instances of SPP
4. Exact Methods
5. Approximate Methods
SPP(1)
• Set Packing Problem
– Given a ground set M of elements and a collection V of
subsets with non-negative weights, find the largest
weight collection of subsets that are pairwise disjoint.
• Notation
– x(j) = 1 if the j-th set in V with weight c(j) is selected
0 otherwise
– a(i,j) = 1 if the j-th set in V contains element i∈M
0 otherwise
• SPP Formulation
max  Σ_{j∈V} c(j) x(j)
s.t. Σ_{j∈V} a(i, j) x(j) ≤ 1   ∀ i ∈ M
x(j) ∈ {0, 1}   ∀ j ∈ V
• Placed side by side, the SPP formulation and the CAP1
formulation are identical in structure: the bid sets S ⊆ M play
the role of the members of V and b(S) plays the role of c(j), so
CAP1 is an instance of SPP.
SPP(4)
Other related problems
Set Partitioning Problem (SPA)
max  Σ_{j∈V} c(j) x(j)
s.t. Σ_{j∈V} a(i, j) x(j) = 1   ∀ i ∈ M
x(j) ∈ {0, 1}   ∀ j ∈ V
Set Covering Problem (SCP)
min  Σ_{j∈V} c(j) x(j)
s.t. Σ_{j∈V} a(i, j) x(j) ≥ 1   ∀ i ∈ M
x(j) ∈ {0, 1}   ∀ j ∈ V
SPP(5)
Set Partitioning Problem
(SPA)
– Bidders are sellers (rather than buyers).
– Example: trucking companies bidding for the opportunity to
ship goods from a particular warehouse to a retail outlet.
SPP(6)
Set Covering Problem
(SCP)
– Auction problems in procurement rather
than selling terms.
– Scheduling of crews for railways.
Complexity of SPP
• No polynomial time algorithm for SPP is known.
• Any algorithm for the CAP that uses the bids on sets directly
must scan the bids, and the number of such bids could be
exponential in |M|.
– |M| : the number of objects
=> the number of possible bids (subsets of M) is 2^|M|
• SPP is NP-hard (its decision version is NP-complete).
• Effective solution procedures for CAP require that
– the number of distinct bids is not large, or
– the bids are structured in computationally useful ways.
CAP
1. CAP
2. SPP
3. Solvable Instances of SPP
4. Exact Methods
5. Approximate Methods
Solvable Instances of SPP
1. Total Unimodularity
2. Balanced Matrices
3. Perfect Matrices
4. Graph Theoretic Methods
5. Using Preferences
Solvable Instances of SPP
• The usual way in which instances of SPP can be solved by a
polynomial algorithm:
– When the extreme points of the polyhedron
P(A) = {x : Σ_{j∈V} a(i, j) x(j) ≤ 1 ∀ i ∈ M; x(j) ≥ 0 ∀ j ∈ V}
are all integral, i.e. 0-1.
– In these cases, we can simply drop the integrality
requirement from the SPP and solve it as a linear program.
• A polyhedron with all integral extreme points is called
integral.
Total Unimodularity(TU) (1)
• A matrix is TU if the determinant of every square
submatrix is 0,1 or –1.
• A is TU <=> Aᵗ is TU
• If A = {a(i, j)}_{i∈M, j∈V} is TU, then all extreme points of
the polyhedron P(A) are integral.
• There is a polynomial time algorithm to decide
whether a matrix is TU.
Total Unimodularity(TU) (2)
• Theorem 2.1) Let B be a matrix each of whose entries is 0, 1
or -1. Suppose each subset S of columns of B can be divided
into two sets L and R such that
Σ_{j∈L} b_ij − Σ_{j∈R} b_ij ∈ {0, +1, −1}   ∀ i
then B is TU. The converse is also true.
• Theorem 2.2) All 0-1 matrices with the
consecutive ones property are TU.
– A 0-1 matrix has the consecutive ones property if the
non-zero entries in each column occur consecutively.
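As a small illustration of Theorem 2.2, the following sketch (assuming the matrix is given as a list of 0-1 rows; the function name is hypothetical) checks whether the non-zero entries of every column occur consecutively.

```python
def has_consecutive_ones_in_columns(A):
    """True if, in every column of the 0-1 matrix A (a list of rows),
    the 1-entries form one contiguous block."""
    if not A:
        return True
    n_rows, n_cols = len(A), len(A[0])
    for j in range(n_cols):
        ones = [i for i in range(n_rows) if A[i][j] == 1]
        if ones and ones[-1] - ones[0] + 1 != len(ones):   # a gap inside the block
            return False
    return True

A = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
print(has_consecutive_ones_in_columns(A))   # False: the third column is 1, 0, 1
```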
Total Unimodularity(TU) (3)
• For example,
– Objects to be auctioned : parcels of land along a shore
line
• Shore line is important : it imposes a linear order on the
parcels
– Restrict bidders to bid only on contiguous sets of parcels
• The most interesting combinations would be contiguous in
the bidders' eyes.
– Two computational consequences.
• Number of distinct bids would be limited by a polynomial in
the number of objects.
• The constraint matrix A of the CAP would have the
consecutive ones property in the columns.
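Under the shoreline restriction each bid is an interval of parcels, and CAP1 becomes a weighted interval selection problem that a simple dynamic program solves in polynomial time. A sketch, assuming bids are (first_parcel, last_parcel, amount) triples (the representation is an assumption, not the survey's):

```python
from bisect import bisect_right

def best_contiguous_bids(bids):
    """bids: list of (lo, hi, amount), covering parcels lo..hi inclusive.
    Returns the maximum revenue from pairwise-disjoint intervals
    (weighted interval scheduling by dynamic programming)."""
    bids = sorted(bids, key=lambda b: b[1])        # sort by right endpoint
    ends = [b[1] for b in bids]
    best = [0] * (len(bids) + 1)                   # best[k]: optimum over the first k bids
    for k, (lo, hi, amount) in enumerate(bids, start=1):
        p = bisect_right(ends, lo - 1, 0, k - 1)   # bids ending before this one starts
        best[k] = max(best[k - 1], best[p] + amount)
    return best[-1]

# Parcels 1..5, bids on [1,2], [2,4] and [4,5]
print(best_contiguous_bids([(1, 2, 5), (2, 4, 6), (4, 5, 4)]))   # 9: accept [1,2] and [4,5]
```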
Balanced Matrices(1)
• A 0-1 matrix B is balanced if it has no square
submatrix of odd order with exactly two 1’s in
each row and column.
• Theorem 2.3) Let B be a balanced 0-1 matrix. Then the
following linear program
max { Σ_j c(j) x(j) : Σ_j b_ij x(j) ≤ 1 ∀ i, x(j) ≥ 0 ∀ j }
has an integral optimal solution whenever the c(j)'s are
integral.
Balanced Matrices(2)
• For example,
– Consider a tree T with a distance function d.
• v : vertex of T
• N(v,r) : set of all vertices in T that are within distance r of v.
– The vertices represent parcels of land connected by a
road network with no cycles.
– Bidders bid for subsets of parcels, each of which must be of
the form N(v, r).
– The constraint matrix has a row for each vertex and a
column for each set of the form N(v, r).
– This constraint matrix is balanced.
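A sketch of this constraint matrix for a concrete tree, assuming unit edge lengths (the helper names are hypothetical): rows are vertices, columns are the requested balls N(v, r).

```python
from collections import deque

def ball(tree, v, r):
    """N(v, r): vertices of the tree within distance r of v (unit edge lengths)."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue
        for w in tree[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def ball_constraint_matrix(tree, balls):
    """One row per vertex, one column per requested (v, r) ball; entry 1 if the vertex is in the ball."""
    cols = [ball(tree, v, r) for v, r in balls]
    return [[1 if u in c else 0 for c in cols] for u in sorted(tree)]

# A path 1-2-3-4 (a tree); bids on N(2, 1) and N(4, 1)
tree = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
A = ball_constraint_matrix(tree, [(2, 1), (4, 1)])   # [[1,0],[1,0],[1,1],[0,1]]
```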
Perfect Matrices
• If the constraint matrix A can be identified with the
vertex-clique adjacency matrix of what is known as a perfect
graph, then SPP can be solved in polynomial time.
• A simple graph G is perfect if, for every induced subgraph H
of G, the number of vertices in a maximum clique equals χ(H).
– χ(H), the chromatic number of H, is the minimum k for
which H is k-colorable.
Graph Theoretic Methods
• There are situations where P(A) is not integral yet the SPP
can be solved in polynomial time, because the constraint
matrix A admits a graph theoretic interpretation in terms of an
easy problem.
– When each column of the matrix A contains at most two 1's
=> maximum weight matching problem
(can be solved in polynomial time)
• At most two 1's per row of A => NP-hard
– When A has the circular ones property.
• A 0-1 matrix has the circular ones property if the non-zero
entries in each column (row) are consecutive.
• The first and last entries in each column (row) are treated
as consecutive.
• Note the resemblance to the consecutive ones property.
– When A has the circular ones property.
=> A can be identified with the vertex-clique adjacency
matrix of a circular arc graph.
=> maximum weight independent set problem for a circular
arc graph (can be solved in polynomial time).
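When every bid covers exactly two objects (at most two 1's per column, with singleton bids ignored for simplicity), the accepted bids form a matching on the graph whose vertices are the objects. A sketch using networkx, which is assumed to be available (the wrapper function is hypothetical):

```python
import networkx as nx

def cap_with_pair_bids(pair_bids):
    """pair_bids: {(i, j): amount}, each bid covering exactly the two objects i and j.
    Choosing a disjoint set of such bids is a maximum weight matching problem."""
    G = nx.Graph()
    for (i, j), amount in pair_bids.items():
        # keep only the best bid on each pair of objects
        if not G.has_edge(i, j) or G[i][j]["weight"] < amount:
            G.add_edge(i, j, weight=amount)
    matching = nx.max_weight_matching(G)            # set of accepted pairs
    revenue = sum(G[u][v]["weight"] for u, v in matching)
    return matching, revenue

bids = {("a", "b"): 5, ("b", "c"): 4, ("c", "d"): 3}
print(cap_with_pair_bids(bids))   # accepts (a, b) and (c, d) for revenue 8
```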
Using Preferences(1)
• Restrictions on the preference orderings of the bidders
– Suppose that bidders come in two types
• Type one : b_j(·) = g^1(·)
• Type two : b_j(·) = g^2(·)
where g^1 and g^2 are non-decreasing, integer valued,
supermodular functions
– The dual of CAP2 is (with N^1, N^2 the bidders of each type):
min  Σ_{i∈M} p_i + Σ_{j∈N} q_j
s.t. Σ_{i∈S} p_i + q_j ≥ g^1(S)   ∀ S ⊆ M, j ∈ N^1
Σ_{i∈S} p_i + q_j ≥ g^2(S)   ∀ S ⊆ M, j ∈ N^2
p_i, q_j ≥ 0   ∀ i ∈ M, j ∈ N
– This dual is an instance of the polymatroid intersection
problem (polynomially solvable).
– Using this method to solve problems with three or more
types of bidders is not possible.
• It is known in those cases that the dual problem above
admits fractional extreme points.
• The problem of finding an integer optimal solution for the
intersection of three or more polymatroids is NP-hard.
Using Preferences(2)
• Restrictions in the preference orderings of the
bidders
– When each of the b_j(·) has the gross substitutes property,
CAP2 reduces to a sequence of matroid partition problems,
each of which can be solved in polynomial time.
CAP
1. CAP
2. SPP
3. Solvable Instances of SPP
4. Exact Methods
5. Approximate Methods
Exact Methods(1)
• An upper bound on the optimal solution value is
obtained by solving a relaxation of the
optimization problem.
– Replace the given problem by one with a larger feasible
region that is more easily solved.
• Lagrangean relaxation
– Will be discussed later
• Linear programming relaxation
– Only the integrality constraints are relaxed
Exact Methods(2)
• Exact methods
– Branch and bound
– Cutting planes
– Hybrid called branch and cut
Exact Methods(2)
• Exact methods
– Branch and bound (a rough sketch follows this slide)
1. At each stage, after solving the LP, a fractional variable xj
is selected and two subproblems are set up, one where
xj=1 and the other where xj=0. (Branch)
2. Solve the LP relaxation of the two subproblems.
3. From each subproblem with a nonintegral solution we
branch again to generate two subproblems and so on.
4. By comparing the LP bound across nodes in different
branches of the tree, one can prune some branches in
advance. (Bound)
– Cutting planes
– Hybrid called branch and cut
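A self-contained sketch of the branch-and-bound scheme just outlined, applied to weighted set packing. To keep it runnable without an LP solver it uses a crude upper bound (current value plus the total weight of the undecided sets) in place of the LP relaxation; the structure of branching and pruning is the same.

```python
def branch_and_bound_spp(sets, weights):
    """Weighted set packing by branch and bound.
    sets: list of frozensets of objects; weights: parallel list of non-negative weights.
    Branches on x(j) = 1 / x(j) = 0 and prunes with a simple upper bound."""
    order = sorted(range(len(sets)), key=lambda j: -weights[j])
    best = [0]

    def recurse(k, used, value, remaining):
        best[0] = max(best[0], value)
        # Bound: even taking every remaining set cannot beat the incumbent.
        if k == len(order) or value + remaining <= best[0]:
            return
        j = order[k]
        if not (used & sets[j]):
            # Branch x(j) = 1 (only if the set is still available)
            recurse(k + 1, used | sets[j], value + weights[j], remaining - weights[j])
        # Branch x(j) = 0
        recurse(k + 1, used, value, remaining - weights[j])

    recurse(0, frozenset(), 0, sum(weights))
    return best[0]

sets = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4}), frozenset({1, 4})]
print(branch_and_bound_spp(sets, [5, 4, 3, 3]))   # 8: accept {1, 2} and {3, 4}
```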
Exact Methods(3)
• Exact methods
– Branch and bound
– Cutting planes
• Find linear inequalities (cuts) that are violated by a solution
of a given relaxation but are satisfied by all feasible zero-one
solutions.
• If one adds enough cuts, one is left with integral extreme
points.
– Hybrid called branch and cut
Exact Methods(4)
• Exact methods
– Branch and bound
– Cutting planes
– Hybrid called branch and cut
• Works like branch and bound, but tightens the bounds in
every node of the tree by adding cuts.
• Since even small instances of CAP1 may involve a huge
number of columns (bids), this method needs to be
augmented with another method known as column
generation. (It works by generating a column when needed
rather than all at once.)
Exact Methods(5)
• How successful are exact approaches?
– An optimal solution to an instance of SPA with 1,053,137
variables and 145 constraints has been found in under 25
minutes.
• A major impetus behind the desire to solve large instances
of SPA (SCP) quickly has been the airline industry.
– Assigning crews to routes can be formulated as an SPA.
• The rows of the SPA correspond to flight legs.
• The columns correspond to sequences of flight legs that
would be assigned to a crew.
CAP
1. CAP
2. SPP
3. Solvable Instances of SPP
4. Exact Methods
5. Approximate Methods
Approximate Methods
• Probably every heuristic approach for solving
general integer programming problems has been
applied to the SPP.
– Greedy (a sketch follows this slide), interchange/steepest
ascent approaches, genetic algorithms, probabilistic search,
simulated annealing, neural networks
• Give up on finding the optimal solution.
– Rather one seeks a feasible solution fast and hopes that
it is near optimal.
– How close to optimal is the solution ?
• Worst-case analysis
• Probabilistic analysis
• Empirical testing
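A minimal sketch of the greedy heuristic mentioned above (other orderings, such as value per object, are also common; the names here are hypothetical). It is fast but, as the example shows, not guaranteed to be optimal.

```python
def greedy_spp(sets, weights):
    """Greedy heuristic for weighted set packing: consider bids in decreasing
    weight order and accept any bid disjoint from those already accepted.
    Returns (total_weight, accepted_indices)."""
    order = sorted(range(len(sets)), key=lambda j: -weights[j])
    used, accepted, total = set(), [], 0
    for j in order:
        if not (used & sets[j]):
            used |= sets[j]
            accepted.append(j)
            total += weights[j]
    return total, accepted

sets = [frozenset({1, 2, 3}), frozenset({1}), frozenset({2}), frozenset({3})]
print(greedy_spp(sets, [10, 4, 4, 4]))   # (10, [0]); the optimum is 12 (the three singletons)
```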
1. Introduction
2. CAP
3. Decentralized Methods
Decentralized Methods
1. Duality in Integer Programming
2. Lagrangean Relaxation
Decentralized Methods
• One way of reducing some of the computational
burden in solving the CAP.
– Auctioneer : sets prices for the objects
Agents : announce which sets of objects they will
purchase at the posted prices
– If two or more agents compete for the same object, the
auctioneer adjusts the price vector.
– Bidders : are saved from specifying their bids for every
possible combination
Auctioneer : is saved from having to process each bid
function
Duality in Integer Programming(1)
• Decentralized approach
– Auctioneer chooses a feasible solution.
– Bidders are asked to submit improvements.
– Auctioneer agrees to share a portion of the revenue gain
with the bidder.
• Above method can be viewed as instances of dual
based procedures for solving an integer program.
Duality in Integer Programming(2)
• The (superadditive) dual to SPP
– The problem of finding a superadditive, non-decreasing
function F : R^m → R^1 such that
min  F(1)
s.t. F(a^j) ≥ c_j   ∀ j ∈ V
F(0) = 0
where a^j is the j-th column of the constraint matrix
– If the primal integer program has the integrality property,
there is an optimal integer solution to its LP relaxation,
and the dual function F will be linear, i.e., F(u) = Σ_i y_i u_i
Duality in Integer Programming(3)
• When F is linear, the superadditive dual becomes:
min  Σ_{i∈M} y_i
s.t. Σ_{i∈M} a_ij y_i ≥ c_j   ∀ j ∈ V
y_i ≥ 0   ∀ i ∈ M
– The superadditive dual thus reduces to the dual of the
linear programming relaxation of SPP.
– y_i can be interpreted as the price of object i.
Duality in Integer Programming(4)
• Solving the superadditive dual problem is as hard
as solving the original primal problem.
• It is possible to reformulate the superadditive dual
problem as a linear program.
– The number of variables is exponential in the size of the
original problem.
– For small or specially structured problems, this can
provide some insight.
• In general, one relies on the solution to the LP
dual and uses its optimal value to guide the
search for an optimal solution to the original
primal integer program.
=> Lagrangean Relaxation
Lagrangean Relaxation(1)
• Relax some of the constraints of the original
problem by moving them into the objective
function with a penalty term.
– Infeasible solutions : allowed but penalized in proportion
to the amount of infeasibility.
Lagrangean Relaxation(2)
• Recall the SPP:
Z = max  Σ_{j∈V} c(j) x(j)
s.t. Σ_{j∈V} a(i, j) x(j) ≤ 1   ∀ i ∈ M
x(j) ∈ {0, 1}   ∀ j ∈ V
• Notation
– Z_LP : the optimal objective function value of the LP
relaxation of SPP. (Note that Z ≤ Z_LP)
– Z(Λ) = max  Σ_{j∈V} c(j) x(j) + Σ_{i∈M} λ_i (1 − Σ_{j∈V} a(i, j) x(j))
s.t. 0 ≤ x(j) ≤ 1   ∀ j
Lagrangean Relaxation(3)
• Theorem 3.2) Z_LP = min_{Λ≥0} Z(Λ)
– Computing Z(Λ) is easy.
• Simply set x(j) = 1 if c(j) − Σ_{i∈M} λ_i a(i, j) > 0,
and x(j) = 0 otherwise, since
Σ_{j∈V} c(j) x(j) + Σ_{i∈M} λ_i (1 − Σ_{j∈V} a(i, j) x(j))
= Σ_{j∈V} (c(j) − Σ_{i∈M} λ_i a(i, j)) x(j) + Σ_{i∈M} λ_i
– Using a subgradient algorithm, a Λ that minimizes Z(Λ) can
be found (a sketch follows this slide).
– Therefore, Z_LP can be found by a fast procedure.
• Lagrangean relaxation is not guaranteed to find
the optimal solution to the underlying problem.
– It finds an optimal solution to a relaxation of it.
– The resulting solution may not be too infeasible, so
could be fudged into a feasible solution without a great
reduction in objective function value.
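A sketch of the subgradient step described above, assuming the SPP data is given as a 0-1 matrix (list of rows a(i, j)) and a weight list c(j); the step-size rule and iteration count are arbitrary illustrative choices, not the authors'.

```python
def lagrangean_subgradient(a, c, iterations=100, step0=1.0):
    """Approximately minimise Z(lam) = sum_j max(0, c(j) - sum_i lam_i a(i,j)) + sum_i lam_i
    over lam >= 0 by a projected subgradient method.
    a: m rows of 0/1 entries over the n sets; c: n weights."""
    m, n = len(a), len(c)
    lam = [0.0] * m
    best_z = float("inf")
    for t in range(1, iterations + 1):
        # Solve the relaxed problem: x(j) = 1 iff its reduced profit is positive.
        reduced = [c[j] - sum(a[i][j] * lam[i] for i in range(m)) for j in range(n)]
        x = [1 if r > 0 else 0 for r in reduced]
        z = sum(r for r in reduced if r > 0) + sum(lam)
        best_z = min(best_z, z)
        # A subgradient of Z at lam has components 1 - sum_j a(i,j) x(j).
        g = [1 - sum(a[i][j] * x[j] for j in range(n)) for i in range(m)]
        step = step0 / t
        lam = [max(0.0, lam[i] - step * g[i]) for i in range(m)]
    return best_z, lam

# Two objects, three bids: {1}, {2}, {1, 2}
a = [[1, 0, 1],
     [0, 1, 1]]
print(lagrangean_subgradient(a, [3, 2, 4])[0])   # an upper bound on Z (here Z = 5)
```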
Lagrangean Relaxation(4)
• Market Interpretation
– Auctioneer chooses a price vector ∧ for the objects.
– Bidders submit bids.
– If the highest bid c(j) for the j-th bundle exceeds Σ_{i∈M} λ_i a(i, j),
this bundle is tentatively assigned to that bidder.
• SAA (simultaneous ascending auction)
– Bidders bid simultaneously in rounds.
– Bids must be increased by a specified minimum from
one round to the next.
– Bidders, rather than the auctioneer, adjust the prices, which
differs from the Lagrangean relaxation scheme.
– The exposure problem arises.
Lagrangean Relaxation(5)
• Exposure Problem
– Bidders pay too much for individual items or bidders with
preferences for certain bundles drop out early to limit
losses.
– For example,
• Bidder A values the bundle of goods i and j at $100, but each
individual good at $0.
• In the SAA, A has to submit high bids on both i and j to
secure them.
• Suppose that A loses the bidding on i.
• A is left standing with a high bid on j, which A values at $0.
– Any auction scheme that relies on prices for individual
items will face this problem.
Lagrangean Relaxation(6)
• AUSM (Adaptive User Selection Mechanism)
– Asynchronous in that bids on subsets can be submitted
at any time.
– Difficult to connect to the Lagrangean ideas.
• Iterative auction scheme
– Hybrid of the SAA and AUSM
– Easier to connect to the Lagrangean framework.
– Bidders submit bids on packages rather than on
individual items.
The End
Even if the researcher does not find what was initially expected,
the pursuit of a personally important topic is still rewarding and
generally produces continuing research.