Transcript
>> David Wilson: The speaker today is Ben, who has been an intern here and is also a student at the
University of Washington. So he'll tell us what he did over the summer.
>> Ben Birnbaum: Okay. Am I on the microphone? Can I talk quietly? So, yes, this is really I guess
like two talks. They're related by the sort of meta idea of local dynamics for equilibrium concepts.
But the two settings are very different and two different types of equilibrium are different and the
techniques very different. Basically it will be two talks. I'll do one and then the other.
So the first part is on local dynamics for balanced outcomes, and this is with Elisa and Nikhil. Elisa
is another student at the University of Washington.
So the setup that we're considering in this part of the talk is you're given a graph and it's a
weighted graph on the edges. And you can think of the nodes on this graph as agents. And the
weight on the edges signifies the value on the edge.
The agents will be bargaining on how to split the value on their adjacent edges. The catch is that at
the end each agent can make at most one deal with at most one of its neighbors.
So that means that an outcome in this bargaining game is a matching along with an allocation on
the vertices such that the allocation for a vertex is zero if it's not in the matching, and for every edge
that's in the matching, the sum of the values for its two vertices adds up to the weight of that edge.
So you can ask the question what kind of outcome -- sorry. So here's a sample graph with a
sample allocation. Should be pretty clear. Here's the matching and the value of 18 is split between
these two nodes and the value of 14 is split between these two nodes.
So you can ask the question what kind of outcome would you expect and one sort of basic property
you would expect in some sort of outcome from this game is stability. So let me show you what I
mean by stability.
So this is the outcome that I showed you in the last slide. But if you look at this edge right here, the
sum of these allocation values adds up to 10, which is less than the weight of this edge. So that
means that even though these two vertices are making deals with other neighbors, if they made a
deal with each other, then they could get more than what they're currently getting. This person
could get 2.5 and this person could get 8.5.
So this is an unstable edge, and a stable outcome is an outcome in which every edge is stable. So
that means that XU plus XV is greater than or equal to WUV for all edges (U, V).
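Stated in code, the stability check is a direct scan over the edges; a minimal sketch (the dictionary-based graph representation is my own, not from the talk):

```python
def is_stable(weights, allocation):
    """Check that every edge (u, v) satisfies x_u + x_v >= w_uv.

    weights: dict mapping an edge tuple (u, v) to its weight w_uv
    allocation: dict mapping each vertex to its allocated value
    """
    return all(allocation[u] + allocation[v] >= w
               for (u, v), w in weights.items())

# The unstable edge from the slide: the endpoints' allocations sum to
# 10, but the edge weight is 11 (they could split it 2.5 / 8.5).
print(is_stable({("a", "b"): 11}, {"a": 4, "b": 6}))  # False
```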
So this is kind of the most basic thing we would want in a solution concept for this game. But we
can also incorporate a notion of fairness in terms of the solution.
So let me take a quick detour to talk about Nash's Bargaining Solution and how it relates to this
case. I'm sure probably half of you have seen this stuff before. In Nash's bargaining game, there's
two players, and they have W dollars to split among each other, and if they don't make a deal, then
Player One gets alpha one and Player Two gets alpha two. So you can ask what's a reasonable way
to split the money.
And Nash's bargaining solution says you should just split the surplus which is the amount of
money minus the two alternatives. So here's a little -- so if this right here is the amount of money
you have to split, this is what Player One would get if they didn't make a deal this is what Player
Two would get if they didn't make a deal. You split it right here. Player One gets this and Player
Two gets this.
>>: The split between, divided up?
>> Ben Birnbaum: Yeah. So this distance is the same as this distance. So this is Nash's bargaining
solution. In a more general setting, Nash posed some axioms that seem reasonable for how you
would split this, and proved that this is the unique solution satisfying those axioms.
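The split-the-surplus rule is a one-line computation; a small sketch (the function name and the sample numbers are mine):

```python
def nash_split(w, alpha1, alpha2):
    """Nash's bargaining solution for two players splitting w dollars
    with outside options alpha1 and alpha2: each player gets their
    alternative plus half the surplus w - alpha1 - alpha2."""
    surplus = w - alpha1 - alpha2
    return alpha1 + surplus / 2, alpha2 + surplus / 2

# 10 dollars, alternatives 2 and 4: surplus of 4 is split evenly.
print(nash_split(10, 2, 4))  # (4.0, 6.0)
```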
So a balanced outcome, which is the kind of solution concept I'll be talking about for a fixed
bargaining network, generalizes Nash's bargaining solution to graphs.
So before we had these alternatives, alpha one and alpha two, that's just sort of giving us part of
the input to the game. But on a graph we're actually going to have them defined by the outcome on
the graph itself.
So the alternative for a vertex U is: suppose U is in the matching; you look at all of its neighbors
except for the one it's matched to, and you look at the weight on the edge to that neighbor minus
what the neighbor is getting. So that's what you could get from that neighbor, and then you take
the maximum of that with zero, because he doesn't have to make a deal with anybody if he doesn't
want to.
And a balanced outcome is a stable outcome such that --
>>: What's the second axiom, matches?
>> Ben Birnbaum: This is the maximum between these two quantities. It's the maximum between
this and zero. So a balanced outcome is a stable outcome such that every matched edge satisfies
Nash's bargaining solution with the alternatives defined in this way, and this is the same as saying
XU minus alpha U equals XV minus alpha V.
Assuming that XU and XV add up to the weight of the edge.
So here's an example. This is the same input graph. This has no neighbor besides the one it's
matched to. Alternative is zero. This is getting minus two from here, five minus seven but four
from here. So it's the best alternative is four. This guy's getting zero and this guy's getting zero.
So you see that seven minus zero is equal to 11 minus four and seven minus zero is equal to seven
minus zero.
So this is a balanced outcome, I think. Okay. So it's not clear whether or not these exist in all
graphs. This was studied by economists and sociologists for maybe the past 20 years. And in
STOC 2008, Kleinberg and Tardos had a paper that brought this to the attention of computer
scientists, and they proved that a balanced outcome exists if and only if a stable outcome exists,
and that you can find one in polynomial time.
>>: [inaudible].
>> Ben Birnbuam: Yes. And that the computer science recruitment for new grad students.
>>: So the balanced outcome is technically correct, right?
>> Ben Birnbuam: Yes. So I guess only if F is trivial. So I guess in some talks they asked the
question, Kleinberg and Tardos asked the question this is the polynomial algorithm which sort of
helps to justify it.
>>: Which is actually known a long time before, right?
>> Ben Birnbuam: I guess it's unclear. I know people have been talking about that. I have to -- I
don't know yet. This may have been known.
>>: Even when you -- known by physicists.
>>: The way I heard the story, after this already [inaudible], was that they found out that it was in
the literature by economists, like 20 years ago, already elaborated.
>> Ben Birnbuam: So they definitely have -- there's definitely stuff about this that they didn't talk
about in that paper in the economics literature. And I think this top part is pretty easy to see.
[laughter].
>>: Say that again.
>> Ben Birnbuam: The if and only if. I don't know about the polynomial algorithm, though. I need
to look more into it.
In any case, they asked this question: are there natural local dynamics that converge quickly to a
balanced outcome? So here's an idea, which is to start with a fixed matching and just keep
repeating the following step: pick an arbitrary edge and balance it. What I mean by balance it is
set the endpoint allocations so they satisfy this balanced condition.
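The balancing step and the dynamics can be sketched as follows; the graph representation (weights keyed by edge, a neighbor list) is my own, and random edge selection stands in for an arbitrary nonstarving schedule:

```python
import random

def best_alternative(u, partner, x, weights, neighbors):
    """alpha_u: the best a matched vertex u could get from a neighbor
    other than its current partner, floored at zero (it can walk away)."""
    offers = [weights[frozenset((u, w))] - x[w]
              for w in neighbors[u] if w != partner]
    return max([0.0] + offers)

def balance_edge(u, v, x, weights, neighbors):
    """One balancing step: reset x_u and x_v so the matched edge (u, v)
    satisfies x_u - alpha_u = x_v - alpha_v with x_u + x_v = w_uv."""
    w = weights[frozenset((u, v))]
    a_u = best_alternative(u, v, x, weights, neighbors)
    a_v = best_alternative(v, u, x, weights, neighbors)
    x[u] = a_u + (w - a_u - a_v) / 2
    x[v] = w - x[u]

def edge_balancing(matching, weights, neighbors, x, rounds=1000):
    """The dynamics: repeatedly pick a matched edge (arbitrarily,
    subject to nonstarvation) and balance it, starting from x."""
    for _ in range(rounds):
        u, v = random.choice(matching)
        balance_edge(u, v, x, weights, neighbors)
    return x
```

For example, on the path a-b-c with weights 10 and 6 and matching {(a, b)}, these steps drive the allocation to x_a = 2, x_b = 8.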
>>: I saw the question but I didn't understand the motivation. There is an algorithm.
>> Ben Birnbaum: There is an algorithm. But a justification for an equilibrium is that it can
happen in practice. So one sort of test that it could happen in practice is that there's an algorithm
to compute it. But beyond that, if the only algorithm that computes it needs knowledge of the entire
game or the entire market, then it's a little bit less realistic. So we want an algorithm where the
nodes act only with the information they have and still compute this equilibrium.
So start with a fixed matching M and then repeat the step where you choose an arbitrary edge and
balance it. So we did some work this winter, with Yossi, Elisa, Nikhil, and Yuval, and we
showed these dynamics do converge to a balanced outcome whenever one exists on the matching
M that you start with. So this requires a fixed matching M.
So this sort of left two open questions. One was we actually don't have the best time bound we can
show is an exponential convergence rate, and we don't know whether or not that's tight. We'd like
to prove this is polynomial.
But another open question, which is the one I'm going to talk about today, is these dynamics
assume you start with a fixed matching. So it would be nice to have an algorithm where the
matching is computed along with the allocation, and that's one thing we worked on this summer.
>>: So if the specific matching you started with does not contain a balanced outcome --
>>: If it's not optimal.
>>: Then you're not going to converge.
>> Ben Birnbaum: No, actually you'll still converge, but it won't be a stable outcome. So a
balanced outcome exists on matching M if and only if M has a stable outcome, which happens if and
only if M is a maximum matching.
>>: If what is?
>> Ben Birnbaum: If M is a maximum matching, almost.
>>: There's some --
>> Ben Birnbaum: If and only if M is a maximum matching and the maximum matching is also a
maximum fractional matching. Assuming that there exists a balanced outcome on the graph. I'm sorry.
>>: So any maximum matching would work?
>> Ben Birnbaum: Yes. If there is a balanced outcome on that graph, then any maximum matching
would work.
>>: So you choose the edges arbitrarily, as long as you choose each one in order?
>> Ben Birnbaum: Exactly. You need some sort of nonstarvation, but except for that you can
choose the edges arbitrarily. We also assume you choose the edges in such a way that you make at
least epsilon progress. So for our exponential time bound, it's like one over epsilon times
something exponential in the size of the graph, and that time bound requires that you make at
least epsilon progress each step.
>>: Progress in what sense?
>> Ben Birnbaum: That the allocation changes by at least epsilon. But the nonstarvation condition
you said is sufficient to show that it converges.
>>: Is it possible to make epsilon progress?
>> Ben Birnbaum: We converge to something that is within epsilon of being balanced. So we're
talking about converging to an approximate balanced solution.
>>: Convergence exact in finite time.
>> Ben Birnbuam: Typically it won't.
>>: That's why it converges [laughter].
>>: Array convergence.
>>: Yes, but it's always possible to have an edge --
>>: [indiscernible]
>> Ben Birnbaum: Okay. So we considered the second question: how would you compute a
matching? So the thing that we have from the summer is an algorithm that will compute both the
matching and the allocation, without global coordination, in bipartite graphs. So like I was saying
before, in a bipartite graph there always exists a balanced outcome. That's a theorem.
And as I mentioned, a balanced outcome exists on M if and only if M is maximum. So an
algorithm that computes a balanced outcome will also have to compute a maximum matching.
So you can ask the question: does there exist a local algorithm, a known local algorithm, to compute
a maximum matching? And the answer is yes. So I guess in the 1980s Bertsekas, who is at MIT,
did some work on the auction algorithm, which is an algorithm for maximum matching in bipartite
graphs, and you can think of it as a local algorithm, in some way of describing the algorithm.
So I have a bipartite graph with sides U and V. I'm going to call U the bidders and V the items.
Think of it as an auction. There are different ways to describe it; this is the way I'll describe it that
makes it easy to think about in terms of these balanced outcomes.
So this algorithm will maintain an allocation X on the vertices and a matching M. And BUV is going
to be what I'm going to call the offer of V to U. And that's basically the weight of the edge minus XV,
but we take the maximum of that and zero.
Okay.
>>: What's in the allocation?
>> Ben Birnbaum: An allocation is a vector on the vertices. So it's --
>>: Satisfying the --
>> Ben Birnbaum: No, I'm just saying it's a map from the vertices to the positive real numbers for
now. So initialization is we set on the left-hand side we set all the bidder's allocation to infinity, on
the right side we set the allocation to zero. We start with an empty matching.
And the algorithm keeps repeating the following until we have a perfect matching. So I'm going to
assume that the two sides both have size N, and we just add zero-weight edges if we need to, so
that it is a complete graph.
So we select an unmatched bidder, and so I'm going to illustrate this, too. So take this unmatched
bidder and look at the item that gives it the highest offer. Call that item V star.
Let rho be the offer from V star, and let sigma be the second highest offer. The auction
algorithm -- so this is one version of the auction algorithm -- says set the new value of XU to be
between rho and sigma. Actually, anywhere in between rho and sigma will work, but I'm going
to just assume right now that you take the average of the two.
Then you set the value for V star to be the weight of the edge minus what you just set XU to be. So
XV star and XU add up to the weight of that edge.
And then you add the edge U V star to the matching -- sorry, this should be V star; these should
also have stars. And you remove whatever V star was currently matched to, if it was already in the
matching. And you repeat this until you have a perfect matching.
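Putting the update step together, here is a sketch of this version of the auction algorithm in Python. The data layout is my own, and I include the small epsilon decrement up front, even though the talk only motivates it later, because without it the algorithm can get stuck on ties:

```python
def auction(weights, n, eps=0.01):
    """Auction algorithm sketch for maximum-weight bipartite matching.
    weights[u][v]: weight of edge (bidder u, item v); the graph is
    assumed complete (pad with zero-weight edges). Bidders start at
    +inf, items at 0; each step lowers the bidder's value by eps so
    item prices rise by at least eps and the loop terminates."""
    x_bidder = [float("inf")] * n
    x_item = [0.0] * n
    match = {}                 # item -> bidder
    matched = set()            # bidders currently matched
    while len(match) < n:
        u = next(i for i in range(n) if i not in matched)
        # offer of item v to bidder u: max(w_uv - x_v, 0)
        offers = [max(weights[u][v] - x_item[v], 0.0) for v in range(n)]
        v = max(range(n), key=offers.__getitem__)   # v*: best item
        rho = offers[v]                             # highest offer
        sigma = max((offers[j] for j in range(n) if j != v), default=0.0)
        x_bidder[u] = (rho + sigma) / 2 - eps       # between sigma and rho
        x_item[v] = weights[u][v] - x_bidder[u]     # split the edge weight
        if v in match:                              # outbid previous winner
            matched.discard(match[v])
        match[v] = u
        matched.add(u)
    return match, x_bidder, x_item
```

On the 2x2 instance with weights [[2, 1], [1, 4]], this matches bidder 0 to item 0 and bidder 1 to item 1.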
>>: Even when V star is not matched you don't give XU the maximum, the most, the best that it can
do?
>> Ben Birnbaum: No. I mean, as I've written it right now it could be that the second highest bid is
the same as the maximum bid, in which case you would.
So let me just illustrate this really quickly. Here's the update step. So let's say we choose this top
bidder. So this is the graph -- so these are the weights of the edges on the left and this will be the
matching on the right.
So the top bidder is currently getting an offer of two from that top item and an offer of one from
the bottom two items. So we match the top item and it gets 1.5. Then let's look at the second bidder:
it's getting .5 from the top, 4 from the middle and 1 from the bottom. We match the middle and take
the average of 4 and 1.
Seems like I can go -- seems like people understand this. So then you just keep repeating.
>>: This would take a long time?
>> Ben Birnbaum: It might. I mean --
>>: You don't have a polynomial time guarantee [indiscernible] this [inaudible] operation.
>> Ben Birnbaum: Let me tell you right now, the algorithm won't work the way I've written it.
[laughter] in this case it does work, this is a maximum matching and it took four steps. So it works.
[laughter].
So here's the problem with it as I've stated, which is that it can get stuck. So assume this is a
complete graph and I'm just showing edges that have weight one and the edges that I'm not
showing are zero weighted.
So at the beginning we'll do this and then when you look at this bottom bidder, it's getting an offer
of one from both of these and zero from here. So it's going to get one and match to one of the top
two items.
And then you just repeat, because then this guy is getting an offer of one from both of these two.
So it will match to one of those two. So we're not making any progress here. So the actual way to
implement the auction algorithm is to change the update rule to make sure you reduce it by some
amount epsilon each time. Some small amount epsilon.
And so let me just quickly -- I don't want to talk too much about the auction algorithm because it's
old and not what I did. But let me just say why it works real quickly.
So you can show that the allocations on the left are decreasing. And that means that the
allocations on the right are increasing. So you have some sort of progress.
And so, by the way, this proof -- this is a lot like stable marriage, for those who know the
Gale-Shapley algorithm for stable marriage; the proof is similar as well. So once an item V is
matched, it stays matched. That's not hard to show. Then, the way I've changed it, you subtract
epsilon each time, so you know you make some positive progress each time.
And in the auction algorithm setup, you're always matched to the item that is giving you the
best offer, within epsilon. So this means essentially that the allocation is within epsilon of being a
vertex cover. So this is a primal-dual algorithm.
So the allocation values are within epsilon of being a vertex cover. So when the size of the
matching is high enough, you have some sort of relaxed complementary slackness, and for small
enough epsilon, if you have integer edge weights, this means the matching is going to be optimal.
So that's just the outline of the proof. So you can think of this now as a two phase algorithm, where
first these agents compute a maximum matching and then they go to the edge balancing dynamics I
described earlier to compute a balanced outcome. But that's not very satisfactory, because that
requires some global, some person who has global knowledge saying okay now switch to phase
two.
So what if we change the auction algorithm to interleave the balancing steps with the auction
steps? So now instead of just choosing an arbitrary unmatched bidder we're just going to choose
any bidder. If U is not in the matching then do the auction step as normal.
If U is in the matching, then do the balancing step; basically, that's what this is right here. So if this
algorithm gets to a point where we have a perfect matching, then we'll always be in
the second case, and we'll just have the edge balancing dynamics. And we know from the work we
did last winter that from that point this will converge to a balanced outcome.
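The interleaved loop itself is tiny; here is a sketch where the two step functions are assumed to be supplied by the caller (for example, from the auction and balancing code above), and random selection again stands in for an arbitrary nonstarving schedule:

```python
import random

def interleaved_dynamics(bidders, is_matched, auction_step, balance_step,
                         rounds=1000):
    """Interleave the two kinds of steps: repeatedly pick ANY bidder;
    an unmatched bidder performs an auction step, while a matched
    bidder rebalances its matched edge. The step functions and the
    is_matched predicate are assumed to be provided by the caller."""
    for _ in range(rounds):
        u = random.choice(bidders)
        if is_matched(u):
            balance_step(u)
        else:
            auction_step(u)
```

Once every bidder is matched, only the balancing branch fires, which is exactly the point made above: from then on these are just the edge balancing dynamics.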
So we just need to show that this will eventually get to a point -- sorry, and if -- so what we need to
show is that this will get to a point where the size of M equals N and M is a maximum matching, and
by our earlier work that will show that this converges to a balanced outcome.
So that's what we can do. And it's actually not all that hard. So it's right now the proof is more
messy than anything else. I'll show you quickly why you might expect this to be the case and then
we'll move on to the second part of the talk.
Okay. So the reason why the proof of the auction algorithm still goes through is that the two types
of steps are actually kind of similar. So let's say we're choosing an unmatched -- so this is sort of a
hand waving argument to show why they're similar. I'm not sure if it will make sense or not.
So let's say we're in the step where we have an unmatched bidder U, and V star is the item giving it
its highest offer rho, and sigma is the second highest offer. So we know that, ignoring the
epsilon issue, the new value of XU is one half of rho plus sigma.
Rho here is essentially the weight of this edge minus the allocation for this vertex. And then so
that's where we get this. And you can think of essentially what is the alternative -- the alternative of
if you look at this edge, the best offer that U is getting besides this edge is sigma. And the best
offer that V is getting besides this edge is basically what it's currently getting from somebody else,
which is XV star.
So you can kind of make the substitution alpha V star for XV star and alpha U for sigma. And this is
just the balancing step. So you can make this more formal and show -- so because they're similar
you can kind of show the same sort of proof outline works. You show monotonicity and then you
have this vertex cover property throughout the algorithm so that at the end it will be a maximum
matching.
So right now the hard part of the proof, as it's written, is that you still need to add the epsilon,
and that gets sort of messy, maybe. It's unclear whether it can be made simpler and more
elegant.
>>: I don't understand, probably a silly question but why don't you first run the auction algorithm,
get the matching and then run the algorithm that you have previously.
>> Ben Birnbaum: So that requires the nodes to know when a maximum matching has been
reached.
>>: [indiscernible].
>> Ben Birnbaum: The way it's written right now, a node only knows whether or not it's matched. It
doesn't have to know anything about the matching for other nodes.
>>: That was a silly question.
>> Ben Birnbaum: Yeah, that would -- [laughter] so that's basically all I want to talk about with this.
We definitely have more stuff we want to do. So the algorithm as it's written requires the specific
initial conditions of the auction algorithm. And it doesn't treat the vertices symmetrically: there's
one side that does all the bidding, and there's the other side that's just the items. Because of this
we don't know how to extend it to general graphs. So we'd like to find an algorithm that fixes each
of these two things.
>>: Is there an auction algorithm for general graphs?
>> Ben Birnbaum: No. Not that I know of. So the auction algorithm is similar to the Gale-Shapley
algorithm for stable marriage.
>>: Stable roommates.
>> Ben Birnbaum: For stable roommates there's an algorithm, but it might not be quite so distributed.
I'm not sure. Definitely we want to look into that stuff.
>>: Also there's a gap -- if you just write standard constraints for matching in a general graph, then
you would have a gap two to five [phonetic], strong enough to capture matching.
>> Ben Birnbaum: Right. But it turns out that it's not hard to show, using duality, that a balanced
outcome exists on the graph if and only if that gap is one. That gap being one is the
condition for a stable outcome: a stable outcome is essentially an optimal primal-dual pair for this
matching relaxation.
So since a balanced outcome exists if and only if a stable outcome exists, a balanced outcome
exists if and only if there's no gap.
>>: But for which constraint? Just the standard?
>> Ben Birnbuam: Just the standard fractional matching.
>>: Matching?
>> Ben Birnbuam: Yeah.
>>: It can still be perfect matching in the gap of these constraints too, right?
>> Ben Birnbaum: Yeah, but --
>>: In that case there's no stable --
>> Ben Birnbaum: There's no stable outcome. So we say we don't care what happens in that case.
And then the other question, of course, is trying to prove that the edge balancing dynamics
themselves converge in polynomial time. Although, actually, because of these initial starting
conditions, because we have this monotonicity, the algorithm that I've described converges in
pseudopolynomial time. So it depends on the weights of the edges -- it depends linearly on the
weights of the edges -- but in terms of the size of the graph it converges in polynomial time.
Any other questions from the first part?
>>: Maybe your algorithm seems more natural than the edge [inaudible], because the whole point is
that there are several alternatives; why shouldn't people change the deal, change the [indiscernible].
>> Ben Birnbuam: Sorry, what was the last part?
>>: Just a comment that maybe your algorithm is more natural than this simple edge balancing
algorithm, because you expect people negotiating, they would really make use of the fact that there
are other alternatives and so matching would change.
>> Ben Birnbaum: Yeah. We hope it is -- okay. So, completely changing gears, this is another
project I was working on. This is with Nikhil and Lin Xiao -- and so you work in optimization and
machine learning?
>>: Yes.
>> Ben Birnbuam: Okay.
>>: Nikhil.
>> Ben Birnbuam: So he's also at Microsoft Research. So let me describe a model for a market.
There's a set of M buyers and each buyer has a budget BI. And then we have N items and there's
one unit of each item. And it's divisible. And let's say UIJ is the utility of buyer I for one unit of item
J.
Given some sort of allocation of items to buyers, the utility of buyer I -- that's X. The utility of buyer
I is just the weighted sum of the allocation according to its utilities. So this is where the linear
comes from.
And it will just be convenient to scale things so that the sum of the budgets is one and for every
buyer I the sum of the utilities is one.
So what is an equilibrium in this market? So in general market equilibrium is a way to match the
demand of the buyers to the supply of the sellers using prices.
So in this case there's two conditions for being in equilibrium. An equilibrium is a price vector on
the items and an allocation. So the first condition is that for every buyer I, the vector XI maximizes
the following optimization problem -- this is just the utility, and this is just the constraint that no
buyer spends more than its budget. Given the prices, every buyer should be getting an optimal
bundle of items. That's one condition for equilibrium.
The other constraint is that the market clears. So every seller sells every unit, sells the complete
unit of its item. So if these two conditions hold, we say that the allocation X on the price vector P
are in equilibrium.
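For linear utilities, both conditions can be written as a direct check; a minimal sketch (the matrix/vector representation is my own, and I assume strictly positive prices and the normalization of one unit per item):

```python
def is_equilibrium(u, b, x, p, tol=1e-6):
    """Check the two equilibrium conditions for a linear Fisher market.
    u[i][j]: utility of buyer i for one unit of item j; b[i]: budget;
    x[i][j]: allocation; p[j]: price (assumed positive).
    (1) Each buyer spends its whole budget, and only on items with
        maximum bang-for-buck u[i][j] / p[j] (buyer optimality).
    (2) Market clearance: every item is fully sold."""
    m, n = len(u), len(p)
    for i in range(m):
        spend = sum(x[i][j] * p[j] for j in range(n))
        if abs(spend - b[i]) > tol:
            return False                     # budget not exhausted
        mbb = max(u[i][j] / p[j] for j in range(n))
        for j in range(n):
            if x[i][j] > tol and u[i][j] / p[j] < mbb - tol:
                return False                 # bought a suboptimal item
    return all(abs(sum(x[i][j] for i in range(m)) - 1.0) < tol
               for j in range(n))
```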
>>: So sorry, in this maximization of prices, X is --
>> Ben Birnbaum: Yes.
>>: Another statement that prices are [inaudible].
>> Ben Birnbuam: Given that the prices are fixed XIJ maximizes this.
So here's a very brief summary of what is known. So at least --
>>: MV3 [phonetic] that no one wants to change, no buyer wants to change the allocation?
>>: Optimize these --
>>: Pardon?
>>: The bidder optimizes. Knowing the prices, he's getting the best.
>> Ben Birnbaum: That's over all possible allocations. So in 1959 Eisenberg and Gale showed --
I'm definitely not an expert in the history of this, so I might be getting things wrong; I hope not --
so Eisenberg and Gale showed an equilibrium always exists. The way they did that is they gave a
convex program and showed that an optimal primal solution, with the optimal dual variables as the
prices, forms a market equilibrium.
>>: Exist for every price vector, right?
>> Ben Birnbaum: No, the equilibrium is --
>>: That's the value.
>> Ben Birnbaum: The equilibrium --
>>: [indiscernible].
>>: What if I take all the prices to be zero?
>> Ben Birnbaum: It doesn't necessarily exist for every price vector. An equilibrium is an
allocation and a set of prices. For every set of UIJs and budgets, there exists some price and
allocation that are in equilibrium.
>>: Is it going to be always true for price vector [indiscernible].
>> Ben Birnbuam: It shouldn't always be true.
>>: You go back one.
>> Ben Birnbuam: I mean the -- so I guess this would hold.
>>: [indiscernible] everyone wanting finite quantity.
>> Ben Birnbaum: Exactly. So this wouldn't hold. If the prices are zero you could always satisfy --
>>: If you really want to.
>>: So you can -- so you could always scale E. [indiscernible].
>>: You have [indiscernible].
>>: It does, because the budgets are fixed.
>> Ben Birnbaum: Okay. So Eisenberg and Gale showed an optimum solution to this convex
program is a market equilibrium if you take the prices to be the optimal dual variables
corresponding to this constraint.
I'm not going to go into that because I'll talk about something similar in a little bit. So then in 2002
Nikhil and some other people -- Papadimitriou, Saberi, Vazirani -- gave -- so this is a convex
program, so it can be solved using the ellipsoid method, but Nikhil and these other people gave a
combinatorial primal-dual algorithm that computes an equilibrium. And this is where I'm not an
expert.
There's been some work on some dynamics to compute an equilibrium, called tatonnement -- I'm
probably not even saying that correctly. But I guess this work either doesn't apply to the linear
Fisher model or it needs some form of global coordination.
So again our question is: are there local dynamics that compute this equilibrium? So here's one
candidate. These are called the proportional response dynamics. For the Fisher market, these
were introduced by Li Zhang this year, and Li Zhang is at MSR Silicon
Valley.
So these dynamics maintain a set of bid vectors, and you can think of the bid as just being the
allocation times the price. And we're going to define the price at time T to be the sum of the bids
for that item at time T.
So here are the dynamics. Given some bid vector at time T, we compute the bid vector at time T
plus one so that BIJ is proportional to the utility bidder I is getting from J.
So this is the current utility that bidder I is getting from item J. And we can replace XIJ at time T by
BIJ at time T over PJ at time T. So essentially we get a new bid based on the old bid times this
factor UIJ over PJ at time T.
>>: This proportional means that for each bidder they sum up to the budget really?
>> Ben Birnbaum: Yes, I'm sorry, I forgot to say that. So you scale these so that they sum up to
the budgets, exactly. For each bidder.
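A sketch of these dynamics in Python. The uniform initial bids and the fixed round count are my own choices (the talk doesn't specify them), and I assume every buyer has positive utility for something and every item attracts some positive bid, so no price hits zero:

```python
def proportional_response(u, b, rounds=200):
    """Proportional response dynamics for the linear Fisher market.
    u[i][j]: utility of buyer i for one unit of item j; b[i]: budget.
    The new bid of i on j is proportional to the utility i currently
    derives from j, rescaled so i's bids sum to its budget b[i]."""
    m, n = len(u), len(u[0])
    # start with each budget spread evenly over the items
    bids = [[b[i] / n] * n for i in range(m)]
    for _ in range(rounds):
        # price of item j at time t: total bids on j
        prices = [sum(bids[i][j] for i in range(m)) for j in range(n)]
        new_bids = []
        for i in range(m):
            # utility i gets from item j at current bids: u_ij * b_ij / p_j
            util = [u[i][j] * bids[i][j] / prices[j] for j in range(n)]
            total = sum(util)
            new_bids.append([b[i] * w / total for w in util])
        bids = new_bids
    prices = [sum(bids[i][j] for i in range(m)) for j in range(n)]
    return bids, prices
```

On the symmetric market u = [[2, 1], [1, 2]] with budgets 0.5 each, the prices stay at 0.5 while each buyer's spending concentrates on its favorite item.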
And so in a paper, I think in ICALP, Zhang proved these dynamics do converge to an approximate
equilibrium, which I'll define later, in time that's polynomial in the number of bidders, the number of
items, one over the approximation factor, one over the minimum utility and one over the
minimum budget.
And remember these are scaled so that these add up to one for each bidder and those add up to
one. So this is pseudopolynomial time. What we'd really like is to prove that these dynamics
converge in time polynomial in the log of these quantities.
And that was our goal for this summer.
>>: Epsilon.
>> Ben Birnbuam: N over one epsilon and N.
So I'm going to talk for the rest of the time I'm going to talk about -- we haven't proved this. But we
have an approach that we hope will work.
So the first thing is we have a new -- or a different -- convex program that also computes an
equilibrium. So in this convex program, the BIJs are the variables. And here I've written this
constraint that PJ is equal to the sum of the BIJs. It's a symbolic constraint, because that's the only
place where the PJs appear: think of PJ as the sum of the BIJs. The variables are the BIJs, and this
is the convex program. So the main constraint is that the bids add up to the budgets.
The sum of the bids for each bidder adds up to its budget, and then we have this kind of weird
objective function. So why -- let me say briefly why this actually works. So we have two conditions
for -- so we want to show why an optimal solution to this is equilibrium.
So we have two conditions. The first is market clearance, and that's easy. We sum the XIJs, and
that's equal to the sum of BIJ over PJ, and we define PJ to be the sum of the BIJs, so that's just
equal to one -- that's by the way we defined PJ. So we have market clearance. So the interesting
part is to show bidder optimality. To do that, the first observation is that if you call this objective
function F and take the derivative with respect to BIJ, it's pretty straightforward to show -- you just
do it -- and it comes out to be this quantity: log of UIJ over PJ, minus one. And this UIJ over PJ
comes up a lot. You can think of that as sort of the bang for buck.
So UIJ is the utility of bidder I for one unit of item J and PJ is the price. That's the amount of utility
you get per unit of price.
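If the objective is F(B) = sum over I, J of BIJ log UIJ minus sum over J of PJ log PJ -- which is my assumption for the form on the slide, chosen because its gradient matches the quantity just stated -- the derivative can be verified numerically:

```python
import math

def F(bids, u):
    """Candidate objective (an assumption about the slide's formula):
    F(b) = sum_ij b_ij log u_ij - sum_j p_j log p_j, p_j = sum_i b_ij."""
    m, n = len(u), len(u[0])
    prices = [sum(bids[i][j] for i in range(m)) for j in range(n)]
    linear = sum(bids[i][j] * math.log(u[i][j])
                 for i in range(m) for j in range(n))
    return linear - sum(p * math.log(p) for p in prices)

# Finite-difference check of dF/db_00 = log(u_00 / p_0) - 1
# at an arbitrary sample point.
u = [[2.0, 1.0], [1.0, 3.0]]
bids = [[0.3, 0.2], [0.1, 0.4]]
h = 1e-6
bumped = [row[:] for row in bids]
bumped[0][0] += h
numeric = (F(bumped, u) - F(bids, u)) / h
analytic = math.log(u[0][0] / (bids[0][0] + bids[1][0])) - 1
print(abs(numeric - analytic) < 1e-4)  # True
```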
>>: Where does this come from?
>> Ben Birnbuam: It comes. [laughter].
>>: BIJ. Right. Right. Okay. Sorry. Go ahead.
>> Ben Birnbaum: Oh, yeah, it's more complicated than that -- it's not just a linear function.
Okay. So like I was saying, this is the bang for buck. And bidder optimality -- I had it written as the XIJs optimizing this optimization problem, but you can also think of it this way: bidder optimality is essentially equivalent to the following. For bidder I, if the bid for an item J is greater than zero, then that implies that the bang for buck for item J is equal to the maximum bang for buck over all items. That's what an optimal solution to that problem looks like.
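Written out, the bidder-optimality condition being described is:

```latex
b_{ij} > 0 \;\Longrightarrow\; \frac{u_{ij}}{p_j} \;=\; \max_{k} \frac{u_{ik}}{p_k}.
```

That is, every item a bidder actually puts money on gives it the maximum bang for buck.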
So I claim that an optimal solution to this problem has this property. Suppose it didn't. So suppose there is a J such that BIJ is greater than zero but this is not equal to the maximum value. Then take a very small amount of that bid and transfer it to some other J prime that does have the maximum bang for buck. Because the gradient is monotonic in the bang for buck, that means we'll actually improve the value of the objective function, as long as the amount we transfer is small enough.
So that's a contradiction. So this property holds, and this computes a market equilibrium. So that's the start. And here we have this equation for the gradient of the objective function of this convex program. The proportional response algorithm is then basically just saying that BIJ at time T plus one is proportional to BIJ at time T times E to the IJth component of the gradient -- the minus one doesn't matter, since we're just taking it proportional. And remember -- I should have it written here -- proportional response was just BIJ times the bang for buck.
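As a sanity check on the dynamics being described, here is a minimal sketch of proportional response on a toy two-bidder, two-item linear market. The utilities and budgets are made up for illustration; the update is each bidder re-splitting its budget in proportion to bid times bang for buck.

```python
def proportional_response(u, budgets, T=500):
    """Proportional response dynamics: each bidder re-splits its budget
    across items in proportion to b_ij * (u_ij / p_j)."""
    n, m = len(u), len(u[0])
    # start with each bidder splitting its budget evenly over the items
    b = [[budgets[i] / m for _ in range(m)] for i in range(n)]
    for _ in range(T):
        p = [sum(b[i][j] for i in range(n)) for j in range(m)]  # p_j = sum_i b_ij
        for i in range(n):
            w = [b[i][j] * u[i][j] / p[j] for j in range(m)]    # bid times bang for buck
            s = sum(w)
            b[i] = [budgets[i] * wj / s for wj in w]            # renormalize to the budget
    p = [sum(b[i][j] for i in range(n)) for j in range(m)]
    return b, p

u = [[2.0, 1.0],   # bidder 0 prefers item 0
     [1.0, 2.0]]   # bidder 1 prefers item 1
budgets = [1.0, 1.0]
b, p = proportional_response(u, budgets)
```

On this symmetric instance the equilibrium has prices (1, 1) with each bidder spending its whole budget on its preferred item, and the iterates approach that point.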
So as I understand it -- and this is definitely Lin's area -- this is a fairly standard algorithm for convex optimization; at least it's been studied before. And in the general form of the algorithm you add a step size. So here the step size is one over gamma. And if you add a one over gamma here, that's equivalent to putting one over gamma in the exponent on the bang for buck. So consider these dynamics now. They're like proportional response, but with a smaller step size. And we can show that for these dynamics, if there exists a constant L such that this is satisfied, then for any T greater than this quantity, and for any step size that's small enough -- the step size is one over gamma -- we're within epsilon of optimizing the objective function of this convex program.
This uses some techniques from convex optimization that are also Lin's specialty, so I'm not going to go into that very much. But the main thing is that if we can show this Lipschitz continuity of the gradient of the objective function, then we can prove a convergence time, which is what we want, as long as this constant is nice.
So how do we do that? That's what I'm going to focus on. So let me rewrite what that Lipschitz
continuity meant. So let's say we have two bid vectors and let's assume that as before PJ is the
sum of the BIJs and Q be the sum of the AIJs. So this is the left-hand side of the equation, this is
the maximum of this quantity right here, which is the maximum of this QJ, the log of QJ over PJ.
So suppose that J star maximizes this quantity and we can assume that QJ star equals PJ star plus
Y for some Y. And then that means that this is equal to the log of one plus Y over PJ star, just
substituting that in here, which is less than or equal to Y over PJ star, and then if QJ star is equal to
PJ star plus Y, then that means that the difference in the L 1 norm between A and B has to be at
least Y, because this is the sum for one specific item.
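Putting that chain together in one line -- with QJ the prices under A, PJ the prices under B, q_{j^*} = p_{j^*} + y, and using the standard bound \log(1+z) \le z:

```latex
\max_j \log\frac{q_j}{p_j}
\;=\; \log\Bigl(1 + \frac{y}{p_{j^*}}\Bigr)
\;\le\; \frac{y}{p_{j^*}}
\;\le\; \frac{\lVert A - B \rVert_1}{\min_j p_j}.
```

So the Lipschitz constant of the gradient is controlled by one over the minimum price, which is exactly why the next step is about keeping the prices away from zero.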
So this is basically what we need: if we can lower bound this minimum price, then we can prove this Lipschitz continuity of the gradient. So we asked, how can we lower bound the minimum price? We don't know how to do that. So instead we're going to change the dynamics in such a way that the price is guaranteed not to get too small.
So let's change the dynamics so that -- this is the same, except we replace PJT everywhere by pi JT, and we let pi JT be the maximum of the true price and some constant delta. So here we are essentially saying the price has to be at least some constant.
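To make the modification concrete, here is a toy sketch of one step of the damped, price-floored dynamics; delta, gamma, and the market below are values I picked arbitrarily for illustration. On this instance the equilibrium prices sit above delta, so the floor is inactive and the iterates converge to the same point as plain proportional response, just more slowly.

```python
def floored_step(b, u, budgets, gamma=2.0, delta=0.05):
    """One step of the modified dynamics: the price inside the bang-for-buck
    term is floored at delta, and 1/gamma acts as a step size in the exponent."""
    n, m = len(u), len(u[0])
    p = [sum(b[i][j] for i in range(n)) for j in range(m)]
    pi = [max(pj, delta) for pj in p]                 # pi_j = max(p_j, delta)
    new_b = []
    for i in range(n):
        # damped proportional response: weight by (bang for buck)^(1/gamma)
        w = [b[i][j] * (u[i][j] / pi[j]) ** (1.0 / gamma) for j in range(m)]
        s = sum(w)
        new_b.append([budgets[i] * wj / s for wj in w])
    return new_b

u = [[2.0, 1.0], [1.0, 2.0]]
budgets = [1.0, 1.0]
b = [[0.5, 0.5], [0.5, 0.5]]
for _ in range(300):
    b = floored_step(b, u, budgets)
p = [b[0][0] + b[1][0], b[0][1] + b[1][1]]
```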
These dynamics correspond to the same sort of descent algorithm, except with a different objective function G. This objective function G is such that its gradient is exactly equal to this for every I and J. And we have an explicit form for G -- it's not that interesting to put it up here -- and we can tell that it's close to F in terms of delta. And because we've replaced this PJ with this pi J, we kind of get the Lipschitz continuity of the gradient for free.
So we've changed the algorithm so that we don't let the price go too small, which gives us Lipschitz continuity. And then we show that although this new algorithm minimizes some other objective function, the two objective functions are not that far apart, so it also comes close to optimizing the original objective function.
So when we put the constants together -- at least, this is what we have right now -- we can show that for these modified dynamics, if T is at least this big, then you can get within epsilon. So far this is okay: this is polynomial in N, M, and one over epsilon, and polynomial in one over B min. But there's a big missing step here which we still have to work on: we can show convergence in terms of this potential function, but we don't know what that means in terms of how close we actually are to being at equilibrium.
An epsilon-equilibrium is, more or less, that each bidder gets within a factor of one minus epsilon of its maximum utility. So I think the main part left, if this approach is going to work, is relating this epsilon to that epsilon. That's it.
[applause].
>> David Wilson: Questions?
[applause]
```