Designing Games for Distributed Optimization
Na Li and Jason R. Marden
IEEE Journal of Selected Topics in Signal Processing, Vol. 7, No. 2, pp. 230-242, 2013

Presenter: Seyyed Shaho Alaviani
Introduction
- advantages of game theory
Problem Formulation and Preliminaries
- potential games
- state based potential games
- stationary state Nash equilibrium
Main Results
- state based game design
- analytical properties of designed game
- learning algorithm
Numerical Examples
Conclusions
Network
-Consensus
-Rendezvous
-Formation
-Schooling
-Flocking
All of these are special cases of distributed optimization, as the formulation below illustrates.
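For instance, consensus fits this template (a standard formulation, not taken from the paper): on a connected undirected graph with neighbor sets $N_i$, the minimizers of

$$\min_{v_1, \dots, v_n} \; \sum_{i \in N} \sum_{j \in N_i} (v_i - v_j)^2$$

are exactly the consensus states $v_1 = v_2 = \dots = v_n$.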
Introduction
Game Theory: a powerful tool for the design and control of multi-agent systems

Using game theory requires two steps:
1- modeling the agents as self-interested decision makers in a game-theoretic environment: defining a set of choices and a local objective function for each decision maker
2- specifying a distributed learning algorithm that enables the agents to reach
a Nash equilibrium of the designed game
Core advantage of game theory:
It provides a hierarchical decomposition between the distributed optimization problem (game design) and the specific local decision rules (distributed learning algorithm).

Example: Lagrangian (dual) decomposition, sketched below.
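As a sketch (standard dual decomposition, not taken from the paper): for a separable problem

$$\min_{v} \; \sum_{i \in N} f_i(v_i) \quad \text{s.t.} \quad \sum_{i \in N} g_i(v_i) \le 0,$$

the Lagrangian $L(v, \lambda) = \sum_{i \in N} \big( f_i(v_i) + \lambda\, g_i(v_i) \big)$ splits, for a fixed multiplier $\lambda \ge 0$, into independent per-agent subproblems $\min_{v_i} f_i(v_i) + \lambda\, g_i(v_i)$. The multiplier plays the role of the "game design" layer, while each subproblem is solved by a local decision rule.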
The goal of this paper:
To establish a methodology for the design of local agent objective functions that lead to desirable system-wide behavior
Graph
[Figures: examples of connected vs. disconnected graphs, and of directed vs. undirected graphs]
Problem Formulation and Preliminaries
Consider a multi-agent system of $n$ agents, $N = \{1, 2, \dots, n\}$
$V_i$: set of decisions of agent $i$, a nonempty convex subset of the real numbers

Optimization problem:

$$\min_{v} \; f(v_1, v_2, \dots, v_n) \quad \text{s.t.} \quad v_i \in V_i, \; i \in N$$

where $f$ is a convex function and the communication graph is undirected and connected. A toy instance is sketched below.
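A minimal sketch (my illustration, not from the paper): a hypothetical convex instance of this problem, solved centrally with SciPy as a baseline against which a distributed method can be checked.

import numpy as np
from scipy.optimize import minimize

n = 4

def f(v):
    # Hypothetical convex objective: separable quadratic plus a coupling term
    return np.sum((v - np.arange(n)) ** 2) + np.sum(v) ** 2

# V_i = [0, 2] for every agent: nonempty convex subsets of the reals
bounds = [(0.0, 2.0)] * n

res = minimize(f, x0=np.zeros(n), bounds=bounds)
print("optimal v:", res.x, "optimal value:", f(res.x))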
Potential games: games that admit a potential function which, much like a potential in physics, aggregates every agent's incentives into a single function (formal condition after the list).

Main properties of potential games:
1- a pure strategy Nash equilibrium (PSNE) is guaranteed to exist
2- there are several distributed learning algorithms with proven asymptotic guarantees
3- learning a PSNE in potential games is robust: heterogeneous clock rates and informational delays are not problematic
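For reference, the standard exact potential game condition (Monderer and Shapley): there exists $\Phi$ such that for every agent $i$, every pair of actions $a_i, a_i'$, and every joint action $a_{-i}$ of the other agents,

$$J_i(a_i', a_{-i}) - J_i(a_i, a_{-i}) = \Phi(a_i', a_{-i}) - \Phi(a_i, a_{-i}).$$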
Stochastic games (L. S. Shapley, 1953):
In a stochastic game the play proceeds by steps
from position to position, according to transition
probabilities controlled jointly by two players.
State Based Potential Games (J. Marden, 2012):
A simplification of stochastic games that represents an extension of strategic form games in which an underlying state space is introduced into the game-theoretic environment.
Main Results
State Based Game Design:
The goal is to establish a state based game formulation for our distributed optimization problem in which each agent's cost function depends only on local information and all equilibria of the game are solutions of the optimization problem.
A State Based Game Design for Distributed Optimization:
- State Space
- Action sets
- State dynamics
- Invariance associated with state dynamics
- Agent cost functions
State Space: each agent's local state is $x_i = (v_i, e_i)$, where $v_i$ is its value and $e_i = (e_i^1, \dots, e_i^n)$ collects its estimation terms for the agents' values; the system state is $x = (v, e)$.
Action sets:
An action for agent $i$ is defined as a tuple $a_i = (\hat{v}_i, \hat{e}_i)$
$\hat{v}_i$ indicates a change in the agent's value $v_i$
$\hat{e}_i$ indicates a change in the agent's estimation terms $e_i$
State Dynamics:
For a state $x = (v, e)$ and an action $a = (\hat{v}, \hat{e})$, the ensuing state $\tilde{x} = (\tilde{v}, \tilde{e})$ is given by the update sketched below.
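A reconstruction in the spirit of the paper (my notation: $\hat{e}_{i \to k}^j$ is the portion of the estimate of agent $j$'s value that agent $i$ passes to neighbor $k$; see the paper for the exact expression):

$$\tilde{v}_i = v_i + \hat{v}_i, \qquad \tilde{e}_i^j = e_i^j - \sum_{k \in N_i} \hat{e}_{i \to k}^j + \sum_{k \in N_i} \hat{e}_{k \to i}^j + n\, \hat{v}_i \,\mathbb{1}\{j = i\},$$

so a change in an agent's value is injected into its own estimation term, while exchanged estimate mass is conserved across each link.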
Invariance associated with state dynamics:
Let $v(0) = (v_1(0), \dots, v_n(0))$ be the initial values of the agents.
Define the initial estimation terms $e(0)$ to satisfy

$$\sum_{i \in N} e_i^j(0) = n\, v_j(0)$$

Then for all $t \ge 1$,

$$\sum_{i \in N} e_i^j(t) = n\, v_j(t)$$

A numerical check of this invariance is sketched below.
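A quick check (assuming the reconstructed dynamics above; the exchange amounts and value steps here are arbitrary random choices):

import numpy as np

rng = np.random.default_rng(0)
n = 5
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # undirected ring

v = rng.standard_normal(n)
e = np.zeros((n, n))                      # e[i, j]: agent i's term for agent j
e[np.arange(n), np.arange(n)] = n * v     # satisfies sum_i e[i, j] = n * v[j]

for t in range(100):
    vhat = 0.1 * rng.standard_normal(n)
    e_new = e.copy()
    for i in range(n):
        for k in nbrs[i]:
            ehat = 0.1 * rng.standard_normal(n)
            e_new[i] -= ehat              # estimate mass sent from i to k ...
            e_new[k] += ehat              # ... is received by k (conserved)
        e_new[i, i] += n * vhat[i]        # keeps the invariant when v_i moves
    v, e = v + vhat, e_new
    assert np.allclose(e.sum(axis=0), n * v)
print("invariance holds at every step")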
Agent cost functions:
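The costs are built from $f$ evaluated at estimates rather than at the true values; a form along these lines (my paraphrase of the construction, with $\tilde{e}_j$ the ensuing estimates held by neighbor $j$; see the paper for the exact expression):

$$J_i(x, a) = \sum_{j \in N_i} f\big(\tilde{e}_j^1, \dots, \tilde{e}_j^n\big)$$

Each agent thus needs only the estimates held by its neighbors, which keeps the cost function local.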
Analytical Properties of Designed Game
Theorem 2 shows that the designed game is a state based potential game.
Theorem 2: The state based game is a state based potential game with a potential function $\Phi(x, a)$, where $\tilde{x} = (\tilde{v}, \tilde{e})$ represents the ensuing state.
Theorem 3 shows that all equilibria of the designed game are
solutions to the optimization problem.
Theorem 3: Let G be the state based game. Suppose that π is a
differentiable convex function, the communication
graph is connected and undirected, and at least one
of the following conditions is satisfied:
Question:
Could the results in Theorems 2 and 3 have been attained using the framework of strategic form games?
No, this is impossible.
Learning Algorithm
We prove that the learning algorithm gradient
play converges to a stationary state NE.
Assumptions:
Theorem 4: Let G be a state based potential game with a potential function $\Phi(x, a)$ that satisfies the assumption. If the step size $\epsilon_i \le 2/L$ for all $i \in N$, then the state action pair $(x(t), a(t))$ of the gradient play asymptotically converges to a stationary state NE. A sketch of gradient play follows.
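A minimal sketch of gradient play (my illustration, not the paper's exact algorithm): each agent repeatedly moves its own action along the negative gradient of its own cost. For the sketch we use a common-interest game, $J_i = \Phi$ for all $i$, which is trivially an exact potential game, with a hypothetical quadratic potential.

import numpy as np

n = 5
A = np.diag(np.arange(1.0, n + 1)) + np.ones((n, n))  # positive definite

def grad_phi(a):
    # Gradient of the potential Phi(a) = 0.5 * a^T A a
    return A @ a

L = np.linalg.eigvalsh(A).max()   # Lipschitz constant of grad Phi
eps = 1.0 / L                     # step size within the eps_i <= 2/L bound

a = np.random.default_rng(1).standard_normal(n)
for t in range(500):
    g = grad_phi(a)
    a = a - eps * g               # agent i updates a_i along -grad_i J_i

print("converged to an equilibrium:", np.linalg.norm(grad_phi(a)) < 1e-6)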
Numerical Examples
Example 1:
Consider the following function to be minimized
Example 2: Distributed Routing Problem
Application: the Internet
[Figure: a source and a destination connected by m parallel routes]
Each agent has an amount of traffic to send and chooses the percentage of its traffic that it designates to each route r.
For each route $r$, there is an associated congestion function $c_r$ that reflects the cost of using the route as a function of the amount of traffic on that route. Then the total congestion in the network will be as sketched below.
Simulation setup: $R = 5$ routes, $N = 10$ agents, amount of traffic $\alpha = 900$.
[Figure: communication graph and simulation results]
Conclusions:
- This work presents an approach to distributed optimization using the framework of state based potential games.
- We provide a systematic methodology for localizing the agents' objective functions while ensuring that the resulting equilibria are optimal with regard to the system-level objective function.
- It is proved that the learning algorithm gradient play guarantees convergence to a stationary state NE in any state based potential game.
- The approach is robust, e.g., to heterogeneous clock rates and informational delays.
MANY THANKS
FOR
YOUR ATTENTION