Mathematical Modeling and Optimization: Summary of "Big Ideas"
A schematic view of the modeling/optimization process

Real-world problem --(assumptions, abstraction, data, simplifications)--> Mathematical model --(optimization algorithm)--> Solution to model --(interpretation)--> Solution to real-world problem.
Does the solution make sense? If not, change the model or the assumptions and repeat.
Mathematical models in Optimization

• The general form of an optimization model:
  min or max f(x1,…,xn)          (objective function)
  subject to gi(x1,…,xn) ≥ 0     (functional constraints)
  x1,…,xn ∈ S                    (set constraints)
• x1,…,xn are called decision variables.
• In words, the goal is to find x1,…,xn that
  – satisfy the constraints;
  – achieve the min (max) objective function value.
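For illustration, a small hypothetical instance of this form (not from the course): a knapsack problem with item values 10, 7, 4, item weights 5, 4, 2, and capacity 8 can be written as

  max 10x1 + 7x2 + 4x3                  (objective function)
  subject to 8 − 5x1 − 4x2 − 2x3 ≥ 0    (functional constraint)
  x1, x2, x3 ∈ {0, 1}                   (set constraints)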
Types of Optimization Models

• Deterministic (data are certain) vs. Stochastic (probabilistic information on data)
• Continuous (S = R^n) vs. Discrete/Integer (S = Z^n)
• Linear (f and g are linear) vs. Nonlinear (f and g are nonlinear)
What is Discrete Optimization?

Discrete Optimization is a field of applied mathematics, combining techniques from
• combinatorics and graph theory,
• linear programming,
• theory of algorithms,
to solve optimization problems over discrete structures.
Solution Methods for Discrete Optimization Problems

• Integer Programming
• Network Algorithms
• Dynamic Programming
• Approximation Algorithms
Integer Programming
• Programming = Planning in this context
• In a survey of Fortune 500 firms, 85% of those
responding said that they had used linear or integer
programming.
• Why is it so popular?
– Many different real-life situations can be modeled as IPs.
– There are efficient algorithms to solve IPs.
Topics in this class about
Integer Programming
• Modeling real-life situations as integer
programs
• Applications of integer programming
• Solution methods (algorithms) for integer
programs
• (maybe) Using software (called AMPL) to
solve integer programs
IP modeling techniques

• Using binary variables
• Restrictions on number of options
• Contingent decisions
• Variables (functions) with k possible values
• Either-Or Constraints
• Big M method (see the sketch after this list)
• Balance constraints
• Fixed Charges
• Making choices with non-binary variables
• Piecewise linear functions
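A minimal sketch of the either-or / Big M idea (the numbers are illustrative, M stands for a suitably large constant, and y is a new binary variable). To require that at least one of the constraints 3x1 + 2x2 ≤ 18 or x1 + 4x2 ≤ 16 holds, write

  3x1 + 2x2 ≤ 18 + M·y
  x1 + 4x2 ≤ 16 + M·(1 − y)
  y ∈ {0, 1}

When y = 0 the first constraint is enforced and the second is relaxed; when y = 1 the roles are reversed.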
IP applications

• Facility Location Problem
• Knapsack Problem (see the sketch after this list)
• Multi-period production planning
• Inventory management
• Fair representation in electoral systems
• Consultant hiring
• Bin Packing Problem
• Pairing Problem
• Traveling Salesman Problem
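As a concrete illustration of one of these models, here is a minimal brute-force sketch of the 0/1 knapsack IP in Python (the data are hypothetical, and full enumeration is only meant to exhibit the model, not to be an efficient solution method):

  from itertools import product

  values = [10, 7, 4]      # illustrative objective coefficients (item values)
  weights = [5, 4, 2]      # illustrative constraint coefficients (item weights)
  capacity = 8

  best_value, best_x = 0, None
  for x in product([0, 1], repeat=len(values)):         # all 0/1 assignments
      weight = sum(w * xi for w, xi in zip(weights, x))
      value = sum(v * xi for v, xi in zip(values, x))
      if weight <= capacity and value > best_value:     # feasible and better
          best_value, best_x = value, x
  print(best_x, best_value)                             # expected: (1, 0, 1) 14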
Difficulties of real-life modeling
• The problems that you will encounter in the real world are always a lot messier than the "clean" problems we looked at in this class; there are always side constraints that complicate getting even a feasible solution.
• Most real world problems have multiple objectives,
and it is hard to choose from among them.
• In the real world you must gain experience with how
to adapt the idealized models of academia to each
new problem you are asked to solve.
Utilizing the relationship between
problems
Important modeling skill:
– Suppose you know
how to model Problems A1,…,Ap;
– You need to solve Problem B;
– Notice the similarities between Problems Ai and B;
– Build a model for Problem B, using the model for
Problem Ai as a prototype.
• Examples:
– The version of the facility location problem as a special
case of the knapsack problem
– Transshipment and Assignment problems as special cases
of Maximum Flow Problem
Complexity of Solving Discrete Optimization Problems

Two classes:
• Class 1 problems have polynomial-time algorithms for solving the problems optimally.
  Examples: Min. Spanning Tree Problem, Maximum Flow Problem
• For Class 2 problems (NP-hard problems)
  – no polynomial-time algorithm is known;
  – and most likely there is none.
  Example: Traveling Salesman Problem
• Most discrete optimization problems are in the second class.
Three main directions to solve NP-hard discrete optimization problems:
• Integer programming techniques
• Approximation algorithms
• Heuristics

Important Observation: Any solution method involves a tradeoff between time and accuracy.
On the time-accuracy tradeoff scale, from most accurate (worst time) to least accurate (best time):

Brute force → Integer programming → Approximation algorithms → Heuristics
Solving Integer Programs (IP) vs. solving Linear Programs (LP)

LP algorithms
– Simplex Method
– Interior-point methods
IP algorithms use the above-mentioned LP algorithms as subroutines.
The algorithms for solving LPs are much more time-efficient than the algorithms for IPs.

Important modeling consideration: Whenever possible, avoid integer variables in your model.
LP-relaxation-based solution methods for Integer Programs

• Branch-and-Bound Technique
• Cutting Plane Algorithms
Basic Concepts of Branch-and-Bound
The basic concept underlying the branch-and-bound technique
is to divide and conquer.
Since the original “large” problem is hard to solve directly,
it is divided into smaller and smaller subproblems
until these subproblems can be conquered.
The dividing (branching) is done by partitioning
the entire set of feasible solutions into smaller and smaller
subsets.
The conquering (fathoming) is done partially by
(i) giving a bound for the best solution in the subset;
(ii) discarding the subset if the bound indicates that
it can’t contain an optimal solution.
Summary of branch-and-bound
Steps for each iteration:
1) Branching: Among the unfathomed subproblems, select the
one that was created most recently. (Break ties according to
which has larger LP value.)
Choose a variable xi which has a noninteger value xi* in the LP solution of the subproblem. Create two new subproblems by adding the respective constraints xi ≤ ⌊xi*⌋ and xi ≥ ⌈xi*⌉.
2) Bounding: Solve the new subproblems, record their LP
solutions. Based on the LP values, update the incumbent,
and the lower and upper bounds for OPT(IP) if necessary.
3) Fathoming: For each new subproblem, apply the three
fathoming tests. Discard the subproblems that are fathomed.
4) Optimality test: If there are no unfathomed subproblems left, then return the current incumbent as the optimal solution (if there is no incumbent, then the IP is infeasible). Otherwise, perform another iteration.
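Below is a minimal, hedged Python sketch of these steps for a 0/1 knapsack (maximization). The LP relaxation is the classical fractional knapsack, so branching on a fractional xi* reduces to fixing xi = 0 or xi = 1; the data and function names are illustrative, and bounding and fathoming are folded into a single test rather than the three separate tests described above.

  def lp_bound(values, weights, capacity, fixed):
      # LP-relaxation (fractional knapsack) bound, given the branching
      # decisions in `fixed` (item index -> 0 or 1).
      value, remaining = 0, capacity
      for i, xi in fixed.items():          # honor the branching constraints
          if xi == 1:
              value += values[i]
              remaining -= weights[i]
      if remaining < 0:
          return float("-inf"), None, []   # subproblem is infeasible
      free = sorted((i for i in range(len(values)) if i not in fixed),
                    key=lambda i: values[i] / weights[i], reverse=True)
      taken, frac_item = [], None
      for i in free:                       # greedy fill, best ratio first
          if weights[i] <= remaining:
              remaining -= weights[i]
              value += values[i]
              taken.append(i)
          else:                            # one item enters fractionally, stop
              value += values[i] * remaining / weights[i]
              frac_item = i
              break
      return value, frac_item, taken

  def branch_and_bound(values, weights, capacity):
      best_value, best_solution = 0, {}
      subproblems = [{}]                   # LIFO: most recently created first
      while subproblems:
          fixed = subproblems.pop()
          bound, frac_item, taken = lp_bound(values, weights, capacity, fixed)
          if bound <= best_value:          # fathomed: cannot beat the incumbent
              continue
          if frac_item is None:            # LP solution is integral: new incumbent
              best_value = bound
              best_solution = {**fixed, **{i: 1 for i in taken}}
              continue
          # branch on the fractional variable: fix it to 0 and to 1
          subproblems.append({**fixed, frac_item: 0})
          subproblems.append({**fixed, frac_item: 1})
      return best_value, best_solution

  print(branch_and_bound([10, 7, 4], [5, 4, 2], 8))   # expected optimal value 14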
Importance of tight lower and upper bounds in branch-and-bound

• Having tight lower and upper bounds on the IP optimal value might significantly reduce the number of branch-and-bound iterations.
• For a maximization problem,
  – a lower bound can be found by applying a fast heuristic algorithm to the problem;
  – an upper bound can be found by solving a relaxation of the problem (e.g., the LP-relaxation).
• If the current lower and upper bounds are close enough, we can stop the branch-and-bound algorithm and return the current incumbent solution (it can't be too far from the optimum).
General Idea of Cutting Plane Technique

• Add new constraints (cutting planes) to the problem such that
  (i) the set of feasible integer solutions remains the same, i.e., we still have the same integer program;
  (ii) the new constraints cut off some of the fractional solutions, making the feasible region of the LP-relaxation smaller.
• A smaller feasible region might result in a better LP value (i.e., closer to the IP value), thus making the search for the optimal IP solution more efficient.
• Each integer program might have many different formulations.
  Important modeling skill: Give as tight a formulation as possible.
  How? Find cutting planes that make the formulation of the original IP tighter.
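For illustration (a hypothetical example, not from the course): for the knapsack-type constraint 5x1 + 4x2 + 2x3 ≤ 8 with binary variables, no feasible integer solution can have x1 = x2 = 1, since those two weights alone sum to 9 > 8. Hence the cover inequality x1 + x2 ≤ 1 is a valid cutting plane: it keeps every feasible integer solution but cuts off fractional LP points such as x1 = 1, x2 = 0.75, x3 = 0.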
Methods of getting Cutting Planes

1) Exploit the special structure of the problem to get cutting planes (e.g., bin packing problem, pairing problem)
   – often can be hard to get
   – topic of intensive research
2) More general methods are also available
   – can be used automatically for many problems (e.g., knapsack-type constraints)
Branch-and-cut algorithms
• Integer programs are rarely solved based solely on cutting
plane method.
• More often cutting planes and branch-and-bound are
combined to provide a powerful algorithmic approach for
solving integer programs.
• Cutting planes are added to
the subproblems created in branch-and-bound
to achieve tighter bounds
and thus to accelerate the solution process.
• Such methods are known as branch-and-cut algorithms.
Network Models
• Minimum Spanning Tree Problem
• Traveling Salesman Problem
• Min. Spanning Tree is in Class 1 (has polynomial-time algorithms for solving the problem optimally).
• Traveling Salesman Problem is in Class 2 (NP-hard problem).
Methods for solving NP-hard problems
Three main directions to solve
NP-hard discrete optimization problems:
• Integer programming techniques
• Heuristics
• Approximation algorithms
• We gave examples of all three methods for TSP.
• A 2-approximation algorithm for TSP was given and analyzed in detail.
Some Characteristics of Approximation Algorithms

• Time-efficient (sometimes not as efficient as heuristics)
• Don't guarantee an optimal solution
• Guarantee a good solution within some factor of the optimum
• Rigorous mathematical analysis to prove the approximation guarantee
• Often use algorithms for related problems as subroutines
The 2-approximation algorithm for TSP used the algorithm for finding a minimum spanning tree as a subroutine (see the sketch after this list).
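A minimal, hedged Python sketch of that MST-based 2-approximation (valid when distances satisfy the triangle inequality; the points and helper names are illustrative): build a minimum spanning tree, walk it in preorder, and shortcut repeated vertices.

  import math
  from collections import defaultdict

  def dist(p, q):
      return math.dist(p, q)    # Euclidean distances obey the triangle inequality

  def mst_edges(points):
      # Prim's algorithm on the complete graph over the points
      n = len(points)
      in_tree, edges = {0}, []
      while len(in_tree) < n:
          u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                     key=lambda e: dist(points[e[0]], points[e[1]]))
          in_tree.add(v)
          edges.append((u, v))
      return edges

  def tsp_2_approx(points):
      # walk the MST in preorder and shortcut repeated vertices
      adj = defaultdict(list)
      for u, v in mst_edges(points):
          adj[u].append(v)
          adj[v].append(u)
      tour, seen, stack = [], set(), [0]
      while stack:                          # iterative DFS preorder from vertex 0
          u = stack.pop()
          if u in seen:
              continue
          seen.add(u)
          tour.append(u)
          stack.extend(reversed(adj[u]))
      return tour + [0]                     # close the cycle

  points = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 5)]   # illustrative instance
  tour = tsp_2_approx(points)
  length = sum(dist(points[a], points[b]) for a, b in zip(tour, tour[1:]))
  print(tour, round(length, 2))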
Performance of TSP algorithms in practice

• A more sophisticated algorithm (which again uses the MST algorithm as a subroutine) guarantees a solution within a factor of 1.5 of the optimum (Christofides).
• For many discrete optimization problems, there are benchmarks of instances on which algorithms are tested. For TSP, such a benchmark is TSPLIB.
• On TSPLIB instances, Christofides' algorithm outputs solutions which are on average 1.09 times the optimum. For comparison, the nearest neighbor algorithm outputs solutions which are on average 1.26 times the optimum.
• A good approximation factor often leads to good performance in practice.
Dynamic Programming

• Dynamic programming is a widely used mathematical technique for solving problems that can be divided into stages and where decisions are required in each stage.
• The goal of dynamic programming is to find a combination of decisions that optimizes a certain quantity associated with the system.
General characteristics of Dynamic Programming

• The problem structure is divided into stages.
• Each stage has a number of states associated with it.
• Making a decision at one stage transforms a state of the current stage into a state in the next stage.
• Given the current state, the optimal decision for each of the remaining stages does not depend on the previous states or decisions. This is known as the principle of optimality for dynamic programming.
• The principle of optimality allows us to solve the problem stage by stage, recursively.
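A minimal, hedged Python sketch of this stage/state view, using an illustrative 0/1 knapsack (stages = items still to decide, states = remaining capacity). The principle of optimality appears in the fact that best(stage, state) depends only on the current state, not on how that state was reached.

  from functools import lru_cache

  values, weights, capacity = [10, 7, 4], [5, 4, 2], 8   # illustrative data

  @lru_cache(maxsize=None)
  def best(stage, state):
      # best achievable value from this stage on, given the remaining capacity
      if stage == len(values):                 # no decisions left
          return 0
      skip = best(stage + 1, state)            # decision x_stage = 0
      take = -1
      if weights[stage] <= state:              # decision x_stage = 1, if it fits
          take = values[stage] + best(stage + 1, state - weights[stage])
      return max(skip, take)

  print(best(0, capacity))                     # expected: 14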
Problems that can be solved by
Dynamic Programming
• Shortest path problems
• Multi-period production planning
• Resource allocation problems
Advantages of Dynamic Programming
• More time-efficient compared to integer
programming (but large-scale real-life problems
might require lots of states)
• Can solve a variety of problems (however integer
programming is a more universal technique)
• Can handle stochastic data (for example, stochastic
demand in multi-period production planning)