Integer Programming

Linear Programming (Simplex method)
Integer Programming (Branch & Bound method)

References:
- Lasdon, L.S.: Optimization Theory for Large Systems, MacMillan, 1970 (Section 1.2)
- CPLEX Manual
- Cornuejols, G., Trick, M.A., and Saltzman, M.J.: A Tutorial on Integer Programming, Summer 1995, http://mat.gsia.cmu.edu/orclass/integer/integer.html

Michał Pióro

A problem and its solution

  maximise   z = x1 + 3x2
  subject to -x1 + x2 ≤ 1
             x1 + x2 ≤ 2
             x1 ≥ 0 , x2 ≥ 0

[Figure: the feasible region bounded by the lines -x1 + x2 = 1 and x1 + x2 = 2, with level lines x1 + 3x2 = c drawn for c = 0, 3, 5; the maximum c = 5 is attained at the extreme point (1/2, 3/2).]

Linear Programme in standard form (SIMPLEX)

Indices:
  j = 1,2,...,n   variables
  i = 1,2,...,m   equality constraints

Constants:
  c = (c1,c2,...,cn)   cost coefficients
  b = (b1,b2,...,bm)   constraint right-hand sides
  A = (aij)            m × n matrix of constraint coefficients

Variables:
  x = (x1,x2,...,xn)

Linear programme:
  maximise   z = Σj=1,...,n cjxj
  subject to Σj=1,...,n aijxj = bi ,  i = 1,2,...,m
             xj ≥ 0 ,  j = 1,2,...,n

Linear programme (matrix form):
  maximise   cx
  subject to Ax = b , x ≥ 0
  (where n > m and rank(A) = m)

Transformation of LPs to the standard form

Slack and surplus variables:
  Σj aijxj ≤ bi  becomes  Σj aijxj + xn+i = bi ,  xn+i ≥ 0
  Σj aijxj ≥ bi  becomes  Σj aijxj - xn+i = bi ,  xn+i ≥ 0

Nonnegative variables: a variable xk unconstrained in sign is replaced by xk = xk′ - xk″ with xk′ ≥ 0 , xk″ ≥ 0.

Exercise: transform the following LP to the standard form
  maximise   z = x1 + x2
  subject to 2x1 + 3x2 ≤ 6
             x1 + 7x2 ≥ 4
             x1 + x2 = 3
             x1 ≥ 0 , x2 unconstrained in sign

Basic facts of Linear Programming (standard form)

- feasible solution: a solution satisfying the constraints
- basis matrix: a non-singular m × m submatrix of A
- basic solution of an LP: the unique vector determined by a basis matrix; the n-m variables associated with the columns of A not in the basis matrix are set to 0, and the remaining m variables result from the square system of equations
- basic feasible solution: a basic solution with all variables nonnegative (at most m variables can be positive)

Theorem 1. The objective function z attains its maximum at an extreme point of the constraint set.

Theorem 2. A vector x = (x1,x2,...,xn) is an extreme point of the constraint set if and only if x is a basic feasible solution.

Integer Programming

original task (IP):
  maximise   cx
  subject to Ax = b , x ≥ 0 and integer

linear relaxation (LR):
  maximise   cx
  subject to Ax = b , x ≥ 0

(paper 4)

- The optimal objective value of (LR) is greater than or equal to the optimal objective value of (IP).
- If (LR) is infeasible, then so is (IP).
- If (LR) is optimised by integer variables, then that solution is feasible and optimal for (IP).
- If the cost coefficients c are integer, then the optimal objective value of (IP) is less than or equal to the "round-down" of the optimal objective value of (LR).

application to network design: paper 2, section 4.1

Branch and Bound (a Knapsack Problem)

  maximise   8x1 + 11x2 + 6x3 + 4x4
  subject to 5x1 + 7x2 + 4x3 + 3x4 ≤ 14
             xj ∈ {0,1} ,  j = 1,2,3,4

(LR) solution: x1 = 1, x2 = 1, x3 = 0.5, x4 = 0, z = 22. No integer solution will have a value greater than 22.

Branch by adding a constraint on x3 to (LR):
- x3 = 0: fractional, z ≈ 21.67 (x1 = 1, x2 = 1, x3 = 0, x4 = 0.667)
- x3 = 1: fractional, z ≈ 21.86 (x1 = 1, x2 = 0.714, x3 = 1, x4 = 0)

Branch and Bound cntd.

We know that the optimal integer solution is not greater than 21.86 (in fact not greater than 21, since the cost coefficients are integer). We take a subproblem and branch on one of its variables:
- we choose an active subproblem (here: one not chosen before)
- we choose the subproblem with the highest solution value

Branching on x2 in the subproblem x3 = 1:
- x3 = 1, x2 = 0: INTEGER, z = 18 (x1 = 1, x2 = 0, x3 = 1, x4 = 1); no further branching, not active
- x3 = 1, x2 = 1: fractional, z = 21.8 (x1 = 0.6, x2 = 1, x3 = 1, x4 = 0)

Branch and Bound cntd.
Branching on x1 in the subproblem x3 = 1, x2 = 1 completes the tree:
- root: fractional, z = 22
- x3 = 0: fractional, z ≈ 21.67
- x3 = 1: fractional, z ≈ 21.86
- x3 = 1, x2 = 0: INTEGER, z = 18
- x3 = 1, x2 = 1: fractional, z = 21.8
- x3 = 1, x2 = 1, x1 = 0: INTEGER, z = 21 (x1 = 0, x2 = 1, x3 = 1, x4 = 1): optimal
- x3 = 1, x2 = 1, x1 = 1: INFEASIBLE (5 + 7 + 4 > 14, whatever the value of x4)

There is no better solution than 21, so the subproblem x3 = 0 (z ≈ 21.67) is fathomed. The integer solution with z = 21 is optimal.

Branch & Bound - summary

- Solve the linear relaxation of the problem. If the solution is integer, we are done. Otherwise, create two new subproblems by branching on a fractional variable.
- A subproblem is not active when any of the following occurs:
  - you have already used the subproblem to branch on
  - all variables in the solution are integer
  - the subproblem is infeasible
  - you can fathom the subproblem by a bounding argument
- Choose an active subproblem and branch on a fractional variable. Repeat until there are no active subproblems.

Remarks:
- If x is restricted to be integer (but not necessarily to 0 or 1), then for x = 4.27 you would branch with the constraints x ≤ 4 and x ≥ 5.
- If some variables are not restricted to be integer, you do not branch on them.

Basic approaches to combinatorial optimisation

Combinatorial Optimisation Problem: given a finite set S (the solution space) and an evaluation function F : S → R, find i ∈ S minimising F(i).

Basic approaches:
- Local Search
- Simulated Annealing
- Simulated Allocation
- Evolutionary Algorithms

Local Search

Combinatorial Optimisation Problem: given a finite set S (the solution space) and an evaluation function F : S → R, find i ∈ S minimising F(i). N(i) ⊆ S denotes the neighbourhood of a feasible point (configuration) i ∈ S.

begin
  choose an initial solution i ∈ S;
  repeat
    choose a neighbour j ∈ N(i);
    if F(j) < F(i) then i := j
  until ∀ j ∈ N(i) : F(j) ≥ F(i)
end

A modification: steepest descent (always move to the best neighbour).
Local search is unable to leave local minima. Its (rather poor) quality depends on the range of the neighbourhood relationship and on the initial solution.
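The branch-and-bound enumeration walked through above can be reproduced in a short program. The sketch below is not part of the slides: it uses exact rational arithmetic (`fractions.Fraction`) to avoid rounding, and it exploits the fact that the linear relaxation of a knapsack problem is solved exactly by the greedy value/weight-ratio rule, so no LP solver is needed. Helper names such as `lp_bound` are illustrative choices.

```python
from fractions import Fraction

# Knapsack instance from the slides:
# maximise 8x1 + 11x2 + 6x3 + 4x4  s.t.  5x1 + 7x2 + 4x3 + 3x4 <= 14, xj in {0,1}
values, weights, capacity = [8, 11, 6, 4], [5, 7, 4, 3], 14
n = len(values)

def lp_bound(fixed):
    """Solve the LP relaxation with some variables fixed to 0 or 1.

    For a knapsack, the relaxation optimum is greedy by value/weight ratio
    with at most one fractional item.  Returns (z, x), or None if the fixed
    variables already exceed the capacity."""
    x = [None] * n
    cap, z = Fraction(capacity), Fraction(0)
    for j, v in fixed.items():
        x[j] = Fraction(v)
        cap -= weights[j] * v
        z += values[j] * v
    if cap < 0:
        return None
    free = sorted((j for j in range(n) if x[j] is None),
                  key=lambda j: Fraction(values[j], weights[j]), reverse=True)
    for j in free:
        x[j] = min(Fraction(1), cap / weights[j])
        cap -= x[j] * weights[j]
        z += x[j] * values[j]
    return z, x

def branch_and_bound():
    best_z, best_x = Fraction(-1), None
    active = [{}]                          # subproblems = dicts of fixed variables
    while active:
        fixed = active.pop()
        sol = lp_bound(fixed)
        if sol is None:
            continue                       # infeasible: fathom
        z, x = sol
        if z <= best_z:
            continue                       # bounding argument: fathom
        frac = [j for j in range(n) if x[j].denominator != 1]
        if not frac:                       # integer solution: update the incumbent
            best_z, best_x = z, [int(v) for v in x]
            continue
        j = frac[0]                        # branch on a fractional variable
        active += [{**fixed, j: 0}, {**fixed, j: 1}]
    return int(best_z), best_x

print(lp_bound({})[0])        # root relaxation bound: 22
print(branch_and_bound())     # (21, [0, 1, 1, 1])
```

A subproblem is abandoned exactly when it is infeasible, integer, or bounded by the incumbent, mirroring the fathoming rules of the slides; the root bound 22 and the optimum z = 21 match the tree above.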
Local Search for the Knapsack Problem

Solution space: S = { x : wx ≤ W , x ≥ 0 , x integer }
Problem: find x ∈ S maximising cx
Neighbourhood: N(x) = { y ∈ S : y is obtained from x by adding or exchanging one object }
Suspicious!

Simulated Annealing

Combinatorial Optimisation Problem: given a finite set S (the solution space) and an evaluation function F : S → R, find i ∈ S minimising F(i). N(i) ⊆ S denotes the neighbourhood of a feasible point (configuration) i ∈ S.

- uphill moves are permitted, but only with a certain (decreasing, "temperature"-dependent) probability, according to the so-called Metropolis test (paper 5, section 3)
- neighbours are selected at random

Johnson, D.S. et al.: Optimization by Simulated Annealing: an Experimental Evaluation, Operations Research, Vol. 39, No. 1, 1991

Simulated Annealing - algorithm

begin
  choose an initial solution i ∈ S;
  select an initial temperature T > 0;
  while stopping criterion not true
    count := 0;
    while count < L
      choose randomly a neighbour j ∈ N(i);
      ΔF := F(j) - F(i);
      if ΔF ≤ 0 then i := j
      else if random(0,1) < exp(-ΔF / T) then i := j;   (the Metropolis test)
      count := count + 1
    end while;
    reduce temperature
  end while
end

Simulated Annealing - limit theorem

Limit theorem: the global optimum will be found. For a fixed T, after a sufficient number of steps:

  Prob { X = i } = exp(-F(i)/T) / Z(T) ,  where Z(T) = Σj∈S exp(-F(j)/T)

For T → 0, Prob { X = i } remains greater than 0 only for optimal configurations i ∈ S.

This is not a very practical result: too many moves (on the order of the number of states squared) would have to be made to approach the limit sufficiently closely.

Simulated Annealing applied to the Travelling Salesman Problem (TSP)

Combinatorial Optimisation Problem: given a finite set S (the solution space) and an evaluation function F : S → R, find p ∈ S minimising F(p). N(p) ⊆ S denotes the neighbourhood of a feasible point (configuration) p ∈ S.

TSP:
  S = { p : cyclic permutations of length n with cycle length n }
  p(i): the city visited just after city i
  F(p) = Σi=1,...,n ci p(i)   ( ci p(i) is the distance between city i and city p(i) )
  N(p): the neighbourhood of p (next slide)

application to network design: paper 2, section 4.2

TSP - neighbourhood relation

[Figure: a neighbour of p is obtained by removing two edges (i,i′) and (j,j′) of the Hamiltonian circuit and reconnecting it; a Hamiltonian circuit is transformed into a Hamiltonian circuit, and any p is reachable from any q.]

Evolutionary algorithms - basic notions

- population: a set of chromosomes
- generation: a consecutive population
- chromosome: a sequence of genes; an individual, a solution (a point of the solution space)
- genes: represent the internal structure of a solution
- fitness function: the cost function

Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs, Springer, 1996

Genetic operators

- mutation: performed on a chromosome with a certain (low) probability; it perturbs the values of the chromosome's genes
- crossover: exchanges genes between two parent chromosomes to produce an offspring; in effect the offspring has genes from both parents; chromosomes with a better fitness function have a greater chance of becoming parents

In general, the operators are problem-dependent.

(μ + λ) - Evolutionary Algorithm

begin
  n := 0;
  initialise(P0);
  while stopping criterion not true
    On := ∅;
    for i := 1 to λ do On := On ∪ crossover(Pn);
    for each σ ∈ On do mutate(σ);
    Pn+1 := select_best(On ∪ Pn);   (the μ best individuals)
    n := n + 1
  end while
end

Evolutionary Algorithm for the flow problem

Chromosome: x = (x1,x2,...,xD)
Gene: xd = (xd1,xd2,...,xdm(d)), the flow pattern for demand d

[Figure: an example chromosome; each gene holds the path flows of one demand: 5 2 3 | 3 1 4 | 1 2 0 | 0 3 5 | 1 0 2 | 1 2 3.]

(paper 7)

Evolutionary Algorithm for the flow problem cntd.
- crossover of two chromosomes: each gene of the offspring is taken from one of the parents; for each d = 1,2,...,D:
    xd := xd(1) with probability 0.5
    xd := xd(2) with probability 0.5
  better-fitted chromosomes have a greater chance of becoming parents
- mutation of a chromosome: for each gene, shift some flow from one path to another; everything is done at random
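The simulated-annealing pseudocode earlier in these notes can be made concrete on the knapsack instance used in the Branch and Bound slides. The sketch below is illustrative and not part of the slides: since the algorithm is stated for minimisation, the knapsack value is negated, and the bit-flip neighbourhood, cooling schedule (initial T = 10, factor 0.9, L = 50 moves per temperature, 40 stages) and seed are assumed parameter choices.

```python
import math
import random

# Knapsack instance from the Branch and Bound slides; F(x) is the negated
# knapsack value, so the annealing loop minimises F.
values, weights, capacity = [8, 11, 6, 4], [5, 7, 4, 3], 14

def F(x):
    return -sum(v * xi for v, xi in zip(values, x))

def feasible(x):
    return sum(w * xi for w, xi in zip(weights, x)) <= capacity

def neighbour(x, rng):
    """Randomly flip one item in or out, keeping the solution feasible."""
    while True:
        j = rng.randrange(len(x))
        y = x[:j] + (1 - x[j],) + x[j + 1:]
        if feasible(y):
            return y

def simulated_annealing(x0, T=10.0, alpha=0.9, L=50, stages=40, seed=1):
    rng = random.Random(seed)
    x = best = x0
    for _ in range(stages):                 # outer loop: stopping criterion
        for _ in range(L):                  # L moves at each temperature
            y = neighbour(x, rng)
            dF = F(y) - F(x)
            # Metropolis test: accept downhill moves always,
            # uphill moves with probability exp(-dF / T)
            if dF <= 0 or rng.random() < math.exp(-dF / T):
                x = y
            if F(x) < F(best):
                best = x                    # keep the best solution seen
        T *= alpha                          # reduce temperature
    return best

best = simulated_annealing((1, 1, 0, 0))
print(best, -F(best))                       # best 0/1 vector found and its value
```

For this 4-item instance the solution space is tiny, so the run typically reaches the optimum value 21; the point of the sketch is the structure of the two loops and the Metropolis test, not performance.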