Dynamic Programming
• Reading Material: Chapter 7.
Dynamic Programming
• Dynamic programming algorithms address problems whose solution is recursive in nature, but has the following property: the direct implementation of the recursive solution results in identical recursive calls that are executed more than once.
• Dynamic programming implements such algorithms by evaluating the recurrence in a bottom-up manner, saving intermediate results that are later used in computing the desired solution.
Fibonacci Numbers
• f0 = 0,  f1 = 1,  fn = fn-1 + fn-2  for n ≥ 2
• What is the recursive algorithm that computes Fibonacci numbers? What is its time complexity?
– Note that it can be shown that fn ≥ φ^(n-2), where φ = (1 + √5)/2.
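The contrast between the two implementations can be sketched in Python (the function names are illustrative, not from the text):

```python
def fib_recursive(n):
    # Direct recursive implementation: identical recursive calls
    # are executed many times, so the running time is exponential in n.
    if n <= 1:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_dp(n):
    # Bottom-up evaluation of the same recurrence: each of
    # f_0 .. f_n is computed once and saved, giving O(n) time.
    f = [0, 1]
    for i in range(2, n + 1):
        f.append(f[i - 1] + f[i - 2])
    return f[min(n, len(f) - 1)]
```

Both return the same values; only the number of recursive calls differs.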
Computing the Binomial Coefficient
• Recursive Definition
    C(n, k) = 1                            if k = 0 or k = n
    C(n, k) = C(n-1, k-1) + C(n-1, k)      if 0 < k < n
• Actual Value
    C(n, k) = n! / (k! (n-k)!)
Computing the Binomial Coefficient
• What is the direct recursive algorithm for computing the binomial coefficient? How much does it cost?
– Note that
    C(n, ⌊n/2⌋) = n! / (⌊n/2⌋! ⌈n/2⌉!) = Θ(2^n / √n)
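The recursive definition above translates directly into a table-based (Pascal's triangle) computation; a minimal sketch, with an illustrative function name:

```python
def binomial(n, k):
    # Fill Pascal's triangle row by row using
    # C(i,j) = C(i-1,j-1) + C(i-1,j).  This costs O(nk) additions,
    # versus the exponential number of calls of the direct recursion.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1          # base case: k = 0 or k = n
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```

The table rows correspond exactly to the overlapping sub-problems that the naive recursion recomputes.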
Optimization Problems and
Dynamic Programming
• Optimization problems with certain properties make another class of problems that can be solved more efficiently using dynamic programming.
• Development of a dynamic programming solution to an optimization problem involves four steps:
– Characterize the structure of an optimal solution
  • Optimal substructure, where an optimal solution consists of subsolutions that are optimal.
  • Overlapping sub-problems, where the space of sub-problems is small in the sense that the algorithm solves the same sub-problems over and over rather than generating new sub-problems.
– Recursively define the value of an optimal solution.
– Compute the value of an optimal solution in a bottom-up manner.
– Construct an optimal solution from the computed optimal value.
Longest Common Subsequence Problem
• Problem Definition: Given two strings A and B over an alphabet Σ, determine the length of the longest subsequence that is common to A and B.
• A subsequence of A = a1a2…an is a string of the form ai1ai2…aik where 1 ≤ i1 < i2 < … < ik ≤ n.
• Example: Let Σ = {x, y, z}, A = xyxyxxzy, B = yxyyzxy, and C = zzyyxyz
– LCS(A,B) = yxyzy.  Hence the length = 5.
– LCS(B,C) =         Hence the length =
– LCS(A,C) =         Hence the length =
Straight-Forward Solution
• Brute-force search
– How many subsequences exist in a string of length n?
– How much time is needed to check whether a string is a subsequence of another string of length m?
– What is the time complexity of the brute-force search algorithm for finding the length of the longest common subsequence of two strings of sizes n and m?
Dynamic Programming Solution
• Let L[i,j] denote the length of the longest common subsequence of a1a2…ai and b1b2…bj, which are prefixes of A and B of lengths n and m, respectively. Then
L[i,j] = 0                              when i = 0 or j = 0
L[i,j] = L[i-1,j-1] + 1                 when i > 0, j > 0, ai = bj
L[i,j] = max(L[i-1,j], L[i,j-1])        when i > 0, j > 0, ai ≠ bj
LCS Algorithm
Algorithm LCS(A,B)
Input: A and B, strings of length n and m respectively
Output: Length of the longest common subsequence of A and B
  Initialize L[i,0] and L[0,j] to zero;
  for i ← 1 to n do
    for j ← 1 to m do
      if ai = bj then
        L[i,j] ← 1 + L[i-1,j-1]
      else
        L[i,j] ← max(L[i-1,j], L[i,j-1])
      end if
    end for;
  end for;
  return L[n,m];
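A direct Python transcription of the LCS algorithm above (a sketch; the slide's 1-based table L becomes a 0-based list of lists):

```python
def lcs_length(A, B):
    # L[i][j] = length of the LCS of the prefixes A[:i] and B[:j].
    # Row 0 and column 0 stay 0 (empty-string base case).
    n, m = len(A), len(B)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                L[i][j] = 1 + L[i - 1][j - 1]
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]
```

For the earlier example, lcs_length("xyxyxxzy", "yxyyzxy") returns 5, matching LCS(A,B) = yxyzy.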
Example (Q7.5 pp. 220)
• Find the length of the longest common
subsequence of A=xzyzzyx and B=zxyyzxz
Example (Cont.)

        x   z   y   z   z   y   x
    0   0   0   0   0   0   0   0
z   0
x   0
y   0
y   0
z   0
x   0
z   0

(Row 0 and column 0 are initialized to zero; the remaining entries are filled in class.)
Complexity Analysis of LCS Algorithm
• What is the time and space complexity of the algorithm? Both are Θ(nm), the size of the table L.
Matrix Chain Multiplication
• Assume matrices A, B, and C have dimensions 2×10, 10×2, and 2×10 respectively. The number of scalar multiplications using the standard matrix multiplication algorithm for
– (A B) C is 2·10·2 + 2·2·10 = 80
– A (B C) is 10·2·10 + 2·10·10 = 400
• Problem Statement: Find the order of multiplying n matrices in which the number of scalar multiplications is minimum.
Straight-Forward Solution
• Again, let us consider the brute-force method. We
need to compute the number of different ways that
we can parenthesize the product of n matrices.
– e.g. how many different orderings do we have for the
product of four matrices?
– Let f(n) denote the number of ways to parenthesize the
product M1, M2, …, Mn.
• (M1M2…Mk) (M k+1M k+2…Mn)
• What is f(2), f(3) and f(1)?
Catalan Numbers
• f(n) = (1/n) · C(2n-2, n-1)
• Cn = f(n+1)
• Using Stirling's formula, it can be shown that f(n) is approximately 4^n / n^1.5 (up to a constant factor).
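The formula can be checked numerically; a small sketch using Python's math.comb (the function name f mirrors the slide's notation):

```python
from math import comb

def f(n):
    # Number of ways to fully parenthesize a product of n matrices:
    # f(n) = (1/n) * C(2n-2, n-1), i.e. the Catalan number C_{n-1}.
    # The division is always exact, so integer division is safe.
    return comb(2 * n - 2, n - 1) // n
```

In particular f(4) = 5, answering the "product of four matrices" question on the previous slide.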
Cost of Brute Force Method
• How many possibilities do we have for
parenthesizing n matrices?
• How much does it cost to find the number
of scalar multiplications for one
parenthesized expression?
• Therefore, the total cost is
The Recursive Solution
– Since the number of columns of each matrix Mi is
equal to the number of rows of Mi+1, we only need to
specify the number of rows of all the matrices, plus
the number of columns of the last matrix, r1, r2, …,
rn+1 respectively.
– Let the cost of multiplying the chain Mi…Mj (denoted
by Mi,j) be C[i,j]
– If k is an index between i+1 and j, what is the cost of
multiplying Mi,j considering multiplying Mi,k-1 with
Mk,j?
– Therefore, C[1,n]=
The Dynamic Programming Algorithm
C[1,1] C[1,2] C[1,3] C[1,4] C[1,5] C[1,6]
C[2,2] C[2,3] C[2,4] C[2,5] C[2,6]
C[3,3] C[3,4] C[3,5] C[3,6]
C[4,4] C[4,5] C[4,6]
C[5,5] C[5,6]
C[6,6]
Example (Q7.11 pp. 221-222)
• Given as input 2 , 3 , 6 , 4 , 2 , 7 compute the
minimum number of scalar multiplications:
MatChain Algorithm
Algorithm MatChain
Input: r[1..n+1] of positive integers corresponding to the dimensions of a chain of n matrices
Output: Least number of scalar multiplications required to multiply the n matrices
  for i := 1 to n do
    C[i,i] := 0; // diagonal d0
  for d := 1 to n-1 do // for diagonals d1 to dn-1
    for i := 1 to n-d do
      j := i+d;
      C[i,j] := ∞;
      for k := i+1 to j do
        C[i,j] := min{C[i,j], C[i,k-1] + C[k,j] + r[i]·r[k]·r[j+1]};
      end for;
    end for;
  end for;
  return C[1,n];
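The algorithm above, transcribed into Python (a sketch; the dimension list r is 0-indexed here, so the cost term r[i]·r[k]·r[j+1] becomes r[i-1]·r[k-1]·r[j]):

```python
def mat_chain(r):
    # r[0..n] holds the dimensions of a chain of n matrices:
    # matrix M_i is r[i-1] x r[i].  C[i][j] = least number of scalar
    # multiplications needed to compute the product M_i ... M_j.
    n = len(r) - 1
    INF = float('inf')
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for d in range(1, n):                  # diagonals d1 .. d(n-1)
        for i in range(1, n - d + 1):
            j = i + d
            C[i][j] = INF
            for k in range(i + 1, j + 1):  # split into M_i..M_{k-1} and M_k..M_j
                C[i][j] = min(C[i][j],
                              C[i][k - 1] + C[k][j] + r[i - 1] * r[k - 1] * r[j])
    return C[1][n]
```

For Q7.11's input 2, 3, 6, 4, 2, 7 the recurrence gives 124, and for the earlier 2×10, 10×2, 2×10 chain it gives 80, the cost of (A B) C.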
Time and Space Complexity of MatChain Algorithm
• Time Complexity

    Σ_{d=1}^{n-1} Σ_{i=1}^{n-d} Σ_{k=i+1}^{j} c
      = Σ_{d=1}^{n-1} Σ_{i=1}^{n-d} c·d        (since j = i+d, the inner loop runs d times)
      = c Σ_{d=1}^{n-1} d(n-d)
      = c·n·(n-1)n/2 − c·(n-1)n(2n-1)/6
      = Θ(n³)

• Space Complexity
  The algorithm stores the upper triangle of the n×n table C, so the space is Θ(n²).
The Knapsack Problem
• Let U = {u1, u2, …, un} be a set of n items to be packed in a knapsack of size C.
• Let sj and vj be the size and value of the jth item, where sj, vj ∈ ℤ⁺, 1 ≤ j ≤ n.
• The objective is to fill the knapsack with some items from U whose total size does not exceed C and whose total value is maximum.
– Assume that the size of each item does not exceed C.
The Knapsack Problem Formulation
• Given the n items in U with positive integer sizes and values, we want to find a subset S ⊆ U s.t.
    Σ_{ui∈S} vi
is maximized subject to the constraint
    Σ_{ui∈S} si ≤ C
Inductive Solution
• Let V[i,j] denote the value obtained by filling a knapsack of size j with items taken from the first i items {u1, u2, …, ui} in an optimal way:
– The range of i is 0 ≤ i ≤ n
– The range of j is 0 ≤ j ≤ C
– The objective is to find V[n, C]
• V[i,0] = 0,  V[0,j] = 0
• V[i,j] = V[i-1,j]                              if si > j
         = max{V[i-1,j], V[i-1,j-si] + vi}      if si ≤ j
Example (pp. 223 Question 7.22)
• There are five items of sizes 3, 5, 7, 8, and 9 with values 4,
6, 7, 9, and 10 respectively. The size of the knapsack is 22.
[Table V[0..5, 0..22]: row 0 and column 0 are all 0; row 1 (the item of size 3 and value 4) is 0 for j < 3 and 4 for j ≥ 3; the remaining rows are filled in class.]
Algorithm Knapsack
Algorithm Knapsack
Input: A set of items U = {u1,u2,…,un} with sizes s1,s2,…,sn and values v1,v2,…,vn, respectively, and knapsack capacity C.
Output: The maximum value of Σ_{ui∈S} vi subject to Σ_{ui∈S} si ≤ C.
  for i := 0 to n do
    V[i,0] := 0;
  for j := 0 to C do
    V[0,j] := 0;
  for i := 1 to n do
    for j := 1 to C do
      V[i,j] := V[i-1,j];
      if si ≤ j then
        V[i,j] := max{V[i,j], V[i-1,j-si] + vi}
    end for;
  end for;
  return V[n,C];
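A Python transcription of Algorithm Knapsack (a sketch; the size and value lists s and v are 0-indexed):

```python
def knapsack(s, v, C):
    # V[i][j] = best total value achievable using the first i items
    # in a knapsack of capacity j.  Row 0 and column 0 stay 0.
    n = len(s)
    V = [[0] * (C + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, C + 1):
            V[i][j] = V[i - 1][j]              # skip item i
            if s[i - 1] <= j:                  # or take item i if it fits
                V[i][j] = max(V[i][j], V[i - 1][j - s[i - 1]] + v[i - 1])
    return V[n][C]
```

For the Q7.22 example above (sizes 3, 5, 7, 8, 9; values 4, 6, 7, 9, 10; C = 22) the recurrence yields 25, achieved by the items of sizes 5, 8, and 9.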
Time and Space Complexity of
the Knapsack Algorithm
All-Pairs Shortest Paths
• Problem: For every pair of vertices u, v ∈ V, calculate δ(u, v).
• Possible Solution 1:
• Cost of Possible Solution 1:
Dynamic Programming Solution
• Define a k-path from u to v, where u, v ∈ {1, 2, …, n}, to be any path whose intermediate vertices all have indices less than or equal to k.
– What is a 0-path?
– What is a 1-path?
– …
– What is an n-path?
• d^k[i,j] = length[i,j]                                        if k = 0
  d^k[i,j] = min{ d^(k-1)[i,j], d^(k-1)[i,k] + d^(k-1)[k,j] }   if 1 ≤ k ≤ n
Floyd’s Algorithm
Algorithm Floyd
Input: An n × n matrix length[1..n, 1..n] such that length[i,j] is the weight of the edge (i,j) in a directed graph G = ({1,2,…,n}, E)
Output: A matrix D with D[i,j] = δ[i,j]
  D = length; // copy the input matrix length into D
  for k = 1 to n do
    for i = 1 to n do
      for j = 1 to n do
        D[i,j] = min{D[i,j], D[i,k] + D[k,j]}
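Floyd's algorithm in Python (a sketch; float('inf') stands for the absence of an edge). Note that D must be a copy of length, and the loop over the intermediate vertex k is outermost:

```python
def floyd(length):
    # length[i][j] = weight of edge (i,j); inf if no edge, 0 on the diagonal.
    # After iteration k, D[i][j] is the shortest i-to-j path using only
    # intermediate vertices 0..k (the (k+1)-paths of the slides).
    n = len(length)
    D = [row[:] for row in length]   # copy the input matrix into D
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D
```

Applied to the 4-vertex example that follows, the result matches the final 4-p matrix.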
Example
[Figure: a weighted directed graph on vertices 1–4; its weight matrix is the 0-p matrix shown in the following slides.]
Example (Cont.)
[Four empty 4×4 tables labeled 0-p, 1-p, 2-p, and 3-p, with rows and columns 1–4, to be filled in class.]
Time and Space Complexity
• Time Complexity: Θ(n³) — three nested loops over the n vertices.
• Space Complexity: Θ(n²) for the distance matrix D.
Example (Cont.)

0-p     1    2    3    4
 1      0   12    5    ∞
 2     15    0    8   11
 3      ∞    4    0    2
 4      1    5   11    0

1-p     1    2    3    4
 1      0   12    5    ∞
 2     15    0    8   11
 3      ∞    4    0    2
 4      1    5    6    0

2-p     1    2    3    4
 1      0   12    5   23
 2     15    0    8   11
 3     19    4    0    2
 4      1    5    6    0

3-p     1    2    3    4
 1      0    9    5    7
 2     15    0    8   10
 3     19    4    0    2
 4      1    5    6    0
Example (Cont.)

3-p     1    2    3    4
 1      0    9    5    7
 2     15    0    8   10
 3     19    4    0    2
 4      1    5    6    0

4-p     1    2    3    4
 1      0    9    5    7
 2     11    0    8   10
 3      3    4    0    2
 4      1    5    6    0