Divide and Conquer Algorithms

Definition
The divide-and-conquer strategy solves a problem by:
1. Breaking it into subproblems that are themselves smaller instances of the same type of problem
2. Recursively solving these subproblems
3. Appropriately combining their answers

Recurrence Relation
An equation that recursively defines a sequence once one or more initial terms are given: each further term of the sequence is defined as a function of the preceding terms.

Example
The recurrence relation

g(n) = g(n-1) + 2n - 1,  g(0) = 0

defines the function g(n) = n^2, and the recurrence relation

f(n) = f(n-1) + f(n-2),  f(1) = 1,  f(0) = 1

defines the famous Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, ...

Solving a Recurrence Relation
Given a function defined by a recurrence relation, we want to find a "closed form" of the function. In other words, we would like to eliminate recursion from the function definition. There are several techniques for solving recurrence relations. The main techniques for us are the iteration method (also called the expansion, or unfolding, method) and the Master Theorem method.

Here is an example of solving the above recurrence relation for g(n) using the iteration method:

g(n) = g(n-1) + 2n - 1
     = [g(n-2) + 2(n-1) - 1] + 2n - 1              // because g(n-1) = g(n-2) + 2(n-1) - 1
     = g(n-2) + 2(n-1) + 2n - 2
     = [g(n-3) + 2(n-2) - 1] + 2(n-1) + 2n - 2     // because g(n-2) = g(n-3) + 2(n-2) - 1
     = g(n-3) + 2(n-2) + 2(n-1) + 2n - 3
     ...
     = g(n-i) + 2(n-i+1) + ... + 2n - i
     ...
     = g(n-n) + 2(n-n+1) + ... + 2n - n
     = 0 + 2 + 4 + ... + 2n - n                    // because g(0) = 0
     = 2·n(n+1)/2 - n                              // using the arithmetic progression formula 1 + ... + n = n(n+1)/2
     = n^2

Multiplication
The product of two complex numbers is represented as

(a + bi)(c + di) = (ac - bd) + (bc + ad)i.

This representation involves 4 multiplications.
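The closed form obtained by the iteration method is easy to check numerically; here is a minimal sketch in Python (the function name is my own):

```python
def g(n):
    """The recurrence g(n) = g(n-1) + 2n - 1 with base case g(0) = 0."""
    return 0 if n == 0 else g(n - 1) + 2 * n - 1

# The iteration method gives the closed form g(n) = n^2.
for n in range(50):
    assert g(n) == n ** 2
```

Such a spot check does not replace the derivation, but it is a quick way to catch algebra mistakes when unfolding a recurrence.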
The mathematician Carl Friedrich Gauss (1777-1855) noticed that it can be done using only 3 multiplications, because bc + ad = (a + b)(c + d) - ac - bd.

Multiplication of Two n-Bit Integers
Suppose x and y are two n-bit integers, and assume for convenience that n is a power of 2 (the more general case is hardly any different). As a first step toward multiplying x and y, split each of them into their left and right halves, which are n/2 bits long:

x = xL xR, so x = 2^(n/2)·xL + xR
y = yL yR, so y = 2^(n/2)·yL + yR

For instance, if x = 10110110₂ then xL = 1011₂ and xR = 0110₂.

We will compute x·y via the expression

x·y = 2^n·xL·yL + 2^(n/2)·(xL·yR + xR·yL) + xR·yR.

The additions take linear time, as do the multiplications by powers of 2 (which are merely left-shifts). The significant operations are the four n/2-bit multiplications xL·yL, xL·yR, xR·yL, xR·yR; these we can handle by four recursive calls. Thus our method for multiplying n-bit numbers starts by making recursive calls to multiply these four pairs of n/2-bit numbers (four subproblems of half the size), and then evaluates the preceding expression in O(n) time. Writing T(n) for the overall running time on n-bit inputs, we get the recurrence relation

T(n) = 4T(n/2) + O(n).

Algorithm for Multiplication of n-Bit Numbers
By Gauss's trick, three multiplications suffice: xL·yL, xR·yR, and (xL + xR)(yL + yR), since xL·yR + xR·yL = (xL + xR)(yL + yR) - xL·yL - xR·yR. This improves the recurrence to T(n) = 3T(n/2) + O(n).

Master Theorem
Divide-and-conquer algorithms often follow a generic pattern: they tackle a problem of size n by recursively solving, say, a subproblems of size n/b and then combining these answers in O(n^d) time, for some a, b, d > 0 (in the multiplication algorithm, a = 3, b = 2, and d = 1). Their running time can therefore be captured by the equation

T(n) = a·T(n/b) + O(n^d).

We next derive a closed-form solution to this general recurrence so that we no longer have to solve it explicitly in each new instance.

Master Theorem: If T(n) = a·T(n/b) + O(n^d) for some constants a > 0, b > 1, and d ≥ 0, then

T(n) = O(n^d)            if d > log_b(a)
T(n) = O(n^d · log n)    if d = log_b(a)
T(n) = O(n^(log_b(a)))   if d < log_b(a)

This single theorem tells us the running times of most of the divide-and-conquer procedures we are likely to use.

Proof.
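The algorithm described above can be sketched in Python as follows. This is a sketch under the section's assumptions (n a power of 2, non-negative integers), using Gauss's trick to get by with three recursive multiplications; the function name is my own:

```python
def multiply(x: int, y: int, n: int) -> int:
    """Multiply two n-bit non-negative integers, n a power of 2,
    using three recursive n/2-bit multiplications (Gauss's trick)."""
    if n == 1:
        return x * y                      # base case: single-bit operands
    half = n // 2
    xl, xr = x >> half, x & ((1 << half) - 1)   # left/right halves of x
    yl, yr = y >> half, y & ((1 << half) - 1)   # left/right halves of y
    p1 = multiply(xl, yl, half)                 # xL*yL
    p2 = multiply(xr, yr, half)                 # xR*yR
    p3 = multiply(xl + xr, yl + yr, half)       # (xL+xR)*(yL+yR)
    # xL*yR + xR*yL = p3 - p1 - p2, so the expression for x*y becomes:
    return (p1 << n) + ((p3 - p1 - p2) << half) + p2

# For example, with the 8-bit operand from the text:
assert multiply(0b10110110, 0b01101101, 8) == 0b10110110 * 0b01101101
```

The shifts implement the multiplications by 2^n and 2^(n/2) from the expression in the text, so only the three recursive calls cost more than linear time per level.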
To prove the claim, let's start by assuming for the sake of convenience that n is a power of b. This will not influence the final bound in any important way: after all, n is at most a multiplicative factor of b away from some power of b, and it allows us to ignore the rounding effect in n/b.

Next, notice that the size of the subproblems decreases by a factor of b with each level of recursion, and therefore reaches the base case after log_b(n) levels. This is the height of the recursion tree. Its branching factor is a, so the kth level of the tree is made up of a^k subproblems, each of size n/b^k. The total work done at this level is

a^k · O((n/b^k)^d) = O(n^d) · (a/b^d)^k.

As k goes from 0 (the root) to log_b(n) (the leaves), these numbers form a geometric series with ratio a/b^d. Finding the sum of such a series in big-O notation is easy and comes down to three cases:
1. If a/b^d < 1, the series is decreasing and its sum is dominated by its first term, O(n^d).
2. If a/b^d = 1, all log_b(n) + 1 terms are O(n^d), for a total of O(n^d · log n).
3. If a/b^d > 1, the series is increasing and its sum is dominated by its last term, O(n^d · (a/b^d)^(log_b n)) = O(a^(log_b n)) = O(n^(log_b a)).
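The three cases can be packaged as a small helper for reading off running times; a minimal sketch (the function name and output format are my own):

```python
import math

def master_bound(a: float, b: float, d: float) -> str:
    """Asymptotic solution of T(n) = a*T(n/b) + O(n^d) by the Master Theorem."""
    crit = math.log(a, b)            # the critical exponent log_b(a)
    if d > crit:
        return f"O(n^{d})"           # the combining work at the root dominates
    if d == crit:
        return f"O(n^{d} log n)"     # every level contributes equally
    return f"O(n^{crit:.3f})"        # the leaves dominate

print(master_bound(3, 2, 1))   # multiplication with Gauss's trick → O(n^1.585)
print(master_bound(4, 2, 1))   # naive four-way multiplication → O(n^2.000)
```

This makes the payoff of Gauss's trick concrete: dropping a from 4 to 3 moves the multiplication algorithm from the quadratic case into the third case, with exponent log_2(3) ≈ 1.585.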