CmSc250 Intro to Algorithms
Chapter 4. Decrease-and-Conquer Algorithms
1. Introduction
The basic idea of the decrease-and-conquer strategy:
1. Reduce the problem instance to a smaller instance of the same problem
2. Solve the smaller instance
3. Extend the solution of the smaller instance to obtain the solution to the original problem
Examples of Decrease-and-Conquer Algorithms

Decrease by one:
• Insertion sort
• Topological sorting (graphs)
• Algorithms for generating permutations, subsets

Decrease by a constant factor:
• Binary search
• Fake-coin problem
• Multiplication à la russe
• Josephus problem

NOTE: Binary search is often treated as a divide-and-conquer algorithm. Since it reduces the problem to a single subproblem of half the size, however, it is better viewed as decrease-and-conquer with constant factor 2.

Variable-size decrease:
• Euclid's algorithm
• Selection by partition
2. Decrease-by-one
2.1. Insertion sort
Algorithm to sort n elements:
• Sort the first n-1 elements of the array
• Insert the n-th element into its place among them
Complexity: Θ(n²) in the worst and the average case, and Θ(n) on almost sorted arrays.
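The two steps above can be sketched in C. This is an illustrative implementation written for these notes (the function name is our own), with the sorted prefix growing by one element per iteration:

```c
/* Decrease-by-one insertion sort: the first i elements are kept sorted,
   and a[i] is inserted into its place among them. */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {  /* shift larger elements right */
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;                 /* drop key into the gap */
    }
}
```

On an almost sorted array the inner while loop exits almost immediately, which is where the Θ(n) best case comes from.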
2.2. Analysis of the Insertion Sort
To insert the last element we need at most N-1 comparisons and N-1 movements.
To insert the (N-1)st element we need at most N-2 comparisons and N-2 movements.
...
To insert the 2nd element we need at most 1 comparison and 1 movement.
Summing up:
2 * (1 + 2 + 3 + ... + (N-1)) = 2 * N(N-1)/2 = N(N-1) = Θ(N²)
If the greater part of the array is already sorted, the complexity is close to O(N).
The average complexity is proved to be Θ(N²).
Time efficiency:
Cworst(n) = n(n-1)/2 ∈ Θ(n²)
Cavg(n) ≈ n²/4 ∈ Θ(n²)
Cbest(n) = n - 1 ∈ Θ(n) (also fast on almost sorted arrays)
Space efficiency: in-place
Stability: yes
Insertion sort is considered the best elementary sorting algorithm overall.
2.3. A lower bound for simple sorting algorithms
Simple sorting algorithms, such as bubble sort and insertion sort, work by swapping elements that are out of order, so their complexity depends on the number of swaps. To estimate how many swaps are needed on average, we define an inversion in the following way:
Definition 1. An inversion is an ordered pair (Ai, Aj) such that i < j but Ai > Aj.
Example: 10, 6, 7, 15, 3, 1
Inversions are: (10,6), (10,7), (10,3), (10,1), (6,3), (6,1), (7,3), (7,1), (15,3), (15,1), (3,1) — 11 in total.
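Definition 1 translates directly into a brute-force counter; the following sketch (written for these notes) checks every pair of positions:

```c
/* Brute-force inversion count, following Definition 1:
   count pairs (i, j) with i < j but a[i] > a[j]. */
int count_inversions(const int a[], int n)
{
    int inv = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] > a[j])
                inv++;
    return inv;
}
```

Applied to the example array {10, 6, 7, 15, 3, 1} it counts the 11 inversions listed above.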
The following is true:
• Swapping adjacent elements that are out of order removes exactly one inversion.
• A sorted array has no inversions.
• Sorting an array that contains i inversions requires at least i (implicit) swaps of adjacent elements.
How many inversions are there in an average unsorted array?
In general this is a tricky question to answer: just what is meant by "average"? However, we can make a couple of simplifying assumptions:
• There are no duplicates in the list.
• Since the elements are unique (by assumption), all that matters is their relative rank. Accordingly we identify them with the first N integers and assume the elements we have to sort are {1, 2, ..., N}.
Under these circumstances we can say the following:
Theorem 1 [Average number of inversions]. The average number of inversions in an array of N distinct elements is N(N-1)/4.
Proof: Given an array A, consider Ar, the same array in reverse order. Now consider a pair (x, y) with x < y. This pair is an inversion in exactly one of A and Ar. The total number of such pairs is N(N-1)/2, and on average half of them will be inversions in A.
Thus A has N(N-1)/4 inversions on average.
Consequently insertion sort has an average running time of Θ(N²). In fact we can generalize this result to all sorting algorithms that work by exchanging adjacent elements to eliminate inversions.
Theorem 2. Any algorithm that sorts by exchanging adjacent elements requires Ω(N²) time on average.
The proof follows immediately from the fact that the average number of inversions is N(N-1)/4: each adjacent swap removes only one inversion, so Ω(N²) swaps are required.
Theorem 2 implies that, to run in less than quadratic time, a sorting algorithm must do something other than swap adjacent elements.
3. Decrease by a constant factor
Decrease-by-a-constant-factor algorithms are usually logarithmic in complexity. Binary search, which halves the problem at each step, is a typical example. These algorithms are very efficient, especially when the factor is greater than 2, as in the fake-coin problem discussed below.
3.1. The Fake-coin problem
The original puzzle reads as follows: You have 27 coins, among which one is fake and lighter than the others. You also have a balance scale, and you can compare the weight of any two sets of coins. How can you find the fake coin with only three weighings?
(Hint: split the coins into three groups of 9 and weigh two of the groups against each other; the outcome identifies the group of 9 containing the fake coin. Each weighing shrinks the problem by a factor of 3, and log₃ 27 = 3.)
In general, given n coins among which one is fake, and a balance scale, what is the minimum number of weighings needed to identify the fake coin, assuming that it is lighter than the other coins?
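The decrease-by-factor-3 strategy can be simulated in C. This is a sketch written for these notes: it assumes the coins are given as an array of weights containing exactly one lighter coin, and each "weighing" is modeled by summing two groups:

```c
/* Find the index of the single lighter (fake) coin in w[lo..hi).
   Each step splits the current range into three nearly equal groups and
   "weighs" the first two; the result tells us which third holds the fake
   coin, so the range shrinks by a factor of about 3 per weighing. */
int find_fake(const int w[], int lo, int hi)
{
    while (hi - lo > 1) {
        int third = (hi - lo) / 3;
        if (third == 0) third = 1;             /* very small ranges */
        long left = 0, right = 0;
        for (int i = lo; i < lo + third; i++)
            left += w[i];                      /* first group on the scale */
        for (int i = lo + third; i < lo + 2 * third; i++)
            right += w[i];                     /* second group on the scale */
        if (left < right)
            hi = lo + third;                   /* fake is in the first group */
        else if (right < left) {
            lo += third;                       /* fake is in the second group */
            hi = lo + third;
        } else
            lo += 2 * third;                   /* groups balance: fake is in
                                                  the remaining coins */
    }
    return lo;
}
```

For 27 coins the loop runs exactly three times, matching the puzzle's three weighings.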
3.2. Multiplication à la russe (Russian peasant method)
We can multiply two positive integers using only addition and division by 2.
The algorithm is based on the observation that
N * M = (N/2) * (M * 2) if N is even, and
N * M = ((N-1)/2) * (M * 2) + M if N is odd.
The base case is N = 1: 1 * M = M.
Example: compute 20 * 26

 n     m
20    26
10    52
 5   104   104
 2   208
 1   416   416
          ----
           520

The result is obtained by adding all elements in the third column: these are the values of m for which the value in column 'n' is odd.
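The halve/double/add process from the table can be written as a short loop. This is an illustrative implementation for these notes (the function name is our own):

```c
/* Multiplication à la russe: halve n, double m, and add m to the result
   whenever n is odd. Only addition and division by 2 are used. */
long multiply_russe(long n, long m)
{
    long result = 0;
    while (n >= 1) {
        if (n % 2 == 1)     /* odd n: this row's m goes into the sum */
            result += m;
        n /= 2;
        m *= 2;
    }
    return result;
}
```

Tracing multiply_russe(20, 26) reproduces the table: m is added at n = 5 (104) and n = 1 (416), giving 520.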
4. Variable-size decrease algorithms
In these algorithms the reduction pattern varies from one iteration to another. Examples are Euclid's algorithm for computing the greatest common divisor, search and insertion in binary search trees, and others.
4.1. Euclid's algorithm for finding the greatest common divisor
The algorithm is based on the observation that the GCD (greatest common divisor) of two integers M and N, with M > N, is the same as the GCD of N and the remainder of the integer division M / N.
Recursive implementation:

long gcd(long M, long N) {
    if (M % N == 0)
        return N;
    return gcd(N, M % N);
}

The algorithm works by computing remainders; the last non-zero remainder is the answer. Here is a non-recursive implementation:

long gcd(long m, long n) {
    long rem;
    while (n != 0) {
        rem = m % n;
        m = n;
        n = rem;
    }
    return m;
}
Example: M = 24, N = 15

 m     n    rem
24    15     9
15     9     6
 9     6     3
 6     3     0
 3     0

The last non-zero remainder is 3, so gcd(24, 15) = 3.
We can prove that given M > N, the remainder M % N is at most M/2:
a. Case 1: N ≤ M/2. Since the remainder is less than N, it is less than M/2.
b. Case 2: N > M/2. In this case the remainder is M - N, and since N > M/2, M - N is less than M/2.
After two iterations the remainder moves into the first column, so every two iterations the first argument is at most half of its previous value. Hence the number of iterations is at most 2·log N, i.e. O(log N).
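The O(log N) bound can be checked empirically by counting iterations of the non-recursive version. This helper is written for these notes (the name gcd_steps is our own):

```c
/* Counts the iterations of the iterative Euclid's algorithm, to illustrate
   the O(log N) bound: the first argument at least halves every two steps. */
int gcd_steps(long m, long n)
{
    int steps = 0;
    while (n != 0) {
        long rem = m % n;   /* same update as the iterative gcd */
        m = n;
        n = rem;
        steps++;
    }
    return steps;
}
```

For the example above, gcd_steps(24, 15) performs 4 iterations; consecutive Fibonacci numbers are the classical worst case for the iteration count.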
4.2. The Prime example: What is the probability that two numbers less than N are relatively prime (e.g. 7 and 9 are relatively prime: gcd(9,7) = 1)?
Probability = (number of relatively prime pairs) / (number of all pairs)
Count the operations in the following function:

double probRelPrime(int n)
{
    int rel = 0, tot = 0;   // rel - number of relatively prime pairs
                            // tot - number of all pairs
    int i, j;
    for (i = 1; i <= n; i++)
        for (j = i + 1; j <= n; j++) {
            tot++;
            if (gcd(i, j) == 1)
                rel++;
        }
    return (double) rel / tot;   // cast to avoid integer division
}

The two nested loops each run up to N, and the loop body calls gcd, which is O(log N).
Hence the complexity of probRelPrime is O(N² log N).
(As N grows, the returned value approaches 6/π² ≈ 0.61, a classical result in number theory.)
5. Conclusion
The decrease-and-conquer paradigm relies on a relation between an instance of the problem and a smaller instance of the same problem. A given problem may be solvable by both a decrease-by-constant and a decrease-by-factor version of the paradigm; computing a^n is one example.
While the algorithms in this group are usually described recursively, the implementations can be either recursive or iterative. Iterative implementations may require more coding effort, but they avoid the overhead that accompanies recursion.
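The a^n example from the conclusion can be sketched both ways. These two functions are illustrative implementations for these notes (names are our own); they assume a non-negative integer exponent and ignore overflow:

```c
/* Decrease-by-one: a^n = a^(n-1) * a, using n multiplications: O(n). */
long power_dec_by_one(long a, int n)
{
    long result = 1;
    for (int i = 0; i < n; i++)
        result *= a;
    return result;
}

/* Decrease-by-a-constant-factor (factor 2): a^n = (a^(n/2))^2, with an
   extra factor of a when n is odd. O(log n) multiplications. */
long power_dec_by_half(long a, int n)
{
    if (n == 0)
        return 1;
    long half = power_dec_by_half(a, n / 2);
    return (n % 2 == 0) ? half * half : half * half * a;
}
```

Both compute the same value; the second shows how switching from decrease-by-one to decrease-by-half turns a linear algorithm into a logarithmic one.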