TCSS 343: Large Integer Multiplication
Suppose we want to multiply two n-bit integers. Using the pencil-and-paper algorithm,
we would need to perform n^2 bitwise multiplications, because each bit in the first number
would have to be multiplied by each bit in the second number. We can decrease the total
number of operations performed by using a divide and conquer approach to solve this
problem.
Input: Two n-bit integers, a and b
Output: a × b
The idea behind the divide and conquer algorithm is to transform the problem of
multiplying two n-bit integers into the problem of multiplying two (n/2)-bit numbers
some number of times.
Let a = a_(n-1)·2^(n-1) + a_(n-2)·2^(n-2) + … + a_1·2^1 + a_0·2^0, and
let b = b_(n-1)·2^(n-1) + b_(n-2)·2^(n-2) + … + b_1·2^1 + b_0·2^0,
where for all 0 ≤ i ≤ n – 1, a_i ∈ {0, 1} and b_i ∈ {0, 1}.
In other words, a_(n-1) a_(n-2) … a_1 a_0 is the binary representation of a (and similarly for b).
Now let us define
a_high = a_(n-1) a_(n-2) … a_(n/2),
a_low = a_((n/2)-1) … a_0,
b_high = b_(n-1) b_(n-2) … b_(n/2), and
b_low = b_((n/2)-1) … b_0.
That is, a_high is the number represented by the high-order bits of a, and a_low is the number
represented by the low-order bits of a. Another way of defining a_high and a_low is:
a_low = a mod 2^(n/2), and
a_high = (a – a_low) / 2^(n/2).
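These definitions map directly onto a mod and a right shift. A minimal Python sketch (the helper name `split` is mine, not from the handout):

```python
def split(a, n):
    """Split an n-bit nonnegative integer a into (a_high, a_low),
    where a = a_high * 2**(n // 2) + a_low."""
    half = n // 2
    a_low = a % (1 << half)   # a mod 2^(n/2): the low-order bits
    a_high = a >> half        # (a - a_low) / 2^(n/2): the high-order bits
    return a_high, a_low

# Example: a = 0b1101 (13) with n = 4 gives
# a_high = 0b11 (3) and a_low = 0b01 (1), and 3 * 2**2 + 1 = 13.
```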
Let c = a × b. Then
c = (a_high·2^(n/2) + a_low) × (b_high·2^(n/2) + b_low)
  = (a_high × b_high)·2^n + (a_high × b_low + a_low × b_high)·2^(n/2) + (a_low × b_low)
  = (a_high × b_high)·2^n +
    [(a_high + a_low) × (b_high + b_low) – (a_high × b_high) – (a_low × b_low)]·2^(n/2) +
    (a_low × b_low)
In the last step above, instead of computing (a_high × b_low + a_low × b_high) directly by
multiplying two pairs of (n/2)-bit integers, we "indirectly" compute the value by
multiplying one pair of (n/2)-bit integers and then subtracting two values that we need to
compute anyway: (a_high × b_high) and (a_low × b_low).
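The identity behind that last step can be checked on a small example. A quick Python sanity check (the concrete values 13 and 11 with n = 4 are mine, chosen for illustration):

```python
# Check the identity on a = 13, b = 11, n = 4.
a_high, a_low = 3, 1  # 13 = 3 * 2**2 + 1
b_high, b_low = 2, 3  # 11 = 2 * 2**2 + 3

# Direct form: two multiplications.
direct = a_high * b_low + a_low * b_high

# Indirect form: one multiplication, plus two products we need anyway.
indirect = (a_high + a_low) * (b_high + b_low) - a_high * b_high - a_low * b_low

assert direct == indirect == 11
```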
Let c2 = (a_high × b_high),
c1 = [(a_high + a_low) × (b_high + b_low)] – (a_high × b_high) – (a_low × b_low), and
c0 = (a_low × b_low).
Once we have computed the products (a_high × b_high), (a_low × b_low), and
(a_high + a_low) × (b_high + b_low), then we can compute all three values c2, c1, and c0. Once we have computed c2, c1,
and c0, then it is easy to compute c by multiplying c2 by 2^n (which is just shifting c2 left by n
bits), multiplying c1 by 2^(n/2) (which is just shifting c1 left by n/2 bits), and adding those
products to c0. These last additions and shifts can be done in O(n) time. (How?)
So the only task left is to compute the three products (a_high × b_high), (a_low × b_low), and
(a_high + a_low) × (b_high + b_low). Each number in each pair of numbers to be multiplied has n/2 bits
(the sums a_high + a_low and b_high + b_low may carry into an (n/2 + 1)-st bit, but this does not
affect the analysis), and so we can recursively apply our multiplication algorithm to these three subproblems.
To recap, the divide and conquer algorithm to compute the product of two n-bit integers,
a and b, is:
1. Compute a_low = a mod 2^(n/2), and a_high = (a – a_low) / 2^(n/2).
2. Compute b_low = b mod 2^(n/2), and b_high = (b – b_low) / 2^(n/2).
3. Recursively compute (a_high × b_high), (a_low × b_low), and (a_high + a_low) × (b_high + b_low).
4. Compute c2 = (a_high × b_high), c1 = [(a_high + a_low) × (b_high + b_low)] – (a_high × b_high) – (a_low × b_low), and c0 = (a_low × b_low).
5. Compute c = c2·2^n + c1·2^(n/2) + c0.
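The five steps can be sketched directly in Python, whose built-in integers are arbitrary precision. This is an illustrative sketch, not the handout's official pseudocode; note that when n is odd the high part contributes 2^(2·⌊n/2⌋) rather than 2^n, so the code shifts by 2·(n//2):

```python
def multiply(a, b, n):
    """Divide-and-conquer multiplication of two nonnegative integers
    of at most n bits, following steps 1-5 above."""
    if n <= 2:
        return a * b                         # base case: constant-time multiply
    half = n // 2
    a_low = a % (1 << half)                  # step 1: a mod 2^(n/2)
    a_high = a >> half                       #         (a - a_low) / 2^(n/2)
    b_low = b % (1 << half)                  # step 2
    b_high = b >> half
    c2 = multiply(a_high, b_high, half)      # step 3: three recursive products
    c0 = multiply(a_low, b_low, half)        # (the sums may need half+1 bits)
    mid = multiply(a_high + a_low, b_high + b_low, half + 1)
    c1 = mid - c2 - c0                       # step 4
    return (c2 << (2 * half)) + (c1 << half) + c0   # step 5: shifts and adds

# multiply(13, 11, 4) returns 143
```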
Let T(n) be the time it takes to multiply two n-bit integers using the algorithm above.
Then the following recurrence equation applies:
T(1) = 1
T(n) = 3 T(n/2) + cn
Since we can multiply two 1-bit integers in constant time, T(1) = 1. (If we wanted to be
more accurate, we would say T(1) = d, for some constant d.) In the recursive case, in
order to multiply two n-bit integers, we need to solve three instances of the problem of
multiplying two (n/2)-bit integers and then spend cn time (for some constant c) to
compose the results into the answer for the larger (n-bit) problem.
The Master Theorem can be applied to determine the order of growth of T(n). In our
case, a = 3, b = 2, and d = 1 (since n = n^1). Since a > b^d (or, equivalently, log_b a > d),
by the Master Theorem T(n) = Θ(n^(log_b a)) = Θ(n^(log_2 3)).
Is this running time better than the running time of the pencil-and-paper algorithm?
Since log_2 3 ≈ 1.585, the divide and conquer algorithm runs in time roughly
Θ(n^1.585), which is better than Θ(n^2). (Why?)
As a practical matter, we can improve our divide and conquer algorithm by stopping the
recursion at a point where it is more efficient to do so. For example, if our machine has
32-bit integers, we can stop the recursion when n = 32 (or maybe when n = 16), because
we know that our machine can perform 32-bit multiplications quickly.
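Such a cutoff changes only the base case of the recursion. A hedged Python sketch (the threshold parameter and the use of Python's built-in `*` as a stand-in for the machine's word multiplication are my assumptions, not part of the handout):

```python
def multiply_cutoff(a, b, n, cutoff=32):
    """Divide-and-conquer multiplication that stops recursing once the
    operands fit in `cutoff` bits, where the hardware multiply is fast."""
    if n <= cutoff:
        return a * b                     # stand-in for a machine-word multiply
    half = n // 2
    a_high, a_low = a >> half, a % (1 << half)
    b_high, b_low = b >> half, b % (1 << half)
    c2 = multiply_cutoff(a_high, b_high, half, cutoff)
    c0 = multiply_cutoff(a_low, b_low, half, cutoff)
    c1 = multiply_cutoff(a_high + a_low, b_high + b_low, half + 1, cutoff) - c2 - c0
    return (c2 << (2 * half)) + (c1 << half) + c0
```

In practice the best threshold is found by timing; libraries that implement this algorithm tune it per machine.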
Exercise: What would be the running time of our divide and conquer algorithm if, instead
of performing the three multiplications in Step 3, we recursively computed the four
multiplications (a_high × b_high), (a_high × b_low), (a_low × b_high), (a_low × b_low) to compute c?