G22.2950-001 Introduction to Cryptography
October 20, 2004
Lecture 7
Lecturer: Victor Shoup
Scribe: Arami Jung and Tammy Lee
Last lecture we stated the theorem that Msb (the most significant bit) is a hard-core bit if discrete exponentiation is a one-way permutation, and we proved it under the assumption that an algorithm A computes Msb(x) exactly, that is, with advantage ε = 1/2. Now we prove the theorem under the assumption that an algorithm A predicts Msb(x) with advantage at least ε, where ε need not be 1/2.
Theorem 1. Msb is a hard-core bit if discrete exponentiation is one-way.
Proof.
First, we prove this for a fixed p and γ; later, we consider a random prime p and generator γ produced by some probabilistic algorithm.
1. Fixed (p, γ)
Suppose we have an algorithm A that predicts Msb with advantage at least ε, that is, for fixed p and γ,

    Pr[x ←_R [0, ..., p − 1), b ← A(γ^x) : b = Msb(x)] ≥ 1/2 + ε.

Assume ε is "non-negligible". We will design an algorithm that computes log_γ α for all α in expected time poly(log p, 1/ε).
(a) Use algorithm A to get an algorithm B that computes Psqrt correctly with probability at least 1/2 + ε. Algorithm B is as follows.

    B : input α (assumed to be uniformly distributed over (Z*_p)^2)
        compute the two square roots β, −β of α
        c ←_R {0, 1}
        β̃ ← (c = 0 ? β : −β)
        b ← A(β̃)
        if b = 1 then β̃ ← −β̃
        return β̃
Observe: if α ∈_R (Z*_p)^2, then β̃ ∈_R Z*_p.
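As an illustration (not part of the lecture), here is a minimal Python sketch of algorithm B. The function predict_msb stands for the assumed predictor A, and for simplicity we assume p ≡ 3 (mod 4) so that a square root of α can be computed as α^{(p+1)/4} mod p.

    import secrets

    def algo_B(alpha, p, predict_msb):
        # Sketch of algorithm B: guess the principal square root of alpha in (Z_p*)^2,
        # using an Msb-predictor `predict_msb` (the assumed algorithm A).
        # Assumes p is prime with p % 4 == 3, so alpha^((p+1)/4) is a square root.
        beta = pow(alpha, (p + 1) // 4, p)         # one square root of alpha
        c = secrets.randbelow(2)                   # c <-R {0, 1}
        beta_tilde = beta if c == 0 else p - beta  # the other root is -beta mod p
        b = predict_msb(beta_tilde)                # A's guess for Msb of dlog(beta_tilde)
        if b == 1:                                 # Msb = 1 <=> not the principal root
            beta_tilde = p - beta_tilde
        return beta_tilde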
(b) We want to "improve" B so that its output is almost always correct for all inputs; later we will plug the improved algorithm, which we call C, into the bit-shifting algorithm. We can do this by assuming the input α to algorithm B has a "small" discrete logarithm. The idea is as follows. Assume α = γ^{2x}, 0 ≤ x < t, and randomize the input:

    r ←_R [0, ..., (p − 1)/2)
    β ← B(α · γ^{2r})

Observe,

    γ^{2r} ∈_R (Z*_p)^2
    ⇒ αγ^{2r} ∈_R (Z*_p)^2
    ⇒ β = Psqrt(αγ^{2r}) with probability ≥ 1/2 + ε
Figure 1: Our goal is to find the square root in the right half of the cycle
If we restrict the range of r from above, the probability that B is correct may decrease, but the restriction gives us a way to test whether a candidate square root is the principal square root of α, and we can then drive the error down using Chernoff bounds. Here are the steps.

Suppose r < (p − 1)/2 − t. Then x + r < (p − 1)/2 because 0 ≤ x < t, and therefore Psqrt(αγ^{2r}) = γ^{x+r}. Let us define β̃ as Psqrt(αγ^{2r}). Note that

    β0 = Psqrt(α) ⟺ β0 = γ^x
                  ⟺ β0 · γ^r = γ^x · γ^r
                  ⟺ β0 · γ^r = β̃.

This shows that we can check whether a square root of α is the principal square root of α, using a randomly chosen r from the above range together with the principal square root of αγ^{2r}. If we choose r not from [0, ..., (p − 1)/2) but from [0, ..., (p − 1)/2 − t), then B still outputs the principal square root of αγ^{2r} with probability ≥ 1/2 + ε − t/((p − 1)/2), because the statistical distance between the uniform distribution on [0, ..., (p − 1)/2) and the uniform distribution on [0, ..., (p − 1)/2 − t) is t/((p − 1)/2). We can raise this lower bound to 1/2 + ε/2 by setting t ≤ (p − 1)/4 · ε, since then the loss t/((p − 1)/2) = 2t/(p − 1) is at most ε/2.
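The statistical-distance claim can be verified directly (our own computation, not in the notes; U_N denotes the uniform distribution on [0, ..., N), with N = (p − 1)/2):

    Δ(U_N, U_{N−t}) = (1/2) · Σ_i | Pr[U_{N−t} = i] − Pr[U_N = i] |
                    = (1/2) · [ (N − t) · (1/(N − t) − 1/N) + t · (1/N) ]
                    = t/N = 2t/(p − 1),

and 2t/(p − 1) ≤ ε/2 exactly when t ≤ (p − 1)/4 · ε.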
Algorithm C is as follows.

    C : input t, α such that α = γ^{2x}, 0 ≤ x < t, t ≤ (p − 1)/4 · ε
        β0, β1 ← the two square roots of α
        c0 ← 0   (counter for the number of times β0 looks like Psqrt(α))
        c1 ← 0   (counter for the number of times β1 looks like Psqrt(α))
        repeat k times:
            r ←_R [0, ..., (p − 1)/2 − t)
            β̃ ← B(α · γ^{2r})
            if β0 · γ^r = β̃ then c0 ← c0 + 1
            else c1 ← c1 + 1
        if c0 > c1 then return β0
        else return β1
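A minimal Python sketch of C, again as our own illustration: it reuses the hypothetical algo_B and predict_msb from above (so it also assumes p ≡ 3 (mod 4)), and k is the repetition count analyzed below.

    import secrets

    def algo_C(t, alpha, gamma, p, predict_msb, k):
        # Sketch of algorithm C: majority vote over k randomized calls to B.
        # Assumes alpha = gamma^(2x) with 0 <= x < t and t <= (p - 1)/4 * eps.
        beta0 = pow(alpha, (p + 1) // 4, p)           # one square root (p % 4 == 3 assumed)
        beta1 = p - beta0                             # the other square root
        c0 = c1 = 0
        for _ in range(k):
            r = secrets.randbelow((p - 1) // 2 - t)   # r <-R [0, (p-1)/2 - t)
            beta_tilde = algo_B((alpha * pow(gamma, 2 * r, p)) % p, p, predict_msb)
            if (beta0 * pow(gamma, r, p)) % p == beta_tilde:
                c0 += 1
            else:
                c1 += 1
        return beta0 if c0 > c1 else beta1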
How good is algorithm C? We can bound the error of C using a Chernoff bound, which bounds the probability that a random variable deviates from its expected value by a specified amount. Since the lower bound on the correctness of B is 1/2 + ε/2 given t ≤ (p − 1)/4 · ε, the error bound of C is e^{−k(ε/2)²}. If we set k = Θ(T/ε²), then the error bound is less than 2^{−T}. Therefore, algorithm C computes the principal square root of α correctly with probability at least 1 − 2^{−T}.
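As a quick numeric illustration (our own helper, not from the notes), the repetition count k needed for e^{−k(ε/2)²} ≤ 2^{−T} can be computed as follows.

    import math

    def chernoff_repetitions(eps, T):
        # Smallest k with exp(-k * (eps/2)**2) <= 2**(-T),
        # i.e. k >= T * ln(2) / (eps/2)**2.
        return math.ceil(T * math.log(2) / (eps / 2) ** 2)

    # Example: eps = 0.01, T = 80 gives k of roughly 2.2 million repetitions.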
(c) Recall that we can compute log_γ α given an oracle for Psqrt. So the remaining work is to remove the restriction 0 ≤ x < t on x. In general, we want to compute log_γ α = x where x ∈ [0, ..., p − 1). We can do this by efficiently reducing the discrete logarithm problem on the interval [0, p − 1) to the discrete logarithm problem on an interval of length t. Note that x = qt + y for some q and y with 0 ≤ y < t, and q = Θ(p/t). This means that for some i ≤ p/t, α/γ^{it} ∈ {γ^0, ..., γ^{t−1}}. Since t = Θ(pε) gives q = Θ(1/ε), we can find the right α/γ^{it} within O(1/ε) tries.
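Here is a minimal Python sketch of this outer reduction, under our own interface assumptions: small_dlog(β) stands for the procedure built from C and the bit-shifting algorithm, returning y whenever β = γ^y with 0 ≤ y < t, and None otherwise.

    def dlog_via_small_interval(alpha, gamma, p, t, small_dlog):
        # Reduce a general discrete log in [0, p - 1) to discrete logs
        # in an interval of length t: try alpha / gamma^(i*t) for i = 0, 1, ...
        gamma_inv_t = pow(gamma, -t, p)            # gamma^{-t} mod p (Python 3.8+)
        candidate = alpha % p
        for i in range((p - 1) // t + 1):          # about p/t = O(1/eps) candidates
            y = small_dlog(candidate)
            if y is not None and pow(gamma, i * t + y, p) == alpha % p:
                return i * t + y                   # x = i*t + y
            candidate = (candidate * gamma_inv_t) % p
        return None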
2. Unfixed (p, γ)
The above assumed p and γ fixed. Now let's discuss the real starting point: an algorithm A, a polynomial P, and an infinite set Λ such that
    | Pr[(p, γ) ← SysParam(1^λ), x ←_R [0, ..., p − 1), b ← A(1^λ, p, γ, γ^x) : Msb(x) = b] − 1/2 | ≥ 1/P(λ)    for all λ ∈ Λ.

Observations:
• P(λ) is used in the reduction.
• We need to get rid of the absolute value in the above equation, i.e., infinitely often

    Pr[(p, γ) ← SysParam(1^λ), x ←_R [0, ..., p − 1), b ← A(1^λ, p, γ, γ^x) : Msb(x) = b] ≥ 1/2 + 1/P(λ).
• With respect to random (p, γ), not all (p, γ) may be "good", in the sense that the conclusion of the fixed case holds. Let's call (p, γ) "good" if

    Pr[(p, γ) ← SysParam(1^λ), x ←_R [0, ..., p − 1), b ← A(1^λ, p, γ, γ^x) : Msb(x) = b | fixed (p, γ)] ≥ 1/2 + 1/(2P(λ)).

We want to show that Pr[(p, γ) is "good"] is not too small.
Figure 2: Pr[success] conditioned on (p, γ)
    1/2 + 1/P(λ) ≤ Pr[success]
                 = Pr[success | good (p, γ)] · Pr[good (p, γ)] + Pr[success | bad (p, γ)] · Pr[bad (p, γ)]
                 ≤ 1 · Pr[good (p, γ)] + (1/2 + 1/(2P(λ))) · 1

Therefore,

    1/2 + 1/P(λ) ≤ Pr[good] + 1/2 + 1/(2P(λ))
    1/(2P(λ)) ≤ Pr[good]
That is to say, the probability that (p, γ) is good is not too small.
□
Assuming the discrete logarithm is hard, we get a secure PRBG that takes x ∈ [0, ..., p − 1) as input and outputs an element of {0, 1}^{l'}. This is a bit "inconvenient", as the seed of the PRBG is not a random bit string. Instead, we can do the following: given a random seed s ∈ {0, 1}^l,

• interpret s as a number in [0, ..., 2^l)
• reduce that number mod (p − 1)

The result will be "close" to the uniform distribution over [0, ..., p − 1).
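A minimal sketch of this seed conversion in Python (the function name and interface are ours): it treats the l-bit seed as an integer and reduces it mod p − 1.

    def seed_to_exponent(seed_bits, p):
        # Interpret an l-bit seed as a number in [0, 2**l) and reduce it mod (p - 1).
        # If 2**l is much larger than p, the result is statistically close to
        # uniform on [0, p - 1).
        s = int("".join(str(b) for b in seed_bits), 2)
        return s % (p - 1)

    # Example: seed_to_exponent([1, 0, 1, 1, 0, 1, 0, 0], 101) == 180 % 100 == 80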