CONCRETE ANALYSIS OF REGEV’S WORST-CASE/AVERAGE-CASE
REDUCTION
SANJIT CHATTERJEE, NEAL KOBLITZ, ALFRED MENEZES, AND PALASH SARKAR
1. Introduction
Let q = q(n) and m = m(n) be integers, and let α = α(n) ∈ (0, 1) be such that
αq > 2√n. Let χ be the probability distribution on Z_q obtained by sampling from a
Gaussian distribution with mean 0 and variance α²/2π, and then multiplying by q and
rounding to the closest integer modulo q. Then the (search version of the) LWE problem
is the following: Let s be a secret vector selected uniformly at random from Z_q^n. Given m
samples (a_i, a_i · s + e_i), where each a_i is selected independently and uniformly at random
from Z_q^n, and where each e_i is selected independently from Z_q according to χ, determine s.
The decisional version of LWE, called DLWE, asks us to determine whether we have been
given m LWE samples (a_i, a_i · s + e_i) or m random samples (a_i, u_i), where each u_i is
selected independently and uniformly at random from Z_q.
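To make the sampling procedure concrete, here is a minimal Python sketch of LWE sample generation; the function names and the toy parameters n = 16, q = 256, α = 0.05 (chosen so that αq > 2√n) are ours, not from the source.

```python
import math
import random

def chi(q, alpha):
    """Sample the LWE error: draw from a mean-0 Gaussian with variance
    alpha^2/(2*pi), multiply by q, and round to the nearest integer mod q."""
    sigma = alpha / math.sqrt(2 * math.pi)
    return round(q * random.gauss(0, sigma)) % q

def lwe_samples(n, q, alpha, m, s):
    """Return m LWE samples (a_i, a_i . s + e_i mod q) for a secret s in Z_q^n."""
    samples = []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]
        b = (sum(ai * si for ai, si in zip(a, s)) + chi(q, alpha)) % q
        samples.append((a, b))
    return samples

random.seed(1)
n, q, alpha = 16, 256, 0.05        # toy parameters; alpha*q = 12.8 > 2*sqrt(16) = 8
s = [random.randrange(q) for _ in range(n)]
samples = lwe_samples(n, q, alpha, 20, s)
```

A DLWE challenger would hand out either such samples or pairs (a, u) with u uniform in Z_q.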
Regev [2] proved that the existence of an efficient algorithm that solves DLWE in the
average case implies the existence of an efficient quantum algorithm that solves SIVPγ in
the worst case, where γ = Õ(n/α). In [1] we provided the following refinement of Regev’s
theorem:
Claim 1. Let q = n² and α = 1/(√n log² n), whence γ = Õ(n^{1.5}). Suppose that there is an
algorithm W that, given m = n^c samples, solves DLWE for a fraction 1/n^{d1} of all s ∈ Z_q^n
with advantage at least 1/n^{d2}. Then there is a polynomial-time algorithm W′ for solving
SIVPγ that calls the W oracle a total of O(n^{11+c+d1+2d2}) times.
In the remainder of this note, we provide some justification of the claim.
2. Gaussian distributions
Recall that the Gaussian distribution with mean 0 and variance σ² is the distribution on
R given by the probability density function

(1/(√(2π)·σ)) · exp(−x²/(2σ²)).

For x ∈ R^n and s > 0, define the Gaussian function scaled by s:

ρ_s(x) = exp(−π‖x‖²/s²).
Date: April 24, 2016.
The Gaussian distribution D_s of parameter s over R^n is given by the probability density
function

D_s(x) = ρ_s(x)/s^n.

Note that D_s is indeed a probability distribution since ∫_{x∈R^n} ρ_s(x) dx = s^n.
If L is a lattice, we can define

ρ_s(L) = Σ_{x∈L} ρ_s(x).

Then the discrete Gaussian probability distribution D_{L,s} of width s for x ∈ L is

D_{L,s}(x) = ρ_s(x)/ρ_s(L).
Let L be a (full-rank integer) lattice of dimension n.
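For intuition, D_{L,s} can be tabulated numerically for the one-dimensional lattice L = Z by truncating the sum defining ρ_s(L); a small sketch (the truncation bound is our choice, harmless here since ρ_s decays very quickly):

```python
import math

def rho(x, s):
    """Gaussian function scaled by s: rho_s(x) = exp(-pi * ||x||^2 / s^2)."""
    return math.exp(-math.pi * x * x / (s * s))

def discrete_gaussian_on_Z(s, bound):
    """Tabulate D_{Z,s}(x) = rho_s(x) / rho_s(Z) for |x| <= bound,
    approximating rho_s(Z) by the truncated sum over the same range."""
    support = range(-bound, bound + 1)
    total = sum(rho(x, s) for x in support)
    return {x: rho(x, s) / total for x in support}

D = discrete_gaussian_on_Z(s=3.0, bound=50)
```

As expected for a centered Gaussian, the resulting distribution is symmetric and peaks at the lattice point 0.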
3. Concrete analysis
In this note, the tightness gap of a reduction algorithm from problem A to problem B is
the number of calls to the oracle for B that are made by the reduction algorithm.
Regev’s worst-case/average-case reduction has two main components:
(1) The reduction of (search-)LWE to average-case DLWE (denoted DLWEac ).
(2) The reduction of worst-case SIVPγ to LWE.
3.1. Reduction of LWE to DLWEac . This reduction has three parts.
3.1.1. Worst-case to average-case. DLWEwc denotes the worst-case DLWE problem. Lemma 4.1
in [2] shows that an algorithm W1 that solves DLWEac for a fraction 1/n^{d1} of all s ∈ Z_q^n
with acceptance probabilities differing by at least 1/n^{d2} can be used to construct an algorithm
W2 that solves DLWEwc with probability essentially 1 for all s ∈ Z_q^n. The algorithm W2
invokes W1 a total of O(n^{d1+2d2+2}) times.
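The random self-reduction behind Lemma 4.1 rests on the fact that shifting the secret by a known vector t transforms LWE samples for s into LWE samples for s + t without touching the error terms, so an average-case solver can be tried on many independent shifts. A sketch of the transformation (the zero-error toy samples are ours, used only to make the identity exact):

```python
import random

def shift_secret(samples, t, q):
    """Map LWE samples for secret s to LWE samples for secret s + t:
    (a, b) -> (a, b + a.t mod q); the error term is unchanged."""
    return [(a, (b + sum(ai * ti for ai, ti in zip(a, t))) % q)
            for a, b in samples]

random.seed(2)
n, q = 8, 64
s = [random.randrange(q) for _ in range(n)]
t = [random.randrange(q) for _ in range(n)]
samples = []
for _ in range(10):
    a = [random.randrange(q) for _ in range(n)]
    samples.append((a, sum(ai * si for ai, si in zip(a, s)) % q))  # zero error
shifted = shift_secret(samples, t, q)
s_plus_t = [(si + ti) % q for si, ti in zip(s, t)]
```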
3.1.2. Search to decision. Lemma 4.2 in [2] shows that an algorithm W2 which solves
DLWEwc for all s ∈ Z_q^n with probability essentially 1 can be used to construct an algorithm
W3 that solves (search-)LWE for all s ∈ Z_q^n with probability essentially 1. Algorithm
W3 invokes W2 a total of nq times, so this reduction has a tightness gap of nq.
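The nq oracle calls arise from testing, for each of the n coordinates of s, all q candidate values: to test whether s_i = k, one adds a random multiple l to a_i and l·k to b, which yields valid LWE samples exactly when k = s_i (and, for prime q, uniform samples otherwise). A toy simulation; the statistical stand-in for the DLWE oracle, and all parameters, are our own illustrative choices:

```python
import random

def recover_secret(n, q, samples, dlwe_oracle):
    """Search-to-decision: n coordinates times q guesses = n*q oracle calls."""
    s_guess = []
    for i in range(n):
        for k in range(q):
            transformed = []
            for a, b in samples:
                l = random.randrange(q)
                a2 = list(a)
                a2[i] = (a2[i] + l) % q                    # a -> a + l*e_i
                transformed.append((a2, (b + l * k) % q))  # b -> b + l*k
            if dlwe_oracle(transformed):   # "looks like LWE" iff k == s_i
                s_guess.append(k)
                break
    return s_guess

random.seed(3)
n, q = 4, 17                               # toy sizes; q prime
s = [random.randrange(q) for _ in range(n)]
samples = []
for _ in range(30):
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])          # tiny error
    samples.append((a, (sum(x * y for x, y in zip(a, s)) + e) % q))

def dlwe_oracle(samps):
    """Stand-in for the assumed DLWE solver: it peeks at s and declares
    'LWE' when the centered residuals b - a.s are concentrated near 0."""
    r = [min(v, q - v)
         for a, b in samps
         for v in [(b - sum(x * y for x, y in zip(a, s))) % q]]
    return sum(r) / len(r) < q / 8

recovered = recover_secret(n, q, samples, dlwe_oracle)
```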
3.1.3. Continuous to discrete. Lemma 4.3 in [2] shows that an algorithm W3 that solves
LWE can be used to construct an algorithm W4 that solves LWE_{q,Ψα}. (See [2] for the
definition of the LWE_{q,Ψα} problem.) This reduction is tight.
3.2. Reduction of SIVPγ to LWE. This reduction has tightness gap 6n^{6+c}. The reduction has two parts.
3.2.1. DGS to LWE. Let ε = ε(n) be some negligible function of n. Theorem 3.1 of [2]
shows that an algorithm W4 that solves LWE_{q,Ψα} given m samples can be used to construct
a quantum algorithm W9 for DGS_{√(2n)·η_ε(L)/α}. Here, η_ε(L) is the “smoothing parameter with
accuracy ε”, and DGS_{r0} (discrete Gaussian sampling problem) is the problem of computing
a sample from the discrete Gaussian probability distribution D_{L,r0}, where r0 ≥ √(2n)·η_ε(L)/α.

Let r = √(2n)·η_ε(L)/α. Let r_i = r·(αq/√n)^i for i ∈ [0, 3n]. Algorithm W9 begins by
producing n^c samples from D_{L,r_{3n}} (Lemma 3.2 in [2]); the W4 oracle is not used in this step.
Next, it uses the ‘iterative step’ to use the n^c samples from D_{L,r_i} to produce n^c samples
from D_{L,r_{i−1}}, for i = 3n, 3n − 1, . . . , 1. Since r_0 = r, the last step produces the desired
sample from D_{L,r}.
The iterative step (Lemma 3.3 in [2]) uses n^c samples from D_{L,r_i} to produce one sample
from D_{L,r_{i−1}}; this step is then repeated to produce n^c samples from D_{L,r_{i−1}}. Thus, the
iterative step is executed a total of 3n · n^c = 3n^{1+c} times.
Each iterative step has two parts.
(1) The first part invokes W4 a total of n² times:
• Lemma 3.7 in [2] uses W4 to construct an algorithm W5 that solves LWE_{q,Ψβ};
W5 invokes W4 n times.
• Lemma 3.11 in [2] uses W5 and the n^c samples from D_{L,r_i} to construct an
algorithm W6 that solves the CVP^{(q)}_{L*,αq/(√2·r_i)} problem. The reduction is tight.
• Lemma 3.5 in [2] uses W6 to construct an algorithm W7 that solves the CVP_{L*,αq/(√2·r_i)}
problem. Algorithm W7 invokes W6 n times.
(2) The second part (Lemma 3.14 in [2]) uses W7 to construct a quantum algorithm W8
that produces a sample from D_{L,r_{i−1}}. This reduction is tight.
Since each iterative step has tightness gap n², the total tightness gap for the reduction
of DGS to LWE is 3n^{3+c}.
3.2.2. SIVPγ to DGS. Lemma 3.17 in [2] uses W9 to construct an algorithm W10 that solves
SIVP_{2√(2n)·η_ε(L)/α}. Algorithm W10 invokes W9 2n³ times.

Lemma 2.12 in [2] states that η_ε(L) ≤ √(ω(log n))·λ_n(L) for some negligible function ε(n).
Thus

γ = 2√(2n)·η_ε(L)/(α·λ_n(L)) = 2√(2n)·√(ω(log n))/α = Õ(√n/α) = Õ(n^{1.5}).
3.3. Summary. Regev’s reduction of SIVPγ to DLWEac has tightness gap

n^{d1+2d2+2} · nq · 3n^{3+c} · 2n^3 = 6n^{11+c+d1+2d2}.
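The exponent bookkeeping in this product can be checked mechanically for concrete parameters; a small sketch (the toy parameter values are ours):

```python
def tightness_gap(n, c, d1, d2):
    """Multiply the per-stage tightness gaps of the reduction, with q = n^2."""
    q = n ** 2
    wc_to_ac = n ** (d1 + 2 * d2 + 2)   # Lemma 4.1: DLWEac -> DLWEwc
    search_to_dec = n * q               # Lemma 4.2: DLWEwc -> LWE
    dgs_to_lwe = 3 * n ** (3 + c)       # Section 3.2.1: DGS -> LWE
    sivp_to_dgs = 2 * n ** 3            # Lemma 3.17: SIVP -> DGS
    return wc_to_ac * search_to_dec * dgs_to_lwe * sivp_to_dgs

all_match = all(
    tightness_gap(n, c, d1, d2) == 6 * n ** (11 + c + d1 + 2 * d2)
    for n in (2, 3, 5) for c in (0, 1) for d1 in (0, 2) for d2 in (0, 1))
```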
References
[1] S. Chatterjee, N. Koblitz, A. Menezes and P. Sarkar, Another look at tightness II: practical issues in
cryptography, Cryptology ePrint Archive, Report 2016/360, available at http://eprint.iacr.org/2016/360.
[2] O. Regev, On lattices, learning with errors, random linear codes, and cryptography, Journal of the ACM,
56 (2009), pp. 34:1–34:40.