Simple Constructions of Almost k-wise Independent Random Variables

Noga Alon*    Oded Goldreich†    Johan Håstad‡    René Peralta§

August 6, 1990
Abstract

We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is O(log log n + k + log(1/ε)), where ε is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our constant is better as long as ε < 2^k/n). An additional advantage of our constructions is their simplicity. Two of our constructions are based on bit sequences which are widely believed to possess "randomness properties", and our results can be viewed as an explanation and establishment of these beliefs.

* Sackler Faculty of Exact Sciences, Tel Aviv University, Israel, and IBM Almaden Research Center, San Jose, CA 95120.
† Computer Science Dept., Technion, Haifa, Israel. Supported by grant No. 86-00301 from the United States–Israel Binational Science Foundation (BSF), Jerusalem, Israel.
‡ Royal Institute of Technology, Stockholm, Sweden.
§ Dept. of Electrical Engineering and Computer Science, University of Wisconsin, Milwaukee. Supported by NSF grant No. CCR-8909657.

CH2925-6/90/0000/0544 $01.00 © 1990 IEEE

1 Introduction

In recent years, randomization has played a central role in the development of efficient algorithms. Notable examples are the massive use of randomness in computational number theory (e.g., primality testing [21, 23, 14, 1]) and in parallel algorithms (e.g. [16, 19]).

A randomized algorithm can be viewed as a two-stage procedure in which first a "sample point" is chosen at random and next a deterministic procedure is applied to the sample point. In the generic case the sample point is an arbitrary string of specific length (say n), the sample space consists of the set of all 2^n strings, and "choosing a sample at random" amounts to taking the outcome of n consecutive unbiased coin tosses. However, as observed by Luby [16], in many cases the algorithm "behaves as well" when the sample is chosen from a much smaller sample space. If points in the smaller sample space can be compactly represented and generated (i.e. reconstructed to their full length from the compact representation) then this yields a saving in the number of coin tosses required for the procedure. In some cases the required number of coin tosses gets so small that one can deterministically scan all possible outcomes (e.g. [15]).

To summarize, the construction of small sample spaces which have some randomness
properties is of major theoretical and practical importance. A typical property is that the probability distribution, induced on every k bit locations in a string randomly selected in the sample space, should be uniform. Such a sample space is called k-wise independent.

Alon, Babai and Itai [3] presented an efficient construction of k-wise independent sample spaces of size n^{k/2}, where n is (as above) the length of the strings in the sample space. This result is the best possible, in view of the matching lower bound of Chor et al. [9]. Hence, k-wise independent sample spaces of size polynomial in n are only possible for constant k. This fact led Naor and Naor to introduce the notion of almost k-wise independent sample spaces. Loosely speaking, the probability distribution induced on every k bit locations in the sample string is "statistically close" to uniform. Clearly, if an algorithm "behaves well" on points chosen from a k-wise independent sample space then it will "behave essentially as well" on points chosen from an almost k-wise independent sample space.
Naor and Naor presented an efficient construction of an almost k-wise independent sample space [20]. Points in their sample space are specified by O(log log n + k + log(1/ε)) bits, where ε is a bound on the statistical difference between the distribution induced on k bit locations and the uniform one. The heart of their construction is a sample space of size (n/ε)^c for which the exclusive-or of any fixed bit locations, in the sample point, induces a 0-1 random variable with bias bounded by ε (i.e. the exclusive-or of these bits is 1 with probability (1/2)·(1 ± ε)). The constant c in the exponent depends, among other things, on the constants involved in an explicit construction of an expander (namely the degree and second eigenvalue of the expander). Using the best known expanders [17], this constant is a little bit bigger than 4.

We present three alternative constructions of sample spaces of size (n/ε)^2 for which the exclusive-or of any fixed bit locations, in the sample point, induces a 0-1 random variable with bias bounded by ε. Another construction with similar parameters can be given by applying the known properties of the duals of the BCH codes (see [18], page 280). Our three constructions are so simple that they can be described in the three corresponding paragraphs below:
1. A point in the first sample space is specified by two bit strings of length m ≝ log(n/ε) each, denoted f = f_0 ... f_{m−1} and s = s_0 ... s_{m−1}, where f_0 = 1 and t^m + Σ_{j=0}^{m−1} f_j t^j is an irreducible polynomial. The n-bit sample string, denoted r_0 ... r_{n−1}, is determined by r_i = s_i for i < m and r_i = Σ_{j=0}^{m−1} f_j · r_{i−m+j} for i ≥ m.
2. A point in the second sample space is specified by a residue x modulo a fixed prime p ≥ (n/ε)^2. The n-bit sample string, denoted r_0 ... r_{n−1}, is determined by r_i = 0 if x + i is a quadratic residue modulo p, and r_i = 1 otherwise.
3. A point in the third sample space is specified by two bit strings of length m ≝ log(n/ε) each, denoted x and y. The n-bit sample string, denoted r_0 ... r_{n−1}, is determined by letting r_i equal the inner-product-mod-2 of the binary vectors x^i and y, where x^i is the ith power of x when viewed as an element of GF(2^m).
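For concreteness, the seed lengths and sample-space sizes implied by the three constructions can be worked out for sample values of n and ε. The following small Python script is our own illustration (the function name and chosen values are ours, not from the paper):

```python
import math

def parameters(n, eps):
    # m = ceil(log2(n/eps)): length of each of the two seed strings in
    # constructions 1 and 3; construction 2 uses a prime p >= (n/eps)^2.
    m = math.ceil(math.log2(n / eps))
    return {
        "m": m,
        "seed_bits": 2 * m,           # two m-bit strings specify a point
        "sample_space_size": 4 ** m,  # about (n/eps)^2 points
    }

# e.g. a million-bit string with bias bound eps = 2^-10
p = parameters(n=1_000_000, eps=2 ** -10)
```

So roughly 60 random bits suffice here, versus the 1,000,000 bits of a truly uniform sample.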
The first construction may be viewed as an explanation for the popularity of using linear feedback shift registers for sampling purposes. We show that when both the feedback rule and the starting sequence are selected at random, the resulting feedback sequence enjoys "almost independence" comparable to the length of the feedback rule (i.e., the sequence is "almost" O(m)-wise independent, where m is the length of the feedback rule). Similarly, an explanation is provided for the "random structure" of quadratic characters: a random subsequence of quadratic characters (mod p) enjoys "almost independence" comparable to the logarithm of the prime modulus (i.e. the sequence χ_p(x+i_0), ..., χ_p(x+i_{n−1}) is "almost" O(log p)-wise independent when x is randomly selected).
2 Preliminaries

We will consider probability distributions on binary strings of length n. In particular, we will construct probability distributions which are uniform over some set S ⊆ {0,1}^n, called the sample space. The parameter that will be of interest to us is the "size of the probability space"; namely, the number of strings in the support (i.e. |S|). The aim is to construct "small" probability spaces which have "good" randomness properties. In particular we will be interested in k-wise independence.

Convention: A sample space which is contained in {0,1}^n will usually be sub-indexed by n. The super-index will usually represent an upper bound on the logarithm (to base 2) of the cardinality of the sample space. Hence, S_n^m denotes a sample space of ≤ 2^m strings, each of length n.

2.1 Almost k-wise Independence

Definition 1 (k-wise independence): A sample space S_n is k-wise independent if, when X = x_1 ... x_n is chosen uniformly from S_n, then for any k positions i_1 < i_2 < ... < i_k and any k-bit string α, we have

Pr[x_{i_1} x_{i_2} ... x_{i_k} = α] = 2^{−k}.

For all practical purposes it is sufficient that a bit sequence is "almost" k-wise independent. There are several standard ways of quantifying this condition (i.e. interpreting the phrase "almost"): cf. [8]. We use two very natural ways corresponding to the L_∞ and L_1 norms:

Definition 2 (almost k-wise independence): Let S_n be a sample space and X = x_1 ... x_n be chosen uniformly from S_n.

• (max-norm): S_n is (ε, k)-independent (in max norm) if for any k positions i_1 < i_2 < ... < i_k and any k-bit string α, we have

|Pr[x_{i_1} x_{i_2} ... x_{i_k} = α] − 2^{−k}| ≤ ε.

• (statistical closeness): S_n is ε-away (in L_1 norm) from k-wise independence if for any k positions i_1 < i_2 < ... < i_k we have

Σ_{α∈{0,1}^k} |Pr[x_{i_1} x_{i_2} ... x_{i_k} = α] − 2^{−k}| ≤ ε.

Clearly, if S_n is (ε, k)-independent (in max norm) then it is at most 2^k·ε-away (in L_1 norm) from k-wise independence, whereas if S_n is ε-away (in L_1 norm) from k-wise independence then it is (ε, k)-independent (in max norm). The first relation seems more typical.

2.2 Linear Tests

The heart of each of our constructions is a sample space which is very close to random with respect to "linear Boolean tests" (i.e., tests which take the exclusive-or of the bits in some fixed locations in the string). Following Naor and Naor [20], these sample spaces can be used in various ways to achieve almost k-wise independence.
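For small sample spaces, the two measures of Definition 2 can be computed directly by enumeration. The following sketch is our own illustration (function names are ours); a sample space is given as a list of equally likely 0/1 tuples:

```python
from itertools import product

def max_norm_eps(space, positions):
    # max over all k-bit strings alpha of |Pr[X_{i_1}..X_{i_k} = alpha] - 2^-k|
    k = len(positions)
    worst = 0.0
    for alpha in product((0, 1), repeat=k):
        pr = sum(1 for x in space
                 if all(x[i] == a for i, a in zip(positions, alpha))) / len(space)
        worst = max(worst, abs(pr - 2 ** -k))
    return worst

def l1_distance(space, positions):
    # sum over alpha of |Pr[X_{i_1}..X_{i_k} = alpha] - 2^-k|
    k = len(positions)
    return sum(
        abs(sum(1 for x in space
                if all(x[i] == a for i, a in zip(positions, alpha))) / len(space)
            - 2 ** -k)
        for alpha in product((0, 1), repeat=k))
```

For the uniform distribution over {0,1}^n both quantities are 0 at every set of positions, matching the definitions.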
Definition 3:

• Let (α, β)_2 denote the inner-product mod 2 of the binary vectors α and β (i.e. (α_1 ... α_n, β_1 ... β_n)_2 = Σ_{i=1}^{n} α_i·β_i mod 2).

• A 0-1 random variable X is called ε-biased if |Pr[X = 0] − Pr[X = 1]| ≤ ε.

• Let S_n be a sample space and X = x_1 ... x_n be chosen uniformly from S_n. The sample space S_n is said to be ε-biased with respect to linear tests if for every α = α_1 ... α_n ∈ {0,1}^n − {0^n}, the random variable (α, X)_2 is ε-biased.

Clearly, the uniform distribution over n-bit strings is unbiased (0-biased) with respect to all linear tests. A linear test can be interpreted as trying to refute the randomness of a probability space by taking a fixed linear combination of the bits in the sample.

The following lemma, attributed to Vazirani [24] (see also [25], [9]), links the ability to pass linear tests with almost independence. A proof can be easily derived using the analysis in [9].

Lemma 1 (Vazirani): Let S_n ⊆ {0,1}^n be a sample space that is ε-biased with respect to linear tests. Then, for every k, the sample space S_n is (ε, k)-independent (in max norm), and 2^k·ε-away (in L_1 norm) from k-wise independence.

A more dramatic application of ε-biased (w.r.t. linear tests) sample spaces was suggested by Naor and Naor [20]: it combines the use of a sample space which is ε-biased w.r.t. linear tests with a "linear" k-wise independent sample space. A sample space is called linear if its elements are obtained by a linear transformation of their succinct representation (equivalently, the sample space is a linear subspace). For example, the construction of a k-wise independent sample space presented by Alon, Babai and Itai [3] is linear. Naor and Naor observed that a sample space which is almost unbiased with respect to linear Boolean tests can be used to sample succinct representations of points in the linear k-wise independent space. The sample obtained can be shown to be ε-biased w.r.t. linear tests. Using Lemma 1 we get:

Lemma 2 (Naor and Naor): Let S_n^m ⊆ {0,1}^n be a sample space of cardinality 2^m that is ε-biased with respect to linear tests. Let k be an integer and L ⊆ {0,1}^N be a k-wise independent linear sample space of cardinality 2^n. Suppose L is defined by the linear map T. Then, the sample space R_N^m constructed by applying the linear map T to each sample point in S_n^m is ε-biased with respect to linear tests. Hence R_N^m is (ε, k)-independent (in max norm), and 2^k·ε-away (in L_1 norm) from k-wise independence.

Corollary 1 (Naor and Naor): Let k < n be an integer and N ≤ 2^{2n/k} − 1. Given a sample space, S_n^m, as in Lemma 2, one can construct a sample space R_N^m ⊆ {0,1}^N of cardinality 2^m such that R_N^m is (ε, k)-independent (in max norm), and 2^k·ε-away (in L_1 norm) from k-wise independence.

In view of Corollary 1, the rest of the paper deals merely with the construction of small sample spaces which have small bias with respect to linear tests.

3 The LFSR Construction

Our first construction is based on linear feedback shift register (LFSR) sequences.

Definition 4 (linear feedback shift register sequences): Let s = s_0, s_1, ..., s_{m−1} and f = f_0, f_1, ..., f_{m−1} be two sequences of m bits each. The shift register sequence generated by the feedback rule f and the start sequence s is r_0, r_1, ..., r_{n−1}, where r_i = s_i for i < m and

r_i = Σ_{j=0}^{m−1} f_j · r_{i−m+j}   for i ≥ m.
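Definition 4 translates directly into code. A minimal sketch (ours, in Python), with all arithmetic in GF(2):

```python
def shift_register_sequence(f, s, n):
    # f, s: lists of m bits; f is the feedback rule, s the start sequence.
    # Returns r_0 .. r_{n-1} with r_i = s_i for i < m and
    # r_i = XOR_{j=0}^{m-1} f_j * r_{i-m+j} for i >= m.
    m = len(f)
    r = list(s)
    for i in range(m, n):
        r.append(sum(f[j] & r[i - m + j] for j in range(m)) % 2)
    return r
```

For example, with the feedback rule f = (1, 1), i.e. f(t) = t^2 + t + 1 (irreducible over GF(2)), and start sequence (0, 1), the generated sequence is 0, 1, 1, 0, 1, 1, ... with period 3.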
Our sample space will consist of all shift register sequences generated by "non-degenerate" feedback rules and any starting sequence.

Construction 1 (Sample Space A_n^{2m}): The sample space A_n^{2m} is the set of all shift register sequences generated by a feedback rule f = f_0 f_1 ... f_{m−1} with f_0 = 1 and f(t) = t^m + Σ_{j=0}^{m−1} f_j t^j being an irreducible polynomial (such a feedback rule is called non-degenerate). Namely, A_n^{2m} contains all sequences r = r_0 r_1 ... r_{n−1} such that there exists a non-degenerate feedback rule f and a start sequence s generating r.

Hence, the size of the sample space A_n^{2m} is at most 2^{2m} (actually, it is ≈ 2^{2m}/m). In view of Corollary 1 we now confine ourselves to evaluating the bias of this sample space with respect to linear Boolean tests.

Proposition 1: The sample space A_n^{2m} is ((n−1)/2^m)-biased with respect to linear tests. Namely, for any nonzero α the random variable (α, r)_2 is (n−1)·2^{−m}-biased when r is selected uniformly in A_n^{2m}.

Proof: For the rest of this section we consider only polynomials over GF(2). The number of irreducible monic polynomials of degree m is (1/m)·Σ_{d|m} μ(d)·2^{m/d}. This expression is well approximated by 2^m/m. For the rest of this section we will, for notational simplicity, treat the number of irreducible monic polynomials of degree m as if it is exactly 2^m/m. (The tiny error introduced by the approximation is negligible anyhow.) Hence, the size of A_n^{2m} is 2^{2m}/m.

Fix the feedback rule (i.e. f) and consider the distribution of (α, r)_2 when we only vary the starting vector (i.e. s). A key observation is that the r_i's are a linear combination of the s_j's (which are the only indeterminates, as the f_i's were fixed). It is useful (and standard practice) to notice that in GF(2), the reduction of t^i modulo f(t) (= t^m + Σ_{j=0}^{m−1} f_j t^j) is a linear combination of t^0, t^1, ..., t^{m−1}, and that this linear combination is identical to the coefficients in the expression of r_i as a linear combination of the s_j's. Hence, a linear combination of the r_i's (which is exactly what (α, r)_2 is) corresponds to a linear combination of the corresponding powers of t. This linear combination can be either identically zero or not. The first case means that the polynomial f(t) divides the polynomial g(t) ≝ Σ_{i=0}^{n−1} α_i t^i; whereas in the second case (α, r)_2, being a non-constant combination of the s_i's, is unbiased when the s_i's are uniformly selected. Hence the bias of (α, r)_2, when r is uniformly selected in A_n^{2m}, equals the probability that the polynomial f(t) divides the polynomial g(t). The latter probability is bounded by the fraction of irreducible monic polynomials of degree m which divide a specific polynomial of degree n−1. There are at most (n−1)/m irreducible monic polynomials of degree m which divide a polynomial of degree n−1. Dividing by the number of irreducible monic polynomials of degree m (i.e. 2^m/m), the proposition follows. ∎

4 The Quadratic Character Construction

Our second construction is based on Weil's Theorem regarding character sums (cf. [22, p. 43, Thm. 2C]). A special case of this theorem is stated below.

Definition 5 (Quadratic Character): Let p be an odd prime and x be an integer relatively prime to p. The quadratic character of x mod p, denoted χ_p(x), is 1 if x is a quadratic residue modulo p and −1 otherwise. For x a multiple of p we define χ_p(x) = 0.
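The quadratic character can be computed by Euler's criterion: x^((p−1)/2) mod p is 1 for residues and p−1 for non-residues. A small sketch (ours) of Definition 5 together with the bit rule of the second construction (r_i = 0 iff x+i is a residue):

```python
def chi(p, x):
    # quadratic character chi_p(x) via Euler's criterion
    x %= p
    if x == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

def quadratic_sample_string(p, x, n):
    # the x-th string of the quadratic character construction:
    # r_i = (1 - chi_p(x + i)) / 2, so residues map to 0 and non-residues to 1
    return [(1 - chi(p, x + i)) // 2 for i in range(n)]
```

For p = 7 the residues are {1, 2, 4}, so the string starting at x = 1 begins 0, 0, 1 (for x+i = 1, 2, 3).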
Theorem 1 (Weil): Let p be an odd prime. Let f(t) be a polynomial over GF(p) having precisely n distinct zeros, which is not a perfect square. Then

|Σ_{x∈GF(p)} χ_p(f(x))| ≤ (n−1)·√p.

The ith bit in the xth sample string, in our sample space, will be χ_p(x+i). A translation from ±1 sequences to {0,1} sequences can be easily effected. Theorem 1 will be used to analyze the bias of this sample space with respect to linear tests.

Construction 2 (Sample Space B_n^{log p}): The sample space B_n^{log p} consists of p strings. The xth string, x = 0, 1, ..., p−1, is r(x) = r_0(x) r_1(x) ... r_{n−1}(x), where r_i(x) = (1 − χ_p(x+i))/2, for i = 0, 1, ..., n−1.

Hence, the size of the sample space B_n^{log p} is exactly p.

Proposition 2: The sample space B_n^{log p} is (n/√p)-biased with respect to linear tests. Namely, for any nonzero α the random variable (α, r)_2 is (n/√p)-biased when r is selected uniformly in B_n^{log p}.

Proof: The bias of (α, r)_2 equals the expectation of (−1)^{(α,r)_2} taken over all possible r's. Hence the bias is

(1/p) · |Σ_{x=0}^{p−1} (−1)^{Σ_i α_i·r_i(x)}|.

Using (−1)^{r_i(x)} = χ_p(x+i) and χ_p(x·y) = χ_p(x)·χ_p(y), we get the following expression for the bias:

(1/p) · |Σ_{x=0}^{p−1} χ_p(f(x))|,

where f(x) = Π_{i=0}^{n−1} (x+i)^{α_i}. Using Theorem 1, our proposition follows. ∎

5 The Powering Construction

Our third construction is based on arithmetic in finite fields. In particular, we will use GF(2^m) arithmetic and also consider the field elements as binary strings.

Construction 3 (Sample Space C_n^{2m}): Let bin : GF(2^m) → {0,1}^m be a one-to-one mapping satisfying bin(0) = 0^m and bin(u+v) = bin(u) ⊕ bin(v), where α ⊕ β means the bit-by-bit xor of the binary strings α and β. (The standard representation of GF(2^m) as a vector space satisfies the above conditions.) A string in the sample space C_n^{2m} is specified using two field elements, x ≠ 0 and y. The ith bit in this string is the inner-product of x^i and y. More precisely, the ith bit of the sample point is (bin(x^i), bin(y))_2.

Hence, the size of the sample space C_n^{2m} is at most 2^{2m} (actually, it is (2^m − 1)·2^m). We now evaluate the bias of this sample space with respect to linear Boolean tests.

Proposition 3: The sample space C_n^{2m} is ((n−1)/2^m)-biased with respect to linear tests. Namely, for any nonzero α the random variable (α, r)_2 is (n−1)·2^{−m}-biased when r is selected uniformly in C_n^{2m}.

Proof: Let r(x,y) = r_0(x,y) ... r_{n−1}(x,y) denote the sample point specified by the field elements x and y. Note that

Σ_{i=0}^{n−1} α_i · r_i(x,y) = Σ_{i=0}^{n−1} α_i · (bin(x^i), bin(y))_2,

which equals (bin(Σ_{i=0}^{n−1} α_i x^i), bin(y))_2. Let p_α(t) = Σ_{i=0}^{n−1} α_i t^i be a polynomial over GF(2). We are interested in the probability that (bin(p_α(x)), bin(y))_2 = 1, when x ∈ GF(2^m) − {0} and y ∈ GF(2^m) are chosen uniformly. As in the proof of Proposition 1, we analyze this probability by fixing x and going through all possible y's. There are two cases to consider. If x is not a zero of the polynomial p_α(t) then bin(p_α(x)) ≠ 0^m and (bin(p_α(x)), bin(y))_2 is unbiased when selecting y uniformly. If x is a zero of p_α(t) then (bin(p_α(x)), bin(y))_2 = 0 for all y's, but p_α(t) has at most n−1 zeros. Hence, the proposition follows. ∎
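Construction 3 can be sketched concretely for m = 8, representing GF(2^8) elements as 8-bit integers (so bin is the identity on bit strings) and reducing modulo the irreducible polynomial t^8 + t^4 + t^3 + t + 1. The choice of m and of this particular polynomial are our own illustrative assumptions:

```python
M = 8
MOD = 0x11B  # t^8 + t^4 + t^3 + t + 1, irreducible over GF(2)

def gf_mul(a, b):
    # carry-less (GF(2)) multiplication, reduced modulo MOD
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def inner_mod2(a, b):
    # inner product mod 2 of two bit vectors encoded as integers
    return bin(a & b).count("1") % 2

def powering_sample_string(x, y, n):
    # r_i = <bin(x^i), bin(y)>_2, with x^i computed in GF(2^m)
    r, xi = [], 1  # x^0 = 1
    for _ in range(n):
        r.append(inner_mod2(xi, y))
        xi = gf_mul(xi, x)
    return r
```

For instance, multiplying t (the element 2) by t^7 (0x80) reduces t^8 to t^4 + t^3 + t + 1, i.e. 0x1B.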
6 Concluding Remarks

All three constructions use ≤ (n/ε)^2 sample points to achieve ε-bias with respect to linear tests.¹ Using results from the theory of error-correcting codes, it can be shown that the size of a sample space achieving ε-bias (w.r.t. linear tests) is Ω(min{n/(ε^2·log(1/ε)), 2^n}) [4]. On the other hand, an easy probabilistic argument shows that there exist sample spaces with n/ε^2 strings (each of length n) achieving ε-bias with respect to linear tests.

¹ In fact the sample space of Construction 1 has size O((n/ε)^2 / log(n/ε)), and Construction 3 can be modified so that the sample space has size O((n/ε)^2 / log(n/ε)).

All three constructions admit fast translation of the succinct representation into the full-length sample point. In fact, given a succinct representation a and a bit location i, the ith bit of the sample determined by a can be computed in NC.

All three constructions can be generalized to d-ary strings, for any prime d. The generalized constructions have small bias with respect to linear tests (which compute a linear combination mod d of the d-ary values, considered as elements of Z_d). The first such generalization is due to Azar, Motwani and Naor [5] (extending the characters construction). The second such generalization is due to Guy Even [12] (extending the LFSR construction). Our third construction can be easily generalized as well. However, if one is interested in distributions over d-ary sequences which are statistically close to k-wise independent (d-ary) distributions, then these constructions don't offer any improvement in efficiency over the trivial binary construction (cf. [5], [12]).

An issue to be addressed is the "semi-explicit" presentation of all three constructions. To be fully specified, the first construction requires a list of irreducible polynomials of degree m over GF(2), the second construction requires a prime p (of size ≈ 2^{2m}), whereas the third construction assumes a representation of GF(2^m) (which amounts to an irreducible polynomial of degree m over GF(2)). The reader may wonder whether these requirements can be met in the applications (in which the sample spaces are used). In the rest of this section we answer this question in the affirmative.
In some applications we are allowed to use a preprocessing stage of complexity comparable to the size of the sample space. Two notable examples follow:

• The sample space S_n^s is used for deterministic simulation of a randomized algorithm. In such a case the overall complexity of the simulation is 2^s times the cost of one call to the randomized algorithm. Hence, adding a preprocessing stage of complexity 2^s does not increase the overall complexity.

• The sample space S_n^s contains strings of length comparable to 2^s (i.e. s = O(log n)). This is the case, for example, when m is selected such that the sample space is ε-away from log n-wise independence, for some fixed ε (or ε = n^{−O(1)}) (cf. [1, 15, 19, 7]).

In a preprocessing stage (of complexity 2^m), we may enumerate all monic polynomials of degree m and discard those which have non-trivial divisors. Similarly, to find a prime larger than M, we can test all integers in the interval [M, 2M] for primality.

In case such a preprocessing is too costly, we either omit it or replace it by a randomized preprocessing stage of complexity m^{O(1)}. In the 2nd and 3rd constructions all that is needed is one "element" (either a prime in [M, 2M] or an irreducible polynomial of degree m). Such an element can be found by sampling strings of length l (where l equals m or 1 + log M, respectively). In both cases the density of good elements is ≈ 1/l. A straightforward sample will require l^2 independently selected l-bit strings, meaning that we use l^3 coin flips in the precomputation (which dominates the O(l) coin flips used to select a sample point in the sample space). An alternative procedure is suggested below (for sake of clarity we consider the problem of finding an irreducible polynomial of degree m).
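Testing a single candidate is itself cheap: for the small degrees relevant to preprocessing one can simply trial-divide by every polynomial of degree up to m/2. A naive sketch over GF(2) (ours; polynomials are encoded as bit masks, so t^2 + t + 1 is 0b111):

```python
def poly_mod(a, b):
    # remainder of polynomial a modulo polynomial b over GF(2)
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible(f):
    # trial division by every polynomial of degree 1 .. deg(f)//2
    deg = f.bit_length() - 1
    for d in range(1, deg // 2 + 1):
        for g in range(1 << d, 1 << (d + 1)):  # all degree-d polynomials
            if poly_mod(f, g) == 0:
                return False
    return True
```

This exponential-in-m test is only meant to illustrate the check; for the asymptotic claims in the text a polynomial-time irreducibility test would be used instead.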
Construction 4 (sample space for irreducible polynomials):

• Use pairwise-independent sampling to specify m monic polynomials of degree m. With probability at least 1/2, at least one of these polynomials is irreducible. The pairwise-independent sampling requires 2m bits (cf. [10]). Call the resulting sample space P_m.

• Use an expander-path of length O(m) to specify O(m) points in the sample space P_m. With probability at least 1 − 2^{−m}, at least one of these points specifies a sequence of m polynomials containing at least one irreducible polynomial (cf. [2, 20]). This sampling requires O(m) bits. Call the resulting sample space E_m.

A sample point in E_m specifies O(m^2) polynomials, and with overwhelming probability at least one of them is irreducible. Say we output the first irreducible polynomial among these m^2 polynomials.
Construction 4 suffices as a randomized preprocessing for Constructions 2 and 3, and for a modification of Construction 1 (sketched below). However, for Construction 1 (as appearing in Section 3) we need the ability to select a random irreducible polynomial (and not merely to find and fix one). Construction 4 does get "close" to that goal: although the output does not specify a uniformly selected irreducible polynomial, it is easy to see that the probability that a particular polynomial appears in the output is bounded above by O(m^2/N), where N denotes the number of irreducible polynomials (and hence 1/N is the probability that a particular one is picked when we select with uniform probability). Thus, the probability that the polynomial selected by Construction 4 divides a fixed degree-n polynomial is bounded above by m^2 · (n/2^m). Hence, the implementation of Construction 1 in which the feedback rule is selected using Construction 4 yields a sample space of size 2^{O(m)} which is O(m^2·n/2^m)-biased with respect to linear tests.

Finally, we sketch a modification of Construction 1, suggested by Y. Azar. Fix a "non-degenerate" feedback rule f (i.e. an irreducible polynomial of degree m). The sample point specified by a start sequence s = s_0, s_1, ..., s_{m−1} and an integer k < 2^m (called the gap) is r_0, r_k, ..., r_{(n−1)k}, where r_i = s_i for i < m and r_i = Σ_{j=0}^{m−1} f_j · r_{i−m+j} for i ≥ m. It can be shown that for any fixed irreducible polynomial f(t) (of degree m) and any degree-n polynomial g(t), when k is uniformly chosen in {1, 2, ..., 2^m − 1}, the probability that f(t) divides g(t^k) is bounded above by n/ord(β), where β ∈ GF(2^m) is a root of f(t); for a primitive f(t) this is at most n/(2^m − 1). Using the argument of Proposition 1 it follows that the modified sample space (with a primitive feedback rule) is (n/(2^m − 1))-biased with respect to linear tests.
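Azar's gap modification only changes which bits of the shift register sequence are output. A sketch (ours), reusing the LFSR recurrence and then keeping every kth bit:

```python
def gapped_sequence(f, s, k, n):
    # run the LFSR up to position (n-1)*k, then output r_0, r_k, ..., r_{(n-1)k}
    m = len(f)
    length = (n - 1) * k + 1
    r = list(s)
    for i in range(m, length):
        r.append(sum(f[j] & r[i - m + j] for j in range(m)) % 2)
    return r[::k][:n]
```

With f = (1, 1), s = (0, 1) and gap k = 2, the full sequence 0, 1, 1, 0, 1 yields the sample point 0, 1, 1.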
Acknowledgments: Noga Alon wishes to thank Ronny Roth for stimulating conversations. Oded Goldreich wishes to thank Guy Even for collaboration in the early stages of this research. Johan Håstad wishes to thank Avi Wigderson for inviting him to Israel. René Peralta wishes to thank Eric Bach for referring him to Schmidt's book.

References

[1] L.M. Adleman and M-D.A. Huang, "Recognizing Primes in Random Polynomial Time", Proc. 19th STOC, 1987, pp. 462-470.

[2] M. Ajtai, J. Komlós, E. Szemerédi, "Deterministic Simulation in LOGSPACE", Proc. 19th STOC, 1987, pp. 132-140.

[3] N. Alon, L. Babai, and A. Itai, "A Fast and Simple Randomized Algorithm for the Maximal Independent Set Problem", J. of Algorithms, Vol. 7, 1986, pp. 567-583.

[4] N. Alon, J. Bruck, J. Naor, M. Naor and R. Roth, "Construction of Asymptotically Good Low-Rate Error-Correcting Codes through Pseudo-Random Graphs", in preparation.

[5] Y. Azar, R. Motwani and J. Naor, "Approximating Probability Distributions Using Small Sample Spaces", preprint, 1990.

[6] E. Bach, "Realistic Analysis of Some Randomized Algorithms", Proc. 19th STOC, 1987, pp. 453-461.

[7] M. Bellare, O. Goldreich, S. Goldwasser, and J. Rompel, "Randomness in Interactive Proofs", these proceedings.

[8] R. Ben-Natan, "On Dependent Random Variables Over Small Sample Spaces", M.Sc. Thesis, Computer Science Dept., Hebrew University, Jerusalem, Israel, Feb. 1990.

[9] B. Chor, J. Friedman, O. Goldreich, J. Håstad, S. Rudich, and R. Smolensky, "The Bit Extraction Problem and t-Resilient Functions", Proc. 26th FOCS, 1985, pp. 396-407.

[10] B. Chor and O. Goldreich, "On the Power of Two-Point Based Sampling", Jour. of Complexity, Vol. 5, 1989, pp. 96-106.

[11] A. Cohen and A. Wigderson, "Dispersers, Deterministic Amplification, and Weak Random Sources", Proc. 30th FOCS, 1989, pp. 14-19.

[12] G. Even, private communication, May 1990.

[13] O. Goldreich, R. Impagliazzo, L.A. Levin, R. Venkatesan, D. Zuckerman, "Security Preserving Amplification of Hardness", these proceedings.

[14] S. Goldwasser and J. Kilian, "Almost All Primes Can Be Quickly Certified", Proc. 18th STOC, 1986, pp. 316-329.

[15] R. Impagliazzo and D. Zuckerman, "How to Recycle Random Bits", Proc. 30th FOCS, 1989, pp. 248-253.

[16] M. Luby, "A Simple Parallel Algorithm for the Maximal Independent Set Problem", Proc. 17th STOC, 1985, pp. 1-10.

[17] A. Lubotzky, R. Phillips, P. Sarnak, "Explicit Expanders and the Ramanujan Conjectures", Proc. 18th STOC, 1986.

[18] F.J. MacWilliams and N.J.A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1977.

[19] K. Mulmuley, U.V. Vazirani and V.V. Vazirani, "Matching is as Easy as Matrix Inversion", Proc. 19th STOC, 1987, pp. 345-354.

[20] J. Naor and M. Naor, "Small-bias Probability Spaces: Efficient Constructions and Applications", Proc. 22nd STOC, 1990, pp. 213-223.

[21] M.O. Rabin, "Probabilistic Algorithms for Testing Primality", J. of Number Theory, Vol. 12, 1980, pp. 128-138.

[22] W.M. Schmidt, Equations over Finite Fields: An Elementary Approach, Lecture Notes in Mathematics, Vol. 536, Springer-Verlag, 1976.

[23] R. Solovay and V. Strassen, "A Fast Monte-Carlo Test for Primality", SIAM J. on Computing, Vol. 6, 1977, pp. 84-85.

[24] U.V. Vazirani, "Randomness, Adversaries and Computation", Ph.D. Thesis, EECS, UC Berkeley, 1986.

[25] U.V. Vazirani and V.V. Vazirani, "Efficient and Secure Pseudo-Random Number Generation", Proc. 25th FOCS, 1984, pp. 458-463.