Study started on 17 October 2011
Primality testing algorithms, primality determination (MSC 11Y11): primality testing
Bachelor semester project Randomized and Deterministic Primality Testing.pdf
1. Quotes from Renowned Mathematicians
2. Keyword
3. Introduction
   3.1. The Aim of Thesis
   3.2. The Scope of Thesis
   3.3. Research Hierarchical Diagram
4. Related Math Branches or Sub-Disciplines
5. Presentation Styles
6. Structure of Writing
7. Styles
8. Definitions, Terminology, Signs and Symbols
   8.1. Alphabets and Signs
      8.1.1. Latin
      8.1.2. Greek
      8.1.3. Hebrew
      8.1.4. Russian
   8.2. Nomenclature
   8.3. Notation
   8.4. Definitions
   8.5. List of theorems, formulas and equations
   8.6. List of abbreviations
   8.7. List of people mentioned in the book
   8.8. Index
   8.9. Explanations
   8.10. Fonts
   8.11. List of symbols
   8.12. Sub- and superscripts
   8.13. Marks
   8.14. Prefixes
   8.15. Units
   8.16. Units designation
   8.17. Constants
9. Comparisons of Algorithms
10. Intro
   10.1. TOC for Introduction to Algorithms
11. Recent developments in primality testing
   11.1. Recent developments in primality testing
12. Comparison / Benchmarking for Primality Testing
   12.1. Comparison study for Primality testing using Mathematica
13. Deterministic Primality Testing
   13.1. AKS primality test
   13.2. APR - Adleman–Pomerance–Rumely primality test
   13.3. Atkin sieve
      13.3.1. Contents
      13.3.2. Algorithm
      13.3.3. Pseudocode
      13.3.4. Explanation
      13.3.5. Computational complexity
      13.3.6. See also
      13.3.7. References
   13.4. Bhattacharjee and Pandey
   13.5. Brillhart, Lehmer, Selfridge Test based on Lucas Test
   13.6. Cyclotomic Deterministic Primality Test (Cyclotomy)
   13.7. Demytko deterministic primality test method
   13.8. Elliptic curve methods
   13.9. Eratosthenes Sieve
      13.9.1. Contents
      13.9.2. Algorithm description
      13.9.3. Example
      13.9.4. Algorithm complexity
      13.9.5. Implementation
      13.9.6. Arithmetic progressions
      13.9.7. Euler's sieve
      13.9.8. See also
      13.9.9. References
   13.10. Goldwasser–Kilian Algorithm
   13.11. Jacobi Sums
   13.12. Lucas primality test
      13.12.1. Contents
      13.12.2. Concepts
      13.12.3. Example
      13.12.4. Algorithm
      13.12.5. See also
   13.13. Lucas–Lehmer test
      13.13.1. Mersenne prime testing: the Lucas–Lehmer test
      13.13.2. Contents
      13.13.3. The test
      13.13.4. Time complexity
      13.13.5. Examples
      13.13.6. Proof of correctness
      13.13.7. Applications
      13.13.8. See also
   13.14. Lucas–Lehmer–Riesel
      13.14.1. Contents
      13.14.2. The algorithm
      13.14.3. Finding the starting value
      13.14.4. How does the test work?
      13.14.5. LLR software
      13.14.6. References
      13.14.7. External links
      13.14.8. Generalized Lucas-type primality test
      13.14.9. Massey–Omura cryptosystem
   13.15. Miller primality test
   13.16. Pocklington–Lehmer primality test
      13.16.1. Contents
      13.16.2. Pocklington criterion
      13.16.3. Generalized Pocklington method
      13.16.4. The test
      13.16.5. Example
   13.17. Proth deterministic primality test method
   13.18. Sundaram sieve
      13.18.1. Contents
      13.18.2. Algorithm
      13.18.3. Correctness
      13.18.4. Computational complexity
      13.18.5. See also
      13.18.6. References
   13.19. Trial division
   13.20. Ward's primality test
   13.21. Wilson's Primality Test
14. Randomized / Probabilistic / Probable / Provable Primality Testing
   14.1. Adleman–Huang algorithm
   14.2. Agrawal–Biswas algorithm (Agrawal–Biswas Probabilistic Testing)
   14.3. AKS parallel sorting algorithm of Ajtai, Komlós and Szemerédi
   14.4. ALI primality test
   14.5. APR Test
   14.6. APRT-CL (or APRCL)
   14.7. The ARCL algorithm for primality testing
   14.8. Baillie–PSW
   14.9. BPP algorithm
   14.10. Baillie and Wagstaff Method
   14.11. Chen–Kao and Lewin–Vadhan tests
   14.12. Chinese Primality Test
   14.13. Chinese Remaindering
   14.14. Cohen–Lenstra Method
   14.15. Colin Plumb primality test (Euler Criterion)
   14.16. Combination Algorithm
   14.17. Cyclotomic Probabilistic Primality Test
   14.18. ECPP Elliptic Curve Primality Proving
   14.19. Elliptic Curve Primality Testing
      14.19.1. Proposition
      14.19.2. Proof
      14.19.3. Goldwasser–Kilian algorithm
      14.19.4. Problems with the algorithm
      14.19.5. Atkin–Morain elliptic curve primality test (ECPP)
      14.19.6. The test
      14.19.7. Complex multiplication method
      14.19.8. Discussion
      14.19.9. Example of Atkin–Morain ECPP
      14.19.10. Complexity and running times
      14.19.11. Conjecture
      14.19.12. Conjecture 2
      14.19.13. Primes of special form
      14.19.14. Group structure of E(F_N)
      14.19.15. Theorem 1
      14.19.16. Theorem 2
      14.19.17. Theorem 3
      14.19.18. Theorem 4
      14.19.19. The algorithm
      14.19.20. Justification of the algorithm
      14.19.21. References
   14.20. Demytko
   14.21. Euler Test
   14.22. Fermat primality test
      14.22.1. Contents
      14.22.2. Concept
      14.22.3. Example
      14.22.4. Algorithm and running time
      14.22.5. Flaw
      14.22.6. Applications
   14.23. Fermat–Euler
   14.24. Frobenius pseudoprimality test
   14.25. Goldwasser–Kilian Algorithm
   14.26. Gordon's algorithm
   14.27. Jacobi sums primality testing method
   14.28. Konyagin–Pomerance n−1 Test
   14.29. Lehmann
   14.30. Maurer's algorithm
   14.31. Miller–Rabin / Rabin–Miller primality test (Miller–Rabin Compositeness Test)
      14.31.1. Fast primality determination: the Miller–Rabin algorithm
      14.31.2. Miller–Rabin Algorithm
      14.31.3. Miller–Rabin primality test [ZJUT1517]
      14.31.4. Rabin–Miller
      14.31.5. Contents
      14.31.6. Concepts
      14.31.7. Example
      14.31.8. Algorithm and running time
      14.31.9. Accuracy of the test
      14.31.10. Deterministic variants of the test
      14.31.11. Notes
   14.32. MONTE CARLO PRIMALITY TESTS
      14.32.1. A NOTE ON MONTE CARLO PRIMALITY TESTS AND ALGORITHMIC INFORMATION THEORY
   14.33. Pépin's test
      14.33.1. Contents
      14.33.2. Description of the test
      14.33.3. Proof of correctness
      14.33.4. References
   14.34. Proth's theorem
      14.34.1. Contents
      14.34.2. Numerical examples
      14.34.3. History
      14.34.4. See also
      14.34.5. References
   14.35. Random Quadratic Frobenius Test (RQFT)
   14.36. Solovay–Strassen algorithm
      14.36.1. Solovay–Strassen primality test
      14.36.2. Solovay–Strassen
      14.36.3. Contents
      14.36.4. Concepts
      14.36.5. Example
      14.36.6. Algorithm and running time
      14.36.7. Accuracy of the test
      14.36.8. Average-case behaviour
      14.36.9. Complexity
   14.37. Square Root Compositeness Theorem
   14.38. Schwartz–Zippel test
15. Papers of Others
   15.1. Research on primality testing algorithms and their applications in modern cryptography
16. Math Tools
   16.1. Axiom
   16.2. Bignum
   16.3. Derive
   16.4. GMP Library
   16.5. GNU Octave
   16.6. Kant
   16.7. LiDIA
   16.8. Lisp
   16.9. Macsyma
   16.10. Magma
   16.11. Maple
   16.12. MathCad
   16.13. Mathematica
   16.14. Matlab
   16.15. Maxima
   16.16. MIRACL
   16.17. MuPAD
   16.18. NTL library
   16.19. OpenMP
   16.20. Pari-GP
   16.21. Reduce
   16.22. Sage
   16.23. Simath
   16.24. Ubasic
17. COMPARISONS
18. MY IDEAS FOR FURTHER IMPROVEMENT OF COMPLEXITY
19. RESOURCES
   19.1. MAJOR NUMBER THEORISTS
   19.2. KEY UNIVERSITIES
   19.3. KEY RESEARCH INSTITUTIONS
   19.4. SEMINARS, SYMPOSIUMS, WORKSHOPS, FORUMS
   19.5. JOURNALS
   19.6. ACADEMIC WEB RESOURCES
20. LITERATURE – PRIMALITY
   20.1. BOOKS (BK)
   20.2. LECTURE SCRIPTS
   20.3. THESES FOR POSTDOC, PHD, MASTER AND BACHELOR DEGREES (PDT, DT, MT, BT)
      20.3.1. BT - Bachelor Thesis
      20.3.2. MT - Master Thesis
      20.3.3. ST - Senior Thesis
   20.4. GENERAL PAPER (GP)
   20.5. COLLECTIONS OF PAPERS
   20.6. PRESENTATIONS/SLIDES AT SEMINARS
   20.7. OTHER PAPERS
   20.8. PROPOSALS / SUGGESTIONS
21. LITERATURE – OTHER RELATED
   21.1. ALGEBRA
   21.2. NUMBER THEORY
   21.3. COMPUTER COMPLEXITY
   21.4. COMPLEX ANALYSIS / FUNCTIONS
   21.5. Cryptography
22. APPENDICES
   22.1. CHARTS
   22.2. TABLES
   22.3. DATABASES
   22.4. MULTIMEDIA DATA
   22.5. COMPUTATION CODES
   22.6. WEBSITE FOR THIS THESIS
1. Quotes from Renowned Mathematicians
Disquisitiones Arithmeticae
"Dass die Aufgabe, die Primzahlen von den zusammengesetzten zu unterscheiden [...] zu den wichtigsten und nützlichsten der gesamten Arithmetik gehört [...] ist so bekannt, dass es überflüssig wäre, hier über viele Worte zu verlieren."
Carl Friedrich Gauß (1801)
Carl Friedrich Gauß (1777–1855):
"Zahlentheorie ist die Königin der Mathematik." (Number theory is the queen of mathematics.)
Mathematics is the queen of sciences and arithmetic the queen of mathematics
Carl Friedrich Gauss
The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. (...) Further, the dignity of the science itself seems to require that every possible means be explored for the solution of a problem so elegant and so celebrated.
Carl Friedrich Gauss
« Le problème où l'on se propose de distinguer les nombres premiers des nombres composés, et de résoudre ceux-ci en leurs facteurs premiers, est connu comme l'un des plus importants et des plus utiles de toute l'Arithmétique ; il a sollicité l'industrie et la sagacité des géomètres tant anciens que modernes, à un point tel qu'il serait superflu de discuter en détail à cet égard. [...]
« De surcroît, la dignité de la science même semble demander que tous les secours possibles soient explorés avec soin pour parvenir à la solution d'un problème si élégant et si célèbre. »
« Problema, numeros primos a compositis dignoscendi, hosque in factores suos primos resoluendi, ad grauissima ac utilissima totius Arithmeticæ pertinere, et geometrarum tum ueterum tum recentiorum industriam ac sagacitatem occupauisse, tam notum est, ut de hac re copiose loqui superfluum foret. [...]
« Prætereaque scientiæ dignitas requirere uidetur, ut omnia subsidia ad solutionem problematis tam elegantis ac celebris sedulo excolantur. »
The Latin original is from Gauss's Disquisitiones Arithmeticæ: Johann Carl Friedrich Gauß, Disquisitiones Arithmeticæ, 329.
2. Keyword
素性测试算法 (primality testing algorithms)
素性测定 (primality determination)
primality prime
Prime number
testing test prove
proving
Primzahltests
test de primalité
nombres premiers
Πρώτοι αριθμοί
δοκιμές αριθμό
проверки простоты чисел
Randomized (Probabilistic) Provable Primality Testing
Deterministic Primality Testing
AKS Primality testing
AKS Primality test
AKS Prime testing
AKS Prime test
AKS Primzahltests
AKS test de primalité
nombres premiers
Le test de primalité AKS
AKS δοκιμές αριθμό
AKS проверки простоты чисел
Prime number
3. Introduction
3.1. The Aim of Thesis
3.2. The Scope of Thesis
3.3. Research Hierarchical Diagram
4. Related Math Branches or Sub-Disciplines
5. Presentation Styles
Use a variety of IT technologies
Interactive
Short texts
Electronically available only
Electronic formats
Strictly local yet
Easy to read even for laymen, like a love story
Use as many drawings, pictures, and animations as possible to accompany or replace text
Pictures
Drawings, 3D icons
Tables
Graphs
Charts
Pies
SLIDES
PPT
Flash SWF Slides
OTHER FORMS OF SLIDES
EBOOKS IN CHM, HLP, EXE OR OTHERS
Movies and Animations
3D
VRML
FLASH MOVIES
web audio and video
streaming
QUICKTIME
REALMEDIA
WINDOWSMEDIA
WEB PORTAL
web incl web2.0
html
php
java
SOFTWARE
DATABASE
MOBILE AND HANDHELD DEVICES
DVD/VCDS
USB
Web multimedia
6. Structure of Writing
Newton or Euclid styles
Give every definition, axiom, proposition, theorem, corollary, etc. a serial number and a name
conventions or notations
Common Notions
Πρώτοι Αριθμοί
Definitions
Terms
Terminology
Etymology
Encyclopedia
Glossary
Lexicon
Nomenclature
Abbreviations
List of Symbols
Index
Axiom
Principles
Assertion
Assumptions
Hypothesis
Conjecture
Suggestion
Common Notions
Fact
Proposition
If we can prove a statement true, then that statement is called a proposition.
a proposition is a less important or less fundamental assertion,
Predicates
Theory
Laws
Lemma
If a theorem is not particularly interesting, but is useful in proving an interesting statement, then it is often called a lemma.
Sometimes instead of proving a theorem or proposition all at once, we break the proof down into
modules; that is, we prove several supporting propositions, which are called lemmas, and use the
results of these propositions to prove the main result.
a lemma is something that we will use later in this book to prove a proposition or theorem,
Statement
Rules
Porism
Theorem
A proposition of major importance is called a theorem.
a theorem is a deeper culmination of ideas,
A theorem is a valid implication of sufficient interest to warrant special attention.
If we can prove a proposition or a theorem, we will often, with very little effort, be able to derive
other related propositions called corollaries.
a corollary is an easy consequence of a proposition, theorem, or lemma.
Corollary
A corollary is a theorem that logically follows very simply from another theorem. Sometimes it follows from part of the proof of a theorem rather than from the statement of the theorem. In any case, it should be easy to see why it is true.
Postulations
Scholium
Conclusions
Consequence
Comment
Remark
Observation
Claim
Proof
Equations
Formulas
Note
Caveat
Thesis
Conics
Literature Reference / Bibliography
 Books
 Databases
 Multimedia
 Websites
 Theses
 Proceedings
 Papers
 Presentations/Slides
7. Styles
Use as many drawings, pictures, and animations as possible to accompany or replace text
Every definition shall be expressed in bold letters
8. Definitions, Terminology, Signs and Symbols
Many confusing usages exist in the combustion and combustion pollution control fields; they should be avoided.
Every science begins with concepts.
Terms and terminology: a nomenclature or a list of symbols.
Definitions
Assertion
Hypothesis
Terms
Terminology
8.1. Alphabets and Signs
Plan and explain in detail the meaning and use of the following alphabets: Latin, Greek, Russian, and Hebrew letters.
The usage should follow the conventions of the domestic and international academic communities, while keeping every concept clear, unique, consistent, easy to understand, and unambiguous.
Calligraphic (script) letters
Blackboard-bold (hollow) letters
Boldface letters
Italic letters
8.1.1. Latin
8.1.2. Greek
8.1.3. Hebrew
8.1.4. Russian
8.2. Nomenclature
Nomenclature is a term that applies to either a list of names or terms, or to the system of
principles, procedures and terms related to naming—which is the assigning of a word or phrase
to a particular object, event, or property. The principles of naming vary from the relatively
informal conventions of everyday speech to the internationally-agreed principles, rules and
recommendations that govern the formation and use of the specialist terms used in scientific and
other disciplines. ... nomenclature concerns itself more with the rules and conventions that are
used for the formation of names (wikipedia)
8.3. Notation
We will use log n to denote the base-2 logarithm, instead of log₂ n or lg n. Base-10 logarithms and natural logarithms will be denoted log₁₀ n and ln n, respectively.
The notation ord_r(a) represents the order of a modulo r, which is the smallest positive integer k such that a^k ≡ 1 (mod r).
The notation φ(r) will be used to represent Euler's totient function, which is defined as the number of positive integers less than or equal to r that are relatively prime to r.
The notation f(x) ≡ g(x) mod (h(x), p) is used throughout to mean that f(x) = g(x) in the ring Z_p[x]/(h(x)). In some cases, p will be prime and h(x) will have degree d and be irreducible over Z_p, so that Z_p[x]/(h(x)) will be a finite field of order p^d.
Notation: p(x) ≡ q(x) mod (h(x), n) means that h(x) | p(x) − q(x), with all coefficients taken modulo n.
Time complexity functions will be written in "big-O" notation. A function f(x) is considered to be O(g(x)) (pronounced "big-oh of g") if there exists a constant c such that |f(x)| ≤ c·|g(x)| for all x ≥ x₀, where x is defined to be the binary input length of n.
The function M(n) will be used to represent the time complexity function for multiplication.
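As a concrete illustration of ord_r(a) and of Euler's totient function φ(r), here is a minimal Python sketch (the function names are my own, chosen for this example):

from math import gcd

def multiplicative_order(a, r):
    """Smallest positive k with a**k ≡ 1 (mod r); requires gcd(a, r) == 1."""
    if gcd(a, r) != 1:
        raise ValueError("order is only defined when gcd(a, r) == 1")
    k, x = 1, a % r
    while x != 1:
        x = (x * a) % r
        k += 1
    return k

def euler_totient(r):
    """Number of integers in 1..r that are relatively prime to r."""
    return sum(1 for m in range(1, r + 1) if gcd(m, r) == 1)

# ord_7(2) = 3 because 2^3 = 8 ≡ 1 (mod 7); φ(10) = 4 (namely 1, 3, 7, 9)
assert multiplicative_order(2, 7) == 3
assert euler_totient(10) == 4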
8.4. Definitions
Diophantine equation
Euclidean Algorithm; Extended Euclidean Algorithm
The greatest common divisor GCD (or greatest common factor GCF)
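The Euclidean and extended Euclidean algorithms listed above can be sketched in a few lines of Python (illustrative only; in practice one would use math.gcd for the plain GCD):

def gcd(a, b):
    """Greatest common divisor via the Euclidean algorithm."""
    while b:
        a, b = b, a % b
    return a

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# gcd(240, 46) = 2 and the Bezout coefficients satisfy 240*x + 46*y = 2
g, x, y = extended_gcd(240, 46)
assert (g, 240 * x + 46 * y) == (2, 2)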
8.5. List of theorems, formulas and equations
8.6. List of abbreviations
8.7. List of people mentioned in the book
8.8. Index
8.9. Explanations
8.10. Fonts
italic: for scalers;
Bold italic: for vectors in D-dimensional vector space, D = 1, 2, 3;
Boldface: for vectors in Rb ;
Sans serif: for operators or second rank tensors (matrices);
Bold sans serif: for matrices with vector/tensor elements;
BLA C K B O A RD BO LD : for vector space.
8.11. List of symbols
8.12. Sub- and superscripts
8.13. Marks
@↑↓
≈≠≡⌂∩∑∏ ≈∵∴⊥∥≌∽∈
8.14. Prefixes
10^12 Tera, 10^9 Giga; 10^-9 Nano, 10^-12 Pico
8.15. UNITS
8.16. Units designation
atm, BTU, bar, etc.
8.17. Constants
9. Comparisons of Algorithms
Available software, such as Maple and Mathematica, for computing primality tests
Different high-level computer languages such as C/C++ and Java
Low-level languages such as assembly
Machine languages
Different OSs such as Windows, Linux, and BSD
Software available
Platforms available
10. Intro
10.1. TOC for Introduction to Algorithms
Table of Contents
Preface
I Foundations
1 The Role of Algorithms in Computing
1.1 Algorithms
1.2 Algorithms as a technology
2 Getting Started
2.1 Insertion sort
2.2 Analyzing algorithms
2.3 Designing Algorithms
3 Growth of Functions
3.1 Asymptotic notation
3.2 Standard notations and common functions
4 Recurrences
4.1 The substitution method
4.2 The recursion-tree method
4.3 The master method
4.4 Proof of the master theorem
5 Probabilistic Analysis and Randomized Algorithms
5.1 The hiring problem
5.2 Indicator random variables
5.3 Randomized algorithms
5.4 Probabilistic analysis and further uses of indicator random variables
II Sorting and Order Statistics
6 Heapsort
6.1 Heaps
6.2 Maintaining the heap property
6.3 Building a heap
6.4 The heapsort algorithm
6.5 Priority queues
7 Quicksort
7.1 Description of quicksort
7.2 Performance of quicksort
7.3 Randomized versions of quicksort
7.4 Analysis of quicksort
8 Sorting in Linear Time
8.1 Lower bounds for sorting
8.2 Counting sort
8.3 Radix sort
8.4 Bucket sort
9 Medians and Order Statistics
9.1 Minimum and maximum
9.2 Selection in expected linear time
9.3 Selection in worst-case linear time
III Data Structures
10 Elementary Data Structures
10.1 Stacks and queues
10.2 Linked lists
10.3 Implementing pointers and objects
10.4 Representing rooted trees
11 Hash Tables
11.1 Direct-address tables
11.2 Hash tables
11.3 Hash functions
11.4 Open addressing
11.5 Perfect hashing
12 Binary Search Trees
12.1 What is a binary search tree?
12.2 Querying a binary search tree
12.3 Insertion and deletion
12.4 Randomly built binary search trees
13 Red-Black Trees
13.1 Properties of red-black trees
13.2 Rotations
13.3 Insertion
13.4 Deletion
14 Augmenting Data Structures
14.1 Dynamic order statistics
14.2 How to augment a data structure
14.3 Interval trees
IV Advanced Design and Analysis Technique
15 Dynamic Programming
15.1 Assembly-line scheduling
15.2 Matrix-chain multiplication
15.3 Elements of dynamic programming
15.4 Longest common subsequence
15.5 Optimal binary search trees
16 Greedy Algorithms
16.1 An activity-selection problem
16.2 Elements of the greedy strategy
16.3 Huffman codes
16.4 Theoretical foundations for greedy methods
16.5 A task-scheduling problem
17 Amortized Analysis
17.1 Aggregate analysis
17.2 The accounting method
17.3 The potential method
17.4 Dynamic tables
V Advanced Data Structures
18 B-Trees
18.1 Definition of B-trees
18.2 Basic operations on B-trees
18.3 Deleting a key from a B-tree
19 Binomial Heaps
19.1 Binomial trees and binomial heaps
19.2 Operations on binomial heaps
20 Fibonacci Heaps
20.1 Structure of Fibonacci heaps
20.2 Mergeable-heap operations
20.3 Decreasing a key and deleting a node
20.4 Bounding the maximum degree
21 Data Structures for Disjoint Sets
21.1 Disjoint-set operations
21.2 Linked-list representation of disjoint sets
21.3 Disjoint-set forests
21.4 Analysis of union by rank with path compression
VI Graph Algorithms
22 Elementary Graph Algorithms
22.1 Representations of graphs
22.2 Breadth-first search
22.3 Depth-first search
22.4 Topological sort
22.5 Strongly connected components
23 Minimum Spanning Trees
23.1 Growing a minimum spanning tree
23.2 The algorithms of Kruskal and Prim
24 Single-Source Shortest Paths
24.1 The Bellman-Ford algorithm
24.2 Single-source shortest paths in directed acyclic graphs
24.3 Dijkstra's algorithm
24.4 Difference constraints and shortest paths
24.5 Proofs of shortest-paths properties
25 All-Pairs Shortest Paths
25.1 Shortest paths and matrix multiplication
25.2 The Floyd-Warshall algorithm
25.3 Johnson's algorithm for sparse graphs
26 Maximum Flow
26.1 Flow networks
26.2 The Ford-Fulkerson method
26.3 Maximum bipartite matching
26.4 Push-relabel algorithms
26.5 The relabel-to-front algorithm
VII Selected Topics
27 Sorting Networks
27.1 Comparison networks
27.2 The zero-one principle
27.3 A bitonic sorting network
27.4 A merging network
27.5 A sorting network
28 Matrix Operations
28.1 Properties of matrices
28.2 Strassen's algorithm for matrix multiplication
28.3 Solving systems of linear equations
28.4 Inverting matrices
28.5 Symmetric positive-definite matrices and least-squares approximation
29 Linear Programming
29.1 Standard and slack forms
29.2 Formulating problems as linear programs
29.3 The simplex algorithm
29.4 Duality
29.5 The initial basic feasible solution
30 Polynomials and the FFT
30.1 Representation of polynomials
30.2 The DFT and FFT
30.3 Efficient FFT implementations
31 Number-Theoretic Algorithms
31.1 Elementary number-theoretic notions
31.2 Greatest common divisor
31.3 Modular arithmetic
31.4 Solving modular linear equations
31.5 The Chinese remainder theorem
31.6 Powers of an element
31.7 The RSA public-key cryptosystem
31.8 Primality testing
31.9 Integer factorization
32 String Matching
32.1 The naive string-matching algorithm
32.2 The Rabin-Karp algorithm
32.3 String matching with finite automata
32.4 The Knuth-Morris-Pratt algorithm
33 Computational Geometry
33.1 Line-segment properties
33.2 Determining whether any pair of segments intersects
33.3 Finding the convex hull
33.4 Finding the closest pair of points
34 NP-Completeness
34.1 Polynomial time
34.2 Polynomial-time verification
34.3 NP-completeness and reducibility
34.4 NP-completeness proofs
34.5 NP-complete problems
35 Approximation Algorithms
35.1 The vertex-cover problem
35.2 The traveling-salesman problem
35.3 The set-covering problem
35.4 Randomization and linear programming
35.5 The subset-sum problem
VIII Appendix: Mathematical Background
A Summations
A.1 Summation formulas and properties
A.2 Bounding summations
B Sets, Etc.
B.1 Sets
B.2 Relations
B.3 Functions
B.4 Graphs
B.5 Trees
C Counting and Probability
C.1 Counting
C.2 Probability
C.3 Discrete random variables
C.4 The geometric and binomial distributions
C.5 The tails of the binomial distribution
Bibliography
Index (created by the authors)
11. Recent developments in primality testing
11.1. Recent developments in primality testing
Carl Pomerance <[email protected]>
(Joint work with Hendrik Lenstra.)
In August 2002, Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, all from the Indian Institute of Technology in Kanpur, announced a new algorithm to distinguish between prime numbers and composite numbers. Unlike earlier methods, their test is completely rigorous, deterministic, and runs in polynomial time. If n is prime and a is an integer, then the polynomials (x + a)^n and x^n + a are congruent modulo n. Therefore they are also congruent modulo n and f(x) for any integer polynomial f(x). The heart of the procedure for testing n involves verifying such a congruence where a runs over a small set of integers and f(x) is a (craftily chosen) polynomial. In the original paper f(x) is of the form x^r − 1, where r is a prime with some additional properties. We have found a way to instead use polynomials like those that arise in the argument of Gauss for constructible regular polygons. It is important that the degree of f(x) be large enough so that the primality test is valid, but not so large that the running time suffers. We are able to choose the degree fairly precisely using some tools from analytic number theory and a new result, due to Daniel Bleichenbacher and Vsevolod Lev, from combinatorial number theory. We thus achieve a rigorous and effective running time of about (log n)^6, the heuristic complexity of the original test.
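The congruence at the heart of the test can be made concrete with a small sketch. The following Python code (my own illustration, not the AKS algorithm itself; the helper names are invented for this example) checks whether (x + a)^n ≡ x^n + a holds modulo (n, x^r − 1), i.e. with f(x) = x^r − 1, using repeated squaring on coefficient lists:

def polymulmod(p, q, r, n):
    """Multiply polynomials p, q (coefficient lists, lowest degree first)
    modulo x^r - 1 and modulo n."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi == 0:
            continue
        for j, qj in enumerate(q):
            out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def polypowmod(base, e, r, n):
    """Raise a polynomial to the e-th power modulo (x^r - 1, n)."""
    result = [1] + [0] * (r - 1)
    while e:
        if e & 1:
            result = polymulmod(result, base, r, n)
        base = polymulmod(base, base, r, n)
        e >>= 1
    return result

def aks_congruence_holds(n, a, r):
    """Check whether (x + a)^n ≡ x^n + a  (mod n, x^r - 1)."""
    lhs = polypowmod([a % n, 1] + [0] * (r - 2), n, r, n)
    rhs = [0] * r
    rhs[n % r] = 1
    rhs[0] = (rhs[0] + a) % n
    return lhs == rhs

# 31 is prime, so the congruence holds; 33 = 3 * 11 fails it for a = 1, r = 5
assert aks_congruence_holds(31, 1, 5) is True
assert aks_congruence_holds(33, 1, 5) is False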
12. Comparison / Benchmarking for Primality Testing
12.1. Comparison study for Primality testing using Mathematica
Hailiza Kamarulhaili & Ega Gradini
[email protected]
School of Mathematical Sciences, Universiti Sains Malaysia, Minden 11800 Penang, MALAYSIA
13. Deterministic Primality Testing
13.1. AKS primality test
13.2. APR - Adleman–Pomerance–Rumely primality test
Adleman–Pomerance–Rumely primality test
The Jacobi Sums algorithm
http://calistamusic.dreab.com/p-Adleman%E2%80%93Pomerance%E2%80%93Rumely_primality_test
Unlike other algorithms, it avoids the use of random numbers, so it is a deterministic primality test. It is named after its discoverers, Leonard Adleman, Carl Pomerance, and Robert Rumely. The test involves arithmetic in cyclotomic fields.
It was later improved by Henri Cohen and Hendrik Willem Lenstra and called APRT-CL (or APRCL). It is often used with UBASIC under the name APRT-CLE (APRT-CL extended) and can test the primality of an integer n in time (log n)^O(log log log n).
13.3. Atkin sieve
In mathematics, the sieve of Atkin is a fast, modern algorithm for finding
all prime numbers up to a specified integer. It is an optimized version
of the ancient sieve of Eratosthenes, but does some preliminary work and
then marks off multiples of primes squared, rather than multiples of
primes. It was created by A. O. L. Atkin and Daniel J. Bernstein.
13.3.1. Contents
 1 Algorithm
 2 Pseudocode
 3 Explanation
 4 Computational complexity
 5 See also
 6 References
 7 External links
13.3.2. Algorithm
In the algorithm:
 All remainders are modulo-sixty remainders (divide the number by sixty and return the remainder).
 All numbers, including x and y, are whole numbers (positive integers).
 Flipping an entry in the sieve list means to change the marking (prime or nonprime) to the opposite marking.
1. Create a results list, filled with 2, 3, and 5.
2. Create a sieve list with an entry for each positive integer; all entries of this list should
initially be marked nonprime.
3. For each entry number n in the sieve list, with modulo-sixty remainder r:
o If r is 1, 13, 17, 29, 37, 41, 49, or 53, flip the entry for each possible solution to 4x² + y² = n.
o If r is 7, 19, 31, or 43, flip the entry for each possible solution to 3x² + y² = n.
o If r is 11, 23, 47, or 59, flip the entry for each possible solution to 3x² − y² = n when x > y.
o If r is something else, ignore it completely.
4. Start with the lowest number in the sieve list.
5. Take the next number in the sieve list still marked prime.
6. Include the number in the results list.
7. Square the number and mark all multiples of that square as nonprime.
8. Repeat steps five through eight.
This results in numbers with an odd number of solutions to the corresponding equation being prime, and an even number being nonprime.
13.3.3. Pseudocode
The following is pseudocode for a straightforward version of the algorithm:

// arbitrary search limit
limit ← 1000000

// initialize the sieve
is_prime(i) ← false, ∀ i ∈ [5, limit]

// put in candidate primes:
// integers which have an odd number of
// representations by certain quadratic forms
for (x, y) in [1, √limit] × [1, √limit]:
    n ← 4x²+y²
    if (n ≤ limit) and (n mod 12 = 1 or n mod 12 = 5):
        is_prime(n) ← ¬is_prime(n)
    n ← 3x²+y²
    if (n ≤ limit) and (n mod 12 = 7):
        is_prime(n) ← ¬is_prime(n)
    n ← 3x²-y²
    if (x > y) and (n ≤ limit) and (n mod 12 = 11):
        is_prime(n) ← ¬is_prime(n)

// eliminate composites by sieving
for n in [5, √limit]:
    if is_prime(n):
        // n is prime, omit multiples of its square; this is
        // sufficient because composites which managed to get
        // on the list cannot be square-free
        is_prime(k) ← false, k ∈ {n², 2n², 3n², ..., limit}

print 2, 3
for n in [5, limit]:
    if is_prime(n): print n
This pseudocode is written for clarity. Repeated and wasteful
calculations mean that it would run slower than the sieve of Eratosthenes.
To improve its efficiency, faster methods must be used to find solutions
to the three quadratics. At the least, separate loops could have tighter
limits than [1, √limit].
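For readers who prefer a runnable version, here is a compact Python transcription of the same straightforward algorithm (my own sketch, still unoptimized):

def sieve_of_atkin(limit):
    """Return the list of primes up to limit using the sieve of Atkin."""
    if limit < 2:
        return []
    is_prime = [False] * (limit + 1)
    root = int(limit ** 0.5) + 1
    for x in range(1, root):
        for y in range(1, root):
            n = 4 * x * x + y * y
            if n <= limit and n % 12 in (1, 5):
                is_prime[n] = not is_prime[n]
            n = 3 * x * x + y * y
            if n <= limit and n % 12 == 7:
                is_prime[n] = not is_prime[n]
            n = 3 * x * x - y * y
            if x > y and n <= limit and n % 12 == 11:
                is_prime[n] = not is_prime[n]
    # eliminate composites: anything still marked that is divisible by a square is not prime
    for n in range(5, root):
        if is_prime[n]:
            for k in range(n * n, limit + 1, n * n):
                is_prime[k] = False
    return [2, 3] + [n for n in range(5, limit + 1) if is_prime[n]]

# primes below 30
assert sieve_of_atkin(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]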
13.3.4. Explanation
The algorithm completely ignores any numbers divisible by two, three, or
five. All numbers with an even modulo-sixty remainder are divisible by
two and not prime. All numbers with modulo-sixty remainder divisible by
three are also divisible by three and not prime. All numbers with
modulo-sixty remainder divisible by five are divisible by five and not
prime. All these remainders are ignored.
All numbers with modulo-sixty remainder 1, 13, 17, 29, 37, 41, 49, or 53 have a modulo-four remainder of 1. These numbers are prime if and only if the number of solutions to 4x² + y² = n is odd and the number is squarefree (proven as theorem 6.1 of [1]).
All numbers with modulo-sixty remainder 7, 19, 31, or 43 have a modulo-six remainder of 1. These numbers are prime if and only if the number of solutions to 3x² + y² = n is odd and the number is squarefree (proven as theorem 6.2 of [1]).
All numbers with modulo-sixty remainder 11, 23, 47, or 59 have a modulo-twelve remainder of 11. These numbers are prime if and only if the number of solutions to 3x² − y² = n is odd and the number is squarefree (proven as theorem 6.3 of [1]).
None of the potential primes are divisible by 2, 3, or 5, so they can't be divisible by their squares. This is why squarefree checks don't include 2², 3², and 5².
13.3.5. Computational complexity
This sieve computes primes up to N using O(N/log log N) operations with only N^(1/2 + o(1)) bits of memory. That is a little better than the sieve of Eratosthenes, which uses O(N) operations and O(N^(1/2)(log log N)/log N) bits of memory. These asymptotic computational complexities include simple optimizations, such as wheel factorization, and splitting the computation into smaller blocks.
13.3.6. See also
 Sieve of Sundaram
 Sieve theory
13.3.7. References
1. A. O. L. Atkin, D. J. Bernstein, "Prime sieves using binary quadratic forms", Math. Comp. 73 (2004), 1023–1030. [1]
13.4. Bhattacharjee and Pandey
http://www.cse.iitk.ac.in/research/btp2001/primality.html
The Ultimate Primality Test
Conjecture (Bhattacharjee and Pandey [8]). If r is an odd prime which does not divide n(n² − 1), and
(x − 1)^n = x^n − 1 in (Z/nZ)[x]/(x^r − 1),
then n is prime.
Remarks
1. We find an odd prime r = O(log n) which does not divide n² − 1 simply by checking r = 3, 5, 7, 11, ... (If r | n then we are finished.)
2. The time for the test is Õ(r log² n) = Õ(log³ n).
3. The conjecture has been verified for r < 100, n < 10^10, and also for Carmichael numbers up to 10^16 (checking the smallest applicable r). For partial results and ...
13.5. Brillhart, Lehmer, Selfridge Test based on Lucas Test
13.6. Cyclotomic Deterministic Primality Test (Cyclotomy)
13.7. Demytko deterministic primality test method
If the candidate p_{i+1} meets the four following conditions, then p_{i+1} is sure to be a prime number.
(a) Input a positive odd prime number p_i. Let it be regarded as a seed generating prime number. We also look for such primes by using a look-up table (LUT) or other primality test methods.
(b) For h_i < 4(p_i + 1) = H_i, h_i is an even number, so we must use all of the even numbers from 2 to h_i during the test procedures.
(c) 1 2 1 mod h_i p_i
(d) 1 2 1 mod h_i
13.8. Elliptic curve methods
ECPP is practical and has been used to prove the primality of the number 4405^2638 + 2638^4405, which has 15071 decimal digits. The total CPU time was 5.1 GHz-years (Franke, Kleinjung, Morain, and Wirth, July 2004).
In practice ECPP is comparable to the Jacobi
Sums algorithm, but ECPP has the advantage
of producing an easily-checked certificate of
primality. In fact, ECPP produces a certificate
of size O( 2) that can be checked in
deterministic polynomial time eO ( 3).
13.9. Eratosthenes Sieve
In mathematics, the sieve of Eratosthenes (Greek: κόσκινον
Ἐρατοσθένους), one of a number of prime number sieves, is a
simple, ancient algorithm for finding all prime numbers up to a specified
integer. It is one of the most efficient ways to find all of the smaller
primes (below 10 million or so). The algorithm is named after Eratosthenes,
an ancient Greek mathematician; although none of Eratosthenes' works have
survived, the sieve was described and attributed to Eratosthenes in the
Introduction to Arithmetic by Nicomachus.
Sieve of Eratosthenes: algorithm steps for primes below 120 (including optimization of
terminating when square of prime exceeds upper limit)
13.9.1.









Contents
1 Algorithm description
o 1.1 Incremental sieve
o 1.2 Trial division
2 Example
3 Algorithm complexity
4 Implementation
5 Arithmetic progressions
6 Euler's sieve
7 See also
8 References
9 External links
13.9.2.
Algorithm description
Sift the Two's and Sift the Three's,
The Sieve of Eratosthenes.
When the multiples sublime,
The numbers that remain are Prime.
“
”
Anonymous
A prime number is a natural number which has exactly two distinct natural
number divisors: 1 and itself.
To find all the prime numbers less than or equal to a given integer n by
Eratosthenes' method:
1. Create a list of consecutive integers from 2 to n: (2, 3, 4, ..., n).
2. Initially, let p equal 2, the first prime number.
3. Starting from p, count up in increments of p and mark each of these numbers greater
than p itself in the list. These numbers will be 2p, 3p, 4p, etc.; note that some of them
may have already been marked.
4. Find the first number greater than p in the list that is not marked; let p now equal this
number (which is the next prime).
5. If there were no more unmarked numbers in the list, stop. Otherwise, repeat from step
3.
When the algorithm terminates, all the numbers in the list that are not
marked are prime.
As a refinement, it is sufficient to mark the numbers in step 3 starting
from p2, as all the smaller multiples of p will have already been marked
at that point. This means that the algorithm is allowed to terminate in
step 5 when p2 is greater than n. This does not appear in the original
algorithm.
Another refinement is to initially list odd numbers only (3, 5, ..., n),
and count up using an increment of 2p in step 3, thus marking only odd
multiples of p greater than p itself. This refinement actually appears
in the original description. This can be generalized with wheel
factorization, forming the initial list only from numbers coprime with
the first few primes and not just from odds, i.e. numbers coprime with
2.
13.9.2.1. Incremental sieve
An incremental formulation of the sieve generates primes indefinitely
(i.e. without an upper bound) by interleaving the generation of primes
with the generation of their multiples (so that primes can be found in
gaps between the multiples), where the multiples of each prime p are
generated directly, by counting up from the square of the prime in
increments of p (or 2p for odd primes).
13.9.2.2. Trial division
Trial division can be used to produce primes by filtering out the
composites found by testing each candidate number for divisibility by its
preceding primes. It is often confused with the sieve of Eratosthenes,
although the latter directly generates the composites instead of testing
for them. Trial division has worse theoretical complexity than that of
the sieve of Eratosthenes in generating ranges of primes.
When testing each candidate number, the optimal trial division algorithm
uses just those prime numbers not exceeding its square root. The widely
known 1975 functional code by David Turner is often presented as an example
of the sieve of Eratosthenes but is actually a sub-optimal trial division
algorithm.
13.9.3.
Example
To find all the prime numbers less than or equal to 30, proceed as follows.
First generate a list of integers from 2 to 30:
2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
First number in the list is 2; cross out every 2nd number in the list after
it (by counting up in increments of 2), i.e. all the multiples of 2:
2 3 5 7 9 11 13 15 17 19 21 23 25 27 29   (crossed-out numbers omitted)
Next number in the list after 2 is 3; cross out every 3-rd number in the
list after it (by counting up in increments of 3), i.e. all the multiples
of 3:
2 3 5 7 11 13 17 19 23 25 29
Next number not yet crossed out in the list after 3 is 5; cross out every
5-th number in the list after it (by counting up in increments of 5), i.e.
all the multiples of 5:
2 3 5 7 11 13 17 19 23 29
Next number not yet crossed out in the list after 5 is 7; the next step
would be to cross out every 7-th number in the list after it, but they
are all already crossed out at this point, as these numbers (14, 21, 28)
are also multiples of smaller primes because 7*7 is greater than 30. The
numbers left not crossed out in the list at this point are all the prime
numbers below 30:
2 3 5 7 11 13 17 19 23 29
13.9.4.
Algorithm complexity
Time complexity in the random access machine model is O(n log log n)
operations, a direct consequence of the fact that the prime harmonic
series asymptotically approaches log log n.
The bit complexity of the algorithm is O(n (log n)(log log n)) bit
operations with a memory requirement of O(n).
The segmented version of the sieve of Eratosthenes, with basic
optimizations, uses O(n) operations and O(n^(1/2) log log n / log n) bits
of memory.
13.9.5.
Implementation
In pseudocode:
Input: an integer n > 1

Let A be an array of bool values, indexed by integers 2 to n,
initially all set to true.

for i = 2, 3, 4, ..., while i ≤ n/2:
  if A[i] is true:
    for j = 2i, 3i, 4i, ..., while j ≤ n:
      A[j] = false

Now all i such that A[i] is true are prime.
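A direct C rendering of the same pseudocode, with the p² refinements mentioned earlier (start marking at i·i and stop once i·i exceeds n); the bound n = 120 is just for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    long n = 120;                          /* assumed upper bound for illustration */
    char *A = malloc(n + 1);
    if (!A) return 1;
    memset(A, 1, n + 1);
    for (long i = 2; i * i <= n; i++)      /* stop when i*i > n */
        if (A[i])
            for (long j = i * i; j <= n; j += i)   /* start marking at i*i */
                A[j] = 0;
    for (long i = 2; i <= n; i++)
        if (A[i]) printf("%ld ", i);
    printf("\n");
    free(A);
    return 0;
}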
Large ranges may not fit entirely in memory. In these cases it is necessary
to use a segmented sieve where only portions of the range are sieved at
a time. For ranges so large that the sieving primes could not be held in
memory, space-efficient sieves like that of Sorenson are used instead.
13.9.6.
Arithmetic progressions
The sieve may be used to find primes in arithmetic progressions.
13.9.7.
Euler's sieve
Euler's proof of the zeta product formula contains a version of the sieve
of Eratosthenes in which each composite number is eliminated exactly once.
It, too, starts with a list of numbers from 2 to n in order. On each step
the first element is identified as the next prime and the results of
multiplying this prime with each element of the list are marked in the
list for subsequent deletion. The initial element and the marked elements
are then removed from the working sequence, and the process is repeated:
[2] (3) 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 ...
[3] (5) 7 11 13 17 19 23 25 29 31 35 37 41 43 47 49 53 55 59 61 65 67 71 73 77 79 ...
[4] (7) 11 13 17 19 23 29 31 37 41 43 47 49 53 59 61 67 71 73 77 79 ...
[5] (11) 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 ...
[...]
Here the example is shown starting from odds, after the 1st step of the
algorithm. Thus on k-th step all the multiples of the k-th prime are
removed from the list. If generating a bounded sequence of primes, when
the next identified prime exceeds the square root of the upper limit, all
the remaining numbers in the list are prime. In the example given above
that is achieved on identifying 11 as next prime, giving a list of all
primes less than or equal to 80.
Note that numbers that will be discarded by some step are still used while
marking the multiples, e.g. for the multiples of 3 it is 3 · 3 = 9, 3 · 5
= 15, 3 · 7 = 21, 3 · 9 = 27, ..., 3 · 15 = 45, ... .
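One common way to realize the "each composite eliminated exactly once" idea in code is the linear sieve sketched below. It is organized differently from the step-by-step description above (each composite is crossed out as its smallest prime factor times a surviving number), but it preserves the same property; N is an illustrative bound we chose.

#include <stdio.h>

#define N 80   /* assumed illustration bound */

int main(void) {
    char composite[N + 1] = {0};
    long primes[N], nprimes = 0;
    for (long i = 2; i <= N; i++) {
        if (!composite[i]) primes[nprimes++] = i;
        /* cross out i * p for primes p up to the smallest prime factor of i */
        for (long k = 0; k < nprimes && i * primes[k] <= N; k++) {
            composite[i * primes[k]] = 1;
            if (i % primes[k] == 0) break;   /* ensures each composite is removed exactly once */
        }
    }
    for (long i = 2; i <= N; i++)
        if (!composite[i]) printf("%ld ", i);
    printf("\n");
    return 0;
}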
13.9.8.
See also
 Sieve theory
 Sieve of Atkin
 Sieve of Sundaram
13.9.9.
References
13.10.
Goldwasser Kilian Algorithm
13.11.
Jacobi Sums
The Jacobi Sums algorithm runs in time (log n)^(O(log log log n)).
This is almost polynomial time.
We can be more precise: Odlyzko and Pomerance have shown that, for all
large n, the running time is in
[(log n)^(A log log log n), (log n)^(B log log log n)],
where A, B are positive constants. The lower bound shows that the Jacobi
Sums algorithm is definitely not polynomial-time (in theory anyway).
The Jacobi sums algorithm is deterministic and practical: it has been used
for numbers of at least 3395 decimal digits (Mihailescu: 6.5 days on a
500 MHz DEC Alpha).
13.12.
Lucas 素性测定算法
http://www.peach.dreab.com/p-Lucas_primality_test
Lucas
http://calistamusic.dreab.com/p-Lucas_primality_test
In computational number theory, the Lucas test is a primality test for
a natural number n; it requires that the prime factors of n − 1 be already
known. It is the basis of the Pratt certificate that gives a concise
verification that n is prime.
13.12.1. Contents





1 Concepts
2 Example
3 Algorithm
4 See also
5 Notes
13.12.2. Concepts
Let n be a positive integer. If there exists an integer 1
that
and for every prime factor q of n
−
<
a < n such
1
then n is prime. If no such number a exists, then n is composite.
The reason for the correctness of this claim is as follows: if the first
equality holds for a, we can deduce that a and n are coprime. If a also
survives the second step, then the order of a in the group (Z/nZ)* is equal
to n−1, which means that the order of that group is n−1 (because the order
of every element of a group divides the order of the group), implying that
n is prime. Conversely, if n is prime, then there exists a primitive root
modulo n, or generator of the group (Z/nZ)*. Such a generator has order
|(Z/nZ)*| = n−1 and both equalities will hold for any such primitive
root.
Note that if there exists an a < n such that the first equality fails,
a is called a Fermat witness for the compositeness of n.
13.12.3. Example
For example, take n = 71. Then n − 1 = 70 and the prime factors of 70
are 2, 5 and 7. We randomly select an a < n of 17. Now we compute:
For all integers a it is known that
Therefore, the multiplicative order of 17 (mod 71) is not necessarily 70
because some factor of 70 may also work above. So check 70 divided by its
prime factors:
Unfortunately, we get that 1710≡1 (mod 71). So we still don't know if 71
is prime or not.
We try another random a, this time choosing a
=
11. Now we compute:
Again, this does not show that the multiplicative order of 11 (mod 71)
is 70 because some factor of 70 may also work. So check 70 divided by its
prime factors:
So the multiplicative order of 11 (mod 71) is 70, and thus 71 is prime.
(To carry out these modular exponentiations, one could use a fast
exponentiation algorithm like binary or addition-chain exponentiation).
13.12.4. Algorithm
The algorithm can be written in pseudocode as follows:
Input: n > 2, an odd integer to be tested for primality; k, a parameter
that determines the accuracy of the test
Output: prime if n is prime, otherwise composite or possibly composite;
determine the prime factors of n−1.
LOOP1: repeat k times:
  pick a randomly in the range [2, n − 1]
  if a^(n−1) ≢ 1 (mod n) then return composite
  otherwise
    LOOP2: for all prime factors q of n−1:
      if a^((n−1)/q) ≢ 1 (mod n)
        if we did not check this equality for all prime factors of n−1
          then do next LOOP2
          otherwise return prime
      otherwise do next LOOP1
return possibly composite.
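A small C sketch of this test for word-sized n, with the prime factors of n − 1 supplied by hand; the helper names and the use of the GCC/Clang __int128 extension are our own choices, and the sample values reproduce the n = 71 example above.

#include <stdio.h>
#include <stdint.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;      /* GCC/Clang extension, avoids overflow */
}

static uint64_t powmod(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    for (a %= m; e; e >>= 1) {
        if (e & 1) r = mulmod(r, a, m);
        a = mulmod(a, a, m);
    }
    return r;
}

/* Returns 1 if a proves n prime: a^(n-1) = 1 (mod n) and
   a^((n-1)/q) != 1 (mod n) for every prime factor q of n-1. */
static int lucas_witness(uint64_t a, uint64_t n, const uint64_t *q, int nq) {
    if (powmod(a, n - 1, n) != 1) return 0;
    for (int i = 0; i < nq; i++)
        if (powmod(a, (n - 1) / q[i], n) == 1) return 0;
    return 1;
}

int main(void) {
    uint64_t n = 71;
    uint64_t factors[] = {2, 5, 7};           /* prime factors of 70 */
    printf("a=17: %d\n", lucas_witness(17, n, factors, 3));   /* 0: inconclusive, 17^10 = 1 (mod 71) */
    printf("a=11: %d\n", lucas_witness(11, n, factors, 3));   /* 1: 71 proved prime */
    return 0;
}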
13.12.5. See also
 Édouard Lucas
 Fermat's little theorem
13.13.
Lucas-Lehmer 测试
Lucas–Lehmer
http://calistamusic.dreab.com/p-Lucas%E2%80%93Lehmer_primality_test
The Lucas–Lehmer test is a deterministic primality test which decides whether a
Mersenne number Mn = 2^n − 1 is prime for a given n > 2.
13.13.1. 梅森素数判定算法 Lucas-Lehmer 测试
Designing an algorithm for the Mersenne primality test
While studying the famous Fibonacci sequence, the French mathematician Lucas was
surprised to discover its connection with Mersenne primes. From this he derived an
important theorem for deciding whether Mp is prime (Lucas' theorem), which provided
a powerful tool for the study of Mersenne primes. In 1930 the American mathematician
Lehmer improved Lucas' work and gave a new primality test aimed at Mp, the
Lucas–Lehmer test: for every odd p > 1, Mp is prime if and only if Mp divides S(p−1),
where S(n) is defined recursively by S(n) = S(n−1)^2 − 2, S(1) = 4.
This method is especially suited to computer calculation, because the division by Mp
can be carried out in binary using only the shift and addition operations at which
computers excel.
The following is a practically usable Lucas–Lehmer test implemented in C:
int lucas_lehmer_test(int p)   /* returns 1 if Mp = 2^p - 1 is prime; valid for p <= 31, since s*s must fit in long long */
{
    int i;
    long long s = 4, m = (1LL << p) - 1;   /* m = Mp */
    if (p == 2) return 1;                  /* M2 = 3, handled separately */
    for (i = 3; i <= p; i++) {             /* p - 2 squarings: s <- s*s - 2 (mod Mp) */
        s = (s * s - 2) % m;
        if (s < 0) s += m;
    }
    return s == 0;
}
This article is about the Lucas–Lehmer test that only applies to Mersenne numbers. For the
Lucas–Lehmer test that applies to a natural number n, see Lucas primality test. For the
Lucas–Lehmer–Riesel test, see Lucas–Lehmer–Riesel test.
In mathematics, the Lucas–Lehmer test (LLT) is a primality test for
Mersenne numbers. The test was originally developed by Édouard Lucas in
1856, and subsequently improved by Lucas in 1878 and Derrick Henry Lehmer
in the 1930s.
13.13.2. Contents








1 The test
2 Time complexity
3 Examples
4 Proof of correctness
o 4.1 Sufficiency
o 4.2 Necessity
5 Applications
6 See also
7 References
8 External links
13.13.3. The test
The Lucas–Lehmer test works as follows. Let Mp = 2^p − 1 be the
Mersenne number to test, with p an odd prime (because p is exponentially
smaller than Mp, we can use a simple algorithm like trial division for
establishing its primality). Define a sequence {s_i} for all i ≥ 0 by
s_0 = 4,  s_i = s_(i−1)^2 − 2.
The first few terms of this sequence are 4, 14, 194, 37634, ... (sequence
A003010). Then Mp is prime iff
s_(p−2) ≡ 0 (mod Mp).
The number s_(p−2) mod Mp is called the Lucas–Lehmer residue of p. (Some
authors equivalently set s_1 = 4 and test s_(p−1) mod Mp.) In pseudocode, the
test might be written:
// Determine if Mp = 2^p − 1 is prime
Lucas–Lehmer(p)
  var s = 4
  var M = 2^p − 1
  repeat p − 2 times:
    s = ((s × s) − 2) mod M
  if s = 0 return PRIME else return COMPOSITE
By performing the mod M at each iteration, we ensure that all intermediate
results are at most p bits (otherwise the number of bits would double each
iteration). It is exactly the same strategy employed in modular
exponentiation.
13.13.4. Time complexity
In the algorithm as written above, there are two expensive operations
during each iteration: the multiplication s × s, and the mod M
operation. The mod M operation can be made particularly efficient on
standard binary computers by observing the following simple property:
k ≡ (k mod 2^n) + ⌊k / 2^n⌋ (mod 2^n − 1).
In other words, if we take the least significant n bits of k, and add the
remaining bits of k, and then do this repeatedly until at most n bits remain,
we can compute the remainder after dividing k by the Mersenne number 2^n − 1
without using division. For example:
916 mod 2^5−1 = 1110010100₂ mod 2^5−1
             = 11100₂ + 10100₂ mod 2^5−1
             = 110000₂ mod 2^5−1
             = 1₂ + 10000₂ mod 2^5−1
             = 10001₂ mod 2^5−1
             = 10001₂
             = 17.
Moreover, since s × s will never exceed M² < 2^(2p), this simple technique
converges in at most 2 p-bit additions, which can be done in linear time.
As a small exceptional case, the above algorithm may produce 2^n − 1 for a
multiple of the modulus, rather than the correct value of zero; this should
be accounted for.
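A small C helper illustrating this shift-and-add reduction; the name mod_mersenne is ours, and the sketch assumes n < 64.

#include <stdio.h>

/* Compute k mod (2^n - 1) without a division instruction. */
static unsigned long long mod_mersenne(unsigned long long k, unsigned n) {
    unsigned long long m = (1ULL << n) - 1;
    while (k > m)
        k = (k & m) + (k >> n);   /* low n bits plus the remaining high bits */
    if (k == m) k = 0;            /* exceptional case: 2^n - 1 represents zero */
    return k;
}

int main(void) {
    printf("%llu\n", mod_mersenne(916, 5));   /* prints 17, as in the example above */
    return 0;
}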
With the modulus out of the way, the asymptotic complexity of the algorithm
depends only on the multiplication algorithm used to square s at each step.
The simple "grade-school" algorithm for multiplication requires O(p²)
bit-level or word-level operations to square a p-bit number, and since
we do this O(p) times, the total time complexity is O(p³). A more efficient
multiplication method, the Schönhage–Strassen algorithm based on the
fast Fourier transform, requires O(p log p log log p) time to square a
p-bit number, reducing the complexity to O(p² log p log log p) or Õ(p²).
Currently the most efficient known multiplication algorithm, Fürer's
algorithm, needs p log p 2^(O(log* p)) time to multiply two p-bit numbers.
By comparison, the most efficient randomized primality test for general
integers, the Miller–Rabin primality test, takes O(k p² log p log log p)
bit operations using FFT multiplication for a p-digit number (here p
can be any natural number, not necessarily a prime), where k is the number
of iterations and is related to the error rate. So it is in the same
complexity class as the Lucas–Lehmer test for constant k. But in practice
the cost of doing many iterations and other differences lead to worse
performance for Miller–Rabin. The most efficient deterministic
primality test for general integers, the AKS primality test, requires Õ(p⁶)
bit operations in its best known variant and is dramatically slower in
practice.
13.13.5. Examples
Suppose we wish to verify that M3 = 7 is prime using the Lucas–Lehmer
test. We start out with s set to 4 and then update it 3−2 = 1 time, taking
the results mod 7:
 s ← ((4 × 4) − 2) mod 7 = 0
Because we end with s set to zero, M3 is prime.
On the other hand, M11 = 2047 = 23 × 89 is not prime. To show this, we
start with s set to 4 and update it 11−2 = 9 times, taking the results
mod 2047:
 s ← ((4 × 4) − 2) mod 2047 = 14
 s ← ((14 × 14) − 2) mod 2047 = 194
 s ← ((194 × 194) − 2) mod 2047 = 788
 s ← ((788 × 788) − 2) mod 2047 = 701
 s ← ((701 × 701) − 2) mod 2047 = 119
 s ← ((119 × 119) − 2) mod 2047 = 1877
 s ← ((1877 × 1877) − 2) mod 2047 = 240
 s ← ((240 × 240) − 2) mod 2047 = 282
 s ← ((282 × 282) − 2) mod 2047 = 1736
Because s is not zero, M11=2047 is not prime. Notice that we learn nothing
about the factors of 2047, only its Lucas–Lehmer residue, 1736.
13.13.6. Proof of correctness
Lehmer's original proof of the correctness of this test is complex, so
we'll depend upon more recent refinements. Recall the definition:
Then our theorem is that Mp is prime iff
We begin by noting that
is a recurrence relation with a closed-form
solution. Define
and
induction that
for all i:
where the last step follows from
; then we can verify by
. We will
use this in both parts.
13.13.6.1. Sufficiency
In this direction we wish to show that
implies that
Mp is prime. We relate a straightforward proof exploiting elementary group
theory given by J. W. Bruce as related by Jason Wojciechowski.
Suppose
. Then
for some
integer k, and:
Now suppose Mp is composite, and let q be the smallest prime factor of
Mp. Since Mersenne numbers are odd, we have q > 2. Define the set
with q2 elements, where
is the integers mod
q, a finite field (in the language of ring theory X is the quotient of
by the ideal generated by (T2 − 3)).
the univariate polynomial ring
The multiplication operation in X is defined by:
Since q
>
2,
and
are in X (in fact
are
in X, but by abuse of language we identify
and
X under the natural ring homomorphism from
with their images in
to X which sends the
square root of 3 to T). Any product of two numbers in X is in X, but it's
not a group under multiplication because not every element x has an inverse
y such that xy = 1 (in fact X is a ring and the set of non-zero elements
of X is a group if and only if
does not contain a square root of 3).
If we consider only the elements that have inverses, we get a group X*
of size at most q2 − 1 (since 0 has no inverse).
Now, since
, and
which by equation (1) gives
, we have
in X,
. Squaring both sides gives
, showing that ω is invertible with inverse
and so lies
in X*, and moreover has an order dividing 2p. In fact the order must equal
2p, since
and so the order does not divide 2p − 1. Since the order
of an element is at most the order (size) of the group, we conclude that
. But since q is the smallest prime factor of the
composite Mp, we must have
, yielding the
contradiction 2p < 2p − 1. So Mp is prime.
13.13.6.2. Necessity
In the other direction, we suppose Mp is prime and show
s_(p−2) ≡ 0 (mod Mp). We rely on a simplification of a proof by Öystein
J. R. Ödseth. First, notice that 3 is a quadratic non-residue mod Mp,
since 2^p − 1 for odd p > 1 only takes on the value 7 mod 12, and so
the Legendre symbol properties tell us (3 | Mp) is −1. Euler's criterion
then gives us:
3^((Mp−1)/2) ≡ −1 (mod Mp).
On the other hand, 2 is a quadratic residue mod Mp, since
and so
Euler's criterion again gives:
Next, define
.
, and define X* similarly as before as the
multiplicative group of
following lemmas:
(from Proofs of Fermat's little
theorem#Proof_using_the_binomial_theorem)
for every integer a (Fermat's little theorem)
. We will use the
Then, in the group X* we have:
We chose σ such that ω = (6 + σ)2 / 24. Consequently, we can use this
to compute
in the group X*:
where we use the fact that
Since
this equation by
, all that remains is to multiply both sides of
and use
:
Since sp−2 is an integer and is zero in X*, it is also zero mod
Mp.
13.13.7. Applications
The Lucas–Lehmer test is the primality test used by the Great Internet
Mersenne Prime Search to locate large primes, and has been successful in
locating many of the largest primes known to date. The test is considered
valuable because it can provably test a very large number for primality
within affordable time and, in contrast to the equivalently fast Pépin's
test for any Fermat number, can be tried on a large search space of numbers
with the required form before reaching computational limits.
13.13.8. See also
 Mersenne's conjecture
 Lucas–Lehmer–Riesel test
 GIMPS
13.14.
Lucas–Lehmer–Riesel
http://calistamusic.dreab.com/p-Lucas%E2%80%93Lehmer%E2%80%93Riesel_test
In mathematics, the Lucas–Lehmer–Riesel test is a primality test for
numbers of the form N = k·2^n − 1, with 2^n > k. The test was developed
by Hans Riesel and it is based on the Lucas–Lehmer primality test. It
is the fastest deterministic algorithm known for numbers of that form.
The Brillhart–Lehmer–Selfridge test is the fastest deterministic
algorithm for numbers of the form N = k·2^n + 1.
13.14.1. Contents






1 The algorithm
2 Finding the starting value
3 How does the test work?
4 LLR software
5 References
6 External links
13.14.2. The algorithm
The algorithm is very similar to the Lucas–Lehmer test, but with a
variable starting point depending on the value of k.
Define a sequence {u_i} for all i > 0 by:
u_i = u_(i−1)^2 − 2.
Then N = k·2^n − 1 is prime if and only if it divides u_(n−2).
13.14.3. Finding the starting value
 If k = 1: if n is odd, then we can take u_0 = 4. If n ≡ 3 (mod 4), then we can take u_0 = 3. Note that if n is prime, these are Mersenne numbers.
 If k = 3: if n ≡ 0 or 3 (mod 4), then u_0 = 5778.
 If k ≡ 1 or 5 (mod 6) and 3 does not divide N, then we take u_0 = (2 + √3)^k + (2 − √3)^k.
 Otherwise, we are in the case where k is a multiple of 3, and it is more difficult to select the right value of u_0.
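A tiny C sketch of the iteration for word-sized N, using the k = 3 starting value listed above; the concrete values N = 47 (k = 3, n = 4) and u_0 = 5778 are our illustration.

#include <stdio.h>
#include <stdint.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;   /* GCC/Clang extension */
}

int main(void) {
    uint64_t k = 3, n = 4;
    uint64_t N = k * (1ULL << n) - 1;      /* N = 47 */
    uint64_t u = 5778 % N;                 /* u_0 for k = 3, n = 0 (mod 4) */
    for (uint64_t i = 1; i <= n - 2; i++)  /* compute u_(n-2) */
        u = (mulmod(u, u, N) + N - 2) % N;
    printf("N = %llu is %s\n", (unsigned long long)N, u == 0 ? "prime" : "composite");
    return 0;
}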
13.14.4. How does the test work?
The Lucas–Lehmer–Riesel test is a particular case of group-order
primality testing; we demonstrate that some number is prime by showing
that some group has the order that it would have were that number prime,
and we do this by finding an element of that group of precisely the right
order.
For Lucas-style tests on a number N, we work in the multiplicative group
of a quadratic extension of the integers modulo N; if N is prime, the order
of this multiplicative group is N² − 1, it has a subgroup of order
N + 1, and we try to find a generator for that subgroup.
We start off by trying to find a non-iterative expression for the u_i.
Following the model of the Lucas–Lehmer test, put u_i = a^(2^i) + a^(−2^i),
and by induction we have u_i = u_(i−1)^2 − 2.
So we can consider ourselves as looking at the 2^i-th term of the sequence
v(i) = a^i + a^(−i). If a satisfies a quadratic equation, this is a Lucas
sequence, and has an expression of the form v(i) = αv(i − 1) + βv(i − 2).
Really, we're looking at the (k·2^i)-th term of a different sequence,
but since decimations (take every k-th term starting with the zeroth) of
a Lucas sequence are themselves Lucas sequences, we can deal with the
factor k by picking a different starting point.
13.14.5. LLR software
LLR is a program that can run the LLR tests. The program was developed
by Jean Penné. Vincent Penné has modified the program so that it can obtain
tests via the Internet. The software is both used by individual prime
searchers and some distributed computing projects including Riesel Sieve
and PrimeGrid.
13.14.6. References

Riesel, Hans (1969). "Lucasian Criteria for the Primality of N = h·2n − 1". Mathematics of
Computation (American Mathematical Society) 23 (108): 869–875. doi:10.2307/2004975.
JSTOR 2004975.
13.14.7. External links

Download Jean Penné's LLR
13.14.8. Generalized Lucas-type primality tests 推广的 Lucas 型素性测定算法
13.14.9. Massey–Omura cryptosystem
For example, the Massey–Omura cryptosystem requires primes with 2000 bit positions
(as of 2004) in order to construct finite fields.
13.15.
Miller primality test (Miller-Primzahltest)
Extended Riemann Hypothesis (erweiterte Riemannsche Vermutung, erweiterte Riemann-Hypothese)
13.16.
Pocklington Lehmer primality test
http://calistamusic.dreab.com/p-Pocklington_primality_test
http://www.peach.dreab.com/p-Pocklington_primality_test
In mathematics, the Pocklington–Lehmer primality test is a primality
test devised by Henry Cabourn Pocklington and Derrick Henry Lehmer to
decide whether a given number N is prime. The output of the test is a proof
that the number is prime or that primality could not be established.
Pocklington primality test (deterministic)
Relies on (partial) factorization of p-1 to generate
recursive list of successive p
(uses Fermat's little theorem)
(isn't always able to work)
13.16.1. Contents






1 Pocklington criterion
o 1.1 Proof of this theorem
2 Generalized Pocklington method
3 The test
4 Example
5 References
6 External links
13.16.2. Pocklington criterion
The test relies on the Pocklington Theorem (Pocklington criterion) which
is formulated as follows:
Let N > 1 be an integer, and suppose there exist numbers a and q such that
(1) q is prime, q divides N − 1, and q > √N − 1;
(2) a^(N−1) ≡ 1 (mod N);
(3) gcd(a^((N−1)/q) − 1, N) = 1.
Then N is prime.
13.16.2.1. Proof of this theorem
Suppose N is not prime. This means there must be a prime p, where p ≤ √N,
that divides N.
Since q > √N − 1 ≥ p − 1, this implies gcd(q, p − 1) = 1.
Thus there must exist an integer u with the property that uq ≡ 1 (mod p − 1).
This implies, by (1) and (2),
1 ≡ (a^(N−1))^u = (a^((N−1)/q))^(uq) ≡ a^((N−1)/q) (mod p),
since a^(p−1) ≡ 1 (mod p) by Fermat's little theorem. Hence p divides
gcd(a^((N−1)/q) − 1, N), and this contradicts (3).
The test is simple once the theorem above is established. Given N, seek
to find suitable a and q. If they can be obtained, then N is prime. Moreover,
a and q are the certificate of primality. They can be quickly verified
to satisfy the conditions of the theorem, confirming N as prime.
A problem which arises is the ability to find a suitable q, which must
satisfy (1)–(3) and be provably prime. It is even quite possible that
such a q does not exist; the probability is substantial, as only 57.8%
of the odd primes N have such a q. To find a is not nearly
so difficult. If N is prime, and a suitable q is found, each choice of
a with 1 ≤ a ≤ N − 1 will satisfy a^(N−1) ≡ 1 (mod N), and so will
satisfy (2); condition (3) also holds as long as ord(a) does not divide
(N − 1)/q. Thus a randomly chosen a is likely to work. If a is a generator
mod N, its order is N − 1 and so the method is guaranteed to work for this
choice.
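A C sketch of checking the criterion for small N; the helper names are ours, q is assumed to have been found and proved prime separately, and the sample values are borrowed from the worked example in the later subsection (which asserts they satisfy the conditions).

#include <stdio.h>
#include <stdint.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;   /* GCC/Clang extension */
}

static uint64_t powmod(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    for (a %= m; e; e >>= 1) {
        if (e & 1) r = mulmod(r, a, m);
        a = mulmod(a, a, m);
    }
    return r;
}

static uint64_t gcd(uint64_t a, uint64_t b) {
    while (b) { uint64_t t = a % b; a = b; b = t; }
    return a;
}

/* Returns 1 if (a, q) certifies N prime: q | N-1 (q prime and > sqrt(N)-1, checked by the caller),
   a^(N-1) = 1 (mod N), and gcd(a^((N-1)/q) - 1, N) = 1. */
static int pocklington(uint64_t N, uint64_t q, uint64_t a) {
    if ((N - 1) % q != 0) return 0;
    if (powmod(a, N - 1, N) != 1) return 0;
    uint64_t t = powmod(a, (N - 1) / q, N);
    return gcd((t + N - 1) % N, N) == 1;   /* gcd(a^((N-1)/q) - 1, N) */
}

int main(void) {
    /* N = 11351, q = 227 divides N - 1 and exceeds sqrt(N) - 1; a = 7 as in the example below */
    printf("%d\n", pocklington(11351, 227, 7));
    return 0;
}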
13.16.3. Generalized Pocklington method
A generalized version of Pocklington's theorem covers more primes N.
Corollary:
Let N − 1 factor as N − 1 = AB, where A and B are relatively prime,
A > √N, and the factorization of A is known.
If for every prime factor p of A there exists an integer a_p so that
a_p^(N−1) ≡ 1 (mod N) and gcd(a_p^((N−1)/p) − 1, N) = 1,
then N is prime. The reverse implication also holds: if N is prime, then
every prime factor of A can be written in the above manner.
Proof of Corollary: Let p be a prime dividing A and let p^e be the maximum
power of p dividing A. Let v be a prime factor of N. For the a_p from the
corollary, set b ≡ a_p^((N−1)/p^e) (mod v). This means
b^(p^e) ≡ a_p^(N−1) ≡ 1 (mod v), and because of
gcd(a_p^((N−1)/p) − 1, N) = 1, also b^(p^(e−1)) ≡ a_p^((N−1)/p) ≢ 1 (mod v).
This means that the order of b (mod v) is p^e.
Thus p^e divides v − 1. The same observation holds for each prime power factor
p^e of A, which implies A divides v − 1.
Specifically, this means v > A > √N.
If N were composite, it would necessarily have a prime factor which is
less than or equal to √N. It has been shown that there is no such factor,
which implies that N is prime.
To see the converse, choose a_p a generator of the integers modulo N.
13.16.4. The test
The Pocklington–Lehmer primality test follows directly from this
corollary. We must first partially factor N − 1 into A and B. Then we
must find an ap for every prime factor p of A, which fulfills the conditions
of the corollary. If such ap's can be found, the Corollary implies that
N is prime.
According to Koblitz, ap = 2 often works.
13.16.5. Example
N = 11351, so N − 1 = 11350 = 2 · 5² · 227.
Choose A = 5² · 227 = 5675, which means B = 2.
Now it is clear that gcd(A, B) = 1 and A = 5675 > √11351 ≈ 106.5.
Next find an a_p for each prime factor p of A. E.g. choose a_5 = 2:
2^11350 ≡ 1 (mod 11351) and gcd(2^(11350/5) − 1, 11351) = 1.
So a_5 = 2 satisfies the necessary conditions. Choose a_227 = 7:
7^11350 ≡ 1 (mod 11351) and gcd(7^(11350/227) − 1, 11351) = 1.
So both a_p's work and thus N is prime.
We have chosen a small prime for calculation purposes but in practice when
we start factoring A we will get factors that themselves must be checked
for primality. It is not a proof of primality until we know our factors
of A are prime as well. If we get a factor of A where primality is not
certain, the test must be performed on this factor as well. This gives
rise to a so-called down-run procedure, where the primality of a number
is evaluated via the primality of a series of smaller numbers.
In our case, we can say with certainty that 2, 5, and 227 are prime, and
thus we have proved our result. The certificate in our case is the list
of ap's, which can quickly be checked in the corollary.
If our example had given rise to a down-run sequence, the certificate would
be more complicated. It would first consist of our initial round of ap's
which correspond to the 'prime' factors of A; Next, for the factor(s) of
A of which primality was uncertain, we would have more ap's, and so on
for factors of these factors until we reach factors of which primality
is certain. This can continue for many layers if the initial prime is large,
but the important thing to note, is that a simple certificate can be
produced, containing at each level the prime to be tested, and the
corresponding ap's, which can easily be verified. If at any level we cannot
find ap's then we cannot say that N is prime.
The biggest difficulty with this test is the necessity of discovering
prime factors of N - 1, in essence, factoring N − 1. In practice this
could be extremely difficult. Finding ap's is a less difficult problem.
13.17.
Proth deterministic primality test method
If N = k·2^m + 1 with k odd and k < 2^m, and there exists an a such that
a^((N−1)/2) ≡ −1 (mod N),
then N is prime.
The difference between probabilistic primality test methods and
deterministic primality test methods is that the result of the latter
methods can be precisely accurate. Namely, we are sure the number we
calculate is a prime number.
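Assuming the reconstruction above is the intended Proth criterion, a minimal C sketch for word-sized N follows; the sample values 113 = 7·2^4 + 1 and a = 3 are our illustration (3 is a quadratic non-residue mod 113, so it certifies primality here).

#include <stdio.h>
#include <stdint.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;   /* GCC/Clang extension */
}

static uint64_t powmod(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    for (a %= m; e; e >>= 1) {
        if (e & 1) r = mulmod(r, a, m);
        a = mulmod(a, a, m);
    }
    return r;
}

int main(void) {
    uint64_t N = 7 * (1ULL << 4) + 1;      /* 113 = 7*2^4 + 1, with k = 7 < 2^4 */
    uint64_t a = 3;
    int prime = (powmod(a, (N - 1) / 2, N) == N - 1);   /* a^((N-1)/2) = -1 (mod N)? */
    printf("a = %llu certifies N = %llu prime: %d\n",
           (unsigned long long)a, (unsigned long long)N, prime);
    return 0;
}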
13.18.
Sundaram sieve
In mathematics, the sieve of Sundaram is a simple deterministic algorithm
for finding all prime numbers up to a specified integer. It was discovered
in 1934 by S. P. Sundaram, an Indian student from Sathyamangalam.
13.18.1. Contents





1 Algorithm
2 Correctness
3 Computational complexity
4 See also
5 References
13.18.2. Algorithm
Start with a list of the integers from 1 to n. From this list, remove all
numbers of the form i + j + 2ij where:
 i, j ∈ ℕ, 1 ≤ i ≤ j
 i + j + 2ij ≤ n
The remaining numbers are doubled and incremented by one, giving a list
of the odd prime numbers (i.e., all primes except 2) below 2n + 2.
The sieve of Sundaram sieves out the composite numbers just as the sieve of
Eratosthenes does, but even numbers are not considered; the work of
"crossing out" the multiples of 2 is done by the final
double-and-increment step. Whenever Eratosthenes' method would cross out
k different multiples of a prime 2i + 1, Sundaram's method crosses out
i + j(2i + 1) for 1 ≤ j ≤ ⌊k/2⌋.
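A short C sketch of this sieve; N is an illustrative bound we chose, the unmarked k are mapped to 2k + 1, and 2 is supplied separately.

#include <stdio.h>
#include <string.h>

#define N 50   /* yields the odd primes below 2*N + 2 = 102 */

int main(void) {
    char marked[N + 1];
    memset(marked, 0, sizeof marked);
    for (long i = 1; i + i + 2 * i * i <= N; i++)        /* 1 <= i <= j */
        for (long j = i; i + j + 2 * i * j <= N; j++)    /* i + j + 2ij <= N */
            marked[i + j + 2 * i * j] = 1;
    printf("2");                                         /* 2 is not produced by the sieve itself */
    for (long k = 1; k <= N; k++)
        if (!marked[k]) printf(" %ld", 2 * k + 1);
    printf("\n");
    return 0;
}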
13.18.3. Correctness
The final list of doubled-and-incremented integers contains only odd
integers; we must show that the set of odd integers excluded from the list
is exactly the set of composite odd integers.
An odd integer is excluded from the final list if and only if it is of
the form 2(i + j + 2ij) + 1, and we have
2(i + j + 2ij) + 1
= 2i + 2j + 4ij + 1
= (2i + 1)(2j + 1).
So, an odd integer is excluded from the final list if and only if it has
a factorization of the form (2i + 1)(2j + 1) — which is to say, if it
has a non-trivial odd factor. Since every odd composite number has a
non-trivial odd factor, we may safely say that an odd integer is excluded
from the final list if and only if it is composite. Therefore the list
must be composed of exactly the set of odd prime numbers less than or equal
to 2n + 1.
13.18.4. Computational complexity
The sieve of Sundaram finds the primes less than n in Θ(n log n) operations
using Θ(n) bits of memory.
13.18.5. See also
 Sieve of Eratosthenes
 Sieve of Atkin
 Sieve theory
13.18.6. References
13.19.
Trial division
http://calistamusic.dreab.com/p-Trial_division
13.20.
Ward’s primality test
Lucas sequence Un
Sylvester cyclotomic number Qn
13.21.
Wilson's Primality Test 威尔逊判别法
n is prime if and only if
(n − 1)! + 1 ≡ 0 (mod n).
Here a ≡ b (mod p) means that a − b is divisible by p.
However, this algorithm requires about O(n (log n)²) work, which is far too expensive.
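A direct, deliberately naive C sketch of the Wilson test for very small n, reducing the factorial modulo n as it is accumulated.

#include <stdio.h>
#include <stdint.h>

/* n > 1 is prime iff (n-1)! + 1 = 0 (mod n); only sensible for very small n. */
static int wilson_is_prime(uint64_t n) {
    if (n < 2) return 0;
    uint64_t f = 1;
    for (uint64_t k = 2; k < n; k++)
        f = f * k % n;                     /* (n-1)! reduced mod n as we go */
    return (f + 1) % n == 0;
}

int main(void) {
    for (uint64_t n = 2; n <= 30; n++)
        if (wilson_is_prime(n)) printf("%llu ", (unsigned long long)n);
    printf("\n");
    return 0;
}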
14. Randomized /Probabilistic/ Probable /
Provable / Primality Testing
14.1. Adleman–Huang algorithm
There is a complicated algorithm, due to
Adleman and Huang (1992), that gives a
certificate of primality in expected polynomial
time. Thus
PRIMES ∈ RP.
It follows from Adleman–Huang and
Rabin–Miller that
PRIMES ∈ RP ∩ co-RP = ZPP.
Recall that ZPP is very close to P. The
difference is that for ZPP the expected running
time is polynomial in the length of the input, but for P the
worst-case running time is polynomial in the length of the input.
In practice no one uses the Adleman–Huang
algorithm because ECPP is much faster.
Adleman–Huang is of theoretical interest
because we can prove that its expected running
time is polynomial in the length of the input.
14.2. Agrawal-Biswas algorithm or Agarwal-Biswas
Probabilistic Testing
14.3. AKS parallel sorting algorithm of Ajtai, Komlós and Szemerédi
14.4. ALI primality test
14.5. APR Test
14.6. APRT-CL (or APRCL)
14.7. The ARCL primality test 素性测试的 ARCL 算法
In 1980 the mathematicians Adleman, Rumely, Cohen and Lenstra developed a very
complex and highly technical method for deciding primality: testing the primality of a
20-digit number takes only about 10 seconds, a 50-digit number about 15 seconds, and a
100-digit number about 40 seconds. With trial division, testing the primality of a
50-digit number would take ten billion years!
14.8. Baillie–PSW
http://calistamusic.dreab.com/p-Baillie%E2%80%93PSW_primality_test
The Baillie–PSW primality test is a probabilistic primality testing heuristic: it determines
if a number is composite or a probable prime. The authors of the test offered $30 for the
discovery of a composite number that passed this test. As of 1994, the value was raised to $620,
and no pseudoprime was found up to 10^17; consequently this can be considered a sound
primality test on numbers below that upper bound.
The primality testing software PRIMO uses this algorithm to check for probable primes, and no
certification of this test has yet failed. The author, Marcel Martin, estimates from those results that
the test is accurate for numbers below 10000 digits. There is a heuristic argument (Pomerance
1984) suggesting that there may be infinitely many counterexamples.
The test
Optionally, perform trial division to check that the number isn't a multiple of a small prime number.
Perform a base 2 strong pseudoprimality test. If it fails, n is composite.
Find the first a in the sequence 5, −7, 9, −11, ... for which the Jacobi symbol (a/n) is −1.
Perform a Lucas pseudoprimality test with discriminant a on n. If this test does not fail, n is likely
a prime.
14.9. BPP algorithm
First, in all the algorithms we will assume that n is odd, not a prime
power, and not a perfect square. (These are fine to assume since we
can always begin by checking if the square-root, cube-root, etc. of n
is an integer, and if so saying "composite".)
Here's a BPP algorithm for primality testing. This one is probably the
easiest to analyze, and I think is due to Lehmer.
Repeat k times:
* Pick a in {1,...,n-1} at random.
* If gcd(a,n) != 1, then output COMPOSITE.
[this is actually unnecessary but conceptually helps]
* If a^{(n-1)/2} is not congruent to +1 or -1 (mod n),
then output COMPOSITE.
Now, if we ever got a "-1" above output "PROBABLY PRIME" else output
"PROBABLY COMPOSITE".
Theorem: this procedure has error probability at most 1/2^k.
Proof: if n is really prime, then in each iteration we get
a^{(n-1)/2} = -1 (mod n) with probability 1/2, so we get a -1 at least
once with probability 1 - 1/2^k. If n is composite, the proof of
success will follow from the lemma below with t = (n-1)/2.
Key Lemma: Let n be an odd composite, not a prime power, and let t <
n. If there exists a in Z_n^* such that a^t = -1 (mod n), then at
most half of the x's in Z_n^* have x^t = {-1,+1} (mod n). In fact, we
can weaken the condition to simply that there exists a in Z_n^* such
that a^t is not congruent to +1 (mod n).
Proof: Let S = {x in Z_n^* : x^t = +1 or -1 (mod n)}. S is a subgroup
of Z_n^* since it's closed under multiplication (xy)^t = (x^t)(y^t)
and inverse (x^{-1})^t = (x^t)^{-1}. So we just need to find some b
in Z_n^* not in S. (So, the "weakened condition" is equivalent to the
stronger condition because if a^t is not congruent to -1 (mod n), then
we're done.) Let n = q * r, where q and r are relatively prime
and greater than 1. Let b = (a,1), where we're using CRT notation:
b = a (mod q) and b = 1 (mod r). Note that b^t = (a^t, 1^t) = (-1,1).
Therefore, b is not in S since 1 = (1,1) and -1 = (-1, -1). QED.
========================================================================
An aside: it's clear that primality is in co-NP: if a number is
composite, there is a short witness: namely, a factor. Is primality
in NP? Yes. Here's an idea due to Pratt:
The certificate that n is prime will be a generator g of Z_n^*, a
prime factorization of n-1, and then (recursively) a proof
that each of those prime factors is really prime.
The idea is that if n is prime, then g^{n-1} = 1 (mod n) and g^t is
not congruent to 1 (mod n) for any 0 < t < n-1. On the other hand, if
n is composite, then phi(n) < n-1 so this cannot be the case. So, to
verify that n is prime, we just verify that g^{n-1} = 1 (mod n) and
then verify that g^{(n-1)/p} is not congruent to 1 for any prime
factor p of n, and then recursively verify that each of these factors
is prime (and, of course, verify that the prime factorization really
is a factorization of n). We can see that this certificate is not too
large since if you draw out the recursive "tree", the width is at most
log(n) (since the product of all primes on any given level is at most n)
and the depth is at most log(n) since the primes are dropping by at
least a factor or 2. QED.
14.10.
Baillie and Wagstaff Method
14.11.
Chen--Kao and Lewin--Vadhan tests
[Chen and Kao 1997;
Lewin and Vadhan 1998].
14.12.
Chinese Primality Test
14.13.
Chinese Remaindering
Primality and Identity Testing via Chinese Remaindering. Cited by 72.
Authors:
Manindra Agrawal, Professor, Department of Computer Science and Engineering, Indian Institute of Technology Kanpur
Somenath Biswas, Professor, Computer Science and Engineering, Indian Institute of Technology, Kanpur
Abstract:
We give a simple and new randomized primality testing algorithm by reducing primality testing
for number n to testing if a specific univariate identity over Zn holds.We also give new
randomized algorithms for testing if a multivariate polynomial, over a finite field or over rationals,
is identically zero. The first of these algorithms also works over Zn for any n. The running time of
the algorithms is polynomial in the size of arithmetic circuit representing the input polynomial
and the error parameter. These algorithms use fewer random bits and work for a larger class of
polynomials than all the previously known methods, for example, the Schwartz--Zippel test
[Schwartz 1980; Zippel 1979], Chen--Kao and Lewin--Vadhan tests [Chen and Kao 1997; Lewin
and Vadhan 1998].
14.14.
Cohen-Lenstra Method
14.15.
Colin Plumb primality test (Euler Criterion)
14.16.
Combination Algorithm
Recall that ECPP produces a certificate of
primality. Thus, using a combination of
Rabin–Miller and ECPP, we can get a
randomized algorithm that produces a
certificate to prove that its result (whether
"prime" or "composite") is correct.
All we have to do is run the Rabin–Miller and
ECPP algorithms in "parallel" until one of
them produces a certificate. The expected
running time is believed to be Õ((log n)⁴), although
we can't prove this.
To be guaranteed an expected polynomial
runtime, add a parallel thread for the
Adleman–Huang algorithm.
14.17.
Cyclotomic Probabilistic Primality Test
14.18.
ECPP Elliptic Curve Primality Proving
http://calistamusic.dreab.com/p-Elliptic_curve_primality_proving
Elliptic Curve Primality Proving (ECPP) is a method based on elliptic curves to prove the primality
of a number (see Elliptic curve primality testing). It is a general-purpose algorithm, meaning it
does not depend on the number being of a special form. ECPP is currently in practice the fastest
known algorithm for testing the primality of general numbers, but the worst-case execution time
is not known. ECPP heuristically runs in time
O((log n)^(5+ε))
for some ε > 0. This exponent may be decreased to 4 + ε for some versions by heuristic
arguments. ECPP works the same way as most other primality tests do, finding a group and
showing its size is such that p is prime. For ECPP the group is an elliptic curve over a finite set of
quadratic forms such that p − 1 is trivial to factor over the group.
ECPP generates an Atkin-Goldwasser-Kilian-Morain certificate of primality by recursion and then
attempts to verify the certificate. The step that takes the most CPU time is the certificate
generation, because factoring over a class field must be performed. The certificate can be verified
quickly, allowing a check of operation to take very little time.
In 2006 the largest prime that had been proved with ECPP was the 20,562-digit Mills' prime:
(((((((((2^3 + 3)^3 + 30)^3 + 6)^3 + 80)^3 + 12)^3 + 450)^3 + 894)^3 + 3636)^3 + 70756)^3 + 97220.
The distributed computation with software by François Morain started in September 2005 and
ended in June 2006. The cumulated time corresponds to one AMD Opteron 250 processor at 2.39
GHz for 2219 days (near 6 years).
As of 2011 the largest prime that has been proved with ECPP is the 26,643-digit LR prime. The
distributed computation with software by François Morain started in January 2011 and ended in
April 2011. The cumulated time corresponds to one processor for 2255 days (more than 6 years).
14.19.
Elliptic Curve Primality Testing
Elliptic curve
http://calistamusic.dreab.com/p-Elliptic_curve_primality_testing
http://www.peach.dreab.com/p-Elliptic_curve_primality_proving
Elliptic curve primality testing techniques are among the quickest and most widely used methods
in primality proving. It is an idea forwarded by Shafi Goldwasser and Joe Kilian in 1986 and
turned into an algorithm by A.O.L. Atkin the same year. The algorithm was altered and improved
by several collaborators subsequently, and notably by Atkin and Francois Morain, in 1993. The
concept of using elliptic curves in factorization had been developed by H.W. Lenstra in 1985, and
the implications for its use in primality testing (and proving) followed quickly.
The elliptic curve test proves primality (or compositeness) with a quickly verifiable certificate.
Elliptic curve primality proving provides an alternative to (among others) the Pocklington
primality test, which can be difficult to implement in practice. The elliptic curve
primality tests are based on criteria analogous to the Pocklington criterion, on which that
test is based, where the group (Z/NZ)* is replaced by E(Z/NZ), and E is a properly
chosen elliptic curve. We will now state a proposition on which to base our test, which is
analogous to the Pocklington criterion, and gives rise to the Goldwasser–Kilian–Atkin form of the
elliptic curve primality test.
Contents
14.19.1. Proposition
Let N be a positive integer, and E be the set which is defined by the equation y² = x³ + ax + b (mod
N). Consider E over Z/NZ, use the usual addition law on E, and write 0 for the neutral
element on E.
Let m be an integer. If there is a prime q which divides m, and is greater than (N^(1/4) + 1)², and
there exists a point P on E such that
(1) mP = 0,
(2) (m/q)P is defined and not equal to 0,
then N is prime.
14.19.2. Proof
If N is composite, then there exists a prime
that divides N. Define Ep as the elliptic
curve defined by the same equation as E but evaluated modulo p rather than modulo N. Define
mp as the order of the group Ep. By Hasse's theorem on elliptic curves we know
and thus gcd(q,mp) = 1 and there exists an integer u with the property that
Let Pp be the point P evaluated modulo p. Thus, on Ep we have
by (1), as mPp is calculated using the same method as mP, except modulo p rather than modulo N
(and
).
This contradicts (2), because if (m/q)P is defined and not equal to 0 (mod N), then the same
method calculated modulo p instead of modulo N will yield
14.19.3. Goldwasser–Kilian algorithm
From this proposition an algorithm can be constructed to prove an integer, N, is prime. This is
done as follows:
Choose three integers at random, a, x, y, and set b = y² − x³ − ax (mod N).
Now P = (x, y) is a point on E, where E is defined by y² = x³ + ax + b. Next we need an
algorithm to count the number of points on E. Applied to E, this algorithm (Koblitz and others
suggest Schoof's algorithm) produces a number m which is the number of points on curve E over
F_N, provided N is prime. Next we have a criterion for deciding whether our curve E is acceptable.
If we can write m in the form m = kq where k is a small integer and q a probable prime (it
has passed some previous probabilistic primality test, for example), then we do not discard E. If it
is not possible to write m in this form, we discard our curve and randomly select another triple (a,
x, y) to start over.
Assuming we find a curve which passes the criterion, proceed to calculate mP and kP. If at any
stage in the calculation we encounter an undefined expression (from calculating the multiples of
P or in our point counting algorithm), it gives us a non-trivial factor of N.
If mP ≠ 0 it is clear that N is not prime, because if N were prime then E would have order m,
and any element of E would become 0 on multiplication by m. If kP = 0 then we have hit a
dead-end and must start again with a different triple.
Now if mP = 0 and kP ≠ 0 then our previous proposition tells us that N is prime. However,
there is one possible problem, which is the primality of q. This must be verified, using the same
algorithm. So we have described a down-run procedure, where the primality of N can be proven
through the primality of q and indeed smaller 'probable primes' until we have reached certain
primes and are finished.
14.19.4. Problems with the algorithm
Atkin and Morain state "the problem with GK is that Schoof's algorithm seems almost impossible
to implement." It is very slow and cumbersome to count all of the points on E using Schoof's
algorithm, which is the preferred algorithm for the Goldwasser–Kilian algorithm. However, the
original algorithm by Schoof is not efficient enough to provide the number of points in a short
time. These comments have to be seen in their historical context, before the improvements by
Elkies and Atkin to Schoof's method.
A second problem Koblitz notes is the difficulty of finding the curve E whose number of points is
of the form kq, as above. There is no known theorem which guarantees we can find a suitable E in
polynomially many attempts. The distribution of primes on the Hasse interval
[N + 1 − 2√N, N + 1 + 2√N], which contains m, is not the same as the distribution of
primes in the group orders, counting curves with multiplicity. However, this is not a significant
problem in practice.
14.19.5. Atkin–Morain elliptic curve primality test (ECPP)
In a 1993 paper, Atkin and Morain described an algorithm ECPP which avoided the trouble of
relying on a cumbersome point counting algorithm (Schoof's). The algorithm still relies on the
proposition stated above, but rather than randomly generating elliptic curves and searching for a
proper m, their idea was to construct a curve E where the number of points is easy to compute.
Complex multiplication is key in the construction of the curve.
Now, given an N for which primality needs to be proven we need to find a suitable m and q, just
as in the Goldwasser-Kilian test, that will fulfill the proposition and prove the primality of N. (Of
course, a point P and the curve itself, E, must also be found.)
ECPP uses complex multiplication to construct the curve E, doing so in a way that allows for m
(the number of points on E) to be easily computed. We will now describe this method:
Utilization of complex multiplication requires a negative discriminant, D, such that D can be
written as the product of two elements D = π·π̄, or completely equivalently, we can write
the equation
4N = a² + |D|b²
for some a, b. If we can describe N in terms of either of these forms, we can create an elliptic
curve E on Z/NZ with complex multiplication (described in detail below), and the number
of points is given by
|E(Z/NZ)| = N + 1 − a.
For N to split into the two elements, we need that (D/N) = 1 (where (D/N) denotes
the Legendre symbol). This is a necessary condition, and we achieve sufficiency if the class
number h(D) of the order in Q(√D) is 1. This happens for only 13 values of D, which are the
elements of {−3, −4, −7, −8, −11, −12, −16, −19, −27, −28, −43, −67, −163}.
14.19.6. The test
Pick discriminants D in sequence of increasing h(D). For each D check if (D/N) = 1 and
whether 4N can be written as
4N = a² + |D|b².
This part can be verified using Cornacchia's algorithm. Once acceptable D and a have been
discovered, calculate m = N + 1 − a. Now if m has a prime factor q of size
q > (N^(1/4) + 1)², proceed as in the proposition above, with the curve E constructed by
complex multiplication as described next.
14.19.7. Complex multiplication method
For completeness, we will provide an overview of complex multiplication,
the way in which an elliptic curve can be created, given our D (which can
be written as a product of two elements).
Assume first that
and
(these cases are much more easily done).
It is necessary to calculate the elliptic j-invariants of the h(D) classes
of the order of discriminant D as complex numbers. There are several
formulas to calculate these.
Next create the monic polynomial HD(X), which has roots corresponding to
the h(D) values. Note, that HD(X) is the class polynomial. From complex
multiplication theory, we know that HD(X) has integer coefficients, which
allows us to estimate these coefficients accurately enough to discover
their true values.
Now, if N is prime, CM tells us that HD(X) splits modulo N into a product
of h(D) linear factors, based on the fact that D was chosen so that N splits
as the product of two elements. Now if j is one of the h(D) roots modulo
N we can define E as:
c is any quadratic nonresidue mod N, and r is either 0 or 1.
Given a root j there are only two possible nonisomorphic choices of E,
one for each choice of r. We have the cardinality of these curves as
N + 1 − a or N + 1 + a.
14.19.8. Discussion
Just as with the Goldwasser–Killian test, this one leads to a down-run
procedure. Again, the culprit is q. Once we find a q that works, we must
check it to be prime, so in fact we are doing the whole test now for q.
Then again we may have to perform the test for factors of q. This leads
to a nested certificate where at each level we have an elliptic curve E,
an m and the prime in doubt, q.
14.19.9. Example of Atkin–Morain ECPP
We construct an example to prove that N = 167 is prime using the
Atkin–Morain ECPP test. First proceed through the set of 13 possible
discriminants, testing whether the Legendre symbol (D/N) = 1, and if
4N can be written as 4N = a² + |D|b².
For our example D = −43 is chosen. This is because (D/N) = (−43/167)
= 1 and also, using Cornacchia's algorithm, we know that
4·167 = 668 = 25² + 43·1², and thus a = 25 and b = 1.
The next step is to calculate m. This is easily done as m = N + 1 − a, which
yields m = 167 + 1 − 25 = 143. Next we need to find a probable prime divisor
of m, which was referred to as q. It must satisfy the condition that
q > (N^(1/4) + 1)².
Now in this case, m = 143 = 11·13. So unfortunately we cannot choose 11
or 13 as our q, for neither satisfies the necessary inequality. We are
saved, however, by an analogous proposition to that which we stated before
the Goldwasser–Kilian algorithm, which comes from a paper by Morain. It
states that, given our m, we look for an s which divides m, with
s > (N^(1/4) + 1)², but which is not necessarily prime, and check whether, for each p_i which
divides s,
(s/p_i)P ≠ 0
for some point P on our yet to be constructed curve.
If s satisfies the inequality, and its prime factors satisfy the above,
then N is prime.
So in our case, we choose s = m = 143. Thus our possible p_i's are 11 and
13. First, it is clear that 143 > (167^(1/4) + 1)², and so we need only check
the values of
(143/11)P = 13P and (143/13)P = 11P.
But before we can do this, we must construct our curve, and choose a point
P. In order to construct the curve, we make use of complex multiplication.
In our case we compute the J-invariant
Next we compute
and we
know our elliptic curve is of the form:
y2 = x3 + 3kc2x + 2kc3,
where k is as described previously and c a non-square in
. So we can
begin with
,
which yields
E: y2 = x3 + 140x + 149(mod 167)
Now, utilizing the point P = (6, 6) on E it can be verified that 143P = 0.
It is simple to check that 13P = 13(6, 6) = (12, 65) and 11P = (140, 147), and
so, by Morain's proposition, N is prime.
14.19.10. Complexity and running times
Goldwasser and Kilian's elliptic curve primality proving algorithm
terminates in expected polynomial time for at least
of prime inputs.
14.19.11. Conjecture
Let π(x) be the number of primes smaller than x
for sufficiently large x.
If one accepts this conjecture then the Goldwasser–Kilian algorithm
terminates in expected polynomial time for every input. Also, if our N
is of length k, then the algorithm creates a certificate of size O(k²)
that can be verified in O(k⁴).
Now consider another conjecture, which will give us a bound on the total
time of the algorithm.
14.19.12. Conjecture 2
Suppose there exist positive constants c1 and c2 such that the amount of
primes in the interval
is larger than
Then the Goldwasser Kilian algorithm proves the primality of N in an
expected time of
For the Atkin–Morain algorithm, the running time stated is
O((logN)6 + ε) for some ε > 0
14.19.13. Primes of special form
For some forms of numbers, it is possible to find 'short-cuts' to a
primality proof. This is the case for the Mersenne numbers. In fact, due
to their special structure, which allows for easier verification of
primality, the largest known prime number is a Mersenne number. There has
been a method in use for some time to verify primality of Mersenne numbers,
known as the Lucas–Lehmer test. This test does not rely on elliptic curves.
However we present a result where numbers of the form N = 2^k·n − 1, with n odd, can be
proven prime (or composite) using elliptic curves. Of course this will also provide a method for
proving primality of Mersenne numbers, which correspond to the case where n = 1. It should
be noted that there is a method in place for testing this form of number
without elliptic curves (with a limitation on the size of k), known as the
Lucas–Lehmer–Riesel test. The following method is drawn from the paper
Primality Test for 2^k·n − 1 using Elliptic Curves, by Yu Tsumura.
14.19.14. Group structure of E(FN)
We take E as our elliptic curve, where E is of the form y² = x³ − mx for
,
, where
is prime, and
odd.
14.19.15. Theorem 1
#E
14.19.16. Theorem 2
E
or
E
Depending on whether or not m is a quadratic residue modulo p.
14.19.17. Theorem 3
Let
be prime, E, k, n, m as above. Take Q = (x,y) on E,
x a quadratic nonresidue modulo p.
Then the order of Q is divisible by 2k in the cyclic group
.
First we will present the case where n is relatively small with respect
to 2k, and this will require one more theorem.
14.19.18. Theorem 4
Choose a λ > 1. E, k, n, m are specified as above with the added
restrictions that
and
p is a prime if and only if there exists a Q = (x,y) which is on E, such
that the
gcd(Si,p) = 1 for i = 1, 2, ...,k
−
1 and
where Si is a sequence with initial value S0 = x
14.19.19. The algorithm
We provide the following algorithm, which relies mainly on Theorems 3 and
4. To verify the primality of a given number N, perform the following
steps:
(1) Chose
, and find y such that
such that
Take
2
3
Then Q' = (x,y) is on E: y = x − mx where
Calculate Q = mQ'. If
then N is composite, otherwise proceed to
(2).
(2) Set Si as the sequence with initial value Q. Calculate Si for i = 1,2,...,
k − 1
If gcd(Si,N) > 1 for an i, where
then N is composite.
Otherwise, proceed to (3).
(3) If
This completes the test.
then N is prime. Otherwise, N is composite.
14.19.20. Justification of the algorithm
In (1), and elliptic curve, E is picked, along with a point Q on E, such
that the x-coordinate of Q is a quadratic nonresidue. We can say
Thus, if N is prime, Q' has order divisible by 2k, by Theorem 3,
and therefore the order of Q' is 2kd d | n.
This means Q = nQ' has order 2k. Therefore, if (1) concludes that N is
composite, it truly is composite. (2) and (3) check if Q has order 2k.
Thus, if (2) or (3) conclude N is composite, it is composite.
Now, if the algorithm concludes that N is prime, then that means S1
satisfies the condition of Theorem 4, and so N is truly prime.
There is an algorithm as well for when n is large, however for this we
refer to the aforementioned article.
14.19.21. References
14.20. Demytko
14.21. Euler Test
14.22. Fermat Primality Test
http://www.doc.ic.ac.uk/~cd04/430notes/fermat.jar
http://kobep525.rsjp.net/~math/notes/note02.html
http://calistamusic.dreab.com/p-Fermat_primality_test
Fermat's little theorem:
If n is prime and n does not divide a, then
a^(n−1) ≡ 1 (mod n).
The Fermat primality test is a probabilistic test to determine if a number
is a probable prime.
14.22.2. Concept
Fermat's little theorem states that if p is prime and 1 ≤ a < p, then
a^(p−1) ≡ 1 (mod p).
If we want to test whether p is prime, we can pick random a's in the interval
1 ≤ a < p and see if the equality holds. If the equality does not hold for a value
of a, then p is composite. If the equality does hold for many values of
a, then we can say that p is probably prime.
It might happen in our tests that we never pick a value of a for which
the equality fails. Any a such that
a^(n−1) ≡ 1 (mod n)
when n is composite is known as a Fermat liar; in this case
n is called a Fermat pseudoprime to base a.
If we do pick an a such that a^(n−1) ≢ 1 (mod n),
then a is known as a Fermat witness for the compositeness of n.
14.22.3. Example
Suppose we wish to determine if n = 221 is prime. Randomly pick 1 ≤
a < 221, say a = 38. Check the above equality:
38^220 ≡ 1 (mod 221).
Either 221 is prime, or 38 is a Fermat liar, so we take another a, say
a = 26:
26^220 ≡ 169 ≢ 1 (mod 221).
So 221 is composite and 38 was indeed a Fermat liar.
14.22.4. Algorithm and running time
The algorithm can be written as follows:
Inputs: n: a value to test for primality; k: a parameter that determines
the number of times to test for primality
Output: composite if n is composite, otherwise probably prime
repeat k times:
pick a randomly in the range [1, n − 1]
if a^(n − 1) ≢ 1 (mod n), then return composite
return probably prime
Using fast algorithms for modular exponentiation, the running time of this
algorithm is O(k · log^2 n · log log n · log log log n), where k is the
number of times we test a random a, and n is the value we want to test
for primality.
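As a concrete illustration, here is a minimal C++ sketch of the Fermat test for 64-bit inputs. The helper names mulmod/powmod and the use of unsigned __int128 are my own choices rather than anything from the quoted sources; the sketch is only meant to mirror the pseudocode above.

#include <cstdint>
#include <random>

// Modular multiplication via a 128-bit intermediate to avoid 64-bit overflow.
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;
}

// Modular exponentiation by repeated squaring: computes a^e mod m.
static uint64_t powmod(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1 % m;
    for (a %= m; e; e >>= 1) {
        if (e & 1) r = mulmod(r, a, m);
        a = mulmod(a, a, m);
    }
    return r;
}

// Fermat test: returns false if n is certainly composite,
// true if n is a probable prime after k random bases.
bool fermat_probable_prime(uint64_t n, int k = 20) {
    if (n < 4) return n == 2 || n == 3;
    std::mt19937_64 rng(std::random_device{}());
    std::uniform_int_distribution<uint64_t> pick(2, n - 2);
    for (int i = 0; i < k; ++i) {
        uint64_t a = pick(rng);
        if (powmod(a, n - 1, n) != 1) return false;  // a is a Fermat witness
    }
    return true;  // probably prime (or n is a Carmichael number)
}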
14.22.5. Flaw
There are infinitely many values of n (known as Carmichael numbers) for
which all values of a for which gcd(a,n) = 1 are Fermat liars. While
Carmichael numbers are substantially rarer than prime numbers, there are
enough of them that Fermat's primality test is often not used in favor
of other primality tests such as Miller-Rabin and Solovay-Strassen.
In general, if n is a composite number that is not a Carmichael number, then at least half of all
a coprime to n are Fermat witnesses. For a proof of this, let a be a Fermat witness and
a1, a2, ..., as be Fermat liars. Then
(a·ai)^(n−1) ≡ a^(n−1) · ai^(n−1) ≡ a^(n−1) ≢ 1 (mod n),
and so all a·ai for i = 1, 2, ..., s are Fermat witnesses.
14.22.6. Applications
The encryption program PGP uses this primality test in its algorithms.
The chance of PGP generating a Carmichael number is less than 1 in 10^50,
which is more than adequate for practical purposes.
14.23. Fermat-Euler
14.24. Frobenius pseudoprimality test
http://www.peach.dreab.com/p-Frobenius_pseudoprime
14.25. Goldwasser-Kilian Algorithm
14.26. Gordon's algorithm
14.27. Jacobi Sum Primality Test
14.28. Konyagin-Pomerance n-1 Test
14.29. Lehmann
Another, simpler test was studied independently by Lehmann. Its test procedure is as follows:
(1) Choose a random number a less than p.
(2) Compute a^((p-1)/2) mod p.
(3) If a^((p-1)/2) ≢ 1 and ≢ −1 (mod p), then p is certainly not prime.
(4) If a^((p-1)/2) ≡ 1 or −1 (mod p), then the probability that p is not prime is at most 50%.
As before, repeat t times; the risk of error in declaring p probably prime is then at most 1/2^t.
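A minimal C++ sketch of the Lehmann-style check described above, for 64-bit inputs (the helper names mulmod64/powmod64 and the use of unsigned __int128 are my own choices, not from the source text). Note that Lehmann's original formulation additionally requires the value −1 to actually occur for at least one base a; the sketch follows the simpler description given above.

#include <cstdint>
#include <random>

static uint64_t mulmod64(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;          // 128-bit intermediate product
}
static uint64_t powmod64(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1 % m;
    for (a %= m; e; e >>= 1) {                    // square-and-multiply
        if (e & 1) r = mulmod64(r, a, m);
        a = mulmod64(a, a, m);
    }
    return r;
}

// Lehmann-style test as described above: p passes one round if
// a^((p-1)/2) is congruent to +1 or -1 modulo p.
bool lehmann_probable_prime(uint64_t p, int t = 20) {
    if (p < 4) return p == 2 || p == 3;
    if (p % 2 == 0) return false;
    std::mt19937_64 rng(std::random_device{}());
    std::uniform_int_distribution<uint64_t> pick(2, p - 2);
    for (int i = 0; i < t; ++i) {
        uint64_t r = powmod64(pick(rng), (p - 1) / 2, p);
        if (r != 1 && r != p - 1) return false;   // certainly composite
    }
    return true;                                  // probably prime
}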
14.30. Maurer's algorithm
14.31. Miller-Rabin / Rabin-Miller Primality Testing Algorithm
Miller-Rabin Compositeness Test
http://en.literateprograms.org/Category:Miller-Rabin_primality_test
http://www.cryptomathic.com/Default.aspx?ID=479
http://calistamusic.dreab.com/p-Miller%E2%80%93Rabin_primality_test
Verification of the Miller–Rabin probabilistic primality test
This primality test is also called the Selfridge-Miller-Rabin test or the strong prime test. It is a
refinement of the Fermat test which works very well in practice.
14.31.1. Fast primality determination: the Miller-Rabin algorithm
Now for Miller-Rabin. This is an easy and widely used simple algorithm, based in part on Gary Miller's method and
further developed by Michael Rabin. In fact, it is a simplified version of the algorithm recommended in NIST's DSS proposal. Many
people call it a "test",
because a number that passes the Miller-Rabin test is not necessarily prime; a composite passes a single round with probability at most
1/4. In any case its underlying idea is reminiscent of Fermat's little theorem.
Fermat's little theorem: if p is a prime and 0 < a < p, then a^(p-1) ≡ 1 (mod p).
For example, 67 is a prime, and 2^66 mod 67 = 1.
Using Fermat's little theorem, for a given integer n one can design a primality decision algorithm: compute d = 2^(n-1) mod
n to judge the primality of n. When d is not equal to 1, n is certainly not prime; when d equals 1, n is very likely
prime. But there also exist composite n with 2^(n-1) ≡ 1 (mod n); the smallest composite satisfying this condition is n = 341. To
improve the accuracy of the test, we can randomly choose the base a.
Fermat's little theorem is, after all, only a necessary condition for primality. Integers n satisfying it are not necessarily
prime; some composites also satisfy the condition. Such composites are called Carmichael numbers, and the first three Carmichael
numbers are 561, 1105 and 1729. Carmichael numbers are very rare: among the integers from 1 to 100,000,000 there are only 255
Carmichael numbers. Using the quadratic probing theorem below, the primality decision algorithm above can be further improved, so as to
avoid treating Carmichael numbers as primes.
Quadratic probing theorem:
If p is a prime and 0 < x < p, then the solutions of the equation x^2 ≡ 1 (mod p) are x = 1 and x = p − 1.
The above is transcribed from Wang Xiaodong's Computer Algorithm Analysis and Design; of the articles online, this explanation via
Fermat's little theorem was the one I could follow. The concrete steps of the algorithm are as follows, where n is the number to be tested:
0. First compute m and j such that n − 1 = m·2^j, where m is a positive odd number and j is a non-negative integer.
1. Randomly pick a b with 2 ≤ b < n.
2. Compute v = b^m mod n.
3. If v == 1, the test passes; return.
4. Set i = 1.
5. If v == n − 1, the test passes; return.
6. If i == j, n is not prime; stop.
7. Set v = v^2 mod n and i = i + 1.
8. Go back to step 5.
Finally, note that the verdict of a single run of the test is of course not satisfactory; run the random test several times, so that the
probability of a wrong decision becomes (1/4)^loop. In practice one usually also sieves first with a small table of primes to improve efficiency.
14.31.2. Miller-Rabin Algorithm
We now describe one final RP algorithm for compositeness. This is along the
lines of using a^{n-1} =? 1 (mod n) as a test.
* Let n - 1 = 2^r * B, where B is odd.
* pick a in {1,...,n-1} at random.
* Compute a^B, a^{2B}, ..., a^{n-1} (mod n).
* If a^{n-1} != 1 (mod n), then output COMPOSITE
* If we found a non {-1,+1} root of 1 in the above list, then
output COMPOSITE.
* else output POSSIBLY PRIME.
Note that we only have to worry about Carmichael numbers (composite n
such that all a in Z_n^* satisfy a^{n-1} = 1 (mod n)). For all other
composite numbers, at least half of the a's don't have this property
(since {a : a^{n-1} = 1 (mod n)} is a group).
Here's how we can argue for Carmichael numbers:
Suppose there exists a in Z_n^* satisfying a^{(n-1)/2} != 1 (mod n).
Then our "Key Lemma" implies that with probability at least 1/2, we
will find a non {-1,+1} root of 1 in computing a^{(n-1)/2} and output
COMPOSITE. If no such a exists, but there exists b in Z_n^* such that
b^{(n-1)/4} != 1 (mod n) then similarly we're OK (assuming n-1 is
divisible by 4). Going down the line from right-to-left, we can now
assume w.l.o.g. that all a in Z_n^* satisfy a^B = 1 (mod n).
We now show a contradiction: let n=p1^e1 * r, where e1 is odd and p1
and r are relatively prime. Let g be a generator mod p1^e1 (we didn't
prove it, but the prime power groups are cyclic too; actually,
Carmichael numbers must be products of distinct primes, but we didn't
prove that either). Let a = (g,1) using CRT notation. If a^B = 1 (mod n)
then g^B = 1 (mod p1^e1), which means that B must be a multiple of
phi(p1^e1) since g is a generator. But B is odd and phi(p1^e1) is
even, a contradiction. QED
14.31.3. Miller-Rabin primality test [ZJUT1517]
2010-03-20 21:04
A problem from today's school contest, [ZJUT1517]: at a glance it is large-prime verification, so naturally I thought of
the RP algorithm with error rate (1/4)^s, the Miller-Rabin primality test. However, the templates found online all got WA; only the few that had passed PKU1811
did not. It later turned out that the long long multiplication used in fast modular exponentiation overflowed the long long
range; after adding a (modular) long long multiplication routine it worked.
Miller-Rabin as used here: if a^(n-1) ≡ 1 (mod n) (for arbitrary positive integers a < n), then n can be regarded as approximately prime.
Test several bases; the more rounds, the higher the confidence. For example, with the bases 2, 3, 5, 7, 11, only one case within 2^31 fails to be caught,
and within 2.5·10^13 the only composite that slips through is 3215031751.
The algorithm is based on Fermat's little theorem: let p be a prime and a coprime to p; then a^(p-1) ≡ 1 (mod p).
For n > 1 and 1 ≤ a ≤ n − 1, if a^(n-1) ≢ 1 (mod n), then n must be composite.
However, when n is composite the congruence may still hold; that is, a^(n-1) ≡ 1 (mod n) does not imply that n is prime.
For example, with a = 2 and n = 341 the congruence a^(n-1) ≡ 1 (mod n) holds, yet 341 is composite; 341 is then called a pseudoprime to base 2.
In other words, Fermat's little theorem is only a necessary condition; a number satisfying it may still be composite.
Template:
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <ctime>
using namespace std;
inline long long mul(long long a,long long b,long long m)//a * b % m
{
long long ret = 0;
while (b > 0)
{
if (b & 1)
ret = (ret + a) % m;
b >>= 1;
a = (a << 1) % m;
}
return ret;
}
inline long long mod(long long a,long long b,long long m)//a^b % m
{
long long res,t;
res = 1;
t = a;
while (b > 0)
{
if (b & 1)
res = mul(res,t,m);//res = res * t % m;
t = mul(t,t,m);//t = t * t % m;
b >>= 1;
}
return res;
}
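// Note: despite the name, the check below uses only the Fermat congruence
// a^(n-1) ≡ 1 (mod n) with random bases; it omits the squaring step of the
// full Miller-Rabin test, so Carmichael numbers can slip through.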
inline bool Miller_Rabbin(long long n)
{
long long a;
for (int i = 0;i < 5;i++)
{
a = (long long)rand()*rand()*rand() % (n-2) +2;
if (mod(a,n-1,n) != 1)
return false;
}
return true;
}
int main()
{
long long n;
srand((unsigned)time(NULL));
while (scanf("%I64d",&n) != EOF)
{
if (n == 2 || Miller_Rabbin(n))
printf("Yes\n");
else
printf("No\n");
}
return 0;
}
14.31.4. Rabin-Miller
This is an easy and widely used simple algorithm, based in part on Gary Miller's method and further
developed by Michael Rabin. In fact, it is a simplified version of the algorithm recommended in NIST's DSS proposal.
First choose a random number p to be tested.
Compute b,
where b is the number of times 2 divides p − 1.
Then compute m,
such that p = 1 + (2^b)·m.
(1) Choose a random number a less than p.
(2) Set j = 0 and z = a^m mod p.
(3) If z = 1 or z = p − 1, then p passes the test and may be prime.
(4) If j > 0 and z = 1, then p is not prime.
(5) Set j = j + 1. If j < b and z ≠ p − 1, set z = z^2 mod p, then go back to (4). If z = p − 1, then p passes
the test and may be prime.
(6) If j = b and z ≠ p − 1, p is not prime.
This test is faster than the previous one. The probability that the number a serves as a witness is 75%. This means that with t iterations,
the probability that it produces a false prime is at most 1/4^t. In practice, for most random numbers, a is a witness with
close to 99.99% certainty.
Practical considerations:
In practice, generating primes this way is fast.
(1) Generate an n-bit random number p.
(2) Set the highest and the lowest bit to 1 (the high bit guarantees the bit length, the low bit guarantees that p is odd).
(3) Check that p is not divisible by any small prime, such as 3, 5, 7, 11, and so on. An effective method is to test all primes
below 2000; using a wheel method is even faster.
(4) Run the Rabin-Miller test for some random a. If p passes, generate another random a and test again.
Choose small values of a for speed. Perform 5 Rabin-Miller tests; if p fails any of them, generate a new p
and test again.
Implemented on a Sparc II: 2.8 seconds to generate a 256-bit prime;
24.0 seconds to generate a 512-bit prime;
2 minutes to generate a 768-bit prime;
5.1 minutes to generate a 1024-bit prime.
The Miller–Rabin primality test or Rabin–Miller primality test is a
primality test: an algorithm which determines whether a given number is
prime, similar to the Fermat primality test and the Solovay–Strassen
primality test. Its original version, due to Gary L. Miller, is
deterministic, but the determinism relies on the unproven generalized
Riemann hypothesis; Michael O. Rabin modified it to obtain an
unconditional probabilistic algorithm.
14.31.6. Concepts
Just like the Fermat and Solovay–Strassen tests, the Miller–Rabin test
relies on an equality or set of equalities that hold true for prime values,
then checks whether or not they hold for a number that we want to test
for primality.
First, a lemma about square roots of unity in the finite field Z/pZ,
where p is prime and p > 2. Certainly 1 and −1 always yield 1 when squared
mod p; call these trivial square roots of 1. There are no nontrivial square
roots of 1 mod p (a special case of the result that, in a field, a polynomial
has no more zeroes than its degree). To show this, suppose that x is a
square root of 1 mod p. Then x^2 ≡ 1 (mod p), so (x − 1)(x + 1) ≡ 0 (mod p).
In other words, p divides the product (x − 1)(x + 1). It thus divides one
of the factors and it follows that x is either congruent to 1 or −1 mod
p.
Now, let n be an odd prime. Then n − 1 is even and we can write it as 2^s·d,
where s and d are positive integers and d is odd. For each a in (Z/nZ)*, either
a^d ≡ 1 (mod n)
or
a^(2^r·d) ≡ −1 (mod n) for some 0 ≤ r ≤ s − 1.
To show that one of these must be true, recall Fermat's little theorem:
a^(n−1) ≡ 1 (mod n).
By the lemma above, if we keep taking square roots of a^(n−1), we will get
either 1 or −1. If we get −1 then the second equality holds and we are
done. If we never get −1, then when we have taken out every power of 2,
we are left with the first equality.
The Miller–Rabin primality test is based on the contrapositive of the
above claim. That is, if we can find an a such that
a^d ≢ 1 (mod n)
and
a^(2^r·d) ≢ −1 (mod n) for all 0 ≤ r ≤ s − 1,
(sometimes misleadingly called a strong witness, although it is a certain
proof of this fact). Otherwise a is called a strong liar, and n is a strong
probable prime to base a. The term "strong liar" refers to the case where
n is composite but nevertheless the equations hold as they would for a
prime.
For every odd composite n, there are many witnesses a. However, no simple
way of generating such an a is known. The solution is to make the test
probabilistic: we choose a nonzero a in Z/nZ randomly, and check whether
or not it is a witness for the compositeness of n. If n is composite, most
of the choices for a will be witnesses, and the test will detect n as
composite with high probability. There is, nevertheless, a small chance
that we are unlucky and hit an a which is a strong liar for n. We may reduce
the probability of such error by repeating the test for several
independently chosen a.
14.31.7. Example
Suppose we wish to determine if n = 221 is prime. We write n − 1 =
220 as 22·55, so that we have s = 2 and d = 55. We randomly select
a number a such that a < n, say a = 174. We proceed to compute:


a20·d mod n = 17455 mod 221 = 47 ≠ 1, n − 1
a21·d mod n = 174110 mod 221 = 220 = n − 1.
Since 220 ≡ −1 mod n, either 221 is prime, or 174 is a strong liar for
221. We try another random a, this time choosing a = 137:
a^(2^0·d) mod n = 137^55 mod 221 = 188 ≠ 1, n − 1
a^(2^1·d) mod n = 137^110 mod 221 = 205 ≠ n − 1.
Hence 137 is a witness for the compositeness of 221, and 174 was in fact
a strong liar. Note that this tells us nothing about the factors of 221
(which are 13 and 17).
14.31.8. Algorithm and running time
The algorithm can be written in pseudocode as follows:
Input: n > 3, an odd integer to be tested for primality;
Input: k, a parameter that determines the accuracy of the test
Output: composite if n is composite, otherwise probably prime
write n − 1 as 2^s·d with d odd by factoring powers of 2 from n − 1
LOOP: repeat k times:
   pick a random integer a in the range [2, n − 2]
   x ← a^d mod n
   if x = 1 or x = n − 1 then do next LOOP
   for r = 1 .. s − 1
      x ← x^2 mod n
      if x = 1 then return composite
      if x = n − 1 then do next LOOP
   return composite
return probably prime
Using modular exponentiation by repeated squaring, the running time of
this algorithm is O(k·log^3 n), where k is the number of different values
of a we test; thus this is an efficient, polynomial-time algorithm.
FFT-based multiplication can push the running time down to
O(k·log^2 n·log log n·log log log n) = Õ(k·log^2 n).
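The pseudocode above translates directly into code. The following minimal C++ sketch for 64-bit inputs is my own illustration (the helper names and the unsigned __int128 trick are assumptions, not taken from the quoted sources); it implements the probabilistic version with random bases.

#include <cstdint>
#include <random>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;          // avoid 64-bit overflow
}
static uint64_t powmod(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    for (a %= m; e; e >>= 1) {
        if (e & 1) r = mulmod(r, a, m);
        a = mulmod(a, a, m);
    }
    return r;
}

// One Miller-Rabin round with base a; returns true if n passes (a is not a witness).
static bool mr_round(uint64_t n, uint64_t a, uint64_t d, int s) {
    uint64_t x = powmod(a, d, n);
    if (x == 1 || x == n - 1) return true;
    for (int r = 1; r < s; ++r) {
        x = mulmod(x, x, n);
        if (x == n - 1) return true;
    }
    return false;                                  // a witnesses compositeness
}

// Probabilistic Miller-Rabin: "probably prime" with error probability at most 4^(-k).
bool miller_rabin(uint64_t n, int k = 20) {
    if (n < 4) return n == 2 || n == 3;
    if (n % 2 == 0) return false;
    uint64_t d = n - 1;
    int s = 0;
    while (d % 2 == 0) { d /= 2; ++s; }            // n - 1 = 2^s * d, d odd
    std::mt19937_64 rng(std::random_device{}());
    std::uniform_int_distribution<uint64_t> pick(2, n - 2);
    for (int i = 0; i < k; ++i)
        if (!mr_round(n, pick(rng), d, s)) return false;
    return true;
}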
In the case that the algorithm returns "composite" because x = 1, it has
also discovered that 2^r·d is (an odd multiple of) the order of a — a fact
which can (as in Shor's algorithm) be used to factorize n, since n then
divides (a^(2^(r−1)·d) − 1)(a^(2^(r−1)·d) + 1) but not either factor by
itself. The reason Miller–Rabin does not yield a probabilistic
factorization algorithm is that if a^(n−1) ≢ 1 (mod n) (i.e., n is not
a pseudoprime to base a) then no such information is obtained about the
period of a, and the second "return composite" is taken.
14.31.9. Accuracy of the test
The more bases a we test, the better the accuracy of the test. It can be
shown that for any odd composite n, at least ¾ of the bases a are witnesses
for the compositeness of n. The Miller–Rabin test is strictly stronger
than the Solovay–Strassen primality test in the sense that for every
composite n, the set of strong liars for n is a subset of the set of Euler
liars for n, and for many n, the subset is proper. If n is composite then
the Miller–Rabin primality test declares n probably prime with a
probability at most 4^(−k). On the other hand, the Solovay–Strassen primality
test declares n probably prime with a probability at most 2^(−k).
On average the probability that a composite number is declared probably
prime is significantly smaller than 4^(−k). Damgård, Landrock and Pomerance
compute some explicit bounds. Such bounds can, for example, be used to
generate primes; however, they should not be used to verify primes with
unknown origin, since in cryptographic applications an adversary might
try to send you a pseudoprime in a place where a prime number is required.
In such cases, only the error bound of 4^(−k) can be relied upon.
14.31.10. Deterministic variants of the test
The Miller–Rabin algorithm can be made deterministic by trying all
possible a below a certain limit. The problem in general is to set the
limit so that the test is still reliable.
If the tested number n is composite, the strong liars a coprime to n are
contained in a proper subgroup of the group (Z/nZ)*, which means that
if we test all a from a set which generates (Z/nZ)*, one of them must
be a witness for the compositeness of n. Assuming the truth of the
generalized Riemann hypothesis (GRH), it is known that the group is
generated by its elements smaller than O((log n)^2), which was already
noted by Miller. The constant involved in the Big O notation was reduced
to 2 by Eric Bach. This leads to the following conditional primality
testing algorithm:
Input: n > 1, an odd integer to test for primality.
Output: composite if n is composite, otherwise prime
write n − 1 as 2^s·d by factoring powers of 2 from n − 1
repeat for all a in the range [2, min(n − 2, ⌊2(ln n)^2⌋)]:
   if a^d ≢ 1 (mod n) and a^(2^r·d) ≢ −1 (mod n) for all 0 ≤ r ≤ s − 1,
   then return composite
return prime
The running time of the algorithm is Õ((log n)^4). The full power of the
generalized Riemann hypothesis is not needed to ensure the correctness
of the test: as we deal with subgroups of even index, it suffices to assume
the validity of GRH for quadratic Dirichlet characters.
This algorithm is not used in practice, as it is much slower than the
randomized version of the Miller-Rabin test. For theoretical purposes,
it was superseded by the AKS primality test, which does not rely on
unproven assumptions.
When the number n to be tested is small, trying all a < 2(ln n)^2 is not
necessary, as much smaller sets of potential witnesses are known to
suffice. For example, Pomerance, Selfridge and Wagstaff and Jaeschke have
verified that
if n < 1,373,653, it is enough to test a = 2 and 3;
if n < 9,080,191, it is enough to test a = 31 and 73;
if n < 4,759,123,141, it is enough to test a = 2, 7, and 61;
if n < 2,152,302,898,747, it is enough to test a = 2, 3, 5, 7, and 11;
if n < 3,474,749,660,383, it is enough to test a = 2, 3, 5, 7, 11, and 13;
if n < 341,550,071,728,321, it is enough to test a = 2, 3, 5, 7, 11, 13, and 17.
See The Primes Page, Zhang and Tang, SPRP records and sequence A014233
in OEIS for other criteria of this sort. These results give very fast
deterministic primality tests for numbers in the appropriate range,
without any assumptions.
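Combining the fixed witness sets above with the Miller-Rabin round gives a fully deterministic test for inputs in the quoted ranges. The following C++ sketch is my own illustration (helper names and the unsigned __int128 trick are assumptions); it uses the base set {2, 3, 5, 7, 11, 13, 17}, which by the last entry above suffices for n < 341,550,071,728,321.

#include <cstdint>

static uint64_t mulmod_u64(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;
}
static uint64_t powmod_u64(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    for (a %= m; e; e >>= 1) {
        if (e & 1) r = mulmod_u64(r, a, m);
        a = mulmod_u64(a, a, m);
    }
    return r;
}

// Deterministic Miller-Rabin for n < 341,550,071,728,321 using the
// witness set {2,3,5,7,11,13,17} quoted above.
bool is_prime_det(uint64_t n) {
    if (n < 2) return false;
    for (uint64_t p : {2ull, 3ull, 5ull, 7ull, 11ull, 13ull, 17ull}) {
        if (n == p) return true;
        if (n % p == 0) return false;
    }
    uint64_t d = n - 1;
    int s = 0;
    while (d % 2 == 0) { d /= 2; ++s; }      // n - 1 = 2^s * d with d odd
    for (uint64_t a : {2ull, 3ull, 5ull, 7ull, 11ull, 13ull, 17ull}) {
        uint64_t x = powmod_u64(a, d, n);
        if (x == 1 || x == n - 1) continue;
        bool witness = true;
        for (int r = 1; r < s && witness; ++r) {
            x = mulmod_u64(x, x, n);
            if (x == n - 1) witness = false;
        }
        if (witness) return false;            // a proves n composite
    }
    return true;                              // prime, for n in the quoted range
}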
There is a small list of potential witnesses for every possible input size
(at most n values for n-bit numbers). However, no finite set of bases is
sufficient for all composite numbers. Alford, Granville, and Pomerance
have shown that there exist infinitely many composite numbers n whose
smallest compositeness witness is at least (ln n)^(1/(3 ln ln ln n)). They also argue
heuristically that the smallest number w such that every composite number
below n has a compositeness witness less than w should be of order
Θ(log n · log log n).
14.31.11. Notes
14.32. MONTE CARLO PRIMALITY TESTS
14.32.1. A NOTE ON MONTE CARLO PRIMALITY TESTS AND ALGORITHMIC INFORMATION THEORY
Gregory J. Chaitin, IBM Thomas J. Watson Research Center
Jacob T. Schwartz, Courant Institute of Mathematical Sciences
Algorithmic Information Theory
14.33. Pépin's Test
http://calistamusic.dreab.com/p-P%C3%A9pin%27s_test
In mathematics, Pépin's test is a primality test which can be used to
determine whether a Fermat number is prime. It is a variant of
Proth's test. The test is named for the French mathematician Théophile Pépin.
14.33.2. Description of the test
Let F_n = 2^(2^n) + 1 be the nth Fermat number. Pépin's test states that for
n > 0,
F_n is prime if and only if 3^((F_n − 1)/2) ≡ −1 (mod F_n).
The expression 3^((F_n − 1)/2) can be evaluated modulo F_n by repeated squaring.
This makes the test a fast polynomial-time algorithm. However, Fermat
numbers grow so rapidly that only a handful of Fermat numbers can be tested
in a reasonable amount of time and space.
Other bases may be used in place of 3, for example 5, 6, 7, or 10 (sequence
A129802).
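As a small illustration, here is a C++ sketch of Pépin's test for the few Fermat numbers that fit in 64 bits (F_1 through F_5). The helper names and the use of unsigned __int128 are my own choices; real Pépin tests on larger Fermat numbers require multi-precision arithmetic.

#include <cstdint>
#include <cstdio>

static uint64_t mulmod_p(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;
}
static uint64_t powmod_p(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    for (a %= m; e; e >>= 1) {
        if (e & 1) r = mulmod_p(r, a, m);
        a = mulmod_p(a, a, m);
    }
    return r;
}

// Pépin's test: for n > 0, F_n = 2^(2^n) + 1 is prime iff 3^((F_n-1)/2) ≡ -1 (mod F_n).
bool pepin(uint64_t Fn) {
    return powmod_p(3, (Fn - 1) / 2, Fn) == Fn - 1;
}

int main() {
    for (int n = 1; n <= 5; ++n) {
        uint64_t Fn = (1ULL << (1u << n)) + 1;   // 2^(2^n) + 1; fits in 64 bits for n <= 5
        std::printf("F_%d = %llu : %s\n", n, (unsigned long long)Fn,
                    pepin(Fn) ? "prime" : "composite");
    }
    return 0;
}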
14.33.3. Proof of correctness
For one direction, assume that the congruence 3^((F_n − 1)/2) ≡ −1 (mod F_n)
holds. Then 3^(F_n − 1) ≡ 1 (mod F_n), thus the multiplicative order of 3
modulo F_n divides F_n − 1, which is a power of two. On the other
hand, the order does not divide (F_n − 1)/2, and therefore it must be
equal to F_n − 1. In particular, there are at least F_n − 1 numbers below
F_n coprime to F_n, and this can happen only if F_n is prime.
For the other direction, assume that F_n is prime. By Euler's criterion,
3^((F_n − 1)/2) ≡ (3/F_n) (mod F_n),
where (3/F_n) is the Legendre symbol. By repeated squaring, we find that
2^(2^n) ≡ 1 (mod 3), thus F_n ≡ 2 (mod 3), and (F_n/3) = −1. As
F_n ≡ 1 (mod 4), we conclude (3/F_n) = (F_n/3) = −1 from the law of quadratic
reciprocity.
14.33.4. References
P. Pépin, Sur la formule, Comptes Rendus Acad. Sci. Paris 85 (1877), pp. 329–333.
14.34. Proth's theorem
http://calistamusic.dreab.com/p-Proth%27s_theorem
http://www.peach.dreab.com/p-Proth%27s_theorem
In number theory, Proth's theorem is a primality test for Proth numbers.
It states that if p is a Proth number, of the form k·2^n + 1 with k odd and
k < 2^n, then if for some integer a,
a^((p−1)/2) ≡ −1 (mod p),
then p is prime (called a Proth prime). This is a practical test because
if p is prime, any chosen a has about a 50 percent chance of working.
If a is a quadratic nonresidue modulo p then the converse is also true,
and the test is conclusive. Such an a may be found by iterating a over
small primes and computing the Jacobi symbol until (a/p) = −1.
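A minimal C++ sketch of this procedure for 64-bit Proth numbers follows (my own helper names, not from the sources). It simply tries small prime bases a and reports a primality certificate on the first a with a^((p−1)/2) ≡ −1 (mod p); if none of the tried bases works the result is inconclusive, since this simple version of the theorem only certifies primality.

#include <cstdint>

static uint64_t mulmod_pr(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;
}
static uint64_t powmod_pr(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    for (a %= m; e; e >>= 1) {
        if (e & 1) r = mulmod_pr(r, a, m);
        a = mulmod_pr(a, a, m);
    }
    return r;
}

// Proth test: p = k*2^n + 1 (k odd, k < 2^n) is prime if some base a
// satisfies a^((p-1)/2) ≡ -1 (mod p).  Returns true iff such an a is found
// among the first few small primes (a certificate of primality); false means
// this simple search reached no conclusion.
bool proth_prime(uint64_t p) {
    for (uint64_t a : {3ull, 5ull, 7ull, 11ull, 13ull, 17ull, 19ull, 23ull}) {
        if (a % p == 0) continue;
        if (powmod_pr(a, (p - 1) / 2, p) == p - 1) return true;
    }
    return false;  // inconclusive with these bases
}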
14.34.2. Numerical examples
Examples of the theorem include:
for p = 3, 2^1 + 1 = 3 is divisible by 3, so 3 is prime.
for p = 5, 3^2 + 1 = 10 is divisible by 5, so 5 is prime.
for p = 13, 5^6 + 1 = 15626 is divisible by 13, so 13 is prime.
for p = 9, which is not prime, there is no a such that a^4 + 1 is divisible by 9.
The first Proth primes are (sequence
A080076):
3, 5, 13, 17, 41, 97, 113, 193, 241, 257, 353, 449, 577, 641, 673, 769, 929, 1153 ….
As of 2009, the largest known Proth prime is 19249 · 2^13018586 + 1, found
by Seventeen or Bust. It has 3918990 digits and is the largest known prime
which is not a Mersenne prime.
14.34.3. History
François Proth (1852 – 1879) published the theorem around 1878.
14.34.4. See also
Sierpinski number
14.34.5. References
14.35. Random Quadratic Frobenius Test (RQFT)
14.36. Solovay-Strassen Algorithm
Solovay–Strassen
http://calistamusic.dreab.com/p-Solovay%E2%80%93Strassen_primality_test
14.36.1. Solovay-Strassen primality test.
We now turn to an RP algorithm for compositeness, the Solovay-Strassen
algorithm. First, we need to define a generalization of the Legendre
symbol called the Jacobi symbol:
If n = p1 * p2 * ... * pt, where the pi are primes, not necessarily distinct,
[a/n] = [a/p1]*[a/p2]*...*[a/pt].
It turns out that the Jacobi symbol can be calculated efficiently
(essentially using a Euclidean GCD-like algorithm) via a number of
identities that are not trivial to prove. A couple easy-to-prove
identities are: [ab/n] = [a/n]*[b/n], and if a=b (mod n) then [a/n]=[b/n].
Here is the Solovay-Strassen algorithm:
* Pick random a in {1,...,n-1}
* if gcd(a,n) != 1, then COMPOSITE.
* if [a/n] != a^{(n-1)/2} then COMPOSITE.
* else say POSSIBLY PRIME.
Thm: if n is composite, then this algorithm says "composite" with
probability at least 1/2.
Proof: the proof is much like that of our "key lemma".
Let J = {a in Z_n^* : [a/n] = a^{(n-1)/2} (mod n)}.
J is a group (e.g., use fact that [ab/n] = [a/n]*[b/n]) so it suffices
to show there exists b not in J. By the assumption that n is not a
prime power and is not a perfect square, we can write n = p1^e1 * r,
where p1 and r are relatively prime, and e1 is odd. Let g be an
arbitrary non-residue mod p1, and let b = (g,1), in CRT notation. We
can see that b^{(n-1)/2} = (g^{(n-1)/2}, 1) is *not* congruent to -1
(mod n), since -1 = (-1, -1). So, it suffices to show that [b/n] = -1.
This in turn follows from the basic Jacobi symbol identities:
[b/n] = ([b/p1])^e1 * [b/r] = ([g/p1])^e1 * [1/r] = (-1)^e1 * 1 = -1.
QED
14.36.2. Solovay-Strassen
Robert Solovay and Volker Strassen developed a probabilistic primality testing algorithm. The algorithm uses the Jacobi
symbol to test whether p is prime:
(1) Choose a random number a less than p.
(2) If gcd(a, p) ≠ 1, then p fails the test and is composite.
(3) Compute j = a^((p-1)/2) mod p.
(4) Compute the Jacobi symbol J(a, p).
(5) If j ≠ J(a, p), then p is certainly not prime.
(6) If j = J(a, p), then the probability that p is not prime is at most 50%.
The number a is called a witness if it establishes that p is not prime; in that case p is certainly not prime. If p is composite,
a random a is a witness with probability no less than 50%. Choose t different random values of a and repeat the test t times.
After p passes all t tests, the probability that it is composite is at most 1/2^t.
The Solovay–Strassen primality test, developed by Robert M. Solovay and
Volker Strassen, is a probabilistic test to determine if a number is
composite or probably prime. It has been largely superseded by the
Miller–Rabin primality test, but has great historical importance in
showing the practical feasibility of the RSA cryptosystem.
14.36.4. Concepts
Euler proved that for an odd prime number p and any integer a,
a^((p−1)/2) ≡ (a/p) (mod p),
where (a/p) is the Legendre symbol. The Jacobi symbol (a/n) is a generalisation
of the Legendre symbol, where n can be any odd integer. The Jacobi
symbol can be computed in time O((log n)^2) using Jacobi's generalization
of the law of quadratic reciprocity.
Given an odd number n we can contemplate whether or not the congruence
a^((n−1)/2) ≡ (a/n) (mod n)
holds for various values of the "base" a. If n is prime then this congruence
is true for all a. So if we pick values of a at random and test the
congruence, then as soon as we find an a which doesn't fit the congruence
we know that n is not prime (but this does not tell us a nontrivial
factorization of n). This base a is called an Euler witness for n; it is
a witness for the compositeness of n. The base a is called an Euler liar
for n if the congruence is true while n is composite.
For every composite odd n at least half of all bases a coprime to n
are (Euler) witnesses: this contrasts with the Fermat primality test, for
which the proportion of witnesses may be much smaller. Therefore, there
are no (odd) composite n without lots of witnesses, unlike the case of
Carmichael numbers for Fermat's test.
14.36.5. Example
Suppose we wish to determine if n = 221 is prime. We write (n − 1)/2 = 110.
We randomly select an a = 47 < n. We compute:
a^((n−1)/2) mod n = 47^110 mod 221 = −1 mod 221
(a/n) mod n = (47/221) mod 221 = −1 mod 221.
This gives that either 221 is prime, or 47 is an Euler liar for 221. We
try another random a, this time choosing a = 2:
a^((n−1)/2) mod n = 2^110 mod 221 = 30 mod 221
(a/n) mod n = (2/221) mod 221 = −1 mod 221.
Hence 2 is an Euler witness for the compositeness of 221, and 47 was in
fact an Euler liar. Note that this tells us nothing about the factors of
221 (which are 13 and 17).
14.36.6. Algorithm and running time
The algorithm can be written in pseudocode as follows:
Inputs: n, a value to test for primality; k, a parameter that determines
the accuracy of the test
Output: composite if n is composite, otherwise probably prime
repeat k times:
choose a randomly in the range [2,n − 1]
x ← (a/n)   (the Jacobi symbol)
if x = 0 or a^((n−1)/2) ≢ x (mod n)
then return composite
return probably prime
Using fast algorithms for modular exponentiation, the running time of this
algorithm is O(k·log^3 n), where k is the number of different values of
a we test.
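A self-contained C++ sketch of this test for 64-bit inputs is given below (my own illustration; the jacobi helper follows the standard binary Jacobi-symbol algorithm, and the helper names and the unsigned __int128 trick are assumptions, not from the quoted sources).

#include <cstdint>
#include <random>
#include <utility>

static uint64_t mulmod_ss(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;
}
static uint64_t powmod_ss(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    for (a %= m; e; e >>= 1) {
        if (e & 1) r = mulmod_ss(r, a, m);
        a = mulmod_ss(a, a, m);
    }
    return r;
}

// Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity; returns -1, 0 or 1.
static int jacobi(uint64_t a, uint64_t n) {
    a %= n;
    int result = 1;
    while (a != 0) {
        while (a % 2 == 0) {                 // pull out factors of 2
            a /= 2;
            if (n % 8 == 3 || n % 8 == 5) result = -result;
        }
        std::swap(a, n);                     // reciprocity step
        if (a % 4 == 3 && n % 4 == 3) result = -result;
        a %= n;
    }
    return (n == 1) ? result : 0;
}

// Solovay-Strassen: error probability at most 2^(-k) when n is composite.
bool solovay_strassen(uint64_t n, int k = 20) {
    if (n < 4) return n == 2 || n == 3;
    if (n % 2 == 0) return false;
    std::mt19937_64 rng(std::random_device{}());
    std::uniform_int_distribution<uint64_t> pick(2, n - 2);
    for (int i = 0; i < k; ++i) {
        uint64_t a = pick(rng);
        int j = jacobi(a, n);
        uint64_t x = powmod_ss(a, (n - 1) / 2, n);
        uint64_t jm = (j == -1) ? n - 1 : (uint64_t)j;  // map Jacobi value into Z/nZ
        if (j == 0 || x != jm) return false;            // composite
    }
    return true;                                         // probably prime
}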
14.36.7. Accuracy of the test
It is possible for the algorithm to return an incorrect answer. If the
input n is indeed prime, then the output will always correctly be probably
prime. However, if the input n is composite then it is possible for the
output to be incorrectly probably prime. The number n is then called a
pseudoprime.
When n is odd and composite, at least half of all a with gcd(a,n) = 1
are Euler witnesses. We can prove this as follows: let {a1, a2, ..., am}
be the Euler liars and a an Euler witness. Then, for i = 1, 2, ..., m:
(a·ai)^((n−1)/2) ≡ a^((n−1)/2) · ai^((n−1)/2) ≡ a^((n−1)/2) · (ai/n) (mod n).
Because a^((n−1)/2) ≢ (a/n) (mod n) and (a·ai/n) = (a/n)·(ai/n) with (ai/n) = ±1,
we now know that (a·ai)^((n−1)/2) ≢ (a·ai/n) (mod n).
This gives that each ai gives a number a·ai, which is also an Euler witness.
So each Euler liar gives an Euler witness and so the number of Euler
witnesses is larger or equal to the number of Euler liars. Therefore, when
n is composite, at least half of all a with gcd(a,n) = 1 is an Euler
witness.
Hence, the probability of failure is at most 2^(−k) (compare this with the
probability of failure for the Miller-Rabin primality test, which is at
most 4^(−k)).
For purposes of cryptography, the more bases a we test (i.e. the larger the
value of k we pick), the better the accuracy of the test. The chance of the
algorithm failing in this way is then so small that the (pseudo)prime is used
in practice in cryptographic applications; but for applications where it is
important to actually have a prime, a test like ECPP or Pocklington, which
proves primality, should be used.
14.36.8. Average-case behaviour
The bound 1/2 on the error probability of a single round of the
Solovay–Strassen test holds for any input n, but those numbers n for which
the bound is (approximately) attained are extremely rare. On the average,
the error probability of the algorithm is significantly smaller: it is
less than
for k rounds of the test, applied to uniformly random n ≤ x. The same
bound also applies to the related problem of what is the conditional
probability of n being composite for a random number n ≤ x which has been
declared prime in k rounds of the test.
14.36.9. Complexity
The Solovay–Strassen algorithm shows that the decision problem COMPOSITE
is in the complexity class RP.
14.37. Square Root Compositeness Theorem
Given integers n, x, and y:
if x^2 ≡ y^2 (mod n), but x ≢ ±y (mod n),
then n is composite, and gcd(x − y, n) is a non-trivial factor.
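A quick worked example (my own illustration): take n = 15, x = 4, y = 1. Then x^2 = 16 ≡ 1 = y^2 (mod 15), while 4 ≢ ±1 (mod 15), so 15 is composite and gcd(x − y, n) = gcd(3, 15) = 3 is a non-trivial factor.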
14.38. Schwartz-Zippel test
[Schwartz 1980; Zippel 1979]
15. Papers of Others
15.1. Research on Primality Testing Algorithms and Their Applications in Modern Cryptography
Major: Systems Analysis and Integration
Keywords: primality testing, public-key cryptography, public-key cryptosystems, elliptic curves, discrete logarithms
Classification: TN918.1
Format: 63 pages, about 41,265 characters, about 1.974 MB of content
Access: full text available
Listing of outstanding graduate degree theses
Abstract
The prime number problem has fascinated many mathematicians.
A prime is a number that is not divisible by any number other than 1 and itself.
A basic question about primes is how to determine efficiently whether a number is prime, i.e. the primality testing problem.
Primality testing is not only a challenging problem in mathematics; it is also very important in practical applications.
In modern cryptosystems, identifying and finding large primes plays a key role in setting up certain encryption systems.
Many public-key algorithms use primes, in particular large primes of 160 bits (binary) or more.
The public modulus of RSA is the product of two large primes of at least 512 bits; in public-key cryptosystems based on the
discrete logarithm over a finite field GF(p), the prime p is required to have at least 512 bits; and in elliptic-curve
public-key cryptosystems (ECC), the order of the base point is generally required to be a large integer of more than 160 bits, and a prime.
This shows the importance of primality testing for large numbers.
The simplest idea for deciding whether an integer is prime is to use the definition directly: trial-divide by the primes smaller than the
integer to be tested. If one of them divides the number being tested, it is certainly composite. For large numbers, however, the amount
of computation is far too large, so this approach cannot be used in concrete applications.
Scientists have therefore invented many new algorithms based on the theory of primality testing, improving the efficiency of deciding
whether a large number is prime.
The sieve of Eratosthenes is the oldest algorithm that is valid for all primes, but its time complexity is exponential in the input size,
so it is unsuitable in practice.
Fermat's little theorem from the 17th century is the starting point of several efficient primality testing algorithms, but its converse does not hold.
Many researchers, building on Fermat's little theorem, have invented new primality testing methods, but most of these algorithms are
probabilistic, for example the Miller-Rabin algorithm and the Solovay-Strassen probabilistic primality test.
In 2002, three Indian scientists invented the deterministic primality testing algorithm AKS.
Research shows, however, that the time and space complexity of the AKS algorithm cannot yet meet practical needs.
With the development of elliptic-curve methods, many researchers have invented primality tests based on elliptic curves, and this has
become an important direction in the field in recent years.
This thesis also gives a brief analysis of two basic elliptic-curve primality testing algorithms.
Starting from the basic theory of primality testing, this thesis gives a comprehensive introduction to primality testing algorithms and
analyses most of them, with emphasis on the Miller-Rabin algorithm commonly used in practice. Some optimizations of the Miller-Rabin
algorithm are made, which double its efficiency; generating a 1024-bit prime takes about 1.5 seconds on a 1.5 GHz computer. A flowchart
of the Miller-Rabin algorithm and the implementation of some key parts of the code are given...
Full table of contents
Chinese abstract
Notation
Chapter 1: Introduction
Chapter 2: Overview of primality testing
Chapter 3: Theoretical basis of primality testing
3.1 Definitions
3.2 Theoretical basis of primality testing methods
3.3 Chapter summary
Chapter 4: Research on several primality testing algorithms
4.1 Foundations of probabilistic algorithms
4.1.1 The class PP
4.1.2 Monte Carlo algorithms
4.1.3 Las Vegas algorithms
4.2 Complexity of the algorithms needed by several primality tests
4.2.1 Euclidean algorithm
4.2.2 Modular exponentiation algorithm
4.2.3 Random prime generation algorithm
4.2.4 Algorithm for computing the Jacobi symbol
4.3 Primality testing algorithms and their analysis
4.3.1 Fermat's little theorem as a compositeness test
4.3.2 Euler's criterion as a compositeness test
4.3.3 The Lucas test
4.3.4 The Pocklington test
4.3.5 Demytko's theorem
4.3.6 Primality testing with Monte Carlo algorithms
4.3.7 Primality testing with Las Vegas algorithms
4.3.8 The Solovay-Strassen probabilistic primality test
4.3.9 The AKS primality testing algorithm
4.3.10 Several elliptic-curve primality testing algorithms
4.4 Chapter summary
Chapter 5: The Miller-Rabin primality testing algorithm
5.1 Description of the algorithm
5.2 Analysis of the algorithm
5.3 Flowchart of the Miller-Rabin algorithm and key parts of the code
5.3.1 Workflow of the Miller-Rabin algorithm
5.3.2 The test function in the Miller-Rabin algorithm
5.4 Some optimizations of the Miller-Rabin algorithm
5.5 Some recent progress on the Miller-Rabin algorithm
Chapter 6: Important applications of primality testing in cryptography
6.1 The RSA public-key algorithm
6.1.1 RSA encryption and decryption
6.1.2 The RSA signature scheme
6.1.3 The Rabin signature scheme
6.3 The ElGamal cryptosystem
6.3.2 The ElGamal signature scheme
6.4 Chapter summary
Chapter 7: Conclusions and outlook
References
16. Math Tools
16.1. Axiom
16.2. Bignum
16.3. Derive
16.4. GMP Library
16.5. GNU Octave
16.6. Kant
A library for computational number theory
16.7. LiDIA
A library for computational number theory
Ingrid Biehl Johannes Buchmann Thomas Papanikolaou
Universität des Saarlandes
Fachbereich
Saarbrücken
16.8. Lisp
16.9. Macsyma
16.10. Magma
Modular Exponentiation in Magma
16.11. Maple
Modular Exponentiation in Maple
16.12. MathCad
16.13. Mathematica
16.14. Matlab
the command rand(1) returns a random number between 0 and 1
16.15. Maxima
16.16. MIRACL
16.17. MuPAD
16.18. NTL library
16.19. OpenMP
16.20. Pari-GP
A library for computational number theory
16.21. Reduce
16.22. Sage
Prime Numbers and Integer Factorization in Sage
16.23. Simath
A library for computational number theory
16.24. Ubasic
17. COMPARISONS
18. MY IDEAS FOR FURTHER IMPROVEMENT OF COMPLEXITY
19. RESOURCES
19.1. MAJOR NUMBER THEORISTS
19.2. KEY UNIVERSITIES
19.3. KEY RESEARCH INSTITUTIONS
19.4. SEMINARS, SYMPOSIUMS, FORUMS, WORKSHOPS
19.5. JOURNALS
19.6. ACADEMIC WEB RESOURCES
20. LITERATURE – PRIMALITY
20.1. BOOKS (BK)
20.2. LECTURE SCRIPTS
20.3. THESES FOR POSTDOC, PHD, MASTER AND BACHELOR DEGREES (PDT, DT, MT, BT)
20.3.1. BT - Bachelor Thesis
20.3.2. MT - Master Thesis
20.3.3. ST - Senior Thesis
20.4. GENERAL PAPER (GP)
20.5. COLLECTIONS OF PAPERS
20.6. PRESENTATIONS/SLIDES AT SEMINARS
20.7. OTHER PAPERS
20.8. PROPOSALS / SUGGESTIONS
21. LITERATURE – OTHER RELATED
21.1. ALGEBRA
21.2. NUMBER THEORY
21.3. COMPUTER COMPLEXITY
21.4. COMPLEX ANALYSIS / FUNCTIONS
21.5. CRYPTOGRAPHY
22. APPENDICES
22.1. CHARTS
22.2. TABLES
22.3. DATABASES
22.4. MULTIMEDIA DATA
22.5. COMPUTATION CODES
22.6. WEBSITE FOR THIS THESIS