International Conference "High-Performance Computing" HPC-UA'2012 (Kyiv, Ukraine, October 8-10, 2012)
High-Performance Computing and Quantum Processing
Sergey Edward Lyshevski
Department of Electrical and Microelectronic Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
E-mail: [email protected] URL: http://people.rit.edu/seleee
Abstract. We research novel solutions and apply new transformative findings toward quantum processing, engineering high-performance data processing platforms with enabled capabilities. In living organisms, various information and data processing tasks are performed by microscopic atomic and biomolecular fabrics using quantum phenomena and effects. We examine the fundamentals of quantum data processing on measurable, processable and compatible real-valued physical quantities (variables) in microscopic fabrics. These fabrics comprise and implement devices, modules and systems. The developed quantum data processing paradigm: (1) Unifies and enables concepts of theoretical computer science, computer engineering and quantum mechanics; (2) Is consistent with the first principles of quantum informatics, communication and processing; (3) Coherently examines device physics, switching algebra, processing arithmetics and calculus; (4) Fosters fundamentally consistent and practical microscopic hardware solutions. We examine two core problems, namely the algorithmic and hardware premises. The microscopic processing primitives must exhibit utilizable quantum-effect state transitions on the measurable physical variables (observables) which result in computable transforms from the viewpoints of device physics, processing arithmetics, calculus and design. Our new coherent, cohesive and consistent paradigm promises to: (i) Ease enormous challenges; (ii) Overcome foremost inconsistencies of naive algorithmically-centric computing; (iii) Enable new practical inroads, paradigms and solutions; (iv) Guarantee unprecedented processing capabilities ensuring far-reaching benchmarks; (v) Advance the theory and practice of natural and engineered processing. Our findings support a broad spectrum of transformative research activities and engineering developments. The results may be used in assessing the performance, capabilities and benchmarks of natural and engineered processing and computing.
Keywords
High-performance computing, information theory, microscopic systems, quantum computing, quantum processing
1. Introduction
High-performance computing is very important for solving various computationally-intensive problems. High-performance computing is expected to achieve sustainable performance of 1×10^15 floating-point operations per second (FLOPS), i.e., petaflops, in practical applications. New reliable and robust hardware and software to ensure sustained performance and computing are under extensive development. Petascale supercomputers, in some applications, can process one quadrillion (1000 trillion) FLOPS. Computer and processor performance and capabilities can be enabled by:
1. Advanced low-power hardware, self-aware software and cohesive algorithms;
2. Enabling hardware, software and languages which ensure highly-programmable systems;
3. Enabled-functionality low-power nanoscaled microelectronics; etc.
Vertebrates and invertebrates exhibit information and data processing. The data processing in living organisms exceeds exascale performance, by far surpassing the exaflops equivalence and range applied in assessing computing. An exaflop is one quintillion (1×10^18) FLOPS. This performance and these capabilities cannot be ensured and sustained by any envisioned supercomputer platforms due to fundamental and technological limits.
Advanced digital integrated circuits (ICs) are used to implement various distinct computer designs, organizations and architectures. Enormous progress has been achieved in semiconductor devices and ICs. The aforementioned astonishing fundamental, applied and technological developments led to mass production of high-performance ICs and processors with billions of transistors. Enabling materials, processes and tools led to a current lithography-defined ~32-nm "DRAM Metal 1 Half-Pitch", which is also known as a "technology node" [1]. Various fundamental limits will emerge within the foreseen scaling towards 20-nm and 10-nm features by 2017 and 2023 [1]. The technology performance evaluation criteria are scalability, energy efficiency, on/off current, operational reliability, operational temperature, technology compatibility and architecture compatibility [1]. As planar solid-state devices are scaled to only hundreds of nanometers in dimensions, undesirable quantum phenomena significantly degrade the overall device and IC performance and functionality. Focused research activities have been centered on quantum-effect devices. Significant progress has been accomplished in widely deployed resonant-tunneling devices, solid-state,
inorganic and organic lasers, etc. [1]. To fully utilize quantum phenomena and enable new features, developments must progress beyond current macroscopic microelectronic paradigms. We focus on quantum-mechanical microscopic solutions. The microscopic-centric paradigm ultimately implies new device physics of subatomic/atomic/molecular devices, novel communication and processing principles, new interfacing and networking schemes, innovative synthesis and fabrication, etc. Quantum phenomena and effects, exhibited by microscopic systems (subatomic, atomic, molecular and other), may be utilized to ensure processing tasks. There are enormous challenges and complexities which range from quantum-mechanical analysis to synthesis, interfacing, testing and characterization of microscopic devices and systems. Solutions of the aforementioned problems promise to enable processing with unprecedented performance and capabilities.
Living organisms provide undisputable evidence of biomolecular sensing, communication and information processing. The information processing in living organisms has not been fully comprehended. By proposing, examining and establishing new premises in high-performance communication and processing, we intend to significantly contribute to:
(i) Devising of enabling engineered processing platforms;
(ii) Analysis of revealing and essential aspects of quantum communication and processing.
An exploratory roadmap towards quantum processing is documented in Figure 1 [2, 3].
[Figure 1 places natural and engineered systems on a logarithmic size scale from picometers to meters: processing in living organisms and molecular/biomolecular processing at the microscopic end; microelectronics from electron (vacuum) tubes (1904, US Patent 803684; 1930, US Patent 1745175; 1934, GB Patent 439457) through the 1946 ENIAC, the 1971 Intel 4004, the 2008 IBM Blue Gene supercomputer and 2011 AMD 64-bit 940-pin dual-core processors; fundamental and technological limits; and the envisioned solutions: 1. Microscopic Devices, 2. Networked Fabrics, 3. Quantum Processing, 4. Processing Calculi, 5. Quantum Communication.]
Figure 1. Envisioned roadmap: Towards super-high-performance sensing, communication and processing [2, 3]
2. Information Theory With Applications to Communication and Interfacing
Information theory is applied to examine communication. Claude Shannon introduced and applied the entropy in order to measure the complexity of a set. Sets which have larger entropies require more bits to represent them.
For M objects (symbols) X_i which have probability distribution functions p(X_i), the entropy is given as

H(X) = -\sum_{i=1}^{M} p(X_i) \log_2 p(X_i), \quad i = 1, 2, \ldots, M.    (1)
Example 2.1.
Consider a cubic die with 6 faces, as well as non-cubic dice. Common polyhedra, the Zocchihedron and other non-cubic dice may have a specific number of faces. One may ensure uniform, normal and other distributions. For the deltohedron, the number of faces is 10. Let
X=[a, b, c, d, e, f] with equal probability 1/6,
and
X=[a, b, c, d, e, f, g, h, k, l] with equal probability 1/10.
The entropies H(X) are found to be

H(X) = -\sum_{i=1}^{M} p(X_i) \log_2 p(X_i) = -6 \left( \tfrac{1}{6} \log_2 \tfrac{1}{6} \right) = 2.585 bit

and

H(X) = -\sum_{i=1}^{M} p(X_i) \log_2 p(X_i) = -10 \left( \tfrac{1}{10} \log_2 \tfrac{1}{10} \right) = 3.3219 bit.    ■
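As a quick numerical check of Example 2.1, a minimal Python sketch evaluating (1) for the two uniform distributions (the function name is illustrative):

```python
import math

# Shannon entropy of a discrete distribution, H(X) = -sum p_i * log2 p_i  (Eq. 1)
def entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Example 2.1: a fair cubic die (6 faces) and a fair deltohedron (10 faces)
print(entropy([1/6] * 6))    # 2.585 bit
print(entropy([1/10] * 10))  # 3.3219 bit
```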
Example 2.2.
Let

X = \begin{cases} 1 & \text{with probability } p, \\ 0 & \text{with probability } 1-p. \end{cases}

The entropy H(X), as a function of p, is given as

H(X) = -\sum_{i=1}^{M} p(X_i) \log_2 p(X_i) = -p \log_2 p - (1-p) \log_2 (1-p).    ■
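A minimal Python sketch of the binary entropy in Example 2.2 (the function name is illustrative):

```python
import math

# Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p) from Example 2.2
def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0  # limit value: 0*log2(0) is taken as 0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# the maximum, 1 bit, occurs at p = 0.5
for p in (0.1, 0.25, 0.5, 0.9):
    print(f"H({p}) = {binary_entropy(p):.4f} bit")
```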
It is evident that H is nonnegative, H ≥ 0. That is, the number of bits required by the Source Coding Theorem is positive. In particular, N independent and identically distributed random variables, each with entropy H(X), can be compressed into slightly more than NH(X) bits with negligible risk of information loss as N → ∞; however, if these N random variables are compressed into fewer than NH(X) bits, it is virtually certain that information will be lost.
Examining analog computation and processing on continuous-time signals, the differential entropy can be applied. For a continuous-time random variable X, the differential entropy is given as

H(X) = -\int p_X(x) \log_2 p_X(x) \, dx,    (2)

where p_X(x) is the one-dimensional probability density function of x, \int p_X(x) \, dx = 1. In general, one has

H(X_1, X_2, \ldots, X_{n-1}, X_n) = -\int p_{\mathbf{X}}(\mathbf{x}) \log_2 p_{\mathbf{X}}(\mathbf{x}) \, d\mathbf{x}.    (3)
The relative entropy between probability density functions p_X(x) and g_X(x) is expressed by

H_R(p_X \| g_X) = \int p_X(x) \log_2 \frac{p_X(x)}{g_X(x)} \, dx.    (4)
The differential entropies for various common distribution functions have been derived. For Cauchy, exponential, Laplace, Maxwell-Boltzmann, normal and uniform distributions p_X(x), the resulting H(X) are reported in [2, 3]. The differential entropy can be negative. The differential entropy of a Gaussian random variable with

p_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-a)^2}{2\sigma^2}}, \quad -\infty < x < \infty, \ -\infty < a < \infty, \ \sigma > 0,

is H(X) = \tfrac{1}{2}\ln(2\pi e \sigma^2). Thus, H(X) can be positive, negative or zero depending on the variance σ².
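A short sketch evaluating the closed-form Gaussian differential entropy above (the function name is illustrative) shows the sign changing with σ:

```python
import math

# Differential entropy of a Gaussian, H(X) = 0.5*ln(2*pi*e*sigma^2) (in nats),
# illustrating that it can be positive, zero or negative depending on sigma.
def gaussian_differential_entropy(sigma):
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)

for sigma in (1.0, (2 * math.pi * math.e) ** -0.5, 0.1):
    print(f"sigma = {sigma:.4f}: H = {gaussian_differential_entropy(sigma):+.4f} nat")
```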
One examines the mutual information I(X,Y) between the stimulus X and the response Y in order to measure
how similar the input and output are. We have
I(X,Y) = H(X) + H(Y) - H(X,Y),

I(X,Y) = \iint p_{X,Y}(x,y) \log_2 \frac{p_{X,Y}(x,y)}{p_X(x)\, p_Y(y)} \, dx\, dy = \iint p_{Y|X}(y|x)\, p_X(x) \log_2 \frac{p_{Y|X}(y|x)}{p_Y(y)} \, dx\, dy.    (5)

The channel capacity C is found by maximizing the mutual information over the input probabilities,

C = \max_{p_X(\cdot)} I(X,Y) \ \text{[bit/symbol]}.    (6)
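As a simple numerical illustration of (5) and (6) (not part of the paper's point-process development), the sketch below estimates the capacity of a binary symmetric channel with an assumed crossover probability of 0.1 by maximizing I(X;Y) over a grid of input distributions:

```python
import math

def mutual_information(p1, eps):
    """I(X;Y) for a binary symmetric channel: input P(X=1)=p1, crossover probability eps."""
    q1 = p1 * (1 - eps) + (1 - p1) * eps  # output distribution P(Y=1)
    def h(p):  # binary entropy in bits
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    # I(X;Y) = H(Y) - H(Y|X); for the binary symmetric channel, H(Y|X) = h(eps)
    return h(q1) - h(eps)

eps = 0.1
capacity = max(mutual_information(p1 / 1000, eps) for p1 in range(1001))
print(capacity)  # ~0.531 bit/symbol, matching the closed form 1 - h(eps)
print(1 - (-eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps)))
```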
The analysis of mutual information results in C which depends on p_{Y|X}(y|x). It is difficult to obtain or estimate the probability distribution functions. Consider a point-process channel [2, 3]. The instantaneous rate at which pulses occur cannot be lower than r_min or greater than r_max, which are related to the photon emission, electromagnetic radiation, etc. Let the average sustainable pulse rate be r_0. For a Poisson process, the channel capacity of the point-process channel for r_min ≤ r ≤ r_max is

C = \frac{r_{\min}}{\ln 2} \left[ e^{-1} \left(1 + \frac{r_{\max}-r_{\min}}{r_{\min}}\right)^{1+\frac{r_{\min}}{r_{\max}-r_{\min}}} - \left(1 + \frac{r_{\min}}{r_{\max}-r_{\min}}\right) \ln\!\left(1 + \frac{r_{\max}-r_{\min}}{r_{\min}}\right) \right].

If the minimum rate is zero (r_min = 0), the expression for the channel capacity is

C = \begin{cases} \dfrac{r_{\max}}{e \ln 2}, & r_0 > \dfrac{r_{\max}}{e}, \\[1ex] \dfrac{r_0}{\ln 2} \ln\dfrac{r_{\max}}{r_0}, & r_0 < \dfrac{r_{\max}}{e}. \end{cases}
Example 2.3.
For serial communication, the gross bit rate r depends on the transmission time T_t, and r = 1/T_t.
For parallel communication, the gross bit rate is

r = \sum_{i=1}^{N} \frac{\log_2 M_i}{T_i},

where N is the number of parallel channels; M_i is the number of symbols or modulation levels in the i-th channel; T_i is the symbol duration time for the i-th channel.
The quantum transductions occur within ~1×10^-15 sec. We assume that the maximum rate r_max varies from 1×10^13 to 1×10^14 pulse/sec, and the average rate r_0 changes from 1×10^11 to 1×10^12 pulse/sec. Let r_min assume values from 0.5×10^11 to 1×10^11 pulse/sec. One has r_0 < e^{-1} r_{\min} (r_{\max}/r_{\min})^{r_{\max}/(r_{\max}-r_{\min})}. The channel capacity C(r_0, r_max) for r_min ≤ r ≤ r_max is

C = \frac{1}{\ln 2} \left[ (r_0 - r_{\min}) \ln\!\left( \left(\frac{r_{\max}}{r_{\min}}\right)^{\frac{r_{\max}}{r_{\max}-r_{\min}}} \right) - r_0 \ln\frac{r_0}{r_{\min}} \right].

Figure 2 documents three-dimensional plots of C(r_0, r_max) for r_min = 0.5×10^11 and r_min = 1×10^11 pulse/sec. For r_min = 0.5×10^11 pulse/sec, one finds C_max = 6.106×10^12 bit/sec. A very high channel capacity C is achieved.
Figure 2. Three-dimensional plots of the channel capacity C(r_0, r_max): (a) C(r_0, r_max) for r_min = 0.5×10^11 pulse/sec; (b) C(r_0, r_max) for r_min = 1×10^11 pulse/sec.    ■
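A brief Python sketch of the average-rate-constrained capacity expression used in Example 2.3 (as reconstructed above; function and variable names are illustrative) reproduces the reported order of magnitude of C_max:

```python
import math

def point_process_capacity(r0, rmax, rmin):
    """Average-rate-constrained Poisson point-process channel capacity, bit/sec,
    for rmin <= r <= rmax with average rate r0 below the unconstrained optimum."""
    exponent = rmax / (rmax - rmin)
    return ((r0 - rmin) * math.log((rmax / rmin) ** exponent)
            - r0 * math.log(r0 / rmin)) / math.log(2)

rmin = 0.5e11
# scan the rate ranges quoted in Example 2.3
C_max = max(point_process_capacity(r0, rmax, rmin)
            for r0 in (1e11, 5e11, 1e12)
            for rmax in (1e13, 5e13, 1e14))
print(f"C_max ~ {C_max:.3e} bit/sec")  # ~6.1e12 for r0 = 1e12, rmax = 1e14
```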
3. High-Performance Computing: Solutions and Limits
Many enabling solutions were implemented ensuring high-performance computing. The following concepts were
utilized in existing processors:
• parallelism,
• vector processing,
• accelerators,
• superpipelining,
• array processing,
• multi-core architecture,
• multithreading,
• distributed computing,
• multi-level shared memory, etc.
These solutions drastically advanced processing performance and capabilities. The advances of computer
engineering and microelectronics enable and support the aforementioned concepts. There are fundamental, hardware
and software limits associated with all solutions.
Let us examine parallelism. We introduce the following ratio
r = t_serial / t_parallel,
where tserial and tparallel are the processing, communication, interfacing, memory access and retrieval, execution and other
times of the serial (sequential) and parallel (concurrent and distributed) processes, tasks, algorithms, etc.
Processes, tasks and algorithms are not always scalable, and many are not parallelizable. Even without considering the memory hierarchy, the majority of algorithms cannot be fully parallelized. Inherently serial algorithmic problems include the majority of sequential and combinational logic, conditional statements, numeric problems, etc.
Example 3.1: Limits of Parallelism
There are many processing tasks which must be performed, for example, computing, logic, memory access and retrieval, coding, communication, interfacing, networking, etc. These tasks, many of which can be performed only in series, are hardware-, software-, algorithm-, architecture- and organization-dependent. Many operations and tasks cannot be parallelized. One of the major quantitative measures of parallelism is the speed-up measure M_speed [2, 3], defined as

M_{speed} = \frac{1}{\dfrac{t_{series}}{t_{series}+t_{parallel}} + \dfrac{1}{N_P}\left(1 - \dfrac{t_{series}}{t_{series}+t_{parallel}}\right)} = \frac{1}{r_{series} + \dfrac{1}{N_P}\left(1 - r_{series}\right)},

r_{series} = \frac{t_{series}}{t_{series}+t_{parallel}}, \quad r_{parallel} = 1 - r_{series} = 1 - \frac{t_{series}}{t_{series}+t_{parallel}},
where t_series and t_parallel are the averaged times to perform series and parallel processing with all related tasks and operations; N_P is the number of processors; r_series and r_parallel are the series (not parallelizable) and parallelizable ratios.
The speed-up measure M_speed depends on the degree of parallelism which can be achieved. For the average estimate with r_series > 0, we have r_parallel = (1 - r_series), r_parallel < 1.
There are various latency delays, such as device/module/system transients, holds, protocols, algorithmic,
synchronization, flow control, propagation, transmission and other delays. These delays affect computing, memory
access and retrieval, logics, coding/decoding, communication, networking, interfacing and other tasks which are
accomplished by means of corresponding arithmetics, operations and processes. The aforementioned factors lead to the
averaged or effective rseries.
Assume a very high level of parallelizable capabilities. We optimistically postulate that: (i) 80% of tasks, operations and processes could be performed in parallel; (ii) Only 20% of tasks, operations and processes are not parallelizable.
Hence, one finds M_speed = 5 as N_P → ∞.
Therefore, computations can be sped up by at most a factor of 5 even as the number of parallel processors N_P → ∞. If N_P = 10 and N_P = 100, computations are sped up by factors of 3.57 and 4.81, respectively. The three-dimensional plot of M_speed(r_series, N_P) for r_series ∈ [0.05, 1] and N_P ∈ [1, 100] is illustrated in Figure 3.
Figure 3. Plot of M_speed(r_series, N_P) for r_series ∈ [0.05, 1] and N_P ∈ [1, 100]
■
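The speed-up limits in Example 3.1 can be reproduced with a short Python sketch (the function name and the chosen r_series = 0.2 follow the example above):

```python
def speedup(r_series, n_processors):
    """Amdahl-type speed-up M_speed = 1 / (r_series + (1 - r_series)/N_P)."""
    return 1.0 / (r_series + (1.0 - r_series) / n_processors)

r_series = 0.2  # 20% of the work is inherently serial
for n in (10, 100, 1_000_000):
    print(f"N_P = {n:>8}: M_speed = {speedup(r_series, n):.2f}")
# prints 3.57, 4.81 and ~5.00, approaching the 1/r_series = 5 limit as N_P -> infinity
```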
Example 3.2: Limits of Pipelining
Consider deep data-path pipelining with N pipeline stages. The total latency of each instruction is t_I, while the overhead time per stage is t_0. This yields the expression for the frequency f = (t_0 + t_I/N)^{-1} Hz.
Assume that branches constitute a fraction n_I of all instructions to be executed; let n_I also include pipeline stalls or bubbles. The average number of cycles per instruction is n_CPI = 1 + N n_I.
For an N-stage pipeline, the average throughput T = f/n_CPI is

T = \frac{N}{(N t_0 + t_I)(1 + N n_I)}.

The optimal number of stages N_optimal is found by using dT/dN = 0. One obtains

N_{optimal} = \sqrt{\frac{t_I}{t_0 n_I}}.
Many factors affect N and N_optimal. These factors are data dependencies, preordering, uncertainties, etc. In most advanced processors, N is usually less than 10. If one increases the depth of pipelining to N > N_optimal, the performance degrades. Using the derived expression N_optimal = \sqrt{t_I/(t_0 n_I)}, a three-dimensional plot of N_optimal(n_I, t_I/t_0) is illustrated in Figure 4.
Figure 4. Plot of N_optimal(n_I, t_I/t_0)
■
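A brief sketch of the throughput model in Example 3.2 (the numerical values of t_0, t_I and n_I are assumed for illustration only):

```python
import math

def throughput(n_stages, t_overhead, t_latency, branch_fraction):
    """Average throughput T = N / ((N*t0 + tI) * (1 + N*nI)) for an N-stage pipeline."""
    return n_stages / ((n_stages * t_overhead + t_latency) * (1 + n_stages * branch_fraction))

t0, tI, nI = 0.1e-9, 2.0e-9, 0.2      # stage overhead, instruction latency, branch/stall fraction
n_opt = math.sqrt(tI / (t0 * nI))      # optimal depth from dT/dN = 0
print(f"N_optimal = {n_opt:.1f}")      # 10.0 for these assumed values
for n in (4, 8, 10, 16, 32):
    print(n, f"{throughput(n, t0, tI, nI):.3e}")  # throughput peaks near N_optimal
```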
4. Quantum Processing Fundamentals
We initiate transformative knowledge generation and perform fundamental research on [2-5]:
1. Consistent analysis of electron- and photon-induced phenomena which lead to quantum state transitions
and utilizable transductions on detectable real-valued measurable physical variables (observables). These
measurable variables must be controllable, algorithmically processable and hardware realizable;
2. Quantum- and device-physics consistent fundamentals of communication and processing by microscopic
engineered and natural systems;
3. Devising, design, substantiation and demonstration of practical engineering paradigms and technologies on
electronic, photonic and photoelectronic sensing, communication and processing by molecular fabrics.
It is important to progress from theory, to its substantiation, engineering solutions and technologies by:
1. Studying and evaluating quantum utilizable transductions on detectable real-valued measurable physical
variables;
2. Verifying principles and mechanisms of energy conversion, sensing, communication and processing in
microscopic systems as applied to practical engineered solutions.
Under some hypotheses and conjectures, microscopic systems can be mathematically modeled (mapped) by using wave functions in different spaces. For example, spatial, momentum and other spaces are used. The spatiotemporal wave function Ψ(r,t) = ψ(r)φ(t) is found by solving the time-dependent Schrödinger equation

\hat{H}\Psi(\mathbf{r},t) = i\hbar \frac{\partial \Psi(\mathbf{r},t)}{\partial t}, \quad \Psi(\mathbf{r},t) = \sum_{n=1}^{\infty} c_n \Psi_n(\mathbf{r},t) = \psi(\mathbf{r})\varphi(t) = \sum_{n=1}^{\infty} c_n \psi_n(\mathbf{r}) e^{-i\frac{E_n}{\hbar}t} = \sum_{n=1}^{\infty} c_n \psi_n(\mathbf{r}) e^{-i\omega_n t}, \quad \hat{H} \in \mathbb{C}, \ \Psi \in \mathbb{C}, \ c_n \in \mathbb{C},    (7)

where Ĥ is the total Hamiltonian operator, Ĥ = Ĥ_0 + Ĥ_E + Ĥ_P; Ĥ_0 is the unperturbed Hamiltonian in the absence of external excitations, and, for an unperturbed microscopic system, Ĥ_0 ψ_n = E_n ψ_n, \hat{H}_0 = -\frac{\hbar^2}{2m}\nabla^2 + \Pi; Ĥ_E and Ĥ_P are the excitation and perturbation Hamiltonians; Π is the potential function; c_n(t) are the complex probability amplitudes, and |c_n|^2 is the probability that a microscopic system at any given time t is in a state with E_n (the probability that a measurement of the energy at t would yield E_n), \sum_{n=1}^{\infty} |c_n|^2 = 1.
From (7), using Ψ(r,t) = ψ(r)φ(t), one has

\hat{H}_0 \psi_n = E_n \psi_n, \quad i\hbar \frac{1}{\varphi}\frac{\partial \varphi}{\partial t} = E, \quad \varphi(t) = e^{-i\frac{E}{\hbar}t}.    (8)

Wave functions Ψ, derived using various spaces, may yield mathematically-consistent estimates of various quantities, such as probabilities, allowed states, expectation values, etc. The expectation value of a quantum canonical variable C ∈ ℝ, with an associated operator Ĉ ∈ ℂ, is

\langle C \rangle = \int \Psi^*(\mathbf{r},t)\, \hat{C}\, \Psi(\mathbf{r},t)\, dV, \quad C \in \mathbb{R}.    (9)

The governing equation for the operator Ĉ ∈ ℂ is given as

\frac{d}{dt}\hat{C} = \frac{\partial \hat{C}}{\partial t} + \frac{1}{i\hbar}[\hat{C},\hat{H}], \quad [\hat{C},\hat{H}] = \hat{C}\hat{H} - \hat{H}\hat{C}, \quad \hat{C} \in \mathbb{C}.    (10)

Our quantum-mechanically-consistent modeling and analysis result in a set of equations (7)-(10) with the resulting model mapping of microscopic systems as

M(k(\Psi), \hat{C}, C) \in K \times \hat{C} \times C, \quad \hat{C} \in \mathbb{C}, \ C \in \mathbb{R}.    (11)
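As a small numerical illustration of the expansion in (7) and the expectation value in (9) for a discrete spectrum, the sketch below evolves a two-level superposition; the energy levels and amplitudes are arbitrary, assumed values chosen only for demonstration:

```python
import numpy as np

hbar = 1.054571817e-34               # reduced Planck constant, J*s
E = np.array([1.0e-19, 3.0e-19])     # illustrative energy levels E_n, J
c0 = np.array([0.6, 0.8], dtype=complex)  # initial amplitudes, |c_1|^2 + |c_2|^2 = 1

def amplitudes(t):
    """c_n(t) = c_n(0) * exp(-i*E_n*t/hbar), as in the expansion of Eq. (7)."""
    return c0 * np.exp(-1j * E * t / hbar)

t = 1.0e-15                          # ~1 fs, the transduction time scale cited in the paper
c = amplitudes(t)
print(np.sum(np.abs(c)**2))          # normalization stays 1 under unitary evolution
print(np.sum(np.abs(c)**2 * E))      # <E> = sum |c_n|^2 E_n, constant in time (Eq. 9)
```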
Various uses of mathematical operations, manipulations on operators, superposition and other premises of all-algorithmic quantum communication and processing were outlined in [6-11]. There is a need to depart from abstract quantum computing which assumes the practicality of [6-11]:
• Naive all-algorithmic computing on not detectable, not observable and not measurable mathematical operators;
• Algorithmic schemes of "quantum logic gates" aggregated within postulated computing structures by means of elusive quantum interconnects and circuits;
• Macroscopic microelectronic devices, including the so-called "single-electron transistor", "single-photon transistor", "quantum dots", etc.
By applying the aforementioned solutions, it is unclear how one may ensure practical computing. It is unlikely that any processing tasks can be accomplished by directly or indirectly abstractly using quantum indeterminacy, incompleteness, mathematical operators (wave functions, probability density and others), hidden variables, superposition of states, etc.
We outline the following Three Core Principles on which our results are centered:
Principle 1: Engineering Quantum Mechanics – Coherent quantum physics and information science as applied
to information and data processing;
Principle 2: Design and Algorithmic Solutions – Processing on detectable real-valued measurable physical
variables using distinguishable, quantifiable and computable transforms and schemes with corresponding data
processing arithmetics (logics and calculi), organizations and architectures;
Principle 3: Physical Implementation on Device and System Levels – Utilize quantum phenomena which lead to state transitions and utilizable transductions on measurable physical variables (observables) in molecular fabrics. These variables should be quantum-mechanically achievable, algorithmically processable and hardware realizable to perform processing.
It is important to note that, for quantum processing, the time-dependent Shannon entropy H(t,p) is given as

H(t,p) = -\sum_{i=1}^{N} p_i \ln p_i.    (12)
5. High-Performance Quantum Processing
Using the three principles, we research quantum-mechanically consistent, algorithmically cohesive and hardware-coherent quantum data processing. We use the detectable real-valued measurable physical variables (quantities) during controllable quantum transductions [2-5]. Only measurable and algorithmically processable variables v ∈ V lead to quantum-mechanically, device- and algorithmically (arithmetically) consistent processing. For example, v = [E, ω, λ, T, …]^T. It is unlikely that computing can be accomplished by using wave functions, kets, eigenkets or other mathematical quantities, operators, etc.
Consider controlled physical microscopic devices. We utilize the distinguishable and computable transforms T which are mappings of the utilizable initial, intermediate and final state transductions (S_I, S_T and S_F) on v = [v_1, …, v_k]^T. The microscopic devices may accomplish the following irreversible and reversible utilizable transductions

S_I: v_I \rightarrow S_T: v_T \rightarrow S_F: v_F \quad \text{and} \quad S_I: v_I \leftrightarrow S_T: v_T \leftrightarrow S_F: v_F.    (13)
Consider a physical microscopic processing fabric with processing primitives P_1, …, P_k. Each P_j exhibits transductions S_j(v) on detectable, measurable and processable variables v_j, yielding distinguishable and computable transforms T_j(S,v). Using T_j(S,v), consistent with device physics and the admissible arithmetic operand A_j, one has

T_\Sigma = T_1 \circ \cdots \circ T_k.    (14)
The processing can be accomplished by using infinite- and finite-valued logics. Analog, digital and hybrid processing schemes may be supported by microscopic devices. Considering multiple-valued logic, the switching function on r-valued v_j is f: {0, …, r−1}^n → {0, …, r−1}^m with a truth vector F.
Any f can be represented as

f = A(F, T).    (15)

The evolutions of quantum transductions are mapped as

v_{j,l} \rightarrow v_{j,l+1} = Q_j(v_{j,l}),    (16)

which defines the evolution of physically-realizable computable transforms T_j(S,v). The transductions S_j(v) on v in physical microscopic systems (devices) can be controlled by using device-specific control schemes, varying the system energetics, potential or other quantities denoted as H [20-22]. The controllable evolutions on measurable v_j are mapped as

v_{j,l} \rightarrow v_{j,l+1} = Q_j(v_{j,l}, H_{j,l}),    (17)

where Q_j denotes transductions S_j on v_j consistent with A(F, T).
The binary switching function is f: {0,1}^n → {0,1}^m.
An n-variable r-valued function f, with r^{r^n} different combinations, is defined as a mapping of a finite set {0, …, r−1}^n into a finite set {0, …, r−1}^m, e.g.,

f: {0, …, r−1}^n → {0, …, r−1}^m.    (18)

Truth vectors on n binary and r-valued variables [x_1, …, x_n] are

F = [f(0), f(1), …, f(2^n − 1)]^T and F = [f(0), f(1), …, f(r^n − 1)]^T.    (19)
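To illustrate the truth-vector representation of (18) and (19), the following sketch enumerates the truth vector of a ternary (r = 3), two-variable MIN function; the choice of MIN is an assumed example, not a function discussed in the paper:

```python
from itertools import product

r, n = 3, 2  # ternary logic, two variables

def f_min(assignment):
    """An example r-valued switching function: ternary MIN of the inputs."""
    return min(assignment)

# Truth vector F = [f(0), f(1), ..., f(r^n - 1)], indexing assignments in radix-r order
assignments = list(product(range(r), repeat=n))
F = [f_min(a) for a in assignments]
print(len(F))  # r^n = 9 entries
print(F)       # [0, 0, 0, 0, 1, 1, 0, 1, 2]
```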
Digital computing and digital design use algebraic maps, Boolean algebra, digital (binary and multiple-valued)
logics, decision diagrams, data structures, fundamental expansions, polynomial expressions, sequential networks,
probabilistic concepts, stochastic schemes, and other approaches [12, 13]. If quantum-mechanical consistency, algorithmic cohesiveness and hardware coherency are satisfied, some of the aforementioned concepts may be applied to quantum processing. Any arithmetically- and algorithmically-defined computable function must be:
1. Definable, realizable and implementable as derived by using the distinguishable and computable
transforms;
2. Implementable using the utilizable quantum transductions on measurable and algorithmically processable
variables.
The calculi and arithmetics of quantum processing are reported in [2-5, 12, 13].
6. Conclusions
We examined quantum phenomena which are exhibited and utilized to ensure high-performance quantum processing.
The following three-fold objectives were achieved:
1. The microscopic systems which may enable quantum communication and processing were devised and examined;
2. Phenomena and mechanisms, possibly utilized by natural systems to accomplish high-performance
communication and processing, were studied;
3. A novel paradigm of high-performance quantum processing was developed.
We enabled a knowledge base and discovered new solutions. Our transformative findings are substantiated by
means of basic, applied and numerical studies which are consistent with experiments, biophysics, quantum mechanics,
information theory, computer science and computer engineering.
References
1. International Technology Roadmap for Semiconductors, 2005, 2009 and 2011 Editions, Semiconductor Industry Association, Austin, Texas, USA, 2011.
2. S. E. Lyshevski, Molecular Electronics, Circuits, and Processing Platforms, CRC Press, Boca Raton, FL, 2007.
3. S. E. Lyshevski, "Molecular and biomolecular processing: Solutions, directions and prospects," in Handbook of Nanoscience, Engineering and Technology, Eds. W. Goddard, D. Brenner, S. E. Lyshevski and G. Iafrate, CRC Press, Boca Raton, FL, 2012.
4. S. E. Lyshevski, "Hardware, software and algorithmic solutions for quantum data processing," Proc. IEEE Conf. Nanotechnology, Birmingham, UK, 2012.
5. S. E. Lyshevski, "Quantum processing: Feasibility studies and solutions," Proc. IEEE Conf. Nanotechnology, Portland, OR, pp. 1527-1532, 2011.
6. A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. A. Smolin and H. Weinfurter, "Elementary gates for quantum computation," Phys. Rev. A, vol. 52, no. 5, pp. 3457-3467, 1995.
7. C. H. Bennett, I. Devetak, P. W. Shor and J. A. Smolin, "Inequalities and separations among assisted capacities of quantum channels," Phys. Rev. Lett., vol. 96, no. 150502, 2006.
8. A. Imamoglu, D. D. Awschalom, G. Burkard, D. P. DiVincenzo, D. Loss, M. Sherwin and A. Small, "Quantum information processing using quantum dot spins and cavity QED," Phys. Rev. Lett., vol. 83, no. 4204, 1999.
9. D. C. Marinescu and G. M. Marinescu, "The Boole Lecture – Quantum information: A glimpse at the strange and intriguing future of information," The Computer Journal, vol. 50, pp. 505-521, 2007.
10. M. Mosca, R. Jozsa, A. Steane and A. Ekert, "Quantum-enhanced information processing," Phil. Trans. R. Soc. Lond. A, vol. 358, pp. 261-279, 2000.
11. A. Steane and E. Rieffel, "Beyond bits: The future of quantum information processing," Computer, vol. 33, no. 1, pp. 38-45, 2000.
12. S. Yanushkevich, V. Shmerko and S. E. Lyshevski, Computer Arithmetics for Nanoelectronics, CRC Press, Boca Raton, FL, 2009.
13. S. Yanushkevich, V. Shmerko and S. E. Lyshevski, Logic Design of Nano-ICs, CRC Press, Boca Raton, FL, 2005.