QUANTUM COMPUTING:
• Quantum computing is an attempt to unite quantum mechanics and information science to achieve next-generation computation.
• A quantum computer is a machine that performs calculations based on the laws of quantum mechanics, which govern the behaviour of particles at the sub-atomic level.
• Quantum computers have simultaneity and parallelism inherently built in.
• Moore's law states that the number of transistors in a microprocessor doubles every 18 months.
• Transistor size must shrink proportionally; CMOS feature size is now around 5 nm.
• Within a few more years, transistor size would reach the sub-atomic scale, i.e. in the range of 0.1 Å.
Classical Computers:
• Use bits, each of which holds either a zero or a one.
• Operate on these bits using a series of binary logic gates.
• Components have been steadily decreasing in size.
• Classical designs are reaching the theoretical limit of miniaturization (components only a few atoms across).
• On the atomic scale, matter obeys the rules of quantum physics, not classical physics.
• Quantum technology could not only further reduce the size of components, but could also allow the development of new algorithms based on quantum concepts.
Qubit (Quantum bit)
• A bit of data represented by a single atom that is in one of two states is known as a qubit.
• A physical implementation of a qubit can use two energy levels of an atom: an excited state representing |1> and the ground state representing |0>.
• Alternatively, a spin-up state can represent 1 and spin-down a 0.
• A single qubit can be forced into a superposition of the two states, as the sketch below illustrates.
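A minimal sketch of a qubit in superposition, using plain NumPy (the state-vector representation and the Hadamard gate are standard; the variable names are just illustrative):

    import numpy as np

    # Basis state of a single qubit: |0> as a 2-component vector.
    ket0 = np.array([1.0, 0.0])

    # Hadamard gate: forces a basis state into an equal superposition.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)

    psi = H @ ket0          # (|0> + |1>) / sqrt(2)
    probs = np.abs(psi)**2  # Born rule: probabilities of measuring 0 or 1
    print(probs)            # [0.5 0.5] -- equal chance of 0 and 1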
So What’s the Point?
• While a single classical bit can store either a 0 or a 1, a single qubit can simultaneously store both 0 and 1.
• Two qubits can store four states simultaneously, while two classical bits can store only one of four values.
• In general, if L is the number of qubits in a quantum register, that register can store 2^L different states simultaneously.
• Classical registers store only one state at a time.
• The speed of classical computers can be improved by using parallelism.
• In quantum systems, by contrast, parallelism increases exponentially with a linear increase in the size of the system.
• Because of this, parallelism is inbuilt in quantum systems, as the sketch below shows.
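A rough NumPy sketch of the 2^L claim: an L-qubit register is a state vector with 2^L amplitudes, and a Hadamard on every qubit spreads it uniformly over all 2^L basis states (L = 3 is an arbitrary example value):

    import numpy as np

    L = 3                                  # number of qubits in the register
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    # Build |00...0>: the tensor product of L single-qubit |0> states.
    psi = np.array([1.0, 0.0])
    for _ in range(L - 1):
        psi = np.kron(psi, np.array([1.0, 0.0]))

    # Apply a Hadamard to every qubit: H (x) H (x) ... (x) H.
    H_all = H
    for _ in range(L - 1):
        H_all = np.kron(H_all, H)
    psi = H_all @ psi

    print(len(psi))          # 8 = 2^L amplitudes held at once
    print(np.abs(psi)**2)    # each of the 2^L basis states has probability 1/8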
Quantum error detection:
• Qubits are highly unstable and lose their state over time, a process termed 'decoherence'. This requires constant error correction for building a fault-tolerant system.
• Quantum error correction is very expensive: arbitrary reliability is achieved by recursively encoding physical qubits numerous times, at the expense of speed.
• Error correction is the most basic operation of a quantum computer; a sketch of the simplest code follows.
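A minimal sketch of the three-qubit bit-flip repetition code, the simplest quantum error-correcting code (the code itself is standard, but the helper names are illustrative, and a real device would measure the syndrome with ancilla qubits rather than reading it off the state vector):

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.diag([1., -1.])
    P0 = np.diag([1., 0.])                 # projector onto |0>
    P1 = np.diag([0., 1.])                 # projector onto |1>

    def chain(mats):                       # tensor product of a list of matrices
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    def on(gate, q, n=3):                  # lift a 1-qubit gate to n qubits
        return chain([gate if i == q else I2 for i in range(n)])

    def cnot(c, t, n=3):                   # CNOT = P0_c (x) I + P1_c (x) X_t
        return (chain([P0 if i == c else I2 for i in range(n)]) +
                chain([P1 if i == c else (X if i == t else I2) for i in range(n)]))

    # Encode a|0> + b|1> into a|000> + b|111>.
    a, b = 0.6, 0.8
    psi = chain([np.array([a, b]), np.array([1., 0.]), np.array([1., 0.])])
    psi = cnot(0, 2) @ cnot(0, 1) @ psi

    psi_err = on(X, 1) @ psi               # error model: qubit 1 flips

    # Syndrome: the parities Z0Z1 and Z1Z2 locate the flipped qubit
    # without disturbing the encoded amplitudes a and b.
    s1 = int(round(psi_err @ (on(Z, 0) @ on(Z, 1) @ psi_err)))
    s2 = int(round(psi_err @ (on(Z, 1) @ on(Z, 2) @ psi_err)))
    flip = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(s1, s2)]

    psi_fixed = on(X, flip) @ psi_err if flip is not None else psi_err
    print(np.allclose(psi_fixed, psi))     # True: the encoded state is recovered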
Parallelism
• Exploitable parallelism is limited by resources and application structure.
• The design therefore specializes into memory and compute blocks.
• The two blocks are encoded differently.
• Pairing high processing speed with slow memory creates the problem of stalls.
• This motivates a memory hierarchy.
Memory hierarchy
• Reliability can be increased by recursive encoding.
• If level-1 encoding uses N physical qubits per logical qubit, level-2 encoding uses N^2.
• Qubits in an ion-trap quantum processor have long lifetimes when left idle.
• Volatility increases with interactions.
• An error correction procedure (ECP) runs for every gate.
• The processor spends most of its time on the ECP.
• So the design should enable fast error correction.
• One option is to increase the number of ancillary qubits.
• For each level of concatenation, error-correction time and resource overhead increase exponentially.
• But reliability increases double-exponentially, as the sketch below quantifies.
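This trade-off follows the standard threshold-theorem recursion: if one level of encoding maps a physical error rate p to roughly c·p^2, then k levels shrink the error double-exponentially while the qubit count grows exponentially. A numeric sketch with assumed example values (c = 100, p = 10^-3, and N = 7 physical qubits per logical qubit as in the Steane code):

    # Illustrative threshold-theorem scaling; c, p and N are assumptions.
    c, p = 100.0, 1e-3           # below threshold, since c*p < 1
    N = 7                        # physical qubits per logical qubit

    err, qubits = p, 1
    for level in range(5):
        print(f"level {level}: {qubits:>5} qubits/logical, error ~ {err:.1e}")
        err = c * err**2         # reliability improves double-exponentially
        qubits *= N              # resource cost grows exponentially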
Revised architecture
• Data locality is a common phenomenon.
• A logical qubit can start at level 2, drop to level 1 during peak use, and return to level 2 when idle.
• Memory at level 2 encoding is optimised for area and reliability.
• It is slower than the level 1 structure designed for gate execution.
• So what do we do for optimisation? A sketch of the level-migration idea appears below; the cache design follows.
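A toy sketch of the migration policy just described (the class and method names are invented for illustration; the source specifies only the movement between level 2 and level 1):

    # Hypothetical level-migration policy for one logical qubit.
    class LogicalQubit:
        def __init__(self):
            self.level = 2            # idle qubits rest at level 2 (dense, reliable)

        def on_access(self):
            self.level = 1            # promote for fast gate execution

        def on_idle(self):
            self.level = 2            # demote back when no longer in use

    q = LogicalQubit()
    q.on_access(); print(q.level)     # 1: in the fast compute region
    q.on_idle();   print(q.level)     # 2: back in reliable memory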
Cache
• A cache alleviates the need for constant communication.
• Memory sits at level 2 encoding (slow and reliable).
• The cache sits at level 1 (faster and less reliable).
• The compute region is fastest and as reliable as the cache.
• The two differ in speed due to the number of ancilla qubits in the compute region.
• (a) Memory is denser: the figure shows 3 data qubits in the compute block taking the same area as 8 data qubits in memory.
• (b) Memory is at level 2 encoding, while the compute and cache regions are at level 1 encoding.
• The revised architecture consists of memory and compute regions at level 2, plus a cache and compute region at level 1.
• This changed the ratio of logical to ancillary qubits: earlier it was 1:2 throughout; now it is 8:1 in memory and 1:2 in compute.
• E.g., for an adder, the baseline cache hit rate was around 20%.
• Static scheduling is done and the dependencies are calculated in advance.
• The optimised approach raises the hit rate to as much as 85%, irrespective of cache and adder size (the sketch below shows why hit rate matters).
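The value of a higher hit rate can be seen with the usual effective-access-time model (the relative latencies here are made-up placeholders, not figures from the source):

    # Effective access time for a two-level hierarchy (illustrative numbers).
    t_cache, t_memory = 1.0, 20.0     # assumed relative latencies

    def effective_time(hit_rate):
        return hit_rate * t_cache + (1 - hit_rate) * t_memory

    base = effective_time(0.20)       # unoptimised: ~20% hit rate
    opt  = effective_time(0.85)       # with static scheduling: ~85% hit rate
    print(base, opt, base / opt)      # 16.2 vs 3.85 -> roughly 4x faster access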
• Overall, a balanced design using these architectural techniques shows a 13X improvement in speed and an 8X improvement in performance.
THANK YOU