Quantum Finite Automata
In quantum computing, quantum finite automata (QFAs), also called quantum state machines, are a quantum analog of probabilistic automata or a Markov decision process. They are related to quantum computers in much the same way that finite automata are related to Turing machines. Several types of automata may be defined, including measure-once and measure-many automata. Quantum finite automata can also be understood as the quantization of subshifts of finite type, or as a quantization of Markov chains. QFAs are, in turn, special cases of geometric finite automata or topological finite automata.
The automata work by accepting a finite-length string \sigma=(\sigma_0,\sigma_1,\cdots,\sigma_k) of letters \sigma_i from a finite alphabet \Sigma, and assigning to each such string a probability \operatorname{Pr}(\sigma) indicating the probability of the automaton being in an accept state; that is, indicating whether the automaton accepted or rejected the string.
The languages accepted by QFAs are not the regular languages of deterministic finite automata, nor are they the stochastic languages of probabilistic finite automata. Study of these quantum languages remains an active area of research.
Informal description
There is a simple, intuitive way of understanding quantum finite automata. One
begins with a graph-theoretic interpretation of deterministic finite automata
(DFA). A DFA can be represented as a directed graph, with states as nodes in
the graph, and arrows representing state transitions. Each arrow is labelled with
a possible input symbol, so that, given a specific state and an input symbol, the
arrow points at the next state. One way of representing such a graph is by means
of a set of adjacency matrices, with one matrix for each input symbol. In this
case, the list of possible DFA states is written as a column vector. For a given
input symbol, the adjacency matrix indicates how any given state (row in the
state vector) will transition to the next state; a state transition is given by matrix
multiplication.
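For instance (a small invented example, not taken from the text above), consider a three-state DFA in which the symbol \alpha sends state 1 to state 2, state 2 to state 3, and state 3 back to itself. Its adjacency matrix, and the transition out of state 1, are

U_\alpha = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \end{pmatrix},
\qquad
U_\alpha \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},

so multiplying the column vector for state 1 by U_\alpha produces the column vector for state 2.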
One needs a distinct adjacency matrix for each possible input symbol, since
each input symbol can result in a different transition. The entries in the
adjacency matrix must be zeros and ones. For any given column in the matrix,
only one entry can be non-zero: this is the entry that indicates the next (unique)
state transition. Similarly, the state of the system is a column vector, in which
only one entry is non-zero: this entry corresponds to the current state of the
system. Let \Sigma=\{\alpha\} denote the set of input symbols. For a given
input symbol \alpha\in\Sigma, write U_\alpha as the adjacency matrix that
describes the evolution of the DFA to its next state. The set \{U_\alpha |
\alpha\in\Sigma\} then completely describes the state transition function of the
DFA. Let Q represent the set of possible states of the DFA. If there are N states
in Q, then each matrix U_\alpha is N by N-dimensional. The initial state q_0\in
Q corresponds to a column vector with a one in the q_0'th row. A general state q is then a column vector with a one in the q'th row. By abuse of notation, let q_0 and q also denote these two vectors. Then, after reading input symbols
\alpha\beta\gamma\cdots from the input tape, the state of the DFA will be given
by q = \cdots U_\gamma U_\beta U_\alpha q_0. The state transitions are given
by ordinary matrix multiplication (that is, multiply q_0 by U_\alpha, then by U_\beta, etc.); the
order of application is 'reversed' only because we follow the standard
application order in linear algebra.
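As a concrete sketch of this construction in Python with NumPy (the three states, two-letter alphabet, transitions, and accepting state below are invented purely for illustration), reading a word amounts to a sequence of matrix-vector products:

import numpy as np

# Hypothetical 3-state DFA over the alphabet {a, b}.
# Each column of a transition matrix holds exactly one 1: the one-hot
# encoding of the state reached from the state indexed by that column.
U = {
    "a": np.array([[0, 0, 0],
                   [1, 0, 0],
                   [0, 1, 1]]),
    "b": np.array([[1, 1, 0],
                   [0, 0, 0],
                   [0, 0, 1]]),
}

q0 = np.array([1, 0, 0])       # one-hot column vector: start in state 0
accept = np.array([0, 0, 1])   # state 2 is the (only) accepting state

def run_dfa(word):
    """Compute q = ... U_gamma U_beta U_alpha q0 by applying one matrix per symbol."""
    q = q0
    for symbol in word:
        q = U[symbol] @ q      # each step is an ordinary matrix-vector product
    return bool(accept @ q)    # true exactly when the final one-hot state is accepting

print(run_dfa("aa"), run_dfa("ab"))   # the first word ends in state 2, the second does not

Note that the last symbol read corresponds to the left-most matrix in the product, which is the 'reversed' order just mentioned.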
The above description of a DFA, in terms of linear operators and vectors,
almost begs for generalization, by replacing the state-vector q by some general
vector, and the matrices \{U_\alpha\} by some general operators. This is
essentially what a QFA does: it replaces q by a probability amplitude, and the
\{U_\alpha\} by unitary matrices. Other, similar generalizations also become
obvious: the vector q can be some distribution on a manifold; the transition matrices become automorphisms of the manifold; this defines a
topological finite automaton. Similarly, the matrices could be taken as
automorphisms of a homogeneous space; this defines a geometric finite
automaton.
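A minimal numerical sketch of this replacement, in Python with NumPy, might look as follows. It follows the measure-once convention (measuring only after the whole word is read), and the particular rotation used as the unitary and the choice of accepting state are arbitrary assumptions, not taken from the text:

import numpy as np

# Hypothetical 2-state measure-once QFA over a one-letter alphabet {a}.
theta = np.pi / 4
U = {
    "a": np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]]),   # a (real) unitary matrix
}

psi0 = np.array([1.0, 0.0], dtype=complex)   # initial probability amplitude
P_accept = np.diag([0.0, 1.0])               # projector onto the accepting state

def accept_probability(word):
    """Evolve the amplitude unitarily, then measure the accept projector once at the end."""
    psi = psi0
    for symbol in word:
        psi = U[symbol] @ psi                            # unitary evolution per input symbol
    return float(np.real(np.vdot(psi, P_accept @ psi)))  # Pr(sigma) = <psi|P|psi>

print(accept_probability("a"), accept_probability("aa"))

The vector of amplitudes replaces the one-hot state vector, the unitary matrices replace the adjacency matrices, and the acceptance probability \operatorname{Pr}(\sigma) is read off by a final measurement.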
Before moving on to the formal description of a QFA, there are two noteworthy
generalizations that should be mentioned and understood. The first is the nondeterministic finite automaton (NFA). In this case, the vector q is replaced by a
vector which can have more than one entry that is non-zero. Such a vector then
represents an element of the power set of Q; it is just an indicator function on Q.
Likewise, the state transition matrices \{U_\alpha\} are defined in such a way
that a given column can have several non-zero entries in it. After each
application of \{U_\alpha\}, though, the column vector q must be renormalized
so that it only contains zeros and ones. Equivalently, the multiply-add operations performed during matrix multiplication should be replaced by Boolean and-or operations; that is, one is then working over the Boolean semiring.
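A brief sketch of this Boolean view in Python with NumPy (the particular transitions are invented for illustration):

import numpy as np

# Hypothetical 3-state NFA: matrix entries are booleans, and a column may
# contain several True entries (several possible next states).
T = {
    "a": np.array([[False,  True, False],
                   [ True, False, False],
                   [ True, False,  True]]),
}

q = np.array([True, False, False])   # indicator vector of the current set of states, here {0}

def step(q, symbol):
    """Boolean 'matrix multiplication': multiply-add is replaced by and-or."""
    return (T[symbol] & q).any(axis=1)

q = step(q, "a")
print(q)   # from {0}, on reading 'a', the NFA can be in either of the states {1, 2}

The result is automatically an indicator vector again, so no separate renormalization step is needed when working directly with Booleans.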
A well-known theorem states that, for each NFA, there is an equivalent DFA (obtained by the powerset construction), and vice versa. This implies that the sets of languages that can be recognized by DFAs and NFAs are the same; these are the regular languages. In the generalization to QFAs, the set of recognized languages will be different. Describing that set is one of the outstanding research problems in QFA theory.
Another generalization that should be immediately apparent is to use a
stochastic matrix for the transition matrices, and a probability vector for the
state; this gives a probabilistic finite automaton. The entries in the state vector
must be non-negative real numbers that sum to one, in order for the state vector to be interpreted as a probability distribution. The transition matrices must preserve this
property: this is why they must be stochastic. Each state vector should be
imagined as specifying a point in a simplex; thus, this is a topological
automaton, with the simplex being the manifold, and the stochastic matrices
being linear automorphisms of the simplex onto itself. Since each transition is
(essentially) independent of the previous (if we disregard the distinction
between accepted and rejected languages), the PFA essentially becomes a kind
of Markov chain.
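A small numerical sketch of the probabilistic case (again with invented transition probabilities): since the state is a column vector multiplied from the left, each transition matrix must be column-stochastic, i.e. each of its columns sums to one.

import numpy as np

# Hypothetical 2-state probabilistic finite automaton over a one-letter alphabet.
# Each column of the transition matrix is a probability distribution over next states.
M = {
    "a": np.array([[0.9, 0.3],
                   [0.1, 0.7]]),
}

p = np.array([1.0, 0.0])     # probability vector: start in state 0 with certainty

for symbol in "aaa":
    p = M[symbol] @ p        # one stochastic step per input symbol

print(p, p.sum())            # the entries stay non-negative and still sum to 1

Each such vector p is a point in the probability simplex, and the stochastic matrices map the simplex to itself, as described above.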
By contrast, in a QFA, the manifold is complex projective space
\mathbb{C}P^N, and the transition matrices are unitary matrices. Each point in
\mathbb{C}P^N corresponds to a quantum-mechanical probability amplitude or
pure state; the unitary matrices can be thought of as governing the time
evolution of the system (viz., in the Schrödinger picture). The generalization
from pure states to mixed states should be straightforward: A mixed state is
simply a measure-theoretic probability distribution on \mathbb{C}P^N.
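As a brief sketch of the mixed-state picture (the rotation, the mixture weights, and the accepting projector are all arbitrary choices, and this is simply the standard density-matrix formalism rather than anything specific to this text):

import numpy as np

theta = np.pi / 4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # unitary evolution for one input symbol

# A mixed state: a classical mixture of the pure states |0> and |1>.
rho = 0.75 * np.outer([1, 0], [1, 0]) + 0.25 * np.outer([0, 1], [0, 1])

rho = U @ rho @ U.conj().T            # Schrodinger-picture evolution of a density matrix
P_accept = np.diag([0.0, 1.0])        # projector onto the accepting state

print(np.trace(P_accept @ rho).real)  # acceptance probability Tr(P rho)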
A point worth contemplating is the distribution that results on the manifold as a language is fed into the automaton. For an automaton to be 'efficient' at recognizing a language, that distribution should be 'as uniform as possible'. This need for uniformity is the underlying principle behind maximum entropy methods, which aim to guarantee crisp, compact operation of the automaton. In other words, the machine learning methods used to train hidden Markov models generalize to QFAs as well: the Viterbi algorithm and the forward-backward algorithm generalize readily to the QFA.