Probability and Empirical Frequency
February 1, 2009
This handout clarifies a common misconception among students learning probability for the first time—
the difference between empirical frequency and probability.
Suppose you toss a fair coin 100 times. Say 52 of the tosses turned up heads. The empirical
frequency of heads is .52, the proportion of tosses that turned up heads.
The probability of turning up heads is still 1/2 or .5 (the coin is known to be fair)—and here lies the
difference. If the probability of heads is 1/2, it is still possible that 100 independent coin tosses can all turn
up heads (though this is not very likely—it can happen only with probability 1/2^100—see HW).
However, the empirical frequency and probability are not unrelated. One relation between empirical
frequency and probability is in the limit. Say you toss a coin n times (each toss is independent of the
other tosses), and n_h of the tosses turned up heads. What is true is (almost surely)

    lim_{n→∞} n_h / n = 1/2,

namely, if someone gives you any ε > 0, you can pick some n so large that n_h/n is between 1/2 ± ε. Since ε can
be arbitrarily small, you can keep tossing coins till the empirical frequency is as close to 1/2 as you like.
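This limiting behavior is easy to watch numerically. Below is a minimal Python sketch (ours, not part of the handout; the function name empirical_frequency and the fixed seed are illustrative choices) that tosses a simulated fair coin and reports the fraction of heads:

```python
import random

def empirical_frequency(n, seed=0):
    """Toss a simulated fair coin n times; return the fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

# As n grows, the empirical frequency n_h / n settles near the probability 1/2.
for n in (100, 10_000, 1_000_000):
    print(n, empirical_frequency(n))
```

Any single run can stray from 1/2 (all heads is possible, just unlikely); only the limit is pinned down.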
Another way to relate the empirical frequency and probability is by looking at averages. Say you throw
a coin 2 times. With probability 1/4, you will see each of HH, TT, HT, and TH.
The number of times you see heads is then 2 with probability 1/4 (if we get HH), 1 with probability 1/2
(either of HT or TH), and 0 with probability 1/4 (if we get TT). Therefore, the expected (we will learn more
about this later in the semester) number of times we see heads is

    1/4 · 2 + 1/2 · 1 + 1/4 · 0 = 1 (= 2 × 1/2).
Similarly, the expected number of times heads turns up with 3 coin tosses is 1.5, 4 coin tosses is 2, and in
general, with n coin tosses, we expect n/2 heads, which is nothing but n multiplied by the probability of
heads. In general, for potentially biased coins, if the probability of heads is p, the expected number of heads
that turn up is np.
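For small n this can be checked by brute force. The sketch below (ours, not the handout's; it uses exact rational arithmetic) enumerates every length-n outcome sequence, weights its head count by the sequence's probability, and recovers the n × p pattern:

```python
from fractions import Fraction
from itertools import product

def expected_heads(n, p=Fraction(1, 2)):
    """Expected number of heads in n tosses, by enumerating all 2**n sequences."""
    total = Fraction(0)
    for seq in product("HT", repeat=n):
        k = seq.count("H")                    # heads in this sequence
        total += k * p**k * (1 - p)**(n - k)  # weight by sequence probability
    return total

print(expected_heads(2))                    # 1, matching 2 x 1/2
print(expected_heads(3))                    # 3/2
print(expected_heads(4, p=Fraction(1, 3)))  # 4/3, matching n x p for a biased coin
```

Enumeration only works for small n (2^n sequences), but it confirms the closed form np exactly.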
Empirical frequency is easy to understand—it is just the observed fraction. But what is probability then?
It is an unseen quantity which we can only infer approximately by repeating experiments sufficiently many
times and looking at empirical frequency.
Note that this sort of abstraction is not all that strange by itself—we always model real life with
abstractions. And we are fine with that so long as the abstractions predict real life well; in this regard,
probability has been wildly successful.
Probability and quantum theory
In surprisingly many ways, the above abstract notion of probability is pretty much built into our world. If
you study quantum physics, you will learn that objects that are governed by quantum laws are never in one
state or another, rather they exist in all states simultaneously. In one interpretation of quantum theory—the
Copenhagen interpretation—when you observe a quantum object like an electron, the act of observation collapses
it into one of its several possible states, and the chance it falls into any given state is the probability of that state.
To make it more “concrete”, electrons can have two possible spins denoted “+” or “−”. At any given
point, we do not know what the spin of the electron is, we only know the probability of each spin—the
electron is supposed to exist as an ensemble. In other words, we think of the electron as having both “+”
and “−” spins, say with probability p and 1 − p respectively. Now if we are curious and actually observe the
electron, the electron falls into “+” with probability p and “−” with probability 1 − p. In other words, we
will see “+” spin with probability p and “−” spin with probability 1 − p.
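As a loose classical analogy (our toy sketch, not the handout's and not a real quantum simulation; the value p = 0.3 is chosen arbitrarily for illustration), each observation can be modeled as sampling one spin, and the empirical frequency of "+" over many observations approaches p, just as in the coin-tossing discussion above:

```python
import random

def measure_spin(p, rng):
    """Model one observation: collapse to '+' with probability p, else '-'."""
    return "+" if rng.random() < p else "-"

rng = random.Random(1)
p = 0.3  # assumed probability of '+' spin, for illustration only
outcomes = [measure_spin(p, rng) for _ in range(100_000)]
print(outcomes.count("+") / len(outcomes))  # empirical frequency close to p
```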
There are other interpretations as well—among them is Everett’s many-worlds interpretation. In this
case, at the moment of observation, the observer enters parallel possible universes, one in which the spin is
“+” and one in which the spin is “−”. When Mark Twain remarked that truth is stranger than fiction, he
probably had no idea how right he was.
This is one of the phenomena that quantum computing hopes to exploit—since electrons exist in many
states together, maybe we can exploit this to do massively parallel computations (see HW). If all this seems
too otherworldly, it isn’t—the first demonstration of a quantum computer is not in the future but in the past.
Back in 2007, a Vancouver-based hardware firm, D-Wave, demonstrated a “proof of concept” 16-qubit
computer they named Orion at the Computer History Museum in Mountain View, CA.
There is a famous thought experiment, Schrödinger’s cat (google it), based on this. It is famous enough
to have been referenced on several TV series, including the medical mystery series House on the FOX network.
HW 1 We will compute here the probability of seeing a hundred consecutive heads in a hundred tosses of a fair coin.
1. First we write out the probability of any sequence HT H... of results of 100 coin tosses using our
definition of conditional probabilities.
(a) Suppose you tossed the coin twice. Say we want the probability of heads on the first toss and
tails on the second. We know that
P(H on first and T on second) = P(T on second | H on first) · P(H on first).
What equations in the handout on conditional probability do we need to obtain the above? From
now on, we will shorten the above equation and write P(HT) = P(T|H)P(H).
(b) Show that for a sequence HHT HT T T H,
P(HHTHTTTH) = P(H) · P(H|H) · P(T|HH) · P(H|HHT) · P(T|HHTH) ·
P(T|HHTHT) · P(T|HHTHTT) · P(H|HHTHTTT).
(Hint: do this in steps. First write the probability as
P(HHTHTTT in the first 7 tosses and H in the eighth),
then use the definition of conditional probability. Repeat.)
2. Next we write out the probability of any sequence of 100 heads and tails, for example, HT HT HT...
(alternating heads and tails), using the above relation. We use independence of coin tosses, namely, no
matter what the past outcomes are, the probability of heads (or tails) is 1/2. Specifically, this means that
P(H|xxxx) = 1/2, where xxxx is any sequence of heads or tails. Using this along with the previous
subproblem, what is the probability of any sequence of 100 heads or tails? In particular, what is the
probability of all hundred tosses being heads?
3. How many outcomes are possible with a hundred tosses of a fair coin? Each outcome is a sequence
HTH... that could have been obtained by tossing the coin 100 times.
HW 2 Suppose an electron exists as an ensemble of two possible states. You are building a quantum computer
with 16 electrons. The state of the computer is the vector of states of the electrons. How many states does
the computer ensemble have?