PHYS 331: Junior Physics Laboratory I
Particle Counting Statistics
The following paragraphs are intended to supplement the text by Bevington and
Robinson with a specific focus on counting experiments. You will need to read the text to obtain
further details and background information.
A. Counts in a fixed interval
Poisson statistics are the appropriate model for counting experiments, so it will be useful
to express the Poisson distribution in a form that can be readily applied. If the average number of
events in a time interval t is µ, it is convenient to define a counting rate r by
r = \mu / t    (1)

The probability of observing exactly n events in t is then given by the Poisson distribution function with mean \mu = rt,

P_\mu(n) = \frac{\mu^n}{n!} e^{-\mu} = \frac{(rt)^n}{n!} e^{-rt}    (2)
Two special cases are of interest: The probability of exactly zero events in t is

P_\mu(0) = e^{-rt}    (3)

and the probability of exactly one event in t is

P_\mu(1) = rt\, e^{-rt}    (4)
These results will be directly applicable when interpreting experimental counting data, and are
very helpful in the examples that follow.
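As a quick numerical check, the special cases above can be evaluated directly from Eq. 2. A minimal sketch in Python (the function name `poisson_pmf` and the example rate are ours, chosen for illustration):

```python
from math import exp, factorial

def poisson_pmf(n, rate, t):
    """Probability of exactly n events in time t at average rate r (Eq. 2)."""
    mu = rate * t
    return mu**n * exp(-mu) / factorial(n)

# Example: r = 2 counts/s over t = 0.5 s, so mu = rt = 1.
p0 = poisson_pmf(0, 2.0, 0.5)   # Eq. 3: e^{-rt}
p1 = poisson_pmf(1, 2.0, 0.5)   # Eq. 4: rt e^{-rt}
print(round(p0, 4), round(p1, 4))   # both equal e^-1, about 0.3679
```

Note that for \mu = 1 the zero-count and one-count probabilities coincide, as Eqs. 3 and 4 require.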
For large values of the mean the factorial makes it difficult to evaluate the Poisson distribution function numerically. Fortunately, it can be shown that P_\mu(n) approaches the Gaussian form

G_{\mu,\sigma}(x)\,dx = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right] dx    (5)

in the limit of large \mu, provided that we set \sigma^2 = \mu. You will need to use this mathematical result when you analyze Poisson-distributed data for large \mu.
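To see the Gaussian limit at work, one can compare Eq. 2 and Eq. 5 directly for a large mean. The sketch below is our own check, with \mu = 100 chosen arbitrarily; Python's exact integer arithmetic handles the large factorial:

```python
from math import exp, factorial, pi, sqrt

def poisson(n, mu):
    # Eq. 2 with mu = rt
    return mu**n * exp(-mu) / factorial(n)

def gauss(n, mu):
    # Eq. 5 evaluated at x = n, with sigma^2 = mu
    return exp(-0.5 * (n - mu)**2 / mu) / sqrt(2 * pi * mu)

mu = 100
for n in (90, 100, 110):
    print(n, poisson(n, mu), gauss(n, mu))
```

Near the peak the two agree to a fraction of a percent; out on the shoulders the residual skewness of the Poisson distribution (of order 1/\sqrt{\mu}) is still visible.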
To use the distribution functions with experimental data we must estimate \mu and \sigma from the data. The mean is found from the usual formula

\mu = \frac{1}{N} \sum_i i\, f_i    (6)

where i is the value of the random variable (the number of counts/time interval in this case), f_i is the number of measurements yielding a result of i, and N is the total number of measurements.
Similarly, \sigma is found from

\sigma^2 \approx s^2 = \frac{1}{N} \sum_i f_i (i - \mu)^2    (7)

where the symbols have the same meanings.
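Equations 6 and 7 are straightforward to apply to a tallied frequency table. A minimal sketch (the frequencies below are invented for illustration):

```python
# Hypothetical frequency table: freq[i] = number of time intervals in
# which exactly i counts were recorded (N = sum of the entries).
freq = {0: 14, 1: 27, 2: 27, 3: 18, 4: 9, 5: 5}

N = sum(freq.values())
mu = sum(i * f for i, f in freq.items()) / N            # Eq. 6
s2 = sum(f * (i - mu)**2 for i, f in freq.items()) / N  # Eq. 7

print(N, mu, s2)
```

For genuinely Poisson data s^2 should come out close to \mu, which is itself a useful consistency check on the measurement.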
It will also be useful to know the standard deviations of our estimates. For \mu this is just the standard deviation of the mean,

\sigma_\mu = s / \sqrt{N}    (8)
The standard deviation of s^2, \sigma_{s^2}, is found from the general error-propagation formula

\sigma^2(s^2) = \sum_i \left(\frac{\partial s^2}{\partial x_i}\right)^2 \sigma_i^2    (9)

where the x_i are the individual measurements, which reduces to

\sigma^2_{s^2} = \frac{4\sigma^2 s^2}{N} \approx \frac{4 s^4}{N}    (10)

when all the \sigma_i are equal, as is the case here.
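The uncertainty estimates of Eqs. 8 and 10 then take only a couple of lines. In this sketch the values N = 100 and s^2 = 1.8184 are invented for illustration, and Eq. 10 is applied with \sigma^2 \approx s^2:

```python
from math import sqrt

# Invented example: N measurements with sample variance s^2 (Eq. 7).
N, s2 = 100, 1.8184

sigma_mu = sqrt(s2) / sqrt(N)     # Eq. 8: std. dev. of the mean
sigma_s2 = sqrt(4 * s2**2 / N)    # Eq. 10 with sigma^2 ~ s^2
print(sigma_mu, sigma_s2)
```

Both uncertainties shrink as 1/\sqrt{N}, so quadrupling the number of measurements halves them.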
B. Waiting time distribution
The Poisson distribution function tells us directly the probability of finding a certain
number of counts in a time interval. We might equally well ask for the distribution of time
intervals between counts. That distribution would answer the question "Given an average rate r,
what is the probability of waiting a time t from one event to the next?"
The situation we want to analyze is shown in Fig. 1. Denoting the time between adjacent
events as t1, between next-neighbor events as t2, and so on, we can define a family of probability
density functions q_j(t). For example, the probability of adjacent pulses having a separation
between t and t + dt would be q_1(t)dt. To calculate this distribution we need to find out how many
ways the desired sequence can occur.
For q1(t) we require that there be no events from 0 to t, and then an event in the interval
dt at t. The probability of this happening is just the product of the individual probabilities, or
q_1(t)\,dt = \left(e^{-rt}\right) r\,dt    (11)
which says that intervals long compared to 1/r will be quite rare.
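Equation 11 says the intervals between adjacent events are exponentially distributed with mean 1/r. A quick Monte Carlo sketch (rate and seed chosen arbitrarily) confirms, for instance, that a fraction e^{-1} \approx 0.37 of intervals exceed 1/r:

```python
import random
from math import exp

random.seed(1)
r = 5.0          # mean event rate, arbitrary units
n = 200_000

# random.expovariate draws waiting times with density q1(t) = r e^{-rt}
waits = [random.expovariate(r) for _ in range(n)]
frac_long = sum(t > 1 / r for t in waits) / n
print(frac_long, exp(-1))   # should agree to a few parts in a thousand
```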
The argument for q2(t), the time interval to the second following event, is a little trickier.
Start by considering a typical sequence: An event occurs at t = 0, another at an intermediate time
t1, and then the end event is at t. The probability of this sequence is the product
[p(0) from 0 to t1][p(1) at t1][p(0) from t1 to t][p(1) at t]
(12)
Writing this out,

e^{-rt_1}\, r\,dt_1\; e^{-r(t-t_1)}\, r\,dt = r^2 e^{-rt}\, dt_1\, dt    (13)

Of course, we do not care when the intermediate event occurred, so we sum over all possible t_1
Fig. 1 Time intervals between pulses: an event at time 0, the next at t_1, the second-following at t.
Fig. 2 Plots of q_j(t) versus rt for j = 1 (solid line), j = 2 (dashed line) and j = 4 (dotted line).
to get

q_2(t)\,dt = \int_0^t r^2 e^{-rt}\, dt_1\, dt = r^2 t\, e^{-rt}\, dt    (14)
The general result for any j is

q_j(t) = \frac{r (rt)^{j-1} e^{-rt}}{(j-1)!}    (15)

which is plotted in Fig. 2. Note that for j > 1 the function peaks near rt = j - 1, as one might
expect. This function also arises in other contexts, and is called the gamma distribution.
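Since the wait to the j-th following event is the sum of j independent adjacent-event intervals, Eq. 15 can be checked by simulation. This sketch (parameters arbitrary) verifies the mean wait, which for the gamma distribution is j/r:

```python
import random

random.seed(2)
r, j, n = 1.0, 4, 100_000

# The wait to the j-th following event is a sum of j independent
# exponential intervals, so it follows the gamma density of Eq. 15.
waits = [sum(random.expovariate(r) for _ in range(j)) for _ in range(n)]
mean_wait = sum(waits) / n
print(mean_wait)   # expect a value close to j / r = 4
```

A histogram of `waits` would likewise peak near t = (j - 1)/r, i.e. rt = j - 1, as noted above.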
C. Accidental coincidence
Suppose we have connected two counters to a coincidence module, with the intention of
picking out simultaneous events. We might do this to discriminate against some background
process which produces non-coincident counts, for example. Unfortunately some of the
coincidences we observe will be due to unrelated events which happen to occur within the time
resolution of the coincidence circuit. It is important to know how frequent such accidental
coincidences are.
As a simple approximation, we assume that detector pulses that arrive at the coincidence
circuit within a resolving time \tau_r are counted as coincident, while pulses with a greater
separation interval are not counted. The probability of an accidental coincidence is then just the
probability of an event in counter 1 and an event in counter 2 within a short interval \tau_r. If the
individual counting rates are r_1 and r_2, this is just the product of the two probabilities

p_c = (r_1 \tau_r)(r_2 \tau_r)    (16)

where we assumed r_1 \tau_r \ll 1 and r_2 \tau_r \ll 1 because this is the only useful situation. The rate of
accidental counts is just the probability per unit time, or

r_a = r_1 r_2 \tau_r    (17)
Actually this is a slight overestimate since some of the measured single-counter rate is due to real
coincident events. One should then subtract the true coincidences before estimating accidentals,
but when coincidence counting is useful this is usually a small correction.
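Equation 17 makes the scale of the problem concrete. A sketch with invented but typical numbers:

```python
# Hypothetical example: two detectors at 1000 and 1500 counts/s feeding
# a coincidence circuit with a 50 ns resolving time.
r1, r2 = 1.0e3, 1.5e3    # singles rates, counts/s
tau_r = 50e-9            # resolving time tau_r, seconds

ra = r1 * r2 * tau_r     # Eq. 17: accidental coincidence rate
print(ra)                # about 0.075 accidentals per second
```

Even at these substantial singles rates the accidentals are rare because \tau_r is so short; note, though, that they grow as the product of the two rates.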
D. Dead time correction
All counter systems require a finite amount of time to process an event. During this time,
the counter is unable to respond to another particle that may arrive. To model this phenomenon,
we assume that the system is totally unresponsive for a "dead time" \tau_d, after which it will again
accept an event. If another event arrives during the dead time it will extend the interval, so that
very high event rates will paralyze this type of counter.
To grasp the seriousness of the problem, we can ask for the probability of losing one or
more counts following a particular event. This can be approximated as the Poisson probability of
having one or more arrivals during \tau_d, which is

P_\mu(n \ge 1) = 1 - P_\mu(0) = 1 - e^{-r\tau_d}    (18)

Numerically, the probability of losing counts is about 10% when r\tau_d is about 0.1. For a Geiger
tube with a dead time of 250 \mu s, as might be used in a survey meter, this corresponds to only 400
counts per second. Clearly we need to correct for the dead time when highly accurate
measurements are needed.
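Equation 18 and the Geiger-tube numbers quoted above can be checked directly:

```python
from math import exp

def p_lost(rate, tau_d):
    """Probability of one or more arrivals during the dead time (Eq. 18)."""
    return 1.0 - exp(-rate * tau_d)

# Geiger-tube example from the text: tau_d = 250 us at 400 counts/s,
# so r * tau_d = 0.1.
print(p_lost(400.0, 250e-6))   # about 0.095, i.e. roughly 10%
```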
Suppose we have events arriving at a true rate r for a period T. The counter registers only
N counts, fewer than the true number of arrivals, rT. In fact, only those events which arrive at
time intervals greater than \tau_d are recorded. The probability of a time interval between events
greater than \tau_d is just an integral over the time interval distribution q_1(t) derived above, namely

p(t \ge \tau_d) = \int_{\tau_d}^{\infty} q_1(t)\, dt = e^{-r\tau_d}    (19)

This probability is the fraction N/rT of the counts that we see, so

N = rT\, e^{-r\tau_d}    (20)
To find the true rate r we must solve this equation numerically. Qualitatively, the expression behaves as
expected: the apparent rate initially rises linearly with r, but then peaks and decreases toward
zero when events arrive at intervals shorter than \tau_d.
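One simple way to invert Eq. 20 is fixed-point iteration on r = (N/T) e^{r\tau_d}, which converges when r\tau_d < 1, i.e. below the peak of the response. A sketch with invented numbers, checking that a known true rate is recovered:

```python
from math import exp

def true_rate(n_obs, T, tau_d, iters=100):
    """Solve N = r T exp(-r tau_d) (Eq. 20) for r by fixed-point
    iteration r <- (N/T) exp(r tau_d); valid for r tau_d < 1."""
    r = n_obs / T                       # start from the apparent rate
    for _ in range(iters):
        r = (n_obs / T) * exp(r * tau_d)
    return r

# Round trip: assume a true rate of 1000 /s with tau_d = 250 us over T = 10 s.
r_true, tau_d, T = 1000.0, 250e-6, 10.0
n_obs = r_true * T * exp(-r_true * tau_d)   # counts registered, per Eq. 20
print(true_rate(n_obs, T, tau_d))           # recovers about 1000 /s
```

Here the apparent rate N/T is only about 779 counts/s, so the dead-time correction is already a 20%-level effect at these settings.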