Introduction to LaTeX
Part II: Writing a Technical Paper

Monson H. Hayes
[email protected]
Chung-Ang University
Seoul, Korea

November 08, 2011

This material is the property of the author and is for the sole and exclusive use of his students. It is not for publication, nor is it to be sold, reproduced, or generally distributed.
Getting Started with your LaTeX Technical Paper

Paper Preamble

First Step: Set up the preamble, which is the area between \documentclass and \begin{document}. For the sample paper, the preamble is quite simple:

%\documentclass[journal,onecolumn,twoside]{IEEEtran}
\documentclass[10pt,twocolumn,twoside]{IEEEtran}
\usepackage[pdftex]{graphicx}
\usepackage{amsmath,amssymb}
\usepackage{setspace}
\usepackage{subfigure}
\def\pr{{\rm P}}
\begin{document}

The preamble includes the definition of the document class with options,

\documentclass[journal,onecolumn]{IEEEtran}

global style commands,

\setlength{\parindent}{0pt}

packages that you want to include,

\usepackage[pdftex]{graphicx}

and your own special features and definitions,

\def\pr{{\rm P}}
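Putting these pieces together, a complete document skeleton looks like the following. This is a minimal sketch, assuming the IEEEtran class and the listed packages are installed; the body text is illustrative:

\documentclass[10pt,twocolumn,twoside]{IEEEtran}
\usepackage{amsmath,amssymb}  % extended math environments and symbols
\def\pr{{\rm P}}              % shorthand used throughout the sample paper
\begin{document}
% Title, abstract, sections, and bibliography go here,
% for example an inline formula: $\pr \{ N = n \}$.
\end{document}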
Paper Header

For the sample paper, the header is straightforward. On the first page it typesets the running head IEEE TRANSACTIONS ON LaTeX together with the title block:

Random Variables: An Overview
Monson Hayes, Fellow, IEEE
Chung-Ang University
Seoul, Korea
[email protected]

The source is:

\title{Random Variables: An Overview}
\author{Monson Hayes,~\IEEEmembership{Fellow,~IEEE} \\
Chung-Ang University \\
Seoul, Korea \\
\textit{[email protected]}}
\markboth{IEEE Transactions on \LaTeX\ }
{Hayes}
Paper Abstract

The special paper notice, the title block, and the abstract are produced by:

\IEEEspecialpapernotice{(Invited Paper)}
\maketitle

\begin{abstract}
This paper introduces the concept of a random variable,
which is nothing more than a variable whose numeric value
is determined by the outcome of an experiment.
To describe the probabilities that are associated with
these numeric values in a concise and conceptually useful
manner, the probability distribution and probability
density function are introduced.
Then, the moment generating function is defined, and
several examples are given.
Finally, the concept of a correlation function and
correlation matrices is introduced.
\end{abstract}
%\doublespace

This renders "(Invited Paper)" beneath the author list, followed by the abstract paragraph at the head of the left column.
Paper Introduction

The opening section of the paper reads:

I. INTRODUCTION

The concept of a random variable is a simple one, and one that is important. Although perhaps sounding at first like something difficult, random variables are conceptually quite simple. Given a sample space Ω corresponding to some random experiment, this sample space contains elementary events, ω ∈ Ω, and when an experiment is performed, a specific elementary event (experimental outcome) is observed. When these experimental outcomes are numeric values, such as the temperature of a fluid, the velocity of a particle, or the value of a commodity or stock, we have a random variable. In some experiments, the outcomes may be non-numeric, such as the gender or the blood type of a person selected at random from within a given population, or the outcome of the flip of a coin. In these cases, a random variable may be formed simply by mapping these non-numeric outcomes to real numbers. We begin this paper by introducing an example of a random variable, and how a probability assignment may be made on the experimental outcomes.

The source for the first paragraph is:

\section{Introduction}
The concept of a random variable is a simple one, and
one that is important.
Although perhaps sounding at first like something
difficult, random variables are conceptually quite simple.
Given a sample space $\Omega $ corresponding to some random
experiment, this sample space contains elementary events,
$\omega \in \Omega $, and when an experiment is performed,
a specific elementary event (experimental outcome) is
observed.
Section II with Numbered Equation and Footnote

Section II of the paper introduces a numbered equation and a footnote:

\section{Probability Assignments}
Let $N$ be a variable that represents the number
of $\alpha $ particles that are counted over a given
period of time.
The ensemble for $N$ is the set of non-negative integers
\[ {\cal E}_N = \{ 0, 1, 2, \ldots \} \]
Since the number of outcomes is unknown until we actually
make a count, then $N$ is a random variable.
In many cases, it is appropriate to model $N$ as a
\emph{Poisson random variable}\footnote{Note that with this
probability assignment it is assumed that the number of
particles may be arbitrarily large and, in fact, approach
infinity.} where
\begin{equation}
\pr \{ N = n \}
= \frac {\lambda ^n}{n!} e^{-\lambda } \qquad n\geq 0
\label{eq:prob_assgn}
\end{equation}
for some $\lambda > 0$.

This produces the numbered display

P{N = n} = (λ^n / n!) e^{-λ},  n ≥ 0        (1)

with the footnote set at the bottom of the column.
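The \label in this equation makes its number available for cross-referencing. For example, one can write (this sentence reappears later in the deck, inside a commented-out passage):

It is clear that this probability assignment satisfies
the first probability axiom since all probabilities in
Eq.~\ref{eq:prob_assgn} are positive.

After two LaTeX runs, \ref{eq:prob_assgn} resolves to the equation number (1).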
A Sum and an Integral

The paper continues: given this probability assignment for N, it is then easy to find the probability of any event that is defined in terms of values of N. The derivation is typeset from the following source:

For example, the probability that the number of $\alpha $
particles is less than some number, $N_0$, may be found as
follows.
Since the event $\{ N < N_0 \}$ is the union of the events
$\{ N=k \}$ for $k = 0, 1, \ldots , N_0-1$,
\[ \{ N < N_0 \} = \bigcup _{n=0} ^{N_0-1} \{N = n\} \]
and since these events are mutually exclusive, then
\begin{equation*}
\pr \{N < N_0 \}
= \sum _{n=0} ^{N_0-1} \pr \{ N=n \}
= \sum _{n=0} ^{N_0-1}
\frac {\lambda ^n}{n!} e^{-\lambda}
\end{equation*}

This last sum may be evaluated using the following

\[ \sum _{n=0} ^{k} \frac {\lambda ^n}{n!} e^{-\lambda }
= \frac {\Gamma(k+1,\lambda)}{k!} \]

where

\[ \Gamma (k,\lambda )
= \int _{\lambda }^{\infty} x^{k-1}e^{-x}dx \]
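As an aside, this is standard LaTeX behavior rather than anything specific to the sample paper: the same sum takes a compact form in inline math and a larger form in display math.

Inline, $\sum _{n=0}^{N_0-1} \pr \{ N=n \}$ places the
limits beside the summation sign, while
\[ \sum _{n=0}^{N_0-1} \pr \{ N=n \} \]
places them above and below it.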
Arrays with Invisible Delimiters

Section III of the paper reads:

III. PROBABILITY MASS FUNCTION

Discrete random variables have a discrete set of possible outcomes, x_k, and the probabilities of these outcomes are denoted by p_k,

P{X = x_k} = p_k

These probabilities are often referred to as probability masses since they represent discrete (point) masses of probability at specific locations along the real axis, just as point charges in electrostatics are used to represent discrete charges distributed along a line. These probabilities, written or plotted as a function of x, are called the probability mass function (PMF).

The delta function is defined with an array whose opening \left\{ has no visible closing bracket; the invisible delimiter \right. balances it:

In order to express probability mass functions
mathematically, we introduce the \emph{delta
function},\footnote{In digital signal processing,
$\delta [n]$ is referred to as the unit sample function.}
which is defined as follows:
\[ \delta [n] = \left\{ \begin{array}{cl}
1 & \ ; \ n = 0 \\
0 & \ ; \ n \neq 0 \end{array} \right. \]
Shifted delta functions may be used to represent functions
that have a value of one at other values of $n$. For example,
$\delta [n-1]$ is equal to one when $n=1$ and equal to zero
for all other values of $n$.
Therefore, for an integer-valued discrete random variable $X$
with
\[ \pr \{ X = n \} = p_X[n] \ ; \ -\infty < n < \infty \]

the PMF may be written as

p_X(n) = Σ_{k=-∞}^{∞} p_X[k] δ[n - k]

For example, a Bernoulli random variable X has an ensemble of possible outcomes of zero and one, E_X = {0, 1}, where P{X = 0} = p and P{X = 1} = 1 - p. Therefore, the probability mass function for X may be expressed in terms of delta functions as follows:

p_X(n) = p δ[n] + (1 - p) δ[n - 1]
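Since the preamble already loads amsmath, the same piecewise definition can also be written with the cases environment, which supplies the brace automatically. A minimal sketch:

\[ \delta [n] = \begin{cases}
1 & ; \ n = 0 \\
0 & ; \ n \neq 0
\end{cases} \]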
Fractions in Equations

Another example from the paper is the Geometric Random Variable, whose probability law and PMF are typeset with \tfrac for compact text-style fractions:

Another example is the \emph{Geometric Random Variable}
that has an ensemble equal to the set of all positive
integers
\[ {\cal E}_X = \{ 1, 2, 3, \ldots \} \]
with a probability law given by
\[ \pr \{ N = k \} = (\tfrac{1}{2})^k \ ; \ k > 0 \]
The probability mass function for this random variable is
\[ p_N(n) = \sum _{k=1} ^{\infty }
(\tfrac {1}{2})^k \delta [n - k] \]
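A note on the fraction commands used here (standard amsmath behavior): \tfrac forces a small text-style fraction and \dfrac forces a full-size display-style fraction, regardless of context, while \frac adapts to its surroundings. For example:

$x = \tfrac{1}{2}$ gives a small fraction inline, while
$x = \dfrac{1}{2}$ gives a full-size fraction inline.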
Combinatorics and Phantoms

Another random variable that occurs frequently in applications is the number of successes, N, in n Bernoulli trials, which has a Binomial Distribution. The binomial coefficients are set with \dbinom and \tbinom:

Another random variable that occurs frequently in
applications is one that corresponds to the number of
successes, $N$, in $n$ Bernoulli trials, with the
probability of a success being equal to $p$. In this case,
$N$ has a \emph{Binomial Distribution} with
\[ p_N(k) = \pr \{ N = k \}
= \dbinom{n}{k} p^k(1-p)^{n-k} \ ; \ 0 \leq k \leq n \]
where $\tbinom{n}{k}$ is the number of combinations of
$n$ objects that are taken $k$ at a time, and is defined by
\[ \dbinom{n}{k} = \frac {n!}{k!(n-k)!} \]
Alternative notations include $C(n,k)$, ${}_nC_k$,
${}^nC_k$, and $C^n_k$.
Substack Command

Interesting problems that are sometimes challenging to typeset are those such as the probability that N is odd, where the summation index carries two conditions. The \substack command stacks the conditions beneath the summation sign:

\[ \pr \{ N \text{ is odd}\}
= \sum _{\substack{0 \leq n \leq \infty \\[0.5ex]
n \text{ odd}}}
\pr \{N = n\} \]
Tables

Section IV of the paper reads:

IV. PROBABILITY DENSITY FUNCTION

For a continuous random variable, the counterpart of the probability mass function is the probability density function. The probability density function, f_X(x), of a continuous random variable X is the derivative of the probability distribution function,

f_X(x) = dF_X(x)/dx

where F_X(x) = P{X ≤ x}. A table of common random variables is given in Table I:

Name         Density Function
Exponential  f_X(x) = λ e^{-λx}
Laplace      f_X(x) = (1/2) α e^{-α|x-m|}
Rayleigh     f_X(x) = α² x e^{-α²x²/2},  x ≥ 0
Uniform      f_X(x) = 1/(b-a),  a ≤ x ≤ b

TABLE I: A table of common and important random variables.

The table is produced with the table and tabular environments:

\begin{table}[t]
\begin{center}
\begin{tabular}[t]{|ll|}
\hline\hline
Name & Density Function \\
\hline\hline
& \\
Exponential & $f_X(x) = \lambda e^{-\lambda x}$ \\
Laplace
& $f_X(x) = \frac{1}{2}\alpha e^{-\alpha |x - m|}$ \\
Rayleigh
& $f_X(x) = \alpha ^2 x e^{-\alpha ^2x^2/2},
\ x \geq 0$ \\
Uniform
& $f_X(x) = 1/(b-a), \ a \leq x \leq b$ \\
& \\
\hline
\end{tabular}
\end{center}
\caption{A table of common and important random variables.}
\label{table:RandomVariables}
\end{table}
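In the excerpt this transcript was taken from, the citation printed as "Table ??". That is what LaTeX produces when a \ref cannot yet be resolved: cross-reference targets are read from the .aux file written on the previous run, so the document must be compiled twice. The cross-reference itself is written as:

A table of common random variables is given in
Table~\ref{table:RandomVariables}.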
Enumeration with Item Separation

The paper continues: Conversely, the distribution function is the integral of the density function,

F_X(x) = ∫_{-∞}^{x} f_X(α) dα        (2)

Two properties of the density function are:

1) f_X(x) ≥ 0 for all x.
2) ∫_{-∞}^{∞} f_X(x) dx = 1

The first property follows from the monotonicity of the distribution function: since F_X(x) is a non-decreasing function of x, the derivative will always be non-negative for all x. The second property follows by letting x → ∞ in Eq. (2),

∫_{-∞}^{∞} f_X(α) dα = F_X(∞) = 1

It is important to understand that f_X(x) is a probability density and not a probability. Probabilities are found by integrating f_X(x): the probability of the event {X ≤ x} is given by the integral in Eq. (2), and the probability of the event {x1 ≤ X ≤ x2} is found by evaluating

P{x1 ≤ X ≤ x2} = ∫_{x1}^{x2} f_X(α) dα        (3)

Unlike the distribution function, which is constrained to have values between zero and one, there is no such constraint on the density function. In fact, f_X(x) may be arbitrarily large as long as the integral of f_X(x) over all x is equal to one (the area under the curve is unity).

The enumerated list of properties, with extra space between the items, is produced by:

\begin{enumerate}\setlength{\itemsep}{4pt}
\item $f_X(x) \geq 0$ for all $x$.
\item ${\displaystyle \int _{-\infty } ^{\infty }
f_X(x) dx } = 1$
\end{enumerate}
Left and Right Brackets

Section V of the paper defines the characteristic function:

V. MOMENT GENERATING FUNCTIONS

The complete statistical characterization of a random variable of the continuous type requires the specification of its probability distribution or density function. Another way to provide this statistical characterization is with the characteristic function, which is the expected value of e^{jωX},

M_X(jω) = E{e^{jωX}} = ∫_{-∞}^{∞} e^{jωx} f_X(x) dx

where ω is a variable taking on any real number between ±∞ and j = √-1. Since e^{jω} = cos ω + j sin ω, the characteristic function is, in general, a complex-valued function of ω. It may be recognized that, except for the sign in the exponent, M_X(jω) is the Fourier transform of the probability density function. Just as Fourier transforms are useful in the analysis of signals and linear systems, the characteristic function is also a useful tool in probability and random variables. Since

|M_X(jω)| = |∫ e^{jωx} f_X(x) dx| ≤ ∫ |e^{jωx} f_X(x)| dx = ∫ |e^{jωx}| |f_X(x)| dx = ∫ f_X(x) dx = 1

the characteristic function is well-defined and will always exist for any probability density function.

The bound is typeset with \left and \right delimiters, which grow to the size of their contents:
\begin{eqnarray*}
|M_X(j\omega )|
&=& \left| \int _{-\infty } ^{\infty }
e^{j\omega x} f_X(x) dx \right| \\
&\leq& \int _{-\infty } ^{\infty } \left|
e^{j\omega x} f_X(x) \right| dx \\
&=& \int _{-\infty } ^{\infty }
\left| e^{j\omega x} \right| \
\left| f_X(x) \right| dx
= \int _{-\infty } ^{\infty } f_X(x) dx = 1
\end{eqnarray*}
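Where automatic sizing is not wanted, the fixed-size variants \big, \Big, \bigg, and \Bigg can be used in place of \left and \right. This is standard LaTeX; a sketch:

\[ \Big| \int _{-\infty } ^{\infty }
e^{j\omega x} f_X(x) dx \Big| \leq 1 \]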
The typeset page continues with the first worked example:

A. Exponential Random Variable

If X is an exponential random variable with parameter λ,

f_X(x) = λ e^{-λx} u(x)

then the characteristic function is

M_X(jω) = ∫_0^∞ λ e^{jωx} e^{-λx} dx = λ ∫_0^∞ e^{(jω-λ)x} dx = λ / (λ - jω)

which is a complex-valued function of ω.

The paper then turns to pairs of random variables. The joint density function is

f_{X,Y}(x, y) = ∂²F_{X,Y}(x, y) / ∂x∂y        (4)

where F_{X,Y}(x, y) = P{X ≤ x, Y ≤ y} is the joint distribution function, provided the derivatives exist. Conversely, the joint distribution function is the integral of the joint density function,

F_{X,Y}(x, y) = ∫_{-∞}^{x} ∫_{-∞}^{y} f_{X,Y}(α, β) dα dβ        (5)

Given the joint density or distribution function, the probability of any event that is defined in terms of values of X may be found. For example, P{X ≤ x, Y ≤ y} = F_{X,Y}(x, y) is found by integrating the joint density function as in Eq. (5). Two important properties of the joint density function are: 1) f_{XY}(x, y) ≥ 0 for all x, y, and 2) ∫∫ f_{XY}(x, y) dx dy = 1. The first property follows from the monotonicity of the distribution function, and the second follows from Eq. (5) by letting x → ∞ and y → ∞.
Figures

The Gaussian subsection and its figure are produced by:

\subsection{Gaussian Random Variable}
A zero-mean Gaussian random variable $X$ has a density
function of the form
\[ f_X(x) = \frac {1}{\sigma _x\sqrt {2\pi }}
e^{-x^2/2\sigma _x^2} \]
where $\sigma _x^2$ is the variance of $X$.
A plot of the density function of a Gaussian for several
different values of $\sigma _x$ is shown in
Fig.~\ref{fig:Gaussian}.
\begin{figure}
\begin{center}
\includegraphics[width=\hsize]{images/Gaussian.png}\\
\end{center}
\caption{A Gaussian Density Function}
\label{fig:Gaussian}
\end{figure}

This typesets as subsection B, Gaussian Random Variable, with Fig. 1, "A Gaussian Density Function", spanning the column.
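\includegraphics accepts other sizing options from the graphicx package loaded in the preamble. A sketch of common alternatives (\columnwidth is the usual width reference in a two-column IEEEtran paper):

\includegraphics[width=0.8\columnwidth]{images/Gaussian.png}
% or scale the image uniformly:
\includegraphics[scale=0.5]{images/Gaussian.png}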
Underbraces

The characteristic function of the Gaussian is evaluated by completing the square,

M_X(jω) = (1/(σ_x √2π)) ∫ e^{jωx} e^{-x²/2σ_x²} dx = e^{-ω²σ_x²/2}

and the \underbrace command annotates the integral that equals one:

\begin{align*}
M_X(j\omega )
&= \frac {1}{\sigma _x\sqrt {2\pi }}
\int _{-\infty } ^{\infty } e^{j\omega x}
e^{-x^2/2\sigma _x^2} dx \\
&= e^{-\omega ^2 \sigma _x^2/2}
\underbrace{\frac {1}{\sigma _x\sqrt {2\pi }}
\int _{-\infty } ^{\infty }
e^{-(x - j\omega \sigma _x)^2/2\sigma _x^2} dx}_{=1}
\end{align*}
Double Integrals

As was the case for a single random variable, it is important to understand that the joint density function f_{XY}(x, y) is a probability density and not a probability. Probabilities are found by integrating the density function. For example, the probability of the event A = {x1 ≤ X ≤ x2} ∩ {y1 ≤ Y ≤ y2} is found by integrating the joint density function as follows:

P{x1 ≤ X ≤ x2, y1 ≤ Y ≤ y2} = ∫_{x1}^{x2} ∫_{y1}^{y2} f_{XY}(x, y) dx dy        (6)

The source is:

\begin{equation}
\pr \{ x_1 \leq X \leq x_2, \ y_1 \leq Y \leq y_2 \}
= \int _{x_1}^{x_2} \int _{y_1}^{y_2} f_{XY}(x, y) dx dy
\label{eq:IntegrateJointDensity}
\end{equation}
Integrals with Limits at Bottom of Integral Sign

To find the probability that (x, y) ∈ R, where R is an arbitrarily defined region in the x-y plane, we integrate the density function over R,

P{(X, Y) ∈ R} = ∬_R f_{XY}(x, y) dx dy        (7)

Unlike the joint distribution function, which is constrained to have values between zero and one, there is no such constraint on the joint density function. In fact, f_{XY}(x, y) may be arbitrarily large as long as the integral of f_{XY}(x, y) over all x and y is equal to one.

The \iint command with \limits places the region subscript directly beneath the merged integral signs:

\begin{equation}
\pr \{ (X,Y) \in R \}
= \iint\limits _{R} f_{XY}(x,y) dx dy
\label{eq:IntegrateOverR}
\end{equation}
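\iint is provided by the amsmath package loaded in the preamble; writing two separate \int commands leaves too much space between the signs. A sketch of the contrast:

% two separate signs, with too much space between them:
\[ \int \int _{R} f_{XY}(x,y)\, dx\, dy \]
% merged signs with correct spacing:
\[ \iint\limits _{R} f_{XY}(x,y)\, dx\, dy \]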
Matrices

Section VII of the paper reads:

VII. CORRELATION AND CORRELATION MATRICES

For two or more random variables, an important ensemble average is the correlation. Given two random variables X1 and X2, the correlation is defined by r_{X1X2} = E{X1X2}. If we consider these two random variables to be a random vector X = [X1, X2]^T, then the correlation matrix is the expected value of XX^T. The 2x2 case is typeset with an array inside \left[ and \right] brackets, and the general case with amsmath's bmatrix environment:

\[ {\bf R}_X
= E\{ {\bf X} {\bf X}^T \}
= \left[ \begin{array}{cc}
E\{ X_1^2 \} & E\{ X_1X_2 \} \\[1ex]
E\{ X_2X_1\} & E\{ X_2^2 \}
\end{array} \right] \]

For $n$ random variables, the correlation matrix has the
form
\[ {\bf R}_X
= \begin{bmatrix}
r_{11} & r_{12} & \cdots & r_{1n} \\
r_{21} & r_{22} & \cdots & r_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
r_{n1} & r_{n2} & \cdots & r_{nn} \\
\end{bmatrix}
\]

where r_{kl} = E{X_k X_l}.
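amsmath provides sibling environments that differ only in the surrounding delimiters; a sketch of the standard variants:

% pmatrix: parentheses, vmatrix: vertical bars,
% matrix: no delimiters at all
\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad
\begin{vmatrix} a & b \\ c & d \end{vmatrix} \quad
\begin{matrix} a & b \\ c & d \end{matrix} \]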
Sums and Integrals in Fractions

These correlations are often estimated from a set of realizations (experimental outcomes) of the random variables. The sample cross-correlation

r̂_xy = Σ_{i=1}^{n} (x_i - x̄)(y_i - ȳ) / [ Σ_{i=1}^{n} (x_i - x̄)² Σ_{i=1}^{n} (y_i - ȳ)² ]

is typeset with \displaystyle so that the sums keep their full display form inside the fraction:

\[ \hat{r}_{xy}
= \frac {\displaystyle \sum _{i=1}^n (x_i - x)(y_i - y)}
{\displaystyle \left[ \sum _{i=1}^n (x_i - x)^2
\sum _{i=1}^n (y_i - y)^2 \right]} \]

where r̂_xy is called a sample cross-correlation.
The Subpicture Environment

Side-by-side subfigures, each with its own caption, are produced with the subfigure package loaded in the preamble:

\begin{figure}
\begin{center}
\subfigure[Density Function]{
\includegraphics[width=0.45\hsize]
{images/Chi-Square_distributionPDF.png}}
\subfigure[Distribution Function]{
\includegraphics[width=0.45\hsize]
{images/Chi-Square_distributioncDF.png}}
\end{center}
\caption{The Chi-square Random Variable.}
\label{fig:ChiSquare}
\end{figure}

This typesets as Fig. 2 with panels (a) Density Function and (b) Distribution Function, captioned "The Chi-square Random Variable."
References Using BibTeX

The conclusion cites its sources with \cite commands:

\section{Conclusion}
There are many excellent textbooks where the reader may
find advanced developments of the results presented in
this paper.
The classic work in the field is the text by
Papoulis~\cite{Pap2002}.
Another recommended text is~\cite{Larsen}.
An introduction to Monte Carlo simulations may be found
in~\cite{MonteCarlo}.

The reference list itself is generated by naming a BibTeX database and style at the end of the document:

\bibliography{mybibliography}
\bibliographystyle{ieeetr}
\end{document}

The typeset result is:

REFERENCES
[1] A. Papoulis and S. Pillai, Probability, Random Variables, and Stochastic Processes. New York: McGraw-Hill, 2002.
[2] H. Larsen and B. Shubert, Probabilistic Models in Engineering Sciences, Vol. 1. New York: John Wiley and Sons, 1979.
[3] S. Raychaudhuri, "Introduction to Monte Carlo simulation," in Simulation Conference, 2008. WSC 2008. Winter, pp. 91-100, Dec. 2008.
BibTeX File

BibTeX makes it easy to cite sources in a consistent manner, by separating bibliographic information from the presentation of this information. BibTeX takes, as input:

1. An .aux file produced by LaTeX on an earlier run;
2. A .bst file (the style file), which specifies the general reference-list style and how to format individual entries;
3. A .bib file (or files) constituting a database of all reference-list entries the user might ever hope to use.

BibTeX chooses from the .bib file(s) only those entries specified by the .aux file (that is, those given by LaTeX's \cite or \nocite commands), and creates as output a .bbl file containing these entries together with the formatting commands specified by the .bst file. LaTeX uses the .bbl file, perhaps edited by the user, to produce the reference list.

Two entries from the .bib file used for the sample paper:

@book{Pap2002,
author = {A. Papoulis and S. Pillai},
title = {Probability, Random Variables, and Stochastic Processes},
publisher = {McGraw-Hill},
address = {New York},
year = {2002}
}

@INPROCEEDINGS{MonteCarlo,
author={Raychaudhuri, S.},
booktitle={Simulation Conference, 2008. WSC 2008. Winter},
title={Introduction to Monte Carlo Simulation},
year={2008},
month={Dec.},
pages={91-100},
keywords={Monte Carlo simulation;repeated random sampling;
statistical analysis;Monte Carlo methods;random processes},
doi={10.1109/WSC.2008.4736059}
}
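A typical command sequence for this workflow, assuming the main file is named paper.tex (the file name is illustrative): run latex, then bibtex, then latex twice more so the citations and reference list settle.

latex paper   (writes the \cite keys into paper.aux)
bibtex paper  (reads paper.aux and the .bib database, writes paper.bbl)
latex paper   (pulls in paper.bbl)
latex paper   (resolves the remaining cross-references)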
Manual Specification of References

A reference list may also be written by hand with the thebibliography environment:

\begin{thebibliography}{9}
\bibitem{Pap2002} A. Papoulis and S. Pillai,
\emph{Probability, Random Variables, and Stochastic Processes},
McGraw-Hill, 2002.
\bibitem{Larsen}
H. Larsen and B. Shubert,
\emph{Probabilistic Models in Engineering Sciences, Vol. 1},
John Wiley and Sons, New York, 1979.
\bibitem{MonteCarlo}
S. Raychaudhuri,
"Introduction to Monte Carlo Simulation,"
\emph{Simulation Conference}, pp. 91-100, Dec. 2008.
\end{thebibliography}

BibTeX and IEEExplore

IEEExplore and other databases will export citations into BibTeX format, as well as others.

Ifthen Package

Blocks of text can also be switched on and off with TeX conditionals: \iffalse ... \fi skips everything in between, and replacing \iffalse with \iftrue includes it again.

%---------Ignore all text from here until \fi --------
%---Replace \iffalse with \iftrue to include text ----
\iffalse
It is clear that this probability assignment satisfies
the first probability axiom since all probabilities in
Eq.~\ref{eq:prob_assgn} are positive.
\fi
%------------end ----------------
Verbatim Text

To include computer listings or other similar text, we would like to have unformatted text, to produce something like:

\begin{align*}
M_X(j\omega )
&= \frac {1}{\sigma _x\sqrt {2\pi }}
\int _{-\infty } ^{\infty } e^{j\omega x}
e^{-x^2/2\sigma _x^2} dx \\
&= e^{-\omega ^2 \sigma _x^2/2}
\underbrace{\frac {1}{\sigma _x\sqrt {2\pi }}
\int _{-\infty } ^{\infty }
e^{-(x - j\omega \sigma _x)^2/2\sigma _x^2} dx}_{=1}
\end{align*}

There are several ways to introduce text that won't be interpreted by the compiler:

- With the verbatim environment, everything input between a \begin{verbatim} and an \end{verbatim} command will be processed as if by a typewriter.
- Also see the \verb command for short in-line verbatim text.
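A minimal sketch of both forms (the listing content is illustrative):

\begin{verbatim}
\begin{align*}
x &= 1 \\
y &= 2
\end{align*}
\end{verbatim}

Inline, \verb|\maketitle| prints the command \maketitle literally instead of executing it.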
Graphics

Now that you have your beautifully typeset journal paper or article, you want your figures, block diagrams, plots and other graphics to be beautifully typeset. There are a number of very powerful packages that allow you to create graphics in PostScript or PDF file format. Some of these are:

- xfig
- TikZ and PGF
- XY-Pic
- PSTricks and PDFTricks
- MetaPost
- Adobe Illustrator

See the web for a description of these packages and for documentation.

Next Time

- How to prepare and deliver an effective presentation.
- Presentations with PowerPoint using Aurora.
- LaTeX presentations using the Beamer and Prosper classes.