Lecture 1.3.
Signals. Fourier Transform.
What is a communication system?
• Communication systems are designed to transmit information.
• Communication system design concerns:
  • Selection of the information-bearing waveform;
  • Bandwidth and power of the waveform;
  • Effect of system noise on the received information;
  • Cost of the system.
Digital and Analog Sources and Systems
Basic Definitions:
• Analog Information Source:
An analog information source produces messages which are defined on a continuum (e.g., a microphone).
• Digital Information Source:
A digital information source produces a finite set of possible messages (e.g., a typewriter).
[Figure: an analog waveform x(t) varies continuously with t, while a digital waveform x(t) takes on only discrete amplitude values.]
Digital and Analog Sources and Systems
A digital communication system
transfers information from a digital source
to the intended receiver (also called the
sink).
An analog communication system
transfers information from an analog
source to the sink.
A digital waveform is defined as a function of time that can have only a discrete set of amplitude values.
An analog waveform is a function of time that has a continuous range of values.
Deterministic and Random Waveforms
A Deterministic waveform can be modeled
as a completely specified function of time.
w(t) = A cos(ω₀t + φ₀)
A Random Waveform (or stochastic
waveform) cannot be modeled as a
completely specified function of time and
must be modeled probabilistically.
We will focus mainly on deterministic
waveforms.
Block Diagram of a Communication System
• All communication systems contain three main subsystems:
1. Transmitter
2. Channel
3. Receiver
[Figure: block diagram of a communication system showing the transmitter, the channel, and the receiver.]
What Makes a Communication System GOOD
We can measure the "goodness" of a communication system in many ways:
• How close is the receiver's estimate to the original signal m(t)?
  • A better estimate means higher quality transmission.
  • Signal-to-noise ratio (SNR) for analog m(t).
  • Bit error rate (BER) for digital m(t).
• How much power is required to transmit s(t)?
  • Lower power means longer battery life and less interference.
• How much bandwidth B is required to transmit s(t)?
  • Less B means more users can share the channel.
  • Exception: spread spectrum, where users share the same B.
• How much information is transmitted?
  • In analog systems, information is related to the bandwidth B of m(t).
  • In digital systems, information is expressed in bits/sec.
Measuring Information
• Definition: Information Measure (I_j)
The information sent from a digital source when the j-th message is transmitted is
  I_j = log₂(1/P_j) = −log₂(P_j),
where P_j is the probability of transmitting the j-th message.
• Messages that are less likely to occur (smaller value of P_j) provide more information (larger value of I_j).
• The information measure depends only on the likelihood of sending the message; it does not depend on any possible interpretation of the content.
• For units of bits, the base-2 logarithm is used;
• if the natural logarithm is used, the units are "nats";
• if the base-10 logarithm is used, the units are "hartleys".
Measuring Information
• Definition: Average Information (H)
The average information measure of a digital source is
  H = Σ_{j=1}^{m} P_j log₂(1/P_j) bits,
where m is the number of possible different source messages. The average information is also called entropy.
• Definition: Source Rate (R)
The source rate is defined as
  R = H/T bits/s,
where H is the average information and T is the time required to send a message.
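As a quick numerical illustration of the definitions above (a sketch; the four message probabilities and the message time T are made-up values, not from the lecture), the following Python fragment computes I_j, H, and R.

import math

# Hypothetical source: four messages with assumed probabilities (illustrative only)
P = [0.5, 0.25, 0.125, 0.125]
T = 0.001  # assumed time to send one message, in seconds

# Information measure of each message: I_j = log2(1/P_j) bits
I = [math.log2(1.0 / p) for p in P]
print("I_j (bits):", I)            # [1.0, 2.0, 3.0, 3.0]

# Average information (entropy): H = sum over j of P_j * log2(1/P_j)
H = sum(p * math.log2(1.0 / p) for p in P)
print("H =", H, "bits/message")    # 1.75 bits/message

# Source rate: R = H / T
R = H / T
print("R =", R, "bits/s")          # 1750.0 bits/s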
Channel Capacity & Ideal Comm. Systems
• For digital communication systems, the "optimum system" may be defined as the system that minimizes the probability of bit error at the system output, subject to constraints on the energy and the channel bandwidth.
• Is it possible to invent a system with no error at the output even when we have noise introduced into the channel?
Yes, under certain assumptions!
• According to Shannon, the probability of error approaches zero if R < C, where
  • R is the rate of information (bits/s),
  • C is the channel capacity (bits/s), given by
    C = B log₂(1 + S/N),
  • B is the channel bandwidth in Hz, and
  • S/N is the signal-to-noise power ratio.
Capacity is the maximum amount of information that a particular channel can transmit. It is a theoretical upper limit. The limit can be approached by using error correction.
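For example, the short sketch below evaluates the Shannon capacity formula for an assumed telephone-like channel with B = 3100 Hz and a 30 dB signal-to-noise ratio; both numbers are illustrative assumptions, not values from the lecture.

import math

B = 3100.0                      # assumed channel bandwidth in Hz
snr_db = 30.0                   # assumed signal-to-noise ratio in dB
snr = 10 ** (snr_db / 10.0)     # convert dB to a power ratio: S/N = 10^(dB/10)

# Shannon channel capacity: C = B * log2(1 + S/N)
C = B * math.log2(1.0 + snr)
print("C = %.0f bits/s" % C)    # about 30,900 bits/s

# Error-free transmission is theoretically possible only if the source rate R < C.
R = 24000.0                     # assumed source rate in bits/s
print("R < C:", R < C)          # True for these assumed numbers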
Channel Capacity & Ideal Comm. Systems
ANALOG COMMUNICATION SYSTEMS
In analog systems, the OPTIMUM SYSTEM might be defined as the one that
achieves the Largest signal-to-noise ratio at the receiver output, subject to
design constraints such as channel bandwidth and transmitted power.
Question:
Is it possible to design a system with infinite signal-to-noise ratio at the output
when noise is introduced by the channel?
Answer: No!
DIMENSIONALITY THEOREM for Digital Signalling:
Nyquist showed that if a pulse represents one bit of data,
noninterfering pulses can be sent over a channel no faster than 2B
pulses/s, where B is the channel bandwidth.
Properties of Signals & Noise
• In communication systems, the received waveform is usually categorized into two parts:
Signal: the desired part, containing the information.
Noise: the undesired part.
Properties of waveforms include:
• DC value,
• Root-mean-square (rms) value,
• Normalized power,
• Magnitude spectrum,
• Phase spectrum,
• Power spectral density,
• Bandwidth,
• and so on.
Physically Realizable Waveforms
• Physically realizable waveforms are practical waveforms which can be measured in a laboratory.
• These waveforms satisfy the following conditions:
  • The waveform has significant nonzero values over a composite time interval that is finite.
  • The spectrum of the waveform has significant values over a composite frequency interval that is finite.
  • The waveform is a continuous function of time.
  • The waveform has a finite peak value.
  • The waveform has only real values; that is, at any time, it cannot have a complex value a + jb, where b is nonzero.
Physically Realizable Waveforms
• Mathematical models that violate some or all of the conditions listed above are often used.
• One main reason is to simplify the mathematical analysis.
• If we are careful with the mathematical model, the correct result can be obtained when the answer is properly interpreted.
[Figure: a physical waveform compared with a mathematical-model waveform. The model in this example violates two of the conditions: continuity and finite duration.]
Time Average Operator
Definition: The time average operator is given by
  ⟨w(t)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} w(t) dt.
The operator is a linear operator: the average of the sum of two quantities is the same as the sum of their averages,
  ⟨w₁(t) + w₂(t)⟩ = ⟨w₁(t)⟩ + ⟨w₂(t)⟩.
Periodic Waveforms
• Definition: A waveform w(t) is periodic with period T₀ if
  w(t) = w(t + T₀) for all t,
where T₀ is the smallest positive number that satisfies this relationship.
• A sinusoidal waveform of frequency f₀ = 1/T₀ hertz is periodic.
• Theorem: If the waveform involved is periodic, the time average operator can be reduced to
  ⟨w(t)⟩ = (1/T₀) ∫_{a}^{a+T₀} w(t) dt,
where T₀ is the period of the waveform and a is an arbitrary real constant, which may be taken to be zero.
DC Value
Definition: The DC (direct "current") value of a waveform w(t) is given by its time average, ⟨w(t)⟩. Thus,
  W_dc = ⟨w(t)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} w(t) dt.
• For a physical waveform, we are actually interested in evaluating the DC value only over a finite interval of interest, say from t₁ to t₂, so that the DC value is
  W_dc = 1/(t₂ − t₁) ∫_{t₁}^{t₂} w(t) dt.
Power
• Definition:
Let v(t) denote the voltage across a set of circuit terminals, and let i(t) denote the current flowing into the terminal. The instantaneous power (incremental work divided by incremental time) associated with the circuit is given by
  p(t) = v(t) i(t).
The instantaneous power flows into the circuit when p(t) is positive and flows out of the circuit when p(t) is negative.
RMS Value
• Definition: The root-mean-square (rms) value of w(t) is
  W_rms = sqrt( ⟨w²(t)⟩ ).
• RMS value of a sinusoid: for w(t) = V cos(ω₀t),
  W_rms = sqrt( ⟨[V cos(ω₀t)]²⟩ ) = V/√2.
• Theorem: If a load is resistive (i.e., with unity power factor), the average power is
  P = ⟨v²(t)⟩ / R = ⟨i²(t)⟩ R = V_rms² / R = I_rms² R,
where R is the value of the resistive load.
Normalized Power
• In the concept of normalized power, R is assumed to be 1 Ω, although it may be another value in the actual circuit.
• Another way of expressing this concept is to say that the power is given on a per-ohm basis.
• It also follows that the square root of the normalized power is the rms value.
Definition: The average normalized power is
  P = ⟨w²(t)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} w²(t) dt,
where w(t) is the voltage or current waveform.
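The time averages above are easy to check numerically. The sketch below approximates the DC value, rms value, and normalized average power of an assumed test waveform w(t) = 2 + 3 cos(2π·50t) over an integer number of periods (the amplitudes and frequency are illustrative, not from the lecture).

import numpy as np

# Assumed test waveform: DC offset of 2 plus a 50 Hz cosine of amplitude 3
fs = 100_000.0                      # sampling rate used only for the numerical average
t = np.arange(0, 0.1, 1.0 / fs)     # 0.1 s = 5 full periods of the 50 Hz component
w = 2.0 + 3.0 * np.cos(2 * np.pi * 50.0 * t)

dc = np.mean(w)                     # DC value: time average of w(t)
p_norm = np.mean(w ** 2)            # normalized power: time average of w^2(t), R = 1 ohm
rms = np.sqrt(p_norm)               # rms value: square root of the normalized power

print("DC  =", round(dc, 3))        # ~2.0
print("P   =", round(p_norm, 3))    # ~2^2 + 3^2/2 = 8.5 W (per ohm)
print("rms =", round(rms, 3))       # ~2.915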
Energy and Power Waveforms
• Definition: w(t) is a power waveform if and only if the normalized average power P is finite and nonzero (0 < P < ∞).
• Definition: The total normalized energy is
  E = lim_{T→∞} ∫_{−T/2}^{T/2} w²(t) dt.
• Definition: w(t) is an energy waveform if and only if the total normalized energy E is finite and nonzero (0 < E < ∞).
Energy and Power Waveforms
• If a waveform is classified as either one of these types, it cannot be of the other type.
• If w(t) has finite energy, the power averaged over infinite time is zero.
• If the power (averaged over infinite time) is finite and nonzero, the energy is infinite.
• However, mathematical functions can be found that have both infinite energy and infinite power and, consequently, cannot be classified into either of these two categories (e.g., w(t) = e^{−t} defined for all t).
• Physically realizable waveforms are of the energy type; over the finite time interval of interest, a finite average power can still be found for them.
Decibel
• A base-10 logarithmic measure of power ratios.
• The ratio of the power level at the output of a circuit to that at the input is often specified by the decibel gain instead of the actual ratio.
• Decibel measures can be defined in three ways:
  • Decibel gain
  • Decibel signal-to-noise ratio
  • Milliwatt decibel, or dBm
• Definition: The decibel gain of a circuit is
  dB = 10 log₁₀ (average power out / average power in) = 10 log₁₀ (P_out / P_in).
Decibel Gain
• If resistive loads are involved, the definition of decibel gain may be reduced to
  dB = 20 log₁₀ (V_rms,out / V_rms,in) + 10 log₁₀ (R_in / R_out)
or
  dB = 20 log₁₀ (I_rms,out / I_rms,in) + 10 log₁₀ (R_out / R_in).
When the input and load resistances are equal, these reduce to 20 log₁₀ of the rms voltage ratio or of the rms current ratio.
Decibel Signal-to-Noise Ratio (SNR)
• Definition: The decibel signal-to-noise ratio is
  (S/N)_dB = 10 log₁₀ (S / N),
where the signal power is S = ⟨s²(t)⟩ and the noise power is N = ⟨n²(t)⟩. Because these powers are proportional to squared rms values, the definition is equivalent to
  (S/N)_dB = 20 log₁₀ (s_rms / n_rms).
Decibel with Milliwatt Reference (dBm)
• Definition: The decibel power level with respect to 1 mW is
  dBm = 10 log₁₀ (actual power level in watts / 10⁻³) = 30 + 10 log₁₀ (actual power level in watts).
• Here the "m" in dBm denotes the milliwatt reference.
• When a 1-W reference level is used, the decibel level is denoted dBW;
• when a 1-kW reference level is used, the decibel level is denoted dBk.
Example: If an antenna receives a signal power of 0.3 W, what is the received power level in dBm?
  dBm = 30 + 10 log₁₀(0.3) = 30 + 10 × (−0.523) = 24.77 dBm.
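A small Python helper, sketched below, reproduces this conversion and the related dBW level; the 0.3 W input repeats the example above, and the function names are merely illustrative.

import math

def watts_to_dbm(p_watts):
    # Power level relative to 1 mW: dBm = 30 + 10*log10(P in watts)
    return 30.0 + 10.0 * math.log10(p_watts)

def watts_to_dbw(p_watts):
    # Power level relative to 1 W: dBW = 10*log10(P in watts)
    return 10.0 * math.log10(p_watts)

p = 0.3   # received signal power in watts (example from the slide)
print("%.2f dBm" % watts_to_dbm(p))   # 24.77 dBm
print("%.2f dBW" % watts_to_dbw(p))   # -5.23 dBW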
Phasors
• Definition: A complex number c is said to be a phasor (phase vector) if it is used to represent a sinusoidal waveform. That is,
  w(t) = |c| cos(ω₀t + ∠c) = Re{ c e^{jω₀t} },
where the phasor c = |c| e^{j∠c} and Re{·} denotes the real part of the complex quantity {·}.
• The phasor can be written as
  c = x + jy = |c| e^{j∠c}.
Fourier Transform and Spectra
Topics:
• Fourier transform (FT) of a waveform
• Properties of Fourier Transforms
• Parseval's Theorem and Energy Spectral Density
• Dirac Delta Function and Unit Step Function
• Rectangular and Triangular Pulses
• Convolution
Fourier Transform of a Waveform
Definition: The Fourier transform (FT) of a waveform w(t) is
  W(f) = ℑ[w(t)] = ∫_{−∞}^{∞} w(t) e^{−j2πft} dt,
where ℑ[·] denotes the Fourier transform of [·] and f is the frequency parameter with units of Hz (1/s).
• W(f) is also called the two-sided spectrum of w(t), since both positive and negative frequency components are obtained from the definition.
Evaluation Techniques for the FT Integral
• One of the following techniques can be used to evaluate a FT integral:
  • Direct integration.
  • Tables of Fourier transforms or Laplace transforms.
  • FT theorems.
  • Superposition to break the problem into two or more simpler problems.
  • Differentiation or integration of w(t).
  • Numerical integration of the FT integral on the PC via MATLAB or MathCAD integration functions (a sketch of this approach follows the list).
  • Fast Fourier transform (FFT) on the PC via MATLAB or MathCAD FFT functions.
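As a minimal Python sketch of the numerical-integration approach mentioned above (not code from the lecture), the fragment below approximates the FT integral for an assumed exponential pulse w(t) = e^{−t}u(t) and compares it with the known analytic result W(f) = 1/(1 + j2πf).

import numpy as np

# Assumed example waveform: exponential pulse w(t) = exp(-t) for t >= 0, else 0
dt = 1e-3
t = np.arange(0.0, 30.0, dt)          # the pulse is negligible beyond t ~ 30
w = np.exp(-t)

f = np.linspace(-5.0, 5.0, 201)       # frequencies (Hz) at which to evaluate W(f)

# Numerically approximate W(f) = integral of w(t) * exp(-j*2*pi*f*t) dt
W_num = np.array([np.sum(w * np.exp(-1j * 2 * np.pi * fk * t)) * dt for fk in f])

# Analytic FT of exp(-t)u(t): W(f) = 1 / (1 + j*2*pi*f)
W_exact = 1.0 / (1.0 + 1j * 2 * np.pi * f)

print("max |error| =", np.max(np.abs(W_num - W_exact)))   # small (on the order of dt)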
Fourier Transform of a Waveform
• Definition: Inverse Fourier transform
The inverse Fourier transform of a spectrum W(f) is
  w(t) = ∫_{−∞}^{∞} W(f) e^{j2πft} df.
• The functions w(t) and W(f) constitute a Fourier transform pair:
  Time-domain description (inverse FT): w(t) = ∫_{−∞}^{∞} W(f) e^{j2πft} df
  Frequency-domain description (FT):    W(f) = ∫_{−∞}^{∞} w(t) e^{−j2πft} dt
Fourier Transform - Sufficient Conditions
• The waveform w(t) is Fourier transformable if it satisfies both Dirichlet conditions:
  1) Over any time interval of finite length, the function w(t) is single valued with a finite number of maxima and minima, and the number of discontinuities (if any) is finite.
  2) w(t) is absolutely integrable; that is,
     ∫_{−∞}^{∞} |w(t)| dt < ∞.
• The above conditions are sufficient, but not necessary.
• A weaker sufficient condition for the existence of the Fourier transform is the finite-energy condition
  E = ∫_{−∞}^{∞} |w(t)|² dt < ∞,
where E is the normalized energy.
• This finite-energy condition is satisfied by all physically realizable waveforms.
• Conclusion: All physical waveforms encountered in engineering practice are Fourier transformable.
Spectrum of an Exponential Pulse
[Figure: plot of the real and imaginary parts of the FT of the exponential pulse.]
Properties of Fourier Transforms
• Theorem: Spectral symmetry of real signals.
If w(t) is real, then
  W(−f) = W*(f),
where the superscript asterisk denotes the conjugate operation.
• Proof: Take the conjugate of the FT definition:
  W*(f) = ∫_{−∞}^{∞} w*(t) e^{j2πft} dt.
Substitute −f for f in the FT definition:
  W(−f) = ∫_{−∞}^{∞} w(t) e^{j2πft} dt.
Since w(t) is real, w*(t) = w(t), and it follows that W(−f) = W*(f).
• If w(t) is real and is an even function of t, W(f) is real.
• If w(t) is real and is an odd function of t, W(f) is imaginary.
Properties of Fourier Transforms
• Spectral symmetry of real signals. If w(t) is real, then
  W(−f) = W*(f).
Corollaries:
(A) The magnitude spectrum is even about the origin: |W(−f)| = |W(f)|.
(B) The phase spectrum is odd about the origin: θ(−f) = −θ(f).
Since W(−f) = W*(f), writing W(f) = |W(f)| e^{jθ(f)} shows that corollaries (A) and (B) are true.
Properties of Fourier Transform
• f, called frequency and having units of hertz, is just a parameter of the FT that specifies what frequency we are interested in looking for in the waveform w(t).
• The FT looks for the frequency f in w(t) over all time, that is, over −∞ < t < ∞.
• W(f) can be complex, even though w(t) is real.
• If w(t) is real, then W(−f) = W*(f).
Parseval's Theorem and Energy Spectral Density
• Parseval's theorem gives an alternative method to evaluate energy in the frequency domain instead of the time domain:
  E = ∫_{−∞}^{∞} |w(t)|² dt = ∫_{−∞}^{∞} |W(f)|² df.
• In other words, energy is conserved in both domains.
• The energy spectral density (ESD) is ℰ(f) = |W(f)|², and the total normalized energy E is given by the area under the energy spectral density:
  E = ∫_{−∞}^{∞} ℰ(f) df.
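As a sanity check of Parseval's theorem (a sketch, again using the assumed pulse w(t) = e^{−t}u(t) rather than anything specific from the lecture), the energy computed in the time domain should match the area under |W(f)|².

import numpy as np

dt = 1e-3
t = np.arange(0.0, 30.0, dt)
w = np.exp(-t)                                   # assumed pulse e^(-t)u(t)

E_time = np.sum(np.abs(w) ** 2) * dt             # E = integral of |w(t)|^2 dt -> 0.5

df = 1e-2
f = np.arange(-200.0, 200.0, df)
W = 1.0 / (1.0 + 1j * 2 * np.pi * f)             # analytic spectrum of e^(-t)u(t)
E_freq = np.sum(np.abs(W) ** 2) * df             # area under the ESD |W(f)|^2

print(E_time, E_freq)                            # both close to 0.5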
TABLE 2-1: SOME FOURIER TRANSFORM THEOREMS
Example 2-3: Spectrum of a Damped Sinusoid
• The spectral peaks of the magnitude spectrum have moved to f = f₀ and f = −f₀ because of the multiplication by the sinusoid.
[Figure: variation of W(f) with f.]
Dirac Delta Function
• Definition: The Dirac delta function δ(x) is defined by
  ∫_{−∞}^{∞} w(x) δ(x) dx = w(0),
where w(x) is any function that is continuous at x = 0.
• An alternative definition of δ(x) is
  ∫_{−∞}^{∞} δ(x) dx = 1,  with δ(x) = ∞ for x = 0 and δ(x) = 0 for x ≠ 0.
• The sifting property of the δ function is
  ∫_{−∞}^{∞} w(x) δ(x − x₀) dx = w(x₀).
• Because δ(x) is an even function, the sifting integral may equivalently be written
  ∫_{−∞}^{∞} w(x) δ(x₀ − x) dx = w(x₀).
Unit Step Function
• Definition: The unit step function u(t) is
  u(t) = 1 for t > 0, and u(t) = 0 for t < 0.
Because δ(λ) is zero except at λ = 0, the Dirac delta function is related to the unit step function by
  du(t)/dt = δ(t)
and
  ∫_{−∞}^{t} δ(λ) dλ = u(t).
Spectrum of Sinusoids
• A complex exponential becomes a single shifted delta:
  A e^{j2πf_c t}  ↔  A δ(f − f_c).
Passing e^{j2πf_c t} through a filter with transfer function H(f) gives the output H(f_c) e^{j2πf_c t}.
• A sinusoid becomes two shifted deltas:
  2A cos(2πf_c t)  ↔  A δ(f − f_c) + A δ(f + f_c).
• The Fourier transform of a periodic signal is a weighted train of deltas.
Spectrum of a Sine Wave
For v(t) = A sin(ω₀t), the spectrum is
  V(f) = j(A/2) [δ(f + f₀) − δ(f − f₀)].
Sine Wave with an Arbitrary Phase
  w(t) = A sin(ω₀t + φ₀) = A sin[ω₀(t + φ₀/ω₀)]
Applying the time-shift theorem to the result above gives
  W(f) = j(A/2) e^{jφ₀ f/f₀} [δ(f + f₀) − δ(f − f₀)].
Sampling Function
• The Fourier transform of a delta train in the time domain is again a train of delta impulses in the frequency domain.
• Note that the period in the time domain is T_s, whereas the period in the frequency domain is 1/T_s.
• This function will be used when studying the sampling theorem.
  w(t) = Σ_{n=−∞}^{∞} T_s δ(t − nT_s)   ↔   W(f) = Σ_{k=−∞}^{∞} δ(f − k/T_s)
[Figure: impulses at ..., −3T_s, −2T_s, −T_s, 0, T_s, 2T_s, 3T_s, ... in time, and impulses spaced 1/T_s apart in frequency.]
Fourier Transform and Spectra
Topics:
• Rectangular and Triangular Pulses
• Spectrum of Rectangular and Triangular Pulses
• Convolution
• Spectrum by Convolution
Rectangular Pulses
• Π(t/T) denotes a rectangular pulse of unit height and width T: Π(t/T) = 1 for |t| ≤ T/2, and 0 otherwise.
Triangular Pulses
• Λ(t/T) denotes a triangular pulse of unit height and total width 2T: Λ(t/T) = 1 − |t|/T for |t| ≤ T, and 0 otherwise.
Spectrum of a Rectangular Pulse
  Π(t/T)  ↔  W(f) = T Sa(πTf),
where Sa(x) = sin(x)/x.
• A rectangular pulse is a time window.
• Its FT is a Sa function, with infinite frequency content.
• Shrinking (compressing) the time axis causes stretching of the frequency axis.
• Signals cannot be both time-limited and bandwidth-limited.
• Note the inverse relationship between the pulse width T and the first zero crossing at 1/T.
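A short numerical check of the transform pair above (a sketch, with an assumed pulse width T = 1 ms): the numerically evaluated spectrum of Π(t/T) should follow T·Sa(πTf), with its first null at f = 1/T.

import numpy as np

T = 1e-3                                    # assumed pulse width (seconds)
dt = 1e-6
t = np.arange(-5 * T, 5 * T, dt)
w = np.where(np.abs(t) <= T / 2, 1.0, 0.0)  # rectangular pulse of width T

def sa(x):
    # Sa(x) = sin(x)/x, with Sa(0) = 1; np.sinc(u) = sin(pi*u)/(pi*u)
    return np.sinc(x / np.pi)

f = np.linspace(-5000.0, 5000.0, 1001)
W_num = np.array([np.sum(w * np.exp(-1j * 2 * np.pi * fk * t)) * dt for fk in f])
W_exact = T * sa(np.pi * T * f)             # T * Sa(pi*T*f)

print("max |error| =", np.max(np.abs(W_num - W_exact)))   # small
print("first null near f = 1/T =", 1.0 / T, "Hz")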
Spectrum of the Sa Function
• To find the spectrum of a Sa function we can use the duality theorem.
• Duality: W(t) ↔ w(−f).
• Because Π is an even and real function, the frequency reversal does not change the result.
Spectrum of Rectangular and Sa Pulses
Duality theorem: if w(t) ↔ W(f), then W(t) ↔ w(−f).
Since
  Π(t/T) ↔ T Sa(πTf),
duality (with T = 2W) gives
  2W Sa(2πWt) ↔ Π(f/(2W)).
Spectrum of a Time-Shifted Rectangular Pulse
• The spectra shown in the previous slides are real because the time-domain pulse (the rectangular pulse) is real and even.
• If the pulse is offset in the time domain so that the even symmetry is destroyed, the spectrum will be complex.
• Let us now apply the time-delay theorem of Table 2-1 to the rectangular pulse delayed by T/2:
  v(t) = Π((t − T/2)/T).
Time-delay theorem: w(t − T_d) ↔ W(f) e^{−j2πfT_d}.
We get
  V(f) = T [sin(πfT)/(πfT)] e^{−jπfT} = T Sa(πfT) e^{−jπfT}.
Spectrum of a Triangular Pulse
• The spectrum of a triangular pulse can be obtained by direct evaluation of the FT integral.
• An easier approach is to evaluate the FT using the second derivative of the triangular pulse.
• The first derivative is composed of two rectangular pulses.
• The second derivative consists of three impulses.
• We can find the FT of the second derivative easily and then calculate the FT of the triangular pulse; a sketch of this derivation is given below.
[Figure: the triangular pulse w(t), its first derivative dw(t)/dt (two rectangular pulses), and its second derivative d²w(t)/dt² (three impulses).]
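A sketch of that derivative-based derivation (my own working, in LaTeX, assuming a unit-height triangular pulse w(t) = Λ(t/T) of total width 2T):

w(t) = \Lambda(t/T), \qquad
\frac{d^2 w(t)}{dt^2} = \frac{1}{T}\,\delta(t+T) - \frac{2}{T}\,\delta(t) + \frac{1}{T}\,\delta(t-T).

\text{Taking the FT of both sides and using } \frac{d^2 w}{dt^2} \leftrightarrow (j2\pi f)^2 W(f):

(j2\pi f)^2 W(f) = \frac{1}{T}\left(e^{j2\pi fT} - 2 + e^{-j2\pi fT}\right)
                 = \frac{2}{T}\left[\cos(2\pi fT) - 1\right]
                 = -\frac{4}{T}\sin^2(\pi fT),

W(f) = \frac{4\sin^2(\pi fT)}{T\,(2\pi f)^2}
     = T\left[\frac{\sin(\pi fT)}{\pi fT}\right]^2
     = T\,\mathrm{Sa}^2(\pi fT).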
Table 2.2 Some FT pairs
Key FT Properties
• Time scaling: contracting the time axis leads to an expansion of the frequency axis.
• Duality:
  • Symmetry between time and frequency domains.
  • "Reverse the pictures."
  • Eliminates half the transform pairs.
• Frequency shifting (modulation): multiplying a time signal by an exponential leads to a frequency shift.
• Multiplication in time:
  • Becomes a complicated convolution in frequency.
  • Modulation/demodulation often involves multiplication.
  • Time windowing becomes frequency-domain convolution with a Sa function.
• Convolution in time:
  • Becomes multiplication in frequency.
  • Defines the output of LTI filters, which are easier to analyze with FTs:
    x(t) through a filter h(t) gives x(t) ∗ h(t) in time, i.e., X(f) H(f) in frequency.
Convolution
• Definition: The convolution of a waveform w₁(t) with a waveform w₂(t) produces a third waveform
  w₃(t) = w₁(t) ∗ w₂(t) = ∫_{−∞}^{∞} w₁(λ) w₂(t − λ) dλ,
where w₁(t) ∗ w₂(t) is a shorthand notation for this integration operation and ∗ is read "convolved with".
If discontinuous waveshapes are to be convolved, it is usually easier to evaluate the equivalent integral
  w₃(t) = ∫_{−∞}^{∞} w₁(λ) w₂(−(λ − t)) dλ.
• Evaluation of the convolution integral involves three steps:
  • Time reversal of w₂ to obtain w₂(−λ);
  • Time shifting of w₂ by t seconds to obtain w₂(−(λ − t)); and
  • Multiplying this result by w₁ to form the integrand w₁(λ) w₂(−(λ − t)).
Example for Convolution
Let
  w₁(t) = Π((t − T/2)/T)   (a rectangular pulse over 0 ≤ t ≤ T)
and
  w₂(t) = e^{−t} u(t).
Carrying out the three steps above gives w₃(t) = w₁(t) ∗ w₂(t):
  For t < 0:        w₃(t) = 0.
  For 0 < t < T:    w₃(t) = ∫_{0}^{t} e^{−(t−λ)} dλ = 1 − e^{−t}.
  For t > T:        w₃(t) = ∫_{0}^{T} e^{−(t−λ)} dλ = e^{−(t−T)} − e^{−t}.
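The same example can be checked numerically. The sketch below (assuming T = 1 s purely for concreteness) convolves the two waveforms with numpy and compares the result with the closed-form answer above.

import numpy as np

T = 1.0                                   # assumed pulse width for this check
dt = 1e-3
t = np.arange(0.0, 10.0, dt)

w1 = np.where(t <= T, 1.0, 0.0)           # rectangular pulse over 0 <= t <= T
w2 = np.exp(-t)                           # e^(-t) u(t)

# Numerical convolution; multiply by dt to approximate the integral
w3_num = np.convolve(w1, w2)[: len(t)] * dt

# Closed-form result from the worked example
w3_exact = np.where(t <= T, 1.0 - np.exp(-t), np.exp(-(t - T)) - np.exp(-t))

print("max |error| =", np.max(np.abs(w3_num - w3_exact)))   # small (on the order of dt)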
Convolution
  y(t) = x(t) ∗ z(t) = ∫ x(τ) z(t − τ) dτ
• Flip one signal and drag it across the other.
• The area under the product at drag offset t is y(t).
[Figure: graphical convolution of an example x(t) and z(t), showing the flipped-and-shifted z(t − τ) at several offsets and the resulting y(t).]
Fourier Transform and Spectra
Topics:
• Spectrum by Convolution
• Spectrum of a Switched Sinusoid
• Power Spectral Density
• Autocorrelation
Spectrum of a Triangular Pulse by Convolution
The triangular pulse is the convolution of two rectangular pulses:
  Π(t/T) ∗ Π(t/T) = T Λ(t/T).
Convolution theorem:  w₁(t) ∗ w₂(t)  ↔  W₁(f) · W₂(f).
Therefore
  T Λ(t/T)  ↔  [T Sa(πfT)]²,  i.e.,  Λ(t/T)  ↔  T Sa²(πfT).
• The tails of the triangular pulse's spectrum decay faster than those of the rectangular pulse. Why? Because Sa²(πfT) falls off as 1/f², while Sa(πfT) falls off only as 1/f; the smoother (continuous) triangular pulse has a faster-decaying spectrum.
Spectrum of a Switched Sinusoid
  w(t) = Π(t/T) · A sin(ω₀t)
[Figure: the switched (gated) sinusoid waveform.]
Rewriting the sine as a phase-shifted cosine,
  w(t) = Π(t/T) · A sin(ω₀t) = Π(t/T) · A cos(ω₀t − π/2).
Using the frequency-translation (modulation) property of the Fourier transform,
  g(t) cos(ω_c t + θ)  ↔  (1/2) [e^{jθ} G(f − f_c) + e^{−jθ} G(f + f_c)],
we get
  W(f) = j (A/2) T [Sa(πT(f + f₀)) − Sa(πT(f − f₀))].
We can get a similar result using the multiplication (frequency-convolution) property of the Fourier transform, w₁(t) w₂(t) ↔ W₁(f) ∗ W₂(f).
[Figure: magnitude spectrum of the switched sinusoid, with Sa-shaped lobes of peak (A/2)T centered at f = ±f₀.]
Power Spectral Density (PSD)
• We define the truncated (windowed) version of the waveform by
  w_T(t) = w(t) for |t| ≤ T/2, and 0 otherwise.
• The average normalized power, from the time domain, is
  P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} w²(t) dt = lim_{T→∞} (1/T) ∫_{−∞}^{∞} w_T²(t) dt.
• Using Parseval's theorem, the power can be calculated from the frequency domain:
  P = lim_{T→∞} (1/T) ∫_{−∞}^{∞} |W_T(f)|² df.
Power Spectral Density
• Definition: The power spectral density (PSD) for a deterministic power waveform is
  P_w(f) = lim_{T→∞} |W_T(f)|² / T,
where w_T(t) ↔ W_T(f) and P_w(f) has units of watts per hertz.
• The PSD is always a real, nonnegative function of frequency.
• The PSD is not sensitive to the phase spectrum of w(t).
• The normalized average power is
  P = ⟨w²(t)⟩ = ∫_{−∞}^{∞} P_w(f) df.
• This means the area under the PSD function is the normalized average power.
Autocorrelation Function
• Definition: The autocorrelation of a real (physical) waveform is
  R_w(τ) = ⟨w(t) w(t + τ)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} w(t) w(t + τ) dt.
• Wiener-Khintchine theorem: the PSD and the autocorrelation function are a Fourier transform pair.
The PSD can be evaluated by either of the following two methods:
  1. Direct method: by using the definition, P_w(f) = lim_{T→∞} |W_T(f)|² / T.
  2. Indirect method: by first evaluating the autocorrelation function and then taking its Fourier transform, P_w(f) = ℑ[R_w(τ)].
• The average normalized power can be obtained by any of the four techniques:
  P = ⟨w²(t)⟩ = W_rms² = ∫_{−∞}^{∞} P_w(f) df = R_w(0).
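A numerical sketch of these relationships (with an assumed test signal, a 50 Hz cosine of amplitude 2 plus a little noise; none of these numbers come from the lecture): the truncated-FT periodogram |W_T(f)|²/T is formed, and the average power is computed from the time average, from the area under the PSD, and from R_w(0).

import numpy as np

fs = 1000.0                              # assumed sampling rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)        # T = 2 s observation window
rng = np.random.default_rng(0)
w = 2.0 * np.cos(2 * np.pi * 50.0 * t) + 0.1 * rng.standard_normal(len(t))

dt = 1.0 / fs
T = len(t) * dt

# Direct method: PSD estimate P_w(f) = |W_T(f)|^2 / T (periodogram)
W_T = np.fft.fft(w) * dt                 # approximate W_T(f) on the FFT frequency grid
f = np.fft.fftfreq(len(t), dt)
Pw = np.abs(W_T) ** 2 / T                # W/Hz

# Average power three ways
P_time = np.mean(w ** 2)                 # time average of w^2(t)
P_area = np.sum(Pw) * (fs / len(t))      # area under the PSD (df = fs/N)
Rw0 = np.sum(w * w) / len(t)             # autocorrelation at tau = 0

print(P_time, P_area, Rw0)               # all about 2^2/2 + 0.1^2 = 2.01
print("PSD peak at |f| =", abs(f[np.argmax(Pw)]), "Hz")   # 50.0 Hz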
PSD of a Sinusoid
For w(t) = A sin(ω₀t), the PSD is
  P_w(f) = (A²/4) [δ(f − f₀) + δ(f + f₀)].
PSD of a Sinusoid
• The average normalized power may be obtained from the area under the PSD:
  P = ∫_{−∞}^{∞} P_w(f) df = A²/4 + A²/4 = A²/2.
Orthogonal Representation, Fourier Series and Power Spectra
• Orthogonal series representation of signals and noise
  • Orthogonal functions
  • Orthogonal series
• Fourier series
  • Complex Fourier series
  • Quadrature Fourier series
  • Polar Fourier series
  • Line spectra for periodic waveforms
  • Power spectral density for periodic waveforms
Orthogonal Functions
• Definition: Functions φₙ(t) and φₘ(t) are said to be orthogonal with respect to each other over the interval a < t < b if they satisfy the condition
  ∫_{a}^{b} φₙ(t) φₘ*(t) dt = Kₙ δₙₘ,
where
  • δₙₘ is the Kronecker delta function (δₙₘ = 1 for n = m and 0 for n ≠ m).
  • If the constants Kₙ are all equal to 1, then the φₙ(t) are said to be orthonormal functions.
Example 2.11: Orthogonal Complex Exponential Functions
The complex exponentials φₙ(t) = e^{jnω₀t}, with ω₀ = 2π/T₀, are orthogonal over the interval a < t < a + T₀.
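A quick numerical check of that example (a sketch, with assumed values T₀ = 1 and a = 0): the integral of φₙ(t)φₘ*(t) over one period should equal T₀ for n = m and essentially zero for n ≠ m.

import numpy as np

T0 = 1.0                      # assumed period
w0 = 2 * np.pi / T0
dt = 1e-4
t = np.arange(0.0, T0, dt)    # one period, a = 0

def phi(n):
    return np.exp(1j * n * w0 * t)

for n in range(-2, 3):
    for m in range(-2, 3):
        inner = np.sum(phi(n) * np.conj(phi(m))) * dt
        # K_n = T0 when n == m, ~0 otherwise (Kronecker-delta behaviour)
        if n == m:
            assert abs(inner - T0) < 1e-6
        else:
            assert abs(inner) < 1e-6
print("complex exponentials are orthogonal over one period")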
Orthogonal Series
Theorem: Assume w(t) represents a waveform over the interval a < t < b. Then w(t) can be represented over the interval (a, b) by the series
  w(t) = Σ_{n} aₙ φₙ(t),
where the coefficients aₙ are given by (n an integer)
  aₙ = (1/Kₙ) ∫_{a}^{b} w(t) φₙ*(t) dt.
• If w(t) can be represented without any error in this way, we call the set of functions {φₙ} a "complete set".
• Examples of complete sets:
  • Harmonic sinusoidal sets {sin(nω₀t)}
  • Complex exponentials {e^{jnω₀t}}
  • Bessel functions
  • Legendre polynomials
Orthogonal Series
Proof of theorem: Assume that the set {φₙ} is sufficient to represent the waveform w(t) over the interval a < t < b by the series
  w(t) = Σ_{n} aₙ φₙ(t).
Operating on both sides with the integral operator ∫_{a}^{b} [·] φₘ*(t) dt and using the orthogonality condition gives
  ∫_{a}^{b} w(t) φₘ*(t) dt = Σ_{n} aₙ ∫_{a}^{b} φₙ(t) φₘ*(t) dt = Σ_{n} aₙ Kₙ δₙₘ = aₘ Kₘ,
so aₘ = (1/Kₘ) ∫_{a}^{b} w(t) φₘ*(t) dt.
• Now, since we can find the coefficients aₙ, writing w(t) in series form is possible; thus the theorem is proved.
Application of Orthogonal Series
• It is also possible to generate w(t) from the φⱼ(t) functions and the coefficients aⱼ.
• In this case, w(t) is approximated by using a reasonable number of the φⱼ(t) functions: w(t) is realized by adding weighted versions of the orthogonal functions.
Example: square waves built from sine waves, using partial sums with n = 1, n = 3, and n = 5 (a worked sketch follows).
http://www.educatorscorner.com/index.cgi?CONTENT_ID=2487
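A minimal sketch of that square-wave example (my own illustration, assuming a unit-amplitude odd square wave of period T₀ = 1): its Fourier series contains only odd sine harmonics with coefficients 4/(nπ), and the partial sums improve as more harmonics are added.

import numpy as np

T0 = 1.0
w0 = 2 * np.pi / T0
t = np.linspace(0.0, 2 * T0, 2000)
square = np.sign(np.sin(w0 * t))               # ideal odd square wave, amplitude 1

def partial_sum(n_max):
    # Sum of the odd sine harmonics up to n_max: sum over odd n of (4/(n*pi)) sin(n*w0*t)
    s = np.zeros_like(t)
    for n in range(1, n_max + 1, 2):
        s += (4.0 / (n * np.pi)) * np.sin(n * w0 * t)
    return s

for n_max in (1, 3, 5):
    err = np.sqrt(np.mean((partial_sum(n_max) - square) ** 2))
    print("harmonics up to n =", n_max, ": rms error", round(err, 3))
# The rms error decreases as more harmonics are added, although the peak (Gibbs)
# overshoot near the jumps does not vanish.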
Fourier Series
Complex Fourier Series
The complex Fourier series of a physical waveform w(t) over the interval a < t < a + T₀ is
  w(t) = Σ_{n=−∞}^{∞} cₙ e^{jnω₀t},  where ω₀ = 2πf₀ and f₀ = 1/T₀,
with complex (phasor) coefficients
  cₙ = (1/T₀) ∫_{a}^{a+T₀} w(t) e^{−jnω₀t} dt.
• The frequency f₀ = 1/T₀ is said to be the fundamental frequency, and the frequency nf₀ is said to be the nth harmonic frequency, when n > 1.
Some Properties of Complex Fourier Series
Quadrature Fourier Series
• The quadrature form of the Fourier series representing any physical waveform w(t) over the interval a < t < a + T₀ is
  w(t) = Σ_{n=0}^{∞} aₙ cos(nω₀t) + Σ_{n=0}^{∞} bₙ sin(nω₀t),
where the orthogonal functions are cos(nω₀t) and sin(nω₀t). Using the orthogonal-series coefficient formula, we can find the Fourier coefficients as
  a₀ = (1/T₀) ∫_{a}^{a+T₀} w(t) dt,
  aₙ = (2/T₀) ∫_{a}^{a+T₀} w(t) cos(nω₀t) dt,  n ≥ 1,
  bₙ = (2/T₀) ∫_{a}^{a+T₀} w(t) sin(nω₀t) dt,  n ≥ 1.
Quadrature Fourier Series
• Since these sinusoidal orthogonal functions are periodic, this series is periodic with the fundamental period T₀.
• The complex Fourier series and the quadrature Fourier series are equivalent representations.
• This can be shown by expressing the complex coefficient cₙ in terms of aₙ and bₙ. For all integer values of n,
  c₀ = a₀,  cₙ = (1/2)(aₙ − jbₙ) for n > 0,  and  cₙ = (1/2)(a₋ₙ + jb₋ₙ) for n < 0.
Thus we obtain the identities
  aₙ = 2 Re{cₙ}  and  bₙ = −2 Im{cₙ},  n ≥ 1,  with a₀ = c₀.
Polar Fourier Series
• The polar form is
  w(t) = D₀ + Σ_{n=1}^{∞} Dₙ cos(nω₀t + φₙ),
where w(t) is real and
  a₀ = D₀,  aₙ = Dₙ cos(φₙ),  bₙ = −Dₙ sin(φₙ).
The above two equations may be inverted, and we obtain
  Dₙ = sqrt(aₙ² + bₙ²),  φₙ = −tan⁻¹(bₙ/aₙ).
Polar Fourier Series Coefficients
Line Spectra for Periodic Waveforms
Theorem: If a waveform is periodic with period T₀, the spectrum of the waveform w(t) is
  W(f) = Σ_{n=−∞}^{∞} cₙ δ(f − nf₀),
where f₀ = 1/T₀ and the cₙ are the phasor Fourier coefficients of the waveform.
Proof: Write w(t) in its complex Fourier series form, w(t) = Σ_{n} cₙ e^{jnω₀t}. Taking the Fourier transform of both sides, we obtain
  W(f) = Σ_{n} cₙ ℑ[e^{j2πnf₀t}] = Σ_{n} cₙ δ(f − nf₀).
Here the integral representation for a delta function was used.
Line Spectra for Periodic Waveforms
Theorem: If w(t) is a periodic function with period T₀ and is represented by
  w(t) = Σ_{n=−∞}^{∞} h(t − nT₀),  where h(t) ↔ H(f),
then the Fourier coefficients are given by
  cₙ = f₀ H(nf₀).
That is, the Fourier series coefficients can also be calculated from the periodic sample values of the Fourier transform of one pulse.
Line Spectra for Periodic Waveforms
  w(t) = Σ_{n=−∞}^{∞} h(t − nT₀),   h(t) ↔ H(f),   cₙ = f₀ H(nf₀),
  W(f) = Σ_{n=−∞}^{∞} cₙ δ(f − nf₀) = Σ_{n=−∞}^{∞} f₀ H(nf₀) δ(f − nf₀).
• The Fourier series coefficients of the periodic signal can be calculated from the Fourier transform of the corresponding nonperiodic pulse h(t).
• The sample values of the Fourier transform give the Fourier series coefficients.
Line Spectra for Periodic Waveforms
[Figure: a single pulse has a continuous spectrum; a periodic pulse train has a line spectrum.]
Ex. 2.12: Fourier Coefficients for a Periodic Rectangular Wave
• Now evaluate the coefficients from the Fourier transform of one pulse. For a rectangular pulse of width T,
  Π(t/T) ↔ T Sa(πfT),
so, using cₙ = f₀ H(nf₀),
  cₙ = f₀ T Sa(πnf₀T).
Now compare the spectrum for this periodic rectangular wave (solid lines) with the spectrum for the single rectangular pulse.
• Note that the spectrum for the periodic wave contains spectral lines, whereas the spectrum for the nonperiodic pulse is continuous.
• Note that the envelope of the spectrum for both cases is the same |sin(x)/x| shape, where x = πTf.
• Consequently, the null bandwidth (for the envelope) is 1/T for both cases, where T is the pulse width.
• This is a basic property of digital signaling with rectangular pulse shapes: the null bandwidth is the reciprocal of the pulse width.
Ex. 2.12: Fourier Coefficients for a Periodic Rectangular Wave
[Figure: the single pulse has a continuous spectrum; the periodic pulse train has a line spectrum.]
Normalized Power
Theorem: For a periodic waveform w(t), the normalized power is given by
  P = ⟨w²(t)⟩ = Σ_{n=−∞}^{∞} |cₙ|²,
where the {cₙ} are the complex Fourier coefficients for the waveform.
Proof: For periodic w(t), the Fourier series representation is valid over all time and may be substituted into Eq. (2-12) to evaluate the normalized power; the cross terms average to zero because the complex exponentials are orthogonal over the period, leaving P = Σ |cₙ|².
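A numerical sketch of this theorem (with an assumed 50% duty-cycle rectangular wave of amplitude 1 and period T₀ = 1, values chosen only for illustration): the time-averaged power is compared with the sum of |cₙ|², using cₙ = f₀ T Sa(πnf₀T) from the example above.

import numpy as np

T0 = 1.0                     # assumed period
T = 0.5                      # assumed pulse width (50% duty cycle)
f0 = 1.0 / T0

# Time-average power of the unipolar rectangular wave (amplitude 1, duty cycle T/T0)
dt = 1e-4
t = np.arange(0.0, T0, dt)
w = np.where(t < T, 1.0, 0.0)
P_time = np.mean(w ** 2)                      # = 0.5 for these assumed values

# Fourier coefficients from c_n = f0 * H(n*f0) with H(f) = T * Sa(pi*T*f)
def c(n):
    return f0 * T * np.sinc(n * f0 * T)       # np.sinc(x) = sin(pi x)/(pi x) = Sa(pi x)

n = np.arange(-2000, 2001)
P_series = np.sum(np.abs(c(n)) ** 2)          # sum of |c_n|^2 (truncated)

print(P_time, P_series)                       # both approximately 0.5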
Power Spectral Density for Periodic Waveforms
Theorem: For a periodic waveform, the power spectral density (PSD) is given by
  P_w(f) = Σ_{n=−∞}^{∞} |cₙ|² δ(f − nf₀),
where T₀ = 1/f₀ is the period of the waveform and the {cₙ} are the corresponding Fourier coefficients for the waveform.
(Recall that the PSD is the Fourier transform of the autocorrelation function.)
Power Spectral Density for a Square Wave
• The PSD for the periodic square wave will be found.
• Because the waveform is periodic, FS coefficients can be used to evaluate the PSD.
Consequently this problem becomes one of evaluating the FS coefficients.
END