Laplace transform
In this analysis, the Laplace transform is often interpreted as a transformation from
the time domain, in which inputs and outputs are functions of time, to the
frequency domain, where the same inputs and outputs are functions of complex
angular frequency, in radians per unit time. Given a simple mathematical or
functional description of an input or output to a system, the Laplace transform
provides an alternative functional description that often simplifies the process of
analyzing the behavior of the system, or of synthesizing a new system based on a set
of specifications.
Formal definition
The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function
F(s), defined by:

F(s) = L{f(t)} = ∫₀^∞ e^(−st) f(t) dt

The parameter s is a complex number:

s = σ + iω, with real numbers σ and ω.
The meaning of the integral depends on the class of functions of interest. A necessary
condition for the existence of the integral is that f must be locally integrable on [0, ∞). For
locally integrable functions that decay at infinity or are of exponential type, the integral
can be understood as a (proper) Lebesgue integral. However, for many applications it is
necessary to regard it as a conditionally convergent improper integral at ∞. Still more
generally, the integral can be understood in a weak sense, and this is dealt with below.
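The definition can be checked numerically. The following Python sketch (illustrative only; it uses SciPy's quad and truncates the improper integral at a finite upper limit) approximates F(s) for f(t) = e^(−t) and compares the result with the known closed form 1/(s + 1):

```python
import math
from scipy.integrate import quad

def laplace_numeric(f, s, upper=50.0):
    """Approximate F(s) = integral from 0 to infinity of e^(-s t) f(t) dt.

    The integral is truncated at `upper`, which is adequate when the
    integrand decays quickly (here, exponentially).
    """
    val, _err = quad(lambda t: math.exp(-s * t) * f(t), 0.0, upper)
    return val

# Example: f(t) = e^(-t) has F(s) = 1/(s + 1) for Re{s} > -1.
s = 2.0
approx = laplace_numeric(lambda t: math.exp(-t), s)
exact = 1.0 / (s + 1.0)
print(approx, exact)  # both approximately 0.3333
```

The truncation point and tolerance are choices for this example, not part of the definition itself.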
Properties of the unilateral Laplace transform (with F(s) = L{f(t)} and G(s) = L{g(t)}):

Linearity: a f(t) + b g(t) ↔ a F(s) + b G(s). Can be proved using basic rules of integration.

Frequency differentiation: t f(t) ↔ −F′(s), where F′ is the first derivative of F.

Frequency differentiation (general form): tⁿ f(t) ↔ (−1)ⁿ F⁽ⁿ⁾(s), the nth derivative of F(s).

Differentiation: f′(t) ↔ s F(s) − f(0). f is assumed to be a differentiable function, and its derivative is assumed to be of exponential type. This can then be obtained by integration by parts.

Second differentiation: f″(t) ↔ s² F(s) − s f(0) − f′(0). f is assumed twice differentiable and the second derivative to be of exponential type. Follows by applying the differentiation property to f′(t).

General differentiation: f⁽ⁿ⁾(t) ↔ sⁿ F(s) − sⁿ⁻¹ f(0) − sⁿ⁻² f′(0) − ⋯ − f⁽ⁿ⁻¹⁾(0). f is assumed to be n-times differentiable, with nth derivative of exponential type. Follows by mathematical induction.

Frequency integration: f(t)/t ↔ ∫ₛ^∞ F(u) du.

Integration: ∫₀ᵗ f(τ) dτ = (u ∗ f)(t) ↔ (1/s) F(s). u(t) is the Heaviside step function. Note (u ∗ f)(t) is the convolution of u(t) and f(t).

Scaling: f(at) ↔ (1/a) F(s/a), where a is positive.

Frequency shifting: e^(at) f(t) ↔ F(s − a).

Time shifting: f(t − a) u(t − a) ↔ e^(−as) F(s), where u(t) is the Heaviside step function.

Convolution: (f ∗ g)(t) ↔ F(s) G(s). f(t) and g(t) are extended by zero for t < 0 in the definition of the convolution.

Periodic function: if f(t) is a periodic function of period T, so that f(t) = f(t + T), then f(t) ↔ (1/(1 − e^(−Ts))) ∫₀^T e^(−st) f(t) dt. This is the result of the time-shifting property and the geometric series.
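Several of these properties are easy to verify symbolically. As an illustrative sketch (not part of the original table), the following Python code uses SymPy's laplace_transform to confirm the differentiation property L{f′(t)} = s F(s) − f(0) for f(t) = sin(t):

```python
from sympy import symbols, sin, laplace_transform, simplify

t, s = symbols('t s', positive=True)

f = sin(t)
F = laplace_transform(f, t, s, noconds=True)            # 1/(s**2 + 1)
Fd = laplace_transform(f.diff(t), t, s, noconds=True)   # transform of cos(t)

# Differentiation property: L{f'} = s*F(s) - f(0); here f(0) = 0.
residual = simplify(Fd - (s * F - f.subs(t, 0)))
print(residual)  # 0
```

Any other differentiable f of exponential type could be substituted in the same pattern.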

Initial value theorem:

f(0⁺) = lim_{s→∞} s F(s)

Final value theorem:

f(∞) = lim_{s→0} s F(s), if all poles of sF(s) are in the left half-plane.

The final value theorem is useful because it gives the long-term behaviour without
having to perform partial fraction decompositions or other difficult algebra. If a
function's poles are in the right half-plane or on the imaginary axis (e.g. e^t or sin(t)),
the behaviour of this formula is undefined.
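As a small worked example (a sketch using SymPy, with the example function chosen for illustration), take F(s) = 1/(s(s + 1)), the transform of f(t) = 1 − e^(−t). All poles of sF(s) lie in the left half-plane, so the final value theorem applies:

```python
from sympy import symbols, limit

s = symbols('s', positive=True)

F = 1 / (s * (s + 1))       # Laplace transform of f(t) = 1 - e^(-t)
final = limit(s * F, s, 0)  # final value theorem: lim_{s->0} s F(s)
print(final)  # 1
```

This matches the long-term behaviour lim_{t→∞} (1 − e^(−t)) = 1, obtained without any partial fraction decomposition.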
A causal system is a system where the impulse response h(t) is zero for all time t prior to t
= 0. In general, the region of convergence for causal systems is not the same as that of
anticausal systems.
Table of selected Laplace transforms (ID, function name, time domain ↔ Laplace s-domain, with region of convergence):

1. Ideal delay: δ(t − τ) ↔ e^(−τs)
1a. Unit impulse: δ(t) ↔ 1 (ROC: all s)
2. Delayed nth power with frequency shift: ((t − τ)ⁿ / n!) e^(−α(t − τ)) u(t − τ) ↔ e^(−τs) / (s + α)ⁿ⁺¹ (ROC: Re{s} > −α)
2a. nth power (for integer n): (tⁿ / n!) u(t) ↔ 1 / sⁿ⁺¹ (ROC: Re{s} > 0)
2a.1. qth power (for complex q): (t^q / Γ(q + 1)) u(t) ↔ 1 / s^(q+1) (ROC: Re{s} > 0)
2a.2. Unit step: u(t) ↔ 1/s (ROC: Re{s} > 0)
2b. Delayed unit step: u(t − τ) ↔ e^(−τs) / s (ROC: Re{s} > 0)
2c. Ramp: t · u(t) ↔ 1/s² (ROC: Re{s} > 0)
2d. nth power with frequency shift: (tⁿ / n!) e^(−αt) u(t) ↔ 1 / (s + α)ⁿ⁺¹ (ROC: Re{s} > −α)
2d.1. Exponential decay: e^(−αt) u(t) ↔ 1 / (s + α) (ROC: Re{s} > −α)
3. Exponential approach: (1 − e^(−αt)) u(t) ↔ α / (s(s + α)) (ROC: Re{s} > 0)
4. Sine: sin(ωt) u(t) ↔ ω / (s² + ω²) (ROC: Re{s} > 0)
5. Cosine: cos(ωt) u(t) ↔ s / (s² + ω²) (ROC: Re{s} > 0)
6. Hyperbolic sine: sinh(αt) u(t) ↔ α / (s² − α²) (ROC: Re{s} > |α|)
7. Hyperbolic cosine: cosh(αt) u(t) ↔ s / (s² − α²) (ROC: Re{s} > |α|)
8. Exponentially decaying sine wave: e^(−αt) sin(ωt) u(t) ↔ ω / ((s + α)² + ω²) (ROC: Re{s} > −α)
9. Exponentially decaying cosine wave: e^(−αt) cos(ωt) u(t) ↔ (s + α) / ((s + α)² + ω²) (ROC: Re{s} > −α)
10. nth root: t^(1/n) u(t) ↔ s^(−(n+1)/n) Γ(1 + 1/n) (ROC: Re{s} > 0)
11. Natural logarithm: ln(t) u(t) ↔ −(ln(s) + γ) / s (ROC: Re{s} > 0)
12. Bessel function of the first kind, of order n: Jₙ(ωt) u(t) ↔ ωⁿ / (√(s² + ω²) (s + √(s² + ω²))ⁿ) (ROC: Re{s} > 0, for n > −1)
13. Modified Bessel function of the first kind, of order n: Iₙ(ωt) u(t) ↔ ωⁿ / (√(s² − ω²) (s + √(s² − ω²))ⁿ) (ROC: Re{s} > |ω|)
14. Bessel function of the second kind, of order 0: Y₀(αt) u(t) ↔ −(2/π) arsinh(s/α) / √(s² + α²) (ROC: Re{s} > 0)
15. Modified Bessel function of the second kind, of order 0: K₀(αt) u(t) ↔ ln((s + √(s² − α²)) / α) / √(s² − α²) (ROC: Re{s} > 0)
16. Error function: erf(t) u(t) ↔ e^(s²/4) erfc(s/2) / s (ROC: Re{s} > 0)
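Entries in a table like this can be spot-checked symbolically. A minimal sketch using SymPy (illustrative, not exhaustive) verifies the exponential-decay and sine entries:

```python
from sympy import symbols, laplace_transform, exp, sin, simplify

t = symbols('t', positive=True)
s, a, w = symbols('s a w', positive=True)  # a, w stand in for alpha, omega

# Entry 2d.1: exponential decay e^(-a t) u(t) <-> 1/(s + a)
r_decay = simplify(laplace_transform(exp(-a * t), t, s, noconds=True) - 1 / (s + a))

# Entry 4: sine sin(w t) u(t) <-> w/(s**2 + w**2)
r_sine = simplify(laplace_transform(sin(w * t), t, s, noconds=True) - w / (s**2 + w**2))

print(r_decay, r_sine)  # 0 0
```

The unilateral transform implicitly supplies the u(t) factor, since integration starts at t = 0.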
Explanatory notes:

u(t) represents the Heaviside step function.
δ(t) represents the Dirac delta function.
Γ(z) represents the Gamma function.
γ is the Euler–Mascheroni constant.
t, a real number, typically represents time, although it can represent any independent dimension.
s is the complex angular frequency, and Re{s} is its real part.
α, β, τ, and ω are real numbers.
n is an integer.
s-Domain equivalent circuits and impedances
The Laplace transform is often used in circuit analysis, and simple conversions to the
s-Domain of circuit elements can be made. Circuit elements can be transformed into
impedances, very similar to phasor impedances.
Here is a summary of equivalents (initial conditions taken at t = 0⁻):

Resistor: R ↔ impedance R.
Inductor: L ↔ impedance sL, in series with a voltage source of value L·i(0⁻) (equivalently, in parallel with a current source i(0⁻)/s).
Capacitor: C ↔ impedance 1/(sC), in series with a voltage source of value v(0⁻)/s (equivalently, in parallel with a current source C·v(0⁻)).
Note that the resistor is exactly the same in the time domain and the s-Domain. The
sources are put in if there are initial conditions on the circuit elements. For example, if a
capacitor has an initial voltage across it, or if the inductor has an initial current through it,
the sources inserted in the s-Domain account for that.
The equivalents for current and voltage sources are simply derived from the
transformations in the table above.
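As an illustrative sketch (the circuit and component names are assumptions for the example, not from the original text), the s-domain impedances can be combined symbolically to derive a transfer function. Here a series RLC circuit with the output taken across the capacitor, assuming zero initial conditions:

```python
from sympy import symbols, simplify

s, R, L, C = symbols('s R L C', positive=True)

# s-domain impedances of the basic elements (zero initial conditions)
Z_R = R
Z_L = s * L
Z_C = 1 / (s * C)

# Series RLC driven by a voltage source, output across the capacitor:
# a voltage divider gives the (low-pass) transfer function.
H = simplify(Z_C / (Z_R + Z_L + Z_C))
print(H)
```

Multiplying numerator and denominator by sC shows H(s) = 1/(LCs² + RCs + 1), the standard second-order low-pass form.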
Nyquist–Shannon sampling theorem
Fig.1: Hypothetical spectrum of a bandlimited signal as a function of frequency
The Nyquist–Shannon sampling theorem is a fundamental result in the field of
information theory, in particular telecommunications and signal processing. Sampling is
the process of converting a signal (for example, a function of continuous time or space)
into a numeric sequence (a function of discrete time or space). Shannon's version of the
theorem states:[1]
If a function x(t) contains no frequencies higher than B hertz, it is completely determined
by giving its ordinates at a series of points spaced 1/(2B) seconds apart.
The theorem is commonly called the Nyquist sampling theorem, and is also known as
Nyquist–Shannon–Kotelnikov, Whittaker–Shannon–Kotelnikov,
Whittaker–Nyquist–Kotelnikov–Shannon, WKS, etc., sampling theorem, as well as the
Cardinal Theorem of Interpolation Theory. It is often referred to as simply the sampling
theorem.
In essence, the theorem shows that a bandlimited analog signal that has been sampled can
be perfectly reconstructed from an infinite sequence of samples if the sampling rate
exceeds 2B samples per second, where B is the highest frequency in the original signal. If
a signal contains a component at exactly B hertz, then samples spaced at exactly 1/(2B)
seconds do not completely determine the signal, Shannon's statement notwithstanding. This
sufficient condition can be weakened, as discussed at Sampling of non-baseband signals
below.
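The need for the rate to exceed 2B can be illustrated numerically. The following Python sketch (sampling rate and tone frequencies chosen purely for illustration) shows that a tone above the Nyquist frequency fs/2 produces exactly the same samples as a lower-frequency alias, so the two are indistinguishable once sampled:

```python
import math

fs = 100.0      # sampling rate in Hz (assumed for this example)
f1 = 10.0       # tone below the Nyquist frequency fs/2
f2 = f1 + fs    # tone above fs/2 that aliases onto f1

n = range(32)
x1 = [math.sin(2 * math.pi * f1 * k / fs) for k in n]
x2 = [math.sin(2 * math.pi * f2 * k / fs) for k in n]

# The sample sequences agree (to floating-point precision), because
# sin(2*pi*(f1 + fs)*k/fs) = sin(2*pi*f1*k/fs + 2*pi*k) for integer k.
print(max(abs(a - b) for a, b in zip(x1, x2)))  # essentially 0
```

This is why the signal must be bandlimited to below fs/2 before sampling: otherwise distinct analog signals map to identical sample sequences.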