Engineering Essentials Vol. III
Louis E. Frenzel | Communications Editor
[email protected]

Today's designers can utilize myriad modern modulation methods to pack ever-increasing data into ever-decreasing spectrum.

Fundamental to all wireless communications is modulation, the process of impressing the data to be transmitted on the radio carrier. Most wireless transmissions today are digital, and with the limited spectrum available, the type of modulation is more critical than it has ever been.
The main goal of modulation today is to squeeze as much data into the least amount of spectrum possible. That objective, known as spectral efficiency, measures how quickly data can be transmitted in an assigned bandwidth. The unit of measurement is bits per second per hertz (bits/s/Hz). Multiple techniques have emerged to achieve and improve spectral efficiency.
ASK AND FSK
There are three basic ways to modulate a sine wave radio
carrier: modifying the amplitude, frequency, or phase. More
sophisticated methods combine two or more of these variations
to improve spectral efficiency. These basic modulation forms
are still used today with digital signals.
Figure 1 shows a basic serial digital signal of binary zeros
and ones to be transmitted and the corresponding AM and
FM signals resulting from modulation. There are two types of
AM signals: on-off keying (OOK) and amplitude shift keying
(ASK). In Figure 1a, the carrier amplitude is shifted between
two amplitude levels to produce ASK. In Figure 1b, the binary
signal turns the carrier off and on to create OOK.
AM produces sidebands above and below the carrier equal
to the highest frequency content of the modulating signal. The
bandwidth required is two times the highest frequency content
including any harmonics for binary pulse modulating signals.
Frequency shift keying (FSK) shifts the carrier between two
different frequencies called the mark and space frequencies, or
fm and fs (Fig. 1c). FM produces multiple sideband frequencies
above and below the carrier frequency. The bandwidth produced is a function of the highest modulating frequency including harmonics and the modulation index, which is:
m = Δf × T

where Δf is the frequency deviation, or the shift between the mark and space frequencies:

Δf = fs – fm

and T is the bit time interval of the data, or the reciprocal of the data rate (T = 1/bit rate).

1. Three basic digital modulation formats are still very popular with low-data-rate short-range wireless applications: amplitude shift keying (a), on-off keying (b), and frequency shift keying (c). These waveforms are coherent, as the binary state change occurs at carrier zero-crossing points.

2. In binary phase shift keying, note how a binary 0 is 0° while a binary 1 is 180°. The phase changes when the binary state switches, so the signal is coherent.

Smaller values of m produce fewer sidebands. A popular version of FSK called minimum shift keying (MSK) specifies m = 0.5. Smaller values such as m = 0.3 are also used.

There are two ways to further improve the spectral efficiency of both ASK and FSK. First, select data rates, carrier frequencies, and shift frequencies so there are no discontinuities in the sine carrier when changing from one binary state to another. Such discontinuities produce glitches that increase the harmonic content and the bandwidth. The idea is to synchronize the stop and start times of the binary data with the points where the sine carrier is transitioning in amplitude or frequency at the zero crossings. This is called continuous-phase or coherent operation. Both coherent ASK/OOK and coherent FSK have fewer harmonics and a narrower bandwidth than non-coherent signals.

A second technique is to filter the binary data prior to modulation. This rounds off the signal, lengthening the rise and fall times and reducing the harmonic content. Special Gaussian and raised-cosine lowpass filters are used for this purpose. GSM cell phones widely use a popular combination, Gaussian filtered MSK (GMSK), which allows a data rate of 270 kbits/s in a 200-kHz channel.
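To make the continuous-phase idea concrete, here is a minimal Python/NumPy sketch (all parameter values are illustrative, not drawn from any standard) that generates an FSK waveform by integrating the instantaneous frequency, so the carrier phase never jumps at bit boundaries:

```python
import numpy as np

fs_rate = 1_000_000                  # sample rate in Hz (illustrative)
bit_rate = 10_000                    # data rate in bits/s (illustrative)
f_mark, f_space = 60_000, 70_000     # mark/space frequencies (illustrative)
bits = np.array([1, 0, 0, 1, 1, 0])

# Instantaneous frequency: one value per sample, held for a full bit time.
samples_per_bit = fs_rate // bit_rate
inst_freq = np.repeat(np.where(bits == 1, f_mark, f_space), samples_per_bit)

# Integrate frequency to get phase; a running integral cannot jump,
# so the resulting carrier is continuous-phase (coherent) by construction.
phase = 2 * np.pi * np.cumsum(inst_freq) / fs_rate
waveform = np.sin(phase)             # the glitch-free FSK signal

# Modulation index m = Δf x T, with Δf = fs - fm and T the bit time.
m = (f_space - f_mark) * (1 / bit_rate)
print(f"modulation index m = {m}")   # 1.0 here; MSK would use m = 0.5
```

Because the phase is a running integral, the waveform has no discontinuities by construction, which is exactly what keeps the harmonic content and occupied bandwidth down.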
BPSK AND QPSK

A very popular digital modulation scheme, binary phase shift keying (BPSK), shifts the carrier sine wave 180° for each change in binary state (Fig. 2). BPSK is coherent, as the phase transitions occur at the zero-crossing points. Proper demodulation of BPSK requires the signal to be compared to a sine carrier of the same phase, which involves carrier recovery and other complex circuitry. A simpler version is differential BPSK (DPSK), where the received bit phase is compared to the phase of the previous bit. BPSK is very spectrally efficient in that you can transmit at a data rate equal to the bandwidth, or 1 bit/s/Hz.
In a popular variation of BPSK, quadrature PSK (QPSK), the modulator produces two sine carriers 90° apart. The binary data modulates each phase, producing four unique sine signals whose phases are spaced 90° from one another (at 45°, 135°, 225°, and 315°). The two phases are added together to produce the final signal. Each unique pair of bits generates a carrier with a different phase (Table 1).

Table 1: Carrier phase shift for each pair of bits represented
Bit pair    Phase (degrees)
00          45
01          135
11          225
10          315

Figure 3a illustrates QPSK with a phasor diagram, where the phasor represents the carrier sine amplitude peak and its position indicates the phase. The constellation diagram in Figure 3b shows the same information. QPSK is very spectrally efficient since each carrier phase represents two bits of data. The spectral efficiency is 2 bits/s/Hz, meaning twice the data rate can be achieved in the same bandwidth as with BPSK.
3. Modulation can be represented without time-domain waveforms. For example, QPSK can be represented with a phasor diagram (a) or a constellation diagram (b), both of which indicate phase and amplitude magnitudes.

4. 16QAM uses a mix of amplitudes and phases to achieve 4 bits/s/Hz. In this example, there are three amplitudes and 12 phase shifts.
DATA RATE AND BAUD RATE

The maximum theoretical data rate, or channel capacity (C), in bits/s is a function of the channel bandwidth (B) in Hz and the signal-to-noise ratio (SNR):

C = B log2(1 + SNR)

This is called the Shannon-Hartley law. The maximum data rate is directly proportional to the bandwidth and logarithmically proportional to the SNR. Noise greatly diminishes the data rate for a given bit error rate (BER).
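A quick numeric check of the Shannon-Hartley limit in Python (the bandwidth and SNR values are illustrative):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley: C = B * log2(1 + SNR), SNR as a power ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20-MHz channel at 20 dB SNR (a power ratio of 100):
snr = 10 ** (20 / 10)
print(channel_capacity(20e6, snr) / 1e6)   # ~133 Mbits/s theoretical ceiling
```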
Another key factor is the baud rate, or the number of modulation symbols transmitted per second. The term symbol in modulation refers to one specific state of a sine carrier signal. It can be an amplitude, a frequency, a phase, or some combination of them. Basic binary transmission uses one bit per symbol.

In ASK, a binary 0 is one amplitude and a binary 1 is another amplitude. In FSK, a binary 0 is one carrier frequency and a binary 1 is another frequency. BPSK uses a 0° shift for a binary 0 and a 180° shift for a binary 1. In each of these cases, there is one bit per symbol.

Data rate in bits/s is calculated as the reciprocal of the bit time (tb):

bits/s = 1/tb

With one bit per symbol, the baud rate is the same as the bit rate. However, if you transmit more bits per symbol, the baud rate is slower than the bit rate by a factor equal to the number of bits per symbol. For example, if 2 bits per symbol are transmitted, the baud rate is the bit rate divided by 2. For instance, with QPSK a 70-Mbit/s data stream is transmitted at a baud rate of 35 Msymbols/s.
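The bit-rate/baud-rate relationship reduces to a single division, as a short sketch shows:

```python
def baud_rate(bit_rate, bits_per_symbol):
    """Symbols per second for a given bit rate and modulation density."""
    return bit_rate / bits_per_symbol

print(baud_rate(70e6, 2))   # QPSK: 70 Mbits/s -> 35 Msymbols/s
print(baud_rate(70e6, 6))   # 64QAM: the same stream at ~11.7 Msymbols/s
```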
M-PSK

QPSK produces two bits per symbol, making it very spectrally efficient. QPSK can be referred to as 4PSK because there are four unique phase states. By using smaller phase shifts, more bits can be transmitted per symbol. Some popular variations are 8PSK and 16PSK.

8PSK uses eight symbols of constant carrier amplitude with 45° shifts between them, enabling 3 bits to be transmitted with each symbol. 16PSK uses 22.5° shifts of constant-amplitude carrier signals, an arrangement that results in 4 bits per symbol.

While M-PSK is much more spectrally efficient, the greater the number of smaller phase shifts, the more difficult the signal is to demodulate in the presence of noise. The benefit of M-PSK is that its constant carrier amplitude means more efficient nonlinear power amplification can be used.
QAM

The creation of symbols that are some combination of amplitude and phase carries the concept of transmitting more bits per symbol further. This method is called quadrature amplitude modulation (QAM). For example, 8QAM uses four carrier phases plus two amplitude levels to transmit 3 bits per symbol. Other popular variations are 16QAM, 64QAM, and 256QAM, which transmit 4, 6, and 8 bits per symbol, respectively (Fig. 4).

While QAM is enormously efficient in its use of spectrum, it is more difficult to demodulate in the presence of noise, which consists mostly of random amplitude variations. Linear power amplification is also required. QAM is very widely used in cable TV, Wi-Fi wireless local-area networks (LANs), satellites, and cellular telephone systems to produce the maximum data rate in limited bandwidths.
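One common way to construct a 16QAM constellation is as a 4×4 grid of I/Q levels; the sketch below (Gray coding omitted for brevity) reproduces the three amplitudes and 12 phases noted in the Figure 4 caption:

```python
import numpy as np

# 16QAM as a 4x4 grid of I/Q levels: 16 points = 4 bits per symbol.
levels = np.array([-3, -1, 1, 3])
points = np.array([i + 1j * q for i in levels for q in levels])

radii = {round(float(r), 3) for r in np.abs(points)}
phases = {round(float(p), 1) for p in np.angle(points, deg=True)}
print(sorted(radii))    # three distinct amplitudes, as in the Fig. 4 caption
print(len(phases))      # 12 distinct phase angles
```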
APSK

Amplitude phase shift keying (APSK), a variation of both M-PSK and QAM, was created in response to the need for an improved QAM. Higher levels of QAM such as 16QAM and above have many different amplitude levels as well as phase shifts, and these amplitude levels are more susceptible to noise.

Furthermore, the multiple levels require linear power amplifiers (PAs) that are less efficient than nonlinear ones (e.g., class C). The fewer the amplitude levels, or the smaller the difference between them, the greater the chance to operate in the nonlinear region of the PA to boost power level.

APSK uses fewer amplitude levels. It essentially arranges the symbols into two or more concentric rings with a constant phase offset θ. For example, 16APSK uses a double-ring PSK format (Fig. 5). This is called 4-12 16APSK, with four symbols in the center ring and 12 in the outer ring.

Two close amplitude levels allow the amplifier to operate closer to its nonlinear region, improving efficiency as well as power output. APSK is used primarily in satellites since it is a good fit with the popular traveling wave tube (TWT) PAs.
5. 16APSK uses two amplitude levels, A1 and A2, plus 16 different phase positions with an offset of θ. This technique is widely used in satellites.

OFDM

Orthogonal frequency division multiplexing (OFDM) combines modulation and multiplexing techniques to improve spectral efficiency. A transmission channel is divided into many smaller subchannels or subcarriers. The subcarrier frequencies and spacings are chosen so they're orthogonal to one another. Their spectra therefore won't interfere with one another, so no guard bands are required (Fig. 6).

The serial digital data to be transmitted is subdivided into parallel slower-data-rate channels. These lower-data-rate signals are then used to modulate each subcarrier. The most common forms of modulation are BPSK, QPSK, and several levels of QAM. BPSK, QPSK, 16QAM, and 64QAM are defined for 802.11n, where data rates up to about 300 Mbits/s are possible with 64QAM.

This complex modulation process is only practical with digital signal processing (DSP) techniques. An inverse fast Fourier transform (IFFT) generates the signal to be transmitted, and an FFT process recovers the signal at the receiver.

OFDM is very spectrally efficient. That efficiency level depends on the number of subcarriers and the type of modulation, but it can be as high as 30 bits/s/Hz. Because of the wide bandwidth it usually occupies and the large number of subcarriers, it also is less prone to signal loss due to fading, multipath reflections, and similar effects common in UHF and microwave radio signal propagation.

Currently, OFDM is the most popular form of digital modulation. It is used in Wi-Fi LANs, WiMAX broadband wireless, Long-Term Evolution (LTE) 4G cellular systems, digital subscriber line (DSL) systems, and most power-line communications (PLC) applications. For more, see "Orthogonal Frequency-Division Multiplexing (OFDM): FAQ Tutorial" at http://mobiledevdesign.com/tutorials/ofdm.
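A minimal NumPy sketch of the IFFT/FFT round trip at the heart of OFDM (the 64-subcarrier size and QPSK mapping are illustrative, not the 802.11n specification):

```python
import numpy as np

n_subcarriers = 64                        # illustrative FFT size
rng = np.random.default_rng(0)

# One QPSK symbol (2 bits) per subcarrier.
bits = rng.integers(0, 2, size=2 * n_subcarriers)
i, q = 1 - 2 * bits[0::2], 1 - 2 * bits[1::2]     # 0 -> +1, 1 -> -1
symbols = (i + 1j * q) / np.sqrt(2)               # unit-power QPSK

# The IFFT sums all modulated subcarriers at once; orthogonal spacing
# means an FFT at the receiver can separate them again cleanly.
ofdm_symbol = np.fft.ifft(symbols)

recovered = np.fft.fft(ofdm_symbol)
print(np.allclose(recovered, symbols))    # True: subcarriers don't interfere
```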
DETERMINING SPECTRAL EFFICIENCY

Again, spectral efficiency is a measure of how quickly data can be transmitted in an assigned bandwidth; the unit of measurement is bits/s/Hz. Each type of modulation has a maximum theoretical spectral efficiency (Table 2).

Table 2: Spectral efficiency for popular digital modulation methods
Type of modulation    Spectral efficiency (bits/s/Hz)
FSK                   <1 (depends on modulation index)
GMSK                  1.35
BPSK                  1
QPSK                  2
8PSK                  3
16QAM                 4
64QAM                 6
OFDM                  >10 (depends on the type of modulation and the number of subcarriers)

SNR is another important factor that influences spectral efficiency. It also can be expressed as the carrier-to-noise power ratio (CNR). The measure is the BER for a given CNR value. BER is the percentage of errors that occur in a given number of bits transmitted. As the noise becomes larger compared to the signal level, more errors occur.

Some modulation methods are more immune to noise than others. Amplitude modulation methods like ASK/OOK and QAM are far more susceptible to noise, so they have a higher BER for a given CNR. Phase and frequency modulation (BPSK, FSK, etc.) fare better in a noisy environment, so they require less signal power for a given noise level (Fig. 7).
6. In the OFDM signal for the IEEE 802.11n Wi-Fi standard, 56 subcarriers with 312.5-kHz spacing and 312.5-kHz bandwidth fill a 20-MHz channel. Each subcarrier is modulated by BPSK, QPSK, 16QAM, or 64QAM. Data rates to 300 Mbits/s can be achieved with 64QAM.
OTHER FACTORS AFFECTING SPECTRAL EFFICIENCY

While modulation plays a key role in the spectral efficiency you can expect, other aspects of a wireless design influence it as well. For example, the use of forward error correction (FEC) techniques can greatly improve the BER. Such coding methods add extra bits so errors can be detected and corrected. These extra coding bits add overhead to the signal, reducing the net bit rate of the data, but that's usually an acceptable tradeoff for the single-digit dB improvement in CNR. Such coding gain is common to almost all wireless systems today.

Digital compression is another useful technique. The digital data to be sent is subjected to a compression algorithm that greatly reduces the amount of information. This allows digital signals to be reduced in content so they can be transmitted as shorter, slower data streams. For example, voice signals are compressed for digital cell phones and voice-over-Internet-protocol (VoIP) phones. Music is compressed in MP3 or AAC files for faster transmission and less storage. Video is compressed so high-resolution images can be transmitted faster or in bandwidth-limited systems.

Another factor affecting spectral efficiency is the use of multiple-input multiple-output (MIMO) technology, which employs multiple antennas and transceivers to transmit two or more bit streams. A single high-rate stream is divided into two parallel streams and transmitted in the same bandwidth simultaneously. By coding the streams and exploiting their unique path characteristics, the receiver can identify and demodulate each stream and reassemble them into the original stream. MIMO, therefore, improves data rate, noise performance, and spectral efficiency. Newer wireless LAN (WLAN) standards like 802.11n and 802.11ac/ad and cellular standards like LTE and WiMAX use MIMO. For more, see "How MIMO Works" at http://electronicdesign.com/article/communications/how-mimo-works12998.aspx.
IMPLEMENTING MODULATION AND DEMODULATION

In the past, unique circuits implemented modulation and demodulation. Today, most modern radios are software-defined radios (SDRs) in which functions such as modulation and demodulation are handled in software. DSP algorithms manage the job that was previously assigned to modulator and demodulator circuits.

The modulation process begins with the data to be transmitted being fed to a DSP device, which generates the two digital outputs needed to define the amplitude and phase information required at the receiver to recover the data. The DSP produces two baseband streams that are sent to digital-to-analog converters (DACs), which produce their analog equivalents.

These modulation signals feed the mixers along with the carrier. There is a 90° shift between the carrier signals applied to the mixers. The resulting quadrature output signals from the mixers are summed to produce the signal to be transmitted. If the carrier signal is at the final transmission frequency, the composite signal is ready to be amplified and sent to the antenna. This is called direct conversion. Alternatively, the carrier signal may be at a lower intermediate frequency (IF); the IF signal is then upconverted to the final carrier frequency by another mixer before being applied to the transmitter PA.

At the receiver, the signal from the antenna is amplified and downconverted to IF or directly to the original baseband signals. The amplified signal from the antenna is applied to mixers along with the carrier signal. Again, there is a 90° shift between the carrier signals applied to the mixers. The mixers produce the original baseband analog signals, which are then digitized in a pair of analog-to-digital converters (ADCs) and sent to the DSP circuitry, where demodulation algorithms recover the original digital data.

There are three important points to consider. First, the modulation and demodulation processes use two signals in quadrature with one another. The DSP calculations call for two quadrature signals if the phase and amplitude are to be preserved and captured during modulation or demodulation.

Second, the DSP circuitry may be a conventional programmable DSP chip, or the algorithm may be implemented in fixed digital logic. Fixed logic circuits are smaller and faster, and they're preferred for their low latency in the modulation or demodulation process.

Third, the PA in the transmitter needs to be a linear amplifier if the modulation is QPSK or QAM in order to faithfully reproduce the amplitude and phase information. For ASK, FSK, and BPSK, a more efficient nonlinear amplifier may be used.
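The quadrature mixing described above can be sketched in a few lines of NumPy; the carrier and sample-rate values are illustrative:

```python
import numpy as np

fs = 1_000_000             # sample rate, Hz (illustrative)
fc = 100_000               # carrier frequency, Hz (illustrative)
t = np.arange(0, 1e-3, 1 / fs)

# Baseband I and Q streams from the DSP (here: one fixed QPSK symbol).
i_bb = np.full_like(t, 1 / np.sqrt(2))
q_bb = np.full_like(t, -1 / np.sqrt(2))

# Quadrature upconversion: two mixers driven by carriers 90 degrees apart,
# then summed into the composite transmit signal.
tx = i_bb * np.cos(2 * np.pi * fc * t) - q_bb * np.sin(2 * np.pi * fc * t)

# Receive side: mix with the same two carriers; the mean acts as a crude
# lowpass filter that keeps only the DC (baseband) term.
i_rx = 2 * np.mean(tx * np.cos(2 * np.pi * fc * t))
q_rx = -2 * np.mean(tx * np.sin(2 * np.pi * fc * t))
print(round(i_rx, 3), round(q_rx, 3))   # recovers ~(0.707, -0.707)
```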
THE PURSUIT OF GREATER SPECTRAL EFFICIENCY

With spectrum being a finite entity, it is always in short supply. The Federal Communications Commission (FCC) and other government bodies have assigned most of the electromagnetic frequency spectrum over the years, and most of it is actively used.
7. This is a comparison of several popular modulation methods (BPSK, QPSK, 8PSK, 8QAM, 16QAM, and 64QAM) and their spectral efficiency expressed in terms of BER versus CNR. Note that for a given BER, a greater CNR is needed for the higher QAM levels.
Shortages now exist in the cellular and
land mobile radio sectors, inhibiting the
expansion of services such as high data
speeds as well as the addition of new subscribers. One approach to the problem
is to improve the efficiency of usage by
squeezing more users into the same or
less spectrum and achieving higher data
rates. Improved modulation and access
methods can help.
One of the most crowded areas of spectrum is the land mobile radio (LMR) and
private mobile radio (PMR) spectrum
used by the federal and state governments
and local public safety agencies like fire
and police departments. Currently they’re
assigned spectrum by FCC license in the
150- to 174-MHz VHF spectrum and the
421- to 512-MHz UHF spectrum.
Most radio systems and handsets use
FM analog modulation that occupies a
25-kHz channel. Recently the FCC has
required all such radios to switch over
to 12.5-kHz channels. This conversion,
known as narrowbanding, doubles the number of available channels.
Narrowbanding is expected to improve
a radio’s ability to get access to a channel. It also means that more radios can
be added to the system. This conversion
must take place before January 1, 2013.
Otherwise, an agency or business could
lose its license or be fined. This switchover will be expensive as new radio systems and handsets are required.
In the future, the FCC is expected to mandate a further change from the 12.5-kHz channels to 6.25-kHz channels, again doubling capacity without increasing the amount of spectrum assigned. No date for that change has been set.
The new equipment can use either analog or digital modulation. It is possible
to put standard analog FM in a 12.5-kHz
channel by adjusting the modulation
index and using other bandwidth-narrowing techniques. However, analog FM
in a 6.25-kHz channel is unworkable, so
a digital technique must be used.
Digital methods digitize the voice signal and use compression techniques to
produce a very low-rate serial digital signal that can be modulated into a narrow
band. Such digital modulation techniques
are expected to meet the narrowbanding
goal and provide some additional performance advantages.
New modulation techniques and protocols—including P25, TETRA, DMR,
dPMR, and NXDN—have been developed to meet this need. All of these new
methods must meet the requirements of the FCC's Part 90 regulations and/or European Telecommunications Standards Institute (ETSI) standards such as TS 102 490 and TS 102 658 for LMR.
The most popular digital LMR technology, P25, is already in wide use in the U.S. with 12.5-kHz channels. Its frequency-division multiple access (FDMA) method divides the assigned spectrum into 6.25-kHz or 12.5-kHz channels.

Phase 1 of the P25 project uses a four-symbol FSK (4FSK) modulation. Standard FSK, covered earlier, uses two frequencies or "tones" to achieve 1 bit/s/Hz. However, 4FSK is a variant that uses four frequencies to provide 2-bits/s/Hz efficiency. With this scheme, the standard achieves a 9600-bit/s data rate in a 12.5-kHz channel. With 4FSK, the carrier frequency is shifted by ±1.8 kHz or ±600 Hz to achieve the four symbols.
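A hedged sketch of the 4FSK idea in Python (the dibit-to-deviation table below is illustrative, built from the ±1.8-kHz/±600-Hz shifts mentioned above, and is not necessarily P25's official assignment):

```python
# Four deviations give four symbols, so each symbol carries 2 bits.
DEVIATION_HZ = {
    (0, 1): +1800,
    (0, 0): +600,
    (1, 0): -600,
    (1, 1): -1800,
}

def fsk4_deviations(bits):
    """Map a bit stream, two bits at a time, to carrier frequency shifts."""
    return [DEVIATION_HZ[pair] for pair in zip(bits[0::2], bits[1::2])]

# 4800 symbols/s x 2 bits/symbol = the 9600-bit/s rate cited above.
print(fsk4_deviations([0, 1, 1, 1, 0, 0]))   # [+1800, -1800, +600]
```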
In Phase 2, a compatible QPSK modulation scheme is used to achieve a similar data rate in a 6.25-kHz channel. The phase is shifted either ±45° or ±135° to get the four symbols. A unique demodulator has been developed that can detect either the 4FSK or the QPSK signal to recover the digital voice. Only different modulators on the transmit end are needed to make the transition from Phase 1 to Phase 2.
The most widespread digital LMR
technology outside of the U.S. is TETRA, or Terrestrial Trunked Radio. This
ETSI standard is universally used in
Europe as well as in Africa, Asia, and
Latin America. Its time division multiple
access (TDMA) approach multiplexes
four digital voice or data signals into a
25-kHz channel.
A single channel supports a digital stream of four time slots, one for each subscriber's data. This is
equivalent to four independent signals in
adjacent 6.25-kHz channels. The modulation is π/4-DQPSK, and the data rate is
7.2 kbits/s per time slot.
Another ETSI standard, digital mobile radio (DMR), uses a 4FSK modulation scheme in a 12.5-kHz channel. It can achieve a 6.25-kHz channel equivalent in a 12.5-kHz channel by using two-slot TDMA. The voice is digitally coded with error correction, and the basic rate is 3.6 kbits/s. The data rate in the 12.5-kHz band is 9600 bits/s.
A similar technology is dPMR, the digital private mobile radio standard. This ETSI standard also uses a 4FSK modulation scheme, but the access is FDMA in 6.25-kHz channels. The voice coding rate is also 3.6 kbits/s with error correction.
LMR manufacturers Icom and Kenwood have developed NXDN, another
standard for LMR. It is designed to operate in either 12.5- or 6.25-kHz channels
using digital voice compression and a
four-symbol FSK system. A channel may
be selected to carry voice or data.
The basic data rate is 4800 bits/s. The
access method is FDMA. NXDN and
dPMR are similar, as they both use 4FSK
and FDMA in 6.25-kHz channels. The
two methods are not compatible, though,
as the data protocols and other features
are not the same.
Because all of these digital techniques are similar and operate in standard frequency ranges, Freescale Semiconductor was able to make a single-chip digital radio that includes the RF transceiver plus an ARM9 processor that can be programmed to handle any of the digital standards. The MC13260 system-on-a-chip (SoC) can form the basis of a handset radio for one, if not multiple, protocols. For more, see "Chip Makes Two-Way Radio Easy" at http://electronicdesign.com/article/communications/Chip-Makes-Two-Way-Radio-Easy.aspx.
Another example of modulation techniques improving spectral efficiency and
increasing data throughput in a given
channel is a new technique from NovelSat called NS3 modulation. Satellites are
positioned in an orbit around the equator
about 22,300 miles from earth. This is
called the geostationary orbit, and satellites in it rotate in synchronization with
the earth so they appear fixed in place,
making them a good signal relay platform from one place to another on earth.
Satellites carry several transponders
that pick up the weak uplink signal from
earth and retransmit it on a different frequency. These transponders are linear
and have a fixed bandwidth, typically 36
MHz. Some of the newer satellites have
72-MHz channel transponders. With a
fixed bandwidth, the data rate is somewhat fixed as determined by the modulation scheme and access methods.
The question is how one deals with the need to increase the data rate of a remote satellite as required by the ever-increasing demand for more traffic capacity.
The answer lies in simply creating and
implementing a more spectrally efficient
modulation method. That’s what NovelSat did. Its NS3 modulation method
increases bandwidth capacity up to 78%.
That level of improvement comes from
a revised version of APSK modulation
covered earlier. One commonly used satellite transmission standard, DVB-S2, employs a single carrier (typically L-band, 950 to 1750 MHz) that can use QPSK, 8PSK,
16APSK, and 32APSK modulation with
different forward error correction (FEC)
schemes. The most common application
is video transmission.
NS3 improves on DVB-S2 by offering 64APSK with multiple amplitude and phase symbols to improve efficiency. Also included is low-density parity-check (LDPC) coding. This combination provides a maximum data rate of 358 Mbits/s in a 72-MHz transponder.
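That figure corresponds to roughly 5 bits/s/Hz, as a quick check shows:

```python
print(358e6 / 72e6)   # ~4.97 bits/s/Hz for NS3 in a 72-MHz transponder
```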
Because the modulation is APSK, the
TWT PAs don’t have to be backed off to
preserve perfect linearity. As a result, they
can operate at a higher power level and
achieve the higher data rate with a lower
CNR than DVB-S2. NovelSat offers its
NS1000 modulator and NS2000 demodulator units to upgrade satellite systems
to NS3. In most applications, NS3 provides a data rate boost over DVB-S2 for
a given CNR.
Acknowledgment
Special thanks to marketing director Debbie Greenstreet and technical marketing manager Zhihong Lin at Texas Instruments, as well as David Furstenberg, chairman of NovelSat, for their help with this article.
MORE FROM LOU FRENZEL
See more communications coverage from Lou at http://electronicdesign.com/author/1843/LouisEFrenzel.
FACTOR PFC INTO YOUR POWER-SUPPLY DESIGN

Sam Davis | Contributing Editor
[email protected]

Stricter guidelines imposed by version 3 of the IEC standard for harmonic current emissions push designers to embrace power-factor-correction methodologies.
Before the latest IEC61000-3-2 standard took effect in 2005, most power supplies for PCs, monitors, and TVs generated excessive line harmonics when operating from single-phase, 110- to 120-V, 60-Hz ac. Spurred on by this newer and stricter IEC standard, power-supply manufacturers aim to minimize power-line harmonics by adding power factor correction (PFC).

To understand the impact of IEC61000-3-2, it's best to first look at the ideal situation, which places a load resistor (R) directly across the power line (Fig. 1). Here, the sinusoidal line current, IAC, is directly proportional to and in phase with the line voltage, VAC. Therefore:

IAC(t) = VAC(t)/R     (1)

This means that for the most efficient and distortion-free power-line operation, all loads should behave as an effective resistance (R), whereby the power used and delivered is the product of the RMS line voltage and line current.
1. With a resistive load on the power line (a), line current is proportional to and in phase with the line voltage (b).

2. A diode bridge and capacitor across the power line results in a nonlinear load.
However, loads for many electronic systems require an ac-to-dc conversion. In this case, the load on the power line from a typical power supply consists of a diode bridge driving a capacitor (Fig. 2). It's a nonlinear load for the power line because two diodes of the bridge rectifier lie in the direct power path for either the positive or negative half-cycle of the input ac line voltage. This nonlinear load draws line current only during the peak of the sinusoidal line voltage, resulting in the "peaky" input line current that causes line harmonics (Fig. 3).

A nonlinear load causes harmonics comparable in magnitude to the fundamental harmonic current at the line frequency. Figure 4 shows the magnitude of the higher-order harmonic currents normalized with respect to the magnitude of the fundamental harmonic at the line frequency. However, only the harmonic current at the same frequency as the line frequency and in phase with the line voltage (in this case, the fundamental harmonic at the line frequency), as in Figure 1, contributes to the average power delivered to the load. These harmonic currents can affect the operation of other equipment on the same utility line.
The magnitude of line harmonics depends on a power supply's power factor, which varies from 0 to 1. A low power-factor value causes higher harmonics, while a high power-factor value produces lower harmonics. Power factor (PF) is defined as:

PF = P/(VRMS × IRMS)     (2)

where P = real power in watts; IRMS = RMS line current; VRMS = RMS line voltage; and VRMS × IRMS = apparent power in volt-amperes (VA). PF also equals the cosine of the phase angle (θ) between line current and voltage; in that regard, Equation 2 can be rewritten as:

P = (IRMS × VRMS)cosθ     (3)

The value of cosθ is a number between 0 and 1. If θ = 0°, cosθ = 1 and P = IRMS × VRMS, which is the same as for a resistive load. When the PF is 1, the load consumes all of the energy supplied by the source. If θ = 90°, then cosθ = 0; therefore, the load receives zero power. The generator providing the power must still deliver IRMS × VRMS volt-amperes, even though no power is used for useful work.
Thus, for the diode bridge-capacitor case in Figure 2, the only variable left in the PF definition of Equation 2 is the line current IRMS, since the line voltage (VRMS) is fixed by the power-line generators at 120 V. The higher the IRMS the power line draws for a given average power delivered to the load, the lower the power factor (PF).

The ac-dc converter in Figure 2, which operates from a 120-V ac line and delivers 600 W to the load while drawing 10 A of line current, has a PF of 0.5. However, Figure 1's resistive load with a PF of 1, which draws 600 W from the 120-V ac line, draws only 5 A from the line.
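The arithmetic behind that comparison, as a quick Python check:

```python
def power_factor(real_power_w, v_rms, i_rms):
    """PF = P / (Vrms * Irms), the ratio of real to apparent power."""
    return real_power_w / (v_rms * i_rms)

print(power_factor(600, 120, 10))   # diode bridge + capacitor: PF = 0.5
print(power_factor(600, 120, 5))    # purely resistive load:    PF = 1.0
```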
The electric utility suffers from low-PF loads because it must provide higher generating capability to support the demand for increased line current due to poor load PF. Nonetheless, it charges the user only for the delivery of average power in watts, not for the generation of volt-amperes. This difference between volt-amperes and watts either appears as heat or is reflected back onto the ac power line. The most common means of correcting this condition is to employ power factor correction.

POWER-FACTOR CORRECTION

The IEC61000-3-2 standard defines the maximum harmonic current allowed for a given power level. The initial versions of the standard in 1995 and 2001 were superseded by the 2005 Edition 3, which imposed stricter requirements on power-line harmonic currents for Class D equipment (PCs, monitors, and TVs) consuming between 75 and 600 W at up to 16 A per phase. To meet those requirements, designers must employ active power-factor correction (PFC) in Class D power supplies.
Many PFC circuits employ a boost converter. One limitation of the conventional boost PFC converter is that it can operate only from the rectified ac line, which involves two-stage power processing (Fig. 5).

3. Line current is "peaky" and out of phase with the diode bridge-capacitor load's line voltage.
4. "Peaky" line current generates current harmonics comparable in magnitude to the fundamental harmonic current at the line frequency (harmonic amplitudes are normalized to the fundamental).
5. Two-stage power processing is required in this simplified conventional PFC boost converter (the cascaded stage efficiencies multiply: 97% × 97% ≈ 94%).
Waveforms generated by the converter better illustrate this problem (Fig. 6). In addition, there's no simple and effective way to introduce isolation in a conventional boost converter.

Using a full-bridge extension of the boost converter, which is then controlled as a PFC converter, is one way to introduce isolation (Fig. 7). However, this adds the complexity of four transistors on the primary side and four diode rectifiers on the secondary, both operating at the switching frequency of, say, 100 kHz. Plus, four more diodes are in the input bridge rectifier operating at the line frequency of 50/60 Hz.

Besides the low-frequency sinusoidal current, the line current will have a superimposed input-inductor ripple current at the high switching frequency, which needs to be filtered out by an additional high-frequency filter on the ac line. The presence of 12 switches operating in the hard-switching mode results in high conduction and switching losses. The best efficiency reported for this two-stage approach and its supplementary switching devices is 87%. Such a method also suffers from a startup problem due to the step-up dc conversion gain: it needs additional circuitry to precharge the output capacitor so the converter can start up.

To achieve 1 kW or higher power, designers often employ a three-stage approach (Fig. 8). Here, a standard boost PFC converter and an isolated step-down converter follow the input's bridge rectifier. This requires a total of 14 switches. At least six of those switches are high voltage, further decreasing efficiency and increasing cost. Still, with the highest efficiency based on the best present switching devices reaching about 90%, it's better than the two-stage approach.

For medium and low power, there's an alternative approach that reduces the number of switches by using a forward converter for the isolation stage (Fig. 9). Before going this route, one must be aware that although there are now 10 switches, the four switching devices in the forward converter impose greater voltage stresses on both the primary- and secondary-side switches than the full-bridge solution. In addition, the full-bridge solution requires four magnetic components.

6. Shown are the voltage and current waveforms from the conventional PFC boost converter.

7. A full-bridge extension of the boost converter, controlled as a PFC converter, provides isolation.

BRIDGELESS PFC CONVERTER

Breaking new ground in this arena, Dr. Slobodan Cuk, president of Teslaco, developed a bridgeless PFC converter (patent pending) that operates directly from the ac line. It's claimed to be the first true single-stage bridgeless ac-dc PFC converter.

To accomplish this feat, Cuk employs a new switching power-conversion method termed "hybrid switching." It uses a converter topology consisting of only three switches: one controllable switch (S) and two passive current-rectifier switches (CR1 and CR2) (Fig. 10). The two rectifiers turn on and off in response to the state of the main switch (S) for either positive or negative polarity of the input ac voltage. The topology consists of an inductor in series with the input, a floating energy-transferring capacitor that acts as a resonant capacitor for part of the switching cycle, and a resonant inductor.

Conventional converters based on PWM square-wave switching use inductors and capacitors, so they require complementary switch pairs: when one switch is on, its complementary switch is off, and vice versa. As a result, only an even number of switches is possible, compared with the odd number (three) in the new hybrid-switching PFC converter.
8. Power supplies handling at least 1 kW typically employ a three-stage PFC converter.
9. This PFC circuit uses an isolated forward converter, a setup usually reserved for medium- and low-power situations.

10. This bridgeless PFC uses a hybrid-switching method that employs a three-switch converter topology: one controllable switch (S) and two passive current-rectifier switches (CR1 and CR2).

In this setup, no such complementary switches exist. One active switch, S, solely controls both diodes, whose roles change automatically according to the polarity of the ac input voltage. For the positive polarity of the ac input voltage, CR1 conducts during the on-time interval of switch S; for the negative polarity, CR1 conducts during the off-time interval of switch S. CR2 also responds automatically to the state of switch S and the input ac-voltage polarity. For positive polarity, it conducts during the off-time interval of switch S; for negative polarity, it conducts during the on-time interval of switch S.

Thus, the three switches operate at all times for both positive and negative half-cycles of the input ac line voltage. Hence, this true bridgeless PFC converter operates without the full-bridge rectifier because the converter topology actually performs the ac line rectification. The end result is the same dc output voltage for either polarity of the input ac line voltage. Eliminating the full-bridge rectifier directly eliminates its losses, especially for an 85-V low line.

The active switch S on the primary side is modulated and operated at the switching frequency, which is three orders of magnitude higher than the line frequency (e.g., a 50-kHz switching frequency compared to a 50/60-Hz ac line frequency). The duty ratio (D) is defined with respect to the on-time of the controlling switch S, and all steady-state quantities, such as the dc conversion ratios and the dc current of inductor L, are expressed in terms of D.

The full-wave input line voltage and input line current are sensed and sent as inputs to the bridgeless PFC IC controller. In turn, the controller modulates switch S on the primary side to force the input line current to be proportional to the input line voltage, providing the desired unity power factor.

This PFC converter's truly remarkable property is that a galvanically isolated extension retains the same simplicity as the three-switch converter in Figure 10. Basically, the resonant capacitor splits into two capacitors in series, and the isolation transformer is inserted at the point of their split.1,2
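The rectifier steering described above is a simple truth table; a sketch in Python (illustrative of the text, not of any controller firmware):

```python
def conducting_rectifier(ac_positive: bool, s_on: bool) -> str:
    """Which passive rectifier conducts, per the hybrid-switching rules:
    positive half-cycle: CR1 conducts while S is on, CR2 while S is off;
    negative half-cycle: the roles swap automatically."""
    if ac_positive:
        return "CR1" if s_on else "CR2"
    return "CR2" if s_on else "CR1"

for pol in (True, False):
    for s in (True, False):
        print(f"ac_positive={pol}, S_on={s} -> {conducting_rectifier(pol, s)}")
```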
DIGITALLY CONTROLLED PFC

The availability of low-cost, high-performance digital controllers intended for power supplies has led to their use in PFC designs. Digital controllers provide programmable configuration, nonlinear control, and low part counts, as well as the ability to implement complex functions that are usually difficult with an analog approach.

Most digital power controllers, such as Texas Instruments' UCD3020,3 provide integrated power-control peripherals and a power-management core, including digital loop compensators, fast analog-to-digital converters (ADCs), high-resolution digital pulse-width modulators (DPWMs) with built-in dead time, and low-power-consumption microcontrollers. They support complex, high-performance power-supply designs, such as a bridgeless PFC.
For example, a bridgeless PFC can incorporate two dc-dc boost circuits: L1-D1-S1 and L2-D2-S2 (Fig. 11). D3 and D4 are slow-recovery diodes. Separately sensing the line and neutral voltages referenced to the internal power ground enables the input ac-voltage measurement.

By comparing the sensed line and neutral signals, firmware can tell whether it's a positive or negative half-cycle. During a positive half-cycle, the first dc-dc boost circuit (L1-S1-D1) is active and the boost current returns to ac neutral through D4. During a negative half-cycle, L2-S2-D2 is active and the boost current returns to the ac line through D3.

11. A digitally controlled bridgeless PFC consists of two phase-boost circuits, but only one phase is active at a time.
Compared with conventional single-phase PFCs using the same power devices, a bridgeless PFC and a single-phase PFC should have the same switching losses. However, a bridgeless PFC's current passes through only one slow diode (D4 for the positive half-cycle and D3 for the negative half-cycle) instead of two at any time. Thus, the efficiency improvement relies on the difference in conduction loss between one diode and two diodes.
Bridgeless PFC efficiency also can be improved by turning the inactive switch fully on. For example, during a positive cycle, S2 can be fully turned on while S1 is controlled by the PWM signal. Since the voltage drop across MOSFET S2 may be lower than that across D4 when the flowing current is below a certain value, the return current partially or totally flows through L1-D1-RL-S2-L2 and then back to the ac source. This decreases conduction loss and improves circuit efficiency, especially at light loads. Similarly, during a negative cycle, S1 is turned fully on while S2 is switching.

12. Analog Devices' ADP1048 digital PFC configured as a bridgeless PFC.
With the same input ac voltage and dc output voltage, the output current is proportional to the voltage-loop output. Armed with this knowledge, the frequency and output voltage can be adjusted accordingly. Firmware implements the voltage loop in digital controllers. Because the output is already known, it's easy to implement this feature, and it's less costly than an analog approach.
MORE DIGITAL CONTROL

Analog Devices recently introduced the ADP1047 and ADP1048 digital PFC controllers,4 which also provide input power metering and inrush-current control. The ADP1047 is intended for single-phase PFC applications, while the ADP1048 targets interleaved and bridgeless PFC applications.

The digital PFC function is based on a conventional boost circuit to provide optimum harmonic correction and power factor for ac-dc systems. All signals are converted into the digital domain to maximize flexibility; key parameters can be reported and adjusted via a PMBus interface.

Overall, the ADP1047 and ADP1048 are configured to assist designers in optimizing system performance and maximizing efficiency across the load range. The two ICs accurately measure RMS input voltage, current, and power. That data can then be reported to the power supply's microcontroller via the PMBus.
The ADP1048’s bridgeless boost configuration allows
removal of conduction losses caused by the PFC converter’s
input bridge (Fig. 12). In this configuration, the two power
MOSFETs must be driven separately to achieve the highest
efficiency. Signals from the ADP1048 make this possible. The
IBAL pin detects the ac line phase and zero crossings. The
maximum rating on the IBAL pin is VDD + 0.3 V, so it needs to
be protected with a suitable clamp circuit.
During the positive ac line phase, only one boost stage is
effectively working. The second stage is passive; the current
flows in Q2 from the source to the drain. Turning the Q2 FET
fully on during this phase minimizes conduction losses in Q2.
When the ac line phase becomes negative, the roles of Q1
and Q2 are reversed, and Q2 switches actively while Q1 is
always on. The phase information is detected from the ac
line via the IBAL pin. During the soft start phase, both FETs
switch as a precautionary measure. The same situation happens when phase information on the IBAL pin becomes corrupted or inaccurate.
REFERENCES

1. Cuk, Slobodan, "True Bridgeless PFC Converter Achieves Over 98% Efficiency, 0.999 Power Factor," Power Electronics Technology, July 2010.

2. Cuk, Slobodan, "True Bridgeless PFC Converter Achieves Over 98% Efficiency, 0.999 Power Factor, Part 2," Power Electronics Technology, August 2010.

3. Bosheng Sun and Zhong Ye, "Digital Control Improves Bridgeless PFC Performance," Power Electronics Technology, March 2011.

4. Analog Devices, "ADP1047/ADP1048: Digital Power Factor Controller with Accurate AC Power Metering," data sheet, September 2011.
THE FUNDAMENTALS OF FLASH MEMORY STORAGE

Bill Wong | Embedded/Systems/Software Editor
[email protected]

There's more to flash memory than NAND and NOR, as new technologies drastically improve storage capacities while reducing real estate and power requirements.
Flash memory is ubiquitous, especially in mobile devices. Available in a wide range of form factors, it continues to push hard-disk drives from more and more platforms as its costs go down and its capacities and operational lifetimes go up (Fig. 1).

NAND and NOR flash memory dominate the solid-state nonvolatile memory (NVM) arena, but they aren't the only technologies available. Form factors that don't expose flash memory explicitly are possible targets for replacement with non-flash technologies. For example, non-flash products are cropping up in serial storage.
NONVOLATILE SOLID STORAGE
At one end of the spectrum are one-time programmable (OTP) memories. These days, OTP memory is normally used for storing security keys or network IDs. It is implemented using a range of technologies such as fuse, antifuse, and floating gates. It also can be implemented using standard CMOS technologies.

Moving up the NVM scale are a number of multi-time programmable (MTP) memory technologies that can write hundreds or thousands of times (see "Kilopass Delivers OTP And MTP Memory" at electronicdesign.com). MTP memories often are used to implement boot code that rarely changes. Like OTP, MTP is usually implemented using CMOS technologies, allowing it to be included with digital logic.
Floating-gate EEPROM has been commonly used for data storage. Its ability to write a single byte, plus its good endurance and data-retention properties, have made it popular, but flash technologies outclass it in density. EEPROM emulation is often offered as a feature of some flash implementations; it hides flash's block-erase requirements so an individual byte can be written.
Other nonvolatile technologies keep bumping the edges of flash dominance, including magnetoresistive RAM (MRAM), ferroelectric RAM (FRAM), phase-change memory (PCM), and other up-and-coming NVM technologies (see "Magnetic Cores To MRAM: Nonvolatile Tipping Point?" at electronicdesign.com). These technologies have better overall performance figures than NAND and NOR flash in areas such as write speed, voltage requirements, the lack of a page-erase cycle, long-term endurance, data retention, and scalability.

These technologies started out targeting niche markets where their higher costs, at least initially, weren't as much of an issue and their advantages were significant. They're even giving SRAM and DRAM a run for their money. The Texas Instruments 16-bit MSP430FR57xx family, for example, boasts up to 16 kbytes of FRAM used for both data and program storage, where a microcontroller family typically has a mix of SRAM, flash, and EEPROM storage. A single memory approach reduces the number of stock-keeping units (SKUs) and simplifies developers' jobs, since they no longer need to juggle RAM requirements with program storage.

These alternative NVM technologies will be found in more designs in the future. But for now, flash memory is the dominant NVM technology.
FLASH TECHNOLOGIES
Flash memory is divided into NAND and NOR implementations, with a host of variations from different vendors. In general, both employ a floating-gate transistor. The two approaches differ in how the transistors are connected and used, rather than incorporating the transistors as part of digital logic as with an FPGA or custom logic.
NOR flash transistors are connected to ground and a bit line, enabling
individual bits to be accessed. It provides better write endurance than
NAND flash. NOR flash is typically used where code and data may exist.
Microcontrollers with on-chip flash normally incorporate NOR flash.
NAND flash transistors are generally connected in groups to a word line.
This allows a higher density than NOR flash. NAND flash is typically used
for block-oriented data storage. NAND flash can be less reliable than NOR
from a transistor standpoint, so error detection and correction hardware
1. Flash memory comes in a range of form factors, including SecureDigital (a), microSD (b), Sony Memory Stick (c), CompactFlash (d), and mSATA (e). They typically employ NAND flash storage.
or software is part of NAND storage platforms. NAND is typically used for high-capacity data storage.

Flash memory uses an erase-write cycle. The erase essentially sets the memory to all 1s. Writing sets bits to 0, and it's possible to write different data as long as existing 1s are changed to 0s. Flash file systems can take advantage of this feature because it permits operations to be performed without a long and electrically expensive erase cycle. NAND flash always works at the block level, while NOR normally has finer-grain access.
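The erase-to-1s, write-to-0s rule above can be captured in a couple of lines; a toy model in Python (byte-wide for clarity):

```python
ERASED = 0xFF   # an erased flash byte reads as all 1s

def flash_write(current: int, new: int) -> int:
    """Writing can only clear bits (1 -> 0); it can never set them.
    The result is the bitwise AND of what's stored and what's written."""
    return current & new

b = ERASED
b = flash_write(b, 0b1100_1111)    # OK: clears two bits
b = flash_write(b, 0b1100_0011)    # OK: only clears more bits
print(bin(b))                       # 0b11000011
# Turning any 0 back into a 1 requires erasing the whole block.
```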
Flash memory started with single-level cell (SLC) data encoding, where each storage transistor encoded a 1 or a 0. Multi-level cell (MLC) flash normally refers to the ability to store 2 bits of information per cell instead of one. Everything is analog at the transistor level, but it's simpler to build a two-level detection circuit than the four-level detection circuit required for MLC flash. Likewise, programming an MLC cell requires the ability to generate four distinct levels. Triple-level cell (TLC) flash takes this a step further, packing 3 bits, or eight levels, into a single storage cell, as in Micron's 3-bit, 34-nm NAND flash memory chip (Fig. 2).

2. Micron's triple-level cell (TLC) flash memory stores 3 bits of data in each transistor.
The obvious advantage to MLC or TLC storage is higher raw flash. These systems are device drivers that have access to
densities. The tradeoff is usually in performance, especially in the flash chip interface. The driver handles all the flash chores
like error detection, wear leveling, and bad block remapping
terms of endurance.
The typical SLC NAND flash has a write endurance on the transparently with respect to the operating system and applicaorder of 100k cycles, while SLC NOR flash is on the order of tions. It may utilize part of the flash storage for internal tables,
1M cycles. MLC flash cuts this by a factor of 10, and TLC cuts and it may account for flash erase and write characteristics.
The drive may provide some level of file and directory manit even further. Technology continues to improve these numbers. SLC has better write endurance, while MLC and TLC agement, or it may simply present a logical, low-level block
device. There are advantages to both approaches, and the
will be more cost efficient.
Flash system lifetime depends on a number of factors, choice depends upon the application environment.
A block level interface is normally provided if a hardware
including how it’s managed. Unmanaged flash storage has a
problem if one area wears out, which occurs when a write fails approach is taken. A hardware implementation can also incorto store the proper information. Error detection systems can porate a more robust error correction and mapping system
help determine when this happens, but once it does the device because of hardware acceleration that would normally be
is usually worthless. Worse, its failure could cause significant unavailable to a software implementation. Initially, there were
problems. This is why devices such as microcontrollers with many flash controller companies, but they have been snapped
built-in flash storage that do not track wear rely on NOR flash up by flash memory companies looking to provide a more integrated solution.
with its higher write endurance characteristics.
Placing the flash memory behind a hardware controller
Several techniques can be used to improve the overall system lifetime, such as wear leveling. This approach requires does a number of things. For example, it can simplify the
the ability to remap the location of information. It works best device interface, provide more advanced features such as powwith a block-oriented device, although it could be applied with er reduction, including various sleep modes, and implement
a block size of a single word. There is overhead to implement hybrid memory systems.
Hybrid systems mix memory types in the same package. This
wear leveling so large block sizes will be more efficient.
Wear leveling distributes writes across the storage device. approach allows block devices like NAND flash to be addressed
The system’s lifetime then can be viewed as the system’s total at a byte or word level by adding RAM to the mix. Samsung’s
write capacity rather than the maximum for a single block. Wear OneNAND mixes SRAM with its NAND flash controller (see
leveling requires the ability to track block write usage and to “The Storage Hierarchy Gets More Complex” at electronicderecord and utilize this information. Defects often can reduce a sign.com). This allows the system to be used as program storage
with blocks being cached in the SRAM as required.
block’s lifetime to less than its recommended write lifetime.
RAM is also faster than flash, especially for writes. It doesn’t suffer from flash’s
write endurance limitations either. And,
RAM isn’t restricted to block access. A
hybrid system can provide many of the
advantages of flash as well as those of
RAM when it’s used as a general cached
system. Data is flushed from RAM to
flash as necessary since there is usually
more flash memory than RAM in these
kinds of designs.
Hybrid systems can get even more
complex as demonstrated by Seagate’s
Momentus XT hard drive (see “Seagate
Delivers 2nd Generation Hybrid Hard
Drive” at electronicdesign.com). This
storage system mixes three types of storage: DRAM, SLC flash, and rotating
magnetic storage. It has a SATA interface, so there’s a SATA controller in
addition to controllers for the flash and
hard-disk drive. This is completely transparent to users.
The use of hardware controllers for
flash memory also enables designers to
add other functions such as security and
encryption to the mix. Hardware acceleration benefits these types of features
as well.
Standardizing the flash interface would
definitely make a system designer’s job
easier. The Open NAND Flash Interface
(ONFI) Working Group has been doing
this type of work, releasing the ONFI
3.0 specification in 2011. The spec is
designed to deliver 400 Mtransfers/s
with double data rate (DDR) transfers.
Its Toggle Mode 2.0 optionally employs
differential signaling. ONFI additionally specifies chip-level form factors, but
flash storage covers a very wide range of
form factors.
3. SanDisk's iNAND implements JEDEC's e-MMC interface.
FLASH FORM FACTORS
Form factors for small serial flash
devices vary widely. There are three-pin
devices that support the 1-Wire protocol
as well as a wide range of devices that
support I2C and SPI. Quad SPI (QSPI)
NVM devices increase the number of
bits transferred by a factor of four, and
there are even microcontrollers that can
execute programs directly from QSPI
serial memory devices like NXP’s
LPC1800 family (see “Cortex M3 Can
Run From Quad SPI Flash” at electronicdesign.com).
Storing programs in serial flash is
not uncommon. Most PCs have their
BIOS stored in serial flash memory. The
chip boot loader copies this program
into RAM where it is executed. NXP’s
LPC1800 reads the memory an instruction at a time.
Serial memories were one of the first places where other technologies like
FRAM and MRAM were used. Serial
memories often contain other subsystems such as temperature sensors and
real-time clocks (RTCs). Some RTCs
even utilize the memory for storing timestamp information.
JEDEC e-MMC (embedded multimedia card) form-factor chips like SanDisk’s iNAND use the same serial interface as the removable, seven-pin MMC
form factor (Fig. 3). The advantage for
developers lies in having the same interface for fixed and removable storage.
The seven-pin MMC device fits into
the same slot as nine-pin SD and nine-pin
SDIO devices, so I/O devices can reside
on the card. The SD has the same pinout
as MMC with two extra pins added near
the outside edges. The MMC interface is
essentially SPI with SD being QSPI. The
11-pin miniSD and eight-pin microSD
cards use the same type of interface but
in a smaller package. The transfer rate of
these serial devices is 832 Mbits/s.
Removable flash storage shows up
with USB, SATA, and SAS interfaces
as well. SAS tends to be used only
on drives, while SATA is found on
disk-drive form-factor flash drives as
well as embedded devices like Viking
Technology’s SATA Cube 3 (Fig. 4). The
SATA Cube 3 is a stack of circuit boards
with flash memory and a controller. More
boards mean more storage.
4. Viking Technology’s SATA Cube 3 has a SATA
interface that provides access to flash chips
stacked on multiple circuit boards.
On-board SATA devices also include
standards such as mSATA and Slim
SATA modules. SATA interfaces provide
significantly higher throughput compared to SPI/QSPI used with media like
SD cards. Larger SATA flash storage can
be found in 1.8-, 2.5-, and 3.5-in. hard-drive form factors.
IDE-based Compact Flash storage is
still a common feature on many embedded
motherboards. This has been changing as
microcontrollers have moved from IDE
and PCI to SATA and PCI Express. Still,
Compact Flash is found in many mobile
devices like digital camcorders, although
cameras tend to utilize SD-style cards.
USB flash drives have effectively
replaced CDs, DVDs, and floppy disks.
The first USB 1.x flash drives were tiny
in capacity compared to today’s average
size. These days the top-end platforms
are massive and run USB 3.0 (see “USB
3.0: A Tale Of Two Busses” at electronicdesign.com).
Capacity and speed aren’t the only
things that have been changing with USB
flash drives. Additional functionality,
especially in security, is more common.
For example, Apricorn’s Aegis Secure
Key has a built-in keypad for entering
a security code, preventing key-logging
viruses from capturing the code (Fig. 5).
It works with any operating system.
Most other security-related solutions
use a device driver or application that runs
on the host and uses the host for entering
any decode key. The Aegis Secure Key
has an admin and user password. These
passwords are used to unlock the key that
encrypts and decrypts data stored in the
flash memory.
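The general pattern behind such devices — stretching a short entered code into the key that actually encrypts the data — can be sketched in a few lines. This is a generic key-derivation illustration, not Apricorn's implementation; the code, salt handling, and iteration count are all assumptions:

    import hashlib, os

    # Derive an AES-sized key from a short keypad code. The salt would be
    # stored with the drive's metadata; 200,000 iterations is an assumed
    # work factor to slow brute-force guessing of the short code.
    code = b"7391"                  # user-entered PIN (example value)
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", code, salt, 200_000, dklen=32)
    print(key.hex())                # 256-bit data-encryption key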
USB flash drives are normally used
for portability, but they have also found
a home inside embedded devices. Many
motherboards have an internal Type A connector, though most provide Type A connectors only on the back panel.
Some devices like Eurotech’s Helios
Edge Controller only have USB interfaces for flash storage and peripheral
interfaces (see “Hands-On Eval Of Eurotech’s Helios Edge Controller” at electronicdesign.com).
USB headers are also common on
motherboards. They’re used for additional external USB interfaces via cables and
backplane connections. They can also be
used for USB storage devices like those
from Swissbit (Fig. 7). The Swissbit USB
Flash Module plugs into the standard nine-pin USB headers found on most motherboards. A matching mounting hole isn't always present on the motherboard, but where it is, bolting the module down provides a rugged solution.
Modules like mSATA and Swissbit’s USB Flash Module aren’t the only
board-based flash solutions. Flash
memory also can be found in dual-inline
memory module (DIMM) and small-outline DIMM (SODIMM) form factors, but
there’s no standard for flash-only solutions as there is with DRAM.
5. Apricorn's Aegis Secure Key lets users enter the digital key via a keypad rather than the host's keyboard.
On the other hand, several solutions like Viking Technology's ArxCis-NV blend DRAM with flash memory (Fig. 6). The flash memory is
used as a backup to store the contents of the DRAM when power is lost. A supercapacitor can provide sufficient power to perform the copy operation.
6. Viking Technology's ArxCis-NV hybrid blends DDR3 DRAM with flash backup storage in a DDR3 DIMM form factor.
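In outline, that handoff works like a checkpoint: copy DRAM to flash while the supercapacitor holds the module up, then copy it back at the next power-on. The sketch below models just the idea; the event hooks and buffers are simulated stand-ins, not Viking's firmware:

    # Toy model of an NV-DIMM backup/restore cycle. Everything here is
    # simulated; a real module does the copy in hardware.
    dram = bytearray(b"working set of a running system")
    flash = bytearray(len(dram))        # nonvolatile backing store

    def on_power_fail():                # supercapacitor powers this copy
        flash[:] = dram

    def on_power_restore():             # contents survive the outage
        dram[:] = flash

    on_power_fail()
    dram[:] = bytes(len(dram))          # DRAM loses its charge
    on_power_restore()
    assert dram == bytearray(b"working set of a running system")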
The challenge with using these types of hybrid memory is that the software needs to account for the nonvolatile feature. In the past, computers with magnetic core memory could be turned off and on without reloading the operating system or applications. This can save a significant amount of time and would be very handy for embedded applications.
These days, main memory is normally DRAM. Turn off the system and the contents of this memory are lost, so the default recovery process reboots the system. The boot program stored in flash normally remains constant, unlike these nonvolatile solutions that have stored the prior contents of the DRAM.
Most of these hybrid solutions target enterprise systems, but they can be easily incorporated into embedded applications because they use standard DIMM sockets and look like standard DDR2 or DDR3 DRAM to the system hardware.
FLASHY PCI EXPRESS
Bandwidth is one thing flash memory can use, but many interfaces such as USB and SATA have restrictions that prevent full utilization of the speed of flash memory. PCI Express is one way to get data moving quickly.
The Non-Volatile Memory Host Controller Interface (NVMHCI) Working Group developed and manages NVM Express, which provides an interface to nonvolatile memory that, at this point, essentially means flash storage.
SCSI Express is another standard in the works that will bring flash storage directly to the PCI Express interface (see "Storage Standards Move Towards 12-Gbit/s Speeds" at electronicdesign.com). The difference is that the interface is an SCSI adapter. SAS uses the SCSI command set, so it effectively defines a standard SAS interface. Conventional SAS controllers require device drivers from their respective vendors.
SATA Express from the Serial ATA Organization is a similar standard, except it provides a SATA interface. Like SCSI Express, SATA Express could just as easily deliver hard-disk storage via the interface along with flash storage.
NVM Express and SCSI Express serve the enterprise. Board and drive standards with hot-swap support are in the mix. These platforms may find their way into embedded systems as they become more common. They lend themselves to embedded applications because they provide a high-speed solution that can reside on the same board as the processing and networking hardware.
STANDARDS ORGANIZATIONS
Most of the major flash-related organizations have already been mentioned, such as JEDEC, the ONFI Working Group, and the NVMHCI Working Group. The SD Association is responsible for the SD card family of removable storage. Likewise, the CompactFlash Association handles the CompactFlash standard. T10 handles SCSI and SCSI Express. The Serial ATA Organization handles SATA Express.
7. Swissbit's USB Flash Module plugs into the nine-pin USB headers found on most motherboards.
DON TUITE | ANALOG/POWER EDITOR
[email protected]
DO LEDS HAVE A DARK SIDE?
Today's LEDs for lighting applications promise uncertain lifetimes, flicker migraines, and the dreaded droop. But that doesn't mean they're too good to be true.
LEDs are hot—in terms of market potential, if not actual temperature. As incandescent lamp bans
spread around the world, LED lighting seems to have unlimited potential (see the table). However,
the technology has its flaws. How can we state with authority how long they will really last in service? What’s this “flicker” issue that keeps coming up? And, why are we suddenly reading about
“droop” in The Wall Street Journal and The New York Times?
USEFUL LIFE
How long do LEDs last? If they’re properly installed and given an efficient thermal path to conduct away
the heat they generate, the answer is generally a long time. The engineering question has become how
long it takes for their light output to fall to some fraction of its original value (Fig. 1). Yet that isn’t totally
satisfactory. A procedure that assesses "rated life" in a manner similar to the way the rated life of conventional incandescents and fluorescents is assessed is still evolving.
Last August, the Illuminating Engineering Society of North America (IESNA) released IES TM-21-11:
“Projecting Long Term Lumen Maintenance of LED Light Sources.”1 The document describes how to
take data that was already being measured by an approved process and extrapolate it. The report is available for $40 (or $28 if you belong to IESNA).
However, TM-21 only applies to specific light source components (package, module, array), not an
entire luminaire. A complete luminaire is a complex system with many other components that can affect
lifetime such as the driver, optics, thermal management, and housing. The failure of any one of these
components can mean the end of the luminaire’s useful life, even if the LEDs are still going strong.
Any meaningful projection of lifetime must account for all of these components and not simply focus
on the LEDs.
When incandescent and fluorescent light bulbs are evaluated, a large and statistically significant
sample is operated until 50% have failed. That point, in terms of operating hours, defines the rated
life for those lamps.2 That doesn’t work for LEDs, which typically don’t fail abruptly. Instead, their
light output slowly diminishes over time.
Also, that notion about LEDs lasting “a long time” means that acquiring real application data on
long-term reliability becomes time-challenging. Moreover, the light output and useful life of individual LEDs tend to be influenced mainly by how much current they're driven by and how hot they
get in the luminaire where they’re mounted.
LUMEN MAINTENANCE LIFE AND RATED LIFE
Before explaining how TM-21 is applied, it will be useful to distinguish between maintenance
life, which TM-21 addresses, and rated life, which relies on a procedure that doesn’t have the
same authority as TM-21 yet. Again, rated life is used to assess conventional lamps.
Conceptually, the value of lumen maintenance (Lp) derived from test data by TM-21 describes
the number of hours of operation over which the LED light source will maintain a certain percentage, p, of its initial light output. For example, L70 would be the number of hours until an
LED’s light output had decayed to 70% of what it was when the LED was new.
For the last several years, the industry has used a test procedure described by IESNA as LM-80-08 to measure Lp for LED packages, arrays, or modules driven by auxiliary drivers.3 In
the LM-80 procedure, LEDs are driven with external current sources. Their case temperature
is controlled during operation, with measurements made at room temperature.
In more detail, the devices under test are operated at three case temperatures: 55°C, 85°C,
and one other temperature that’s selected by the manufacturer. Air temperature must be maintained to within ±5°C and case temperature to within ±2°C. Relative humidity must be less
than 65%.
This environment is maintained for a minimum of 6000 hours (roughly 38 weeks). Data
is collected every 1000 hours. The data collected comprises lumen output, changes in chromaticity (color), and any incidents of catastrophic failure (burnouts).
“B” specs add a target statistical confidence interval. Thus, B50 indicates that no more
than 50% of a sample of LED devices would be expected to have their light output drop
below a target lumen maintenance level. B10 would mean no more than 10% of the sample
fell below that Lp level within the given time.
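As a quick illustration of the bookkeeping (with invented readings, not measured data), the B statistic is just the fraction of units that have slipped below the target:

    # 25 hypothetical units' normalized light output at one test point.
    readings = [0.94, 0.91, 0.68, 0.88, 0.66, 0.73, 0.69] + [0.85] * 18
    failed = sum(1 for r in readings if r < 0.70)   # units below L70
    share = 100 * failed / len(readings)
    print(f"{failed}/{len(readings)} units below L70 ({share:.0f}%)")
    # 12% of units are below target: consistent with B50, but not B10.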
1. An array of Philips LED bulbs like this successfully completed 18 months of field, lab, and product
testing to meet the rigorous requirements of the U.S.
Department of Energy’s L Prize competition. Ironically,
until last year, the LED industry had no procedure for
extrapolating test data to assign actual lifetime values to
specific products.
LIMITS OF LM-80
LM-80 is a test procedure only. Deliberately, it does
not include a way of getting from those recorded test
results to any values of Lp for the devices under test.
After the industry agreed on LM-80, it still needed
pass/fail criteria, or a way of graphing results so peo-
ple could make sense of the data. No curve-fitting methods were recommended for extrapolating from the data to predict L70 values.
Sample sizes and the values or even the number of drive currents were left up to whoever created any particular test. There was even a problem with determining what kinds of LEDs the data applied to, because there were no criteria for what LED package changes would require new testing.
This was particularly critical because the industry is constantly innovating packaging to improve the heat-flow characteristics of the devices. Because LEDs do not shed heat by radiation, like incandescent lights, the effectiveness of packaging in providing a thermal path from the LED junction to heatsinks and the ambient environment can have significant effects on Lp.
Some of these issues were addressed early. The U.S. Environmental Protection Agency (EPA) introduced some standardization for testing residential and non-residential indoor and outdoor lamps. It required LM-80 testing be conducted by labs accredited by the National Institute of Standards and Technology's (NIST's) National Voluntary Laboratory Accreditation Program (NVLAP).
For each combination of current and external temperature, the EPA required a minimum of 25 samples. To pass, after 6000 hours, the value for LM had to be better than 91.8% for products intended for residential indoor use or better than 94.1% for non-residential and residential outdoor use. That clarified some points, but everybody was waiting for IESNA TM-21.
TM-21 DETAILS
The new document specifies precisely how to extrapolate the LM-80-08 lumen maintenance data (Fig. 2). Here's how it works:
• For each unit in the data set, the measured light output at the start of the test is normalized to a value of "1."
• At each point where light output is measured, the normalized data for all units is averaged. (In other words, the results depict the average behavior of the whole set of units.)
• All data from the start of the test to 1000 hours is discarded.
• If the test stops at 6000 hours, the average lumen maintenance data points from 1000 hours to 6000 hours are fit to a simple exponential extrapolation model using a least-squares curve fit.
• If the test runs for 6000 to 10,000 hours, only the last 5000 hours of data are used for the extrapolation.
• For tests that run more than 10,000 hours, the data points from the last 50% of the total measurement time are used. However, if the last 50% of the total measurement time is not an integer multiple of 1000 hours, take more than 50% until the data comes out to an integral multiple of 1000 hours.
The "times-six" rule is intended to limit the length of lumen maintenance predictions.
2. The extrapolation process described in IES TM-21-11, "Projecting Long Term Lumen Maintenance of LED Light Sources," provides a bridge from LM-80 test results to predictions of lumen maintenance.
If that description is too concise to really understand, don’t
worry. In January, the EPA made available the Energy Star
TM-21 calculator.4 Its availability makes it possible for users
to request LM-80 data sets from LED vendors and interpolate
(for example) from test values at 55°C and 85°C to obtain
lumen depreciation values for 75°C.
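For readers who want to see the arithmetic, here's a minimal sketch of the TM-21-style projection using the exponential model and least-squares fit from the bullet list above. The data set is invented for illustration — real projections should come from actual LM-80 reports or the Energy Star calculator:

    import numpy as np

    # Hypothetical averaged, normalized lumen-maintenance readings taken
    # every 1000 hours from 1000 to 6000 hours (the 0-hour point has
    # already been discarded, per TM-21).
    hours = np.array([1000.0, 2000, 3000, 4000, 5000, 6000])
    lumen = np.array([0.995, 0.987, 0.980, 0.974, 0.968, 0.962])

    # Fit L(t) = B * exp(-alpha * t) by least squares on ln(L).
    slope, intercept = np.polyfit(hours, np.log(lumen), 1)
    alpha, big_b = -slope, np.exp(intercept)

    # L70 is where the fitted curve crosses 70% of initial output.
    l70 = np.log(big_b / 0.7) / alpha
    reportable = min(l70, 6 * hours[-1])    # TM-21 caps the projection
    print(f"fitted L70 = {l70:.0f} hours, reportable = {reportable:.0f}")

Here the raw fit lands near 53,000 hours, but the reportable value is capped at 36,000 hours — six times the 6000-hour test — which is the point of the "times-six" limit.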
RATED LIFE
The best reference for understanding the difference between TM-21 lumen-maintenance life and rated life is an article by Jianzhong Jiao of Osram Opto Semiconductors in LEDs Magazine, "Understanding the Difference between LED Rated Life and Lumen-Maintenance Life."5
To explain the difference, Jiao refers readers to ANSI/IES RP-16, which describes the process for consistently determining the life value for conventional lamp types. In RP-16, rated life is designated Bp and expressed in hours, where p is a percentage. Thus, a B50 of 1000 hours means that 50% of the tested products lasted 1000 hours without failure.
B50 is also known as the products' rated average life. For example, if a product has a B10 rated life of 1000 hours, 10% of the tested products failed within 1000 hours, and it could be compared favorably to a product with a B50 rated life of 1000 hours. "While Bp life is a statistical measure, Lp life is a defined durability measure," Jiao says.
Bp life testing, then, requires a large and statistically meaningful sample size. There is no similar requirement for Lp life testing. The catch, Jiao notes, is that when LM-80 test data is used to make lumen-maintenance projections per TM-21, the sample size will affect the uncertainty of the projection. Consequently, a smaller sample size will lead to shorter projected life to increase the statistical certainty.
With that caveat in mind, the first thing that must be defined to provide a sensible basis for LED rated life estimates is what constitutes a "failure" of an LED that has lost luminance but hasn't burned out.
For example, Jiao says, "failure might be defined as when the light output of an LED reaches 70% or lower of the initial light output (including if the LED's light output is zero). In other words, for a given period of time, if an LED produces insufficient light or no light, the LED is considered at failure."
That would make it possible to combine a new statistical measure with the defined durability measure. Jiao suggests this would be a BpLp value. If an LED light source claimed to have
B50L70 of 30,000 hours, "then 50% of tested samples should have a lumen-maintenance life of 30,000 hours," Jiao says.
To support that, Jiao recommends integrating the statistical failure measurement with lumen-maintenance measurements during the life test. This would require a large enough LED sample size to be statistically meaningful, as well as additional tracking and recording of sample behaviors. A key point is how long the testing would continue.
Instead of stopping at some arbitrary multiple of 1000 hours, "when 50% of the tested samples reached a light output equal to 70% of initial lumens, including the samples that failed to produce light, then B50L70 (in hours) [would be] obtained," Jiao says.
Jiao acknowledges a practical problem with that. It's reasonable to expect B50L70 values to turn out to be on the order of 30,000 hours, so we really need a way to make a projection based on shorter testing periods. Fortunately, LED makers have already figured much of this out. They have taken two approaches.
One approach carries out LM-80 testing on large samples, recording both light-output changes and failures. The data is then fitted into a mathematical model with a statistical-certainty band. By analyzing the lumen-maintenance projection curve along with the associated sample distribution bandwidth, it's possible to project an estimated B50L70 life.
Alternatively, manufacturers have always tested for real failures (the light goes out) separately from official LM-80 testing. Infant mortality is fundamentally a manufacturing-process problem, and process control is a key to profits.
What's needed is a way to combine the data from both types of testing in a way that everybody agrees is fair. Then, using TM-21, the lumen-maintenance projection can be established, and the data collected in the accelerated-failure-modes test could be modeled with a different mathematical expression, with the rated life projected by mathematically combining both models.
That's Jiao's and Osram's recommendation. Before the industry establishes a recommendation for a standard practice, though, LED integrators may need to request more testing and modeling information from the manufacturers in regards to the statistical failures of LED light sources.
FLICKER
The light output of devices driven by ac can flicker at twice the
line frequency, at harmonics of that frequency, and sometimes
at the fundamental of the line frequency. Flicker generally isn’t
observed in incandescent bulbs because of their thermal inertia.
However, flicker can be observed with fluorescent tubes and
with cold cathode fluorescent lamp (CCFL) and LED backlighting of video displays. Medical research associates flicker
with migraines and epilepsy in a segment of the population.
For LEDs, the solution is to clean up the output stages of
driver circuits. Scott Brown, senior vice president of marketing at iWatt, believes upcoming European regulations might
become part of IEC 61000-3-2, the European standard for
power-factor correction (PFC) in ac-dc supplies.
Brown agrees with Matt Reynolds, applications manager for
solid-state lighting at Texas Instruments’ Silicon Valley Analog
Division (the former National Semiconductor), and Suresh
Hariharan, applications director at Maxim Integrated Products—there is something serious behind the issue, though the
problem can (and should) be dealt with at a small cost delta.
Hariharan says flicker comes down to legacy triac dimmers
and the way LED drivers turn their chopped ac cycles into the
pulse-modulated dc that regulates the light output of the device.
A properly designed dimmable driver, all three companies agree,
has three stages: an ac-dc stage and two dc stages, the final one
of which pulse-modulates the current to control light output.
“Proper design” also demands power-factor correction
(PFC) in the ac-dc stage to keep the harmonics of the ac line
frequency off the power lines.6 Yet not all dimmer-compatible
drivers provide PFC, Hariharan says.
Plenty of companies around the world make generic LED
“light bulbs.” The driver electronics all fit into the base of
the bulb, so nobody knows whose electronics are in there. If
a copycat company wants to save a few cents on the bill-of-materials and use a cheaper driver chip, who can tell?
Also, the cheaper drivers might not display any flicker.
That’s because the triac in the dimmer in the wall of the building is often the element where the problem starts. If the triac
switches ON at a different point in the first half-cycle of the
ac waveform than it does in the second half-cycle, a series of
harmonics is produced that (among other things) shows up as
flicker at the ac line frequency.
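A quick numerical experiment makes the mechanism visible. The firing angles below are invented purely for illustration; the point is that unequal half-cycles put energy at the line fundamental:

    import numpy as np

    f_line = 60.0                        # Hz line frequency
    fs = f_line * 400                    # 400 samples per cycle
    t = np.arange(0, 1.0, 1.0 / fs)     # one second of waveform
    v = np.sin(2 * np.pi * f_line * t)

    # Triac fires 60 deg into positive half-cycles but 70 deg into
    # negative half-cycles (assumed asymmetry).
    phase = (2 * np.pi * f_line * t) % (2 * np.pi)
    conduct = np.where(phase < np.pi,
                       phase > np.deg2rad(60),
                       phase > np.pi + np.deg2rad(70))
    light = np.abs(v) * conduct          # crude stand-in for light output

    spectrum = np.abs(np.fft.rfft(light)) / len(light)
    freqs = np.fft.rfftfreq(len(light), 1.0 / fs)
    for f in (60.0, 120.0):
        k = int(np.argmin(np.abs(freqs - f)))
        print(f"{f:5.0f} Hz component: {spectrum[k]:.4f}")

With matched firing angles the 60-Hz term vanishes and only 120 Hz and its harmonics remain; the asymmetry is what moves flicker energy down to the line frequency itself.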
What to do about that depends on the driver design. But
the ultimate solution, Hariharan believes, is more expensive
dc circuitry in the delivery stage. Encouraging this requires a
standards-based approach. That may be the IEC in Europe or
the IEEE in North America.
ENTER THE IEEE
This is where IEEE Projects Authorization Request PAR 1789 enters the picture. The standards body is working on P1789, tentatively titled "Recommended Practices of Modulating Current in High Brightness LEDs for Mitigating Health Risks to Viewers." Brad Lehman of Northeastern University, the standards chair, can be reached at [email protected].
The first of the group's efforts has been released for public comment.7 The report has been out for more than a year, so there's no urgency. But that doesn't mean the document is uninteresting. It includes detailed references to and summaries of multiple studies of the effects of flicker on humans who are exposed to it through fluorescent lighting.
For instance, photosensitive epilepsy is more common than one might think, affecting "about one in 4000 individuals," according to the group. Factors that may combine to affect the likelihood of seizures include flash frequency in the range of 3 to 65 Hz, and especially in the range from 15 to 20 Hz. That's why line frequency fundamentals (50 or 60 Hz, depending on country) are important and why asymmetric behavior of the external triac controller is significant.
"Deep red flicker and alternating red and blue flashes may be particularly hazardous," the group notes. "Bright flicker can be more hazardous when the eyes are closed, partly because the entire retina is then stimulated."
DROOP
"Droop" refers to the phenomenon in LEDs where more current produces more lumens of output only up to a point, beyond which lumen output no longer increases linearly with increasing current (Fig. 3). This phenomenon in LEDs has always been hard to explain.
The quantum process by which the generation and recombination of electron-hole pairs causes photon emission doesn't behave nicely. Beyond a certain current, the recombinations apparently produce another electron instead of a photon. Semiconductor physicists who deal with LEDs have been chasing several suspect processes because knowing which one it is could provide a handle for dealing with droop. The prime candidate today is Auger scattering, named after Pierre Victor Auger, a twentieth-century French physicist.
3. With conventional LEDs, light output does not increase linearly with current. Something happens with the recombination of electron-hole pairs that normally results in the emission of a photon. A phenomenon called Auger scattering, in which an electron is generated instead of a photon, is presently considered the best candidate for an explanation. This is leading at least one company to pursue GaN-on-GaN as a possible solution.
The droop issue came to the forefront in February when Soraa, the LED startup founded by Shuji Nakamura, the inventor of the blue laser and LED, described the company's GaN-on-GaN (gallium nitride) LEDs at the Strategies in Light show. According to Soraa, its LED material is 1000 times freer of dislocations than the usual silicon carbide. Also, its LEDs can be driven much harder (250 A/cm2) than traditional LEDs without exhibiting significant droop.
At the same time, Soraa made a full-court press with the business media, including The New York Times and The Wall Street Journal, but did not engage the technical trade press or issue a press release about product availability or pricing. GaN-on-GaN bears watching, given Soraa's intellectual property (IP) portfolio and technology team, but it's too soon to speculate about where in the lighting spectrum it will fit.
INCANDESCENT LAMP LIMITS (phase-outs from 2010 through 2014)
U.S.: 100 W, then 75 W, then 60 to 40 W
Canada: 100 W, 75 W, and 60 to 40 W (all deferred)
Mexico: 100 W, then 75 W, then 60 to 40 W
China: 100 W, then 60 W
Cuba: banned
Argentina: banned
European Union: 100 W, 75 W, 60 W, 40 to 15 W, then banned
U.K.: 100 to 75 W, 60 W, 40 to 15 W, then banned
South Korea: banned
Japan: banned
Philippines: banned
Malaysia: 100 W, 75 W, 60 W, then 40 W
Australia: banned
REFERENCES
1. IES TM-21-11: "Projecting Long Term Lumen Maintenance of LED Light Sources," http://www.ies.org/store/product/projecting-long-term-lumen-maintenance-of-led-light-sources-1253.cfm
2. Don Tuite, "High Brightness White LEDs Light The Way To Greener Illumination," http://electronicdesign.com/content/catpath/components/page/2?topic=high-brightness-white-leds-light-the-way-to-greener+illumination
3. Another test, LM-79, is an approved method for taking electrical and photometric measurements of solid-state lighting (SSL) products. It covers total flux, electrical power, efficacy, chromaticity, and intensity distribution and applies to LED-based products that incorporate control electronics and heatsinks, including integrated LED products and complete luminaires, but not to bare LED packages and modules, nor to fixtures designed for LED products but sold without a light source. Unlike traditional photometric evaluation, which involves separate testing of lamps and luminaires, LM-79 tests the complete LED luminaire because of the critical interactive thermal effects. While LM-79 doesn't address product reliability or life, it does provide for the important calculation of complete luminaire initial efficacy.
4. The EPA's Energy Star TM-21 calculator can be downloaded at www.energystar.gov/TM-21calculator.
5. Jianzhong Jiao, "Understanding the Difference between LED Rated Life and Lumen-Maintenance Life," http://www.ledsmagazine.com/features/8/10/12
6. Don Tuite, "What's The Difference Between Reactive Power Factor And AC-DC Supply Power Factor?" http://electronicdesign.com/article/power/whats-difference-reactive-power-factor-acdc-supply-power-factor-73569
7. "A Review of the Literature on Light Flicker: Ergonomics, Biological Attributes, Potential Health Effects, and Methods in Which Some LED Lighting May Introduce Flicker," http://grouper.ieee.org/groups/1789/flickerTR1_2_26_10.pdf
ROGER ALLAN | CONTRIBUTING EDITOR
[email protected]
WARM UP TO THE LATEST PCB COOLING TECHNIQUES
An array of advances in thermal-management products and methodologies, some bordering on exotic, arm designers with essential weapons to battle the heat.
As consumer demands for "smaller"
and “faster” intensify, mammoth
challenges emerge when it comes
to beating the heat generated by
ever-denser printed-circuit boards
(PCBs). As stacked-up microprocessors and logic elements reach into the gigahertz range of operation, cost-effective thermal
management becomes perhaps the highest priority
among engineers in the design and packaging and
materials fields.
Adding to those headaches is the current trend of
manufacturing 3D ICs for greater functional densities. Simulations show that a 10°C rise in temperature can double a 3D IC chip’s heat density,
degrading performance by more than one-third.
MICROPROCESSOR CHALLENGES
Projections by the International Technology
Roadmap for Semiconductors (ITRS) show that
within the next three years, interconnect wiring in
difficult-to-cool regions of a microprocessor will
10.20.11 ELECTRONIC DESIGN
52
Engineering Essentials Vol. III
electronicdesign.com/subscribe
Visit http://electronicdesign.com
consume up to 80% of the chip’s power. Thermal design power
(TDP) is one measure of a microprocessor's capacity to handle heat. It defines the upper point of the thermal profile
as well as the associated case temperature.
The latest microprocessors from Intel and Advanced Micro
Devices (AMD) feature TDPs ranging from 32 to 140 W. This
number continues to rise in conjunction with increasing microprocessor operating frequencies.
Large data centers that employ hundreds of computer servers are particularly susceptible to heating problems. According to some estimates, the servers’ cooling fans—which draw
up to 15% of the electrical power—actually become considerable heat sources in and of themselves. On top of that, the cost
of cooling a data center can constitute about 40% to 45% of
the center’s power consumption. All of these factors create a
greater demand for local and remote temperature sensing and
fan control.
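That control loop is conceptually simple. Here's a minimal proportional fan-speed sketch — the thresholds and linear ramp are placeholder choices, not any particular monitoring IC's algorithm:

    def fan_duty(temp_c, t_min=35.0, t_max=65.0, d_min=0.25, d_max=1.0):
        """Map a sensed temperature to a PWM duty cycle, ramping
        linearly between a quiet floor and full speed."""
        if temp_c <= t_min:
            return d_min                 # keep some air moving
        if temp_c >= t_max:
            return d_max                 # flat out above the limit
        span = (temp_c - t_min) / (t_max - t_min)
        return d_min + span * (d_max - d_min)

    for reading in (30, 45, 55, 70):     # sample sensor values, deg C
        print(reading, "C ->", round(fan_duty(reading), 2))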
The thermal-management challenge becomes trickier when
it involves PCBs housing multicore processors. While each
processor core in the array may dissipate less power (and thus
less heat) than a single-core processor, the net effect on large
computer servers is the addition of more heat dissipation to a
data center’s computer system. Simply put, many more processor cores run for a given amount of PCB space.
Another thorny issue with IC thermal management concerns
the appearance of hot spots on a chip’s package. Heat fluxes
can climb as high as 1000 W/cm2, which is a condition that’s
difficult to track.
PCBs play a critical role in thermal management, thus
requiring a thermal design layout. Whenever possible,
designers should keep power components as far away from
each other as possible. Furthermore, they should be kept
away from the PCB’s corners, which will help maximize the
amount of PCB area around the power components to facilitate thermal dissipation.
It’s common for exposed power pads to be soldered to a
PCB. Often, exposed-pad-type power pads conduct about 80%
of the heat generated through the bottom of the IC package and
into the PCB. The remaining heat dissipates through the package’s sides and leads.
HEAT HELPERS
Designers now can seek help via a number of improved heatmanagement products. They include heatsinks, heat pipes, and
fans that allow for active and passive convection, radiation,
and conduction cooling. Even the manner of the PCB-mounted
chip’s interconnection helps mitigate heat problems.
For example, the common exposed-pad approach used for
interconnecting an IC chip to a PCB may ease heat problems. When soldering the exposed pad to a PCB, the heat
travels quickly out of the package and into the board. The heat
then dissipates through the board’s layers and into the surrounding air.
Thus, Texas Instruments (TI) devised a PowerPAD method
that mounts the IC die to a metal plate (Fig. 1). This die pad,
which supports the die during fabrication, serves as a good
thermal heat path to remove the heat away from the chip.
According to Matt Romig, analog packaging product manager at TI, its PowerStack method is the first 3D packaging
technology to stack high-side vertical MOSFETs. It combines
both high-side and low-side MOSFETs held in place by copper clips and uses a ground potential exposed pad to provide
thermal optimization (Fig. 2). Employing two copper clips
to connect the input and output voltage pins results in a more
integrated quad flat no-lead (QFN) package.
Heat management for power devices is an even greater challenge. Higher-frequency signal processing and the need to shrink package size are pushing conventional cooling techniques to the brink. Kaveh Azar, president and CEO of Advanced Thermal Solutions, proposes the use of an embedded thin-film thermoelectric device that includes water-cooled microchannels.
1. The die pad in Texas Instruments' PowerPAD supports the die during fabrication, thus serving as a good thermal heat path to remove the heat away from the chip.
2. TI's PowerStack 3D packaging technology stacks high-side vertical MOSFETs. It combines both high-side and low-side MOSFETs held in place by copper clips and uses a ground potential exposed pad to provide thermal optimization.
Azar envisions one solution that minimizes spreading resistance, the largest resistance in the path of heat transfer, with a forced thermal spreader bonded directly to the microprocessor die (Fig. 3).
This approach distributes the concentrated heat of a small
microprocessor die to the larger base area of the heatsink,
which transfers the heat to the ambient environment. Such
a built-in forced thermal spreader combines micro and mini
channels in the silicon package. The water flow rate inside the
channels is approximately 0.5 to 1 liter/minute.
Simulation results showed that on a 10- by 10-mm die within
a ball-grid-array (BGA) package, a 120- by 120-mm heatsink
base-plate area yielded a thermal resistance of 0.055K/W.
Using a heatsink material with thermal conductivity equal to or
higher than diamond yielded 0.030K/W.
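A sanity check on figures like these takes one line of arithmetic: thermal resistances in series add, and temperature rise is power times total resistance. The junction-to-spreader resistance, power level, and ambient below are assumptions; only the 0.055 K/W value comes from the simulation above:

    theta_die_to_spreader = 0.010    # K/W, assumed conduction path
    theta_spreader_to_amb = 0.055    # K/W, from the BGA simulation above
    power_w = 150.0                  # W, assumed die dissipation
    ambient_c = 35.0                 # deg C, assumed coolant/air inlet

    theta_total = theta_die_to_spreader + theta_spreader_to_amb
    junction_c = ambient_c + power_w * theta_total
    print(f"estimated junction temperature: {junction_c:.1f} C")  # 44.8 C

Even 150 W produces only about a 10°C rise through such a stack, which is why sub-0.1 K/W spreaders are attractive for high-power die.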
Paul Magill, vice president of marketing and business development for Nextreme Thermal Solutions, also suggests thermoelectric cooling, advocating that cooling should start at the
chip level. The company offers localized thermal management
deep inside electronic components using tiny thin-film thermoelectric (eTEC) structures known as thermal bumps (Fig.
4). The thermally active material is embedded into flip-chip
interconnects (e.g., copper pillar solder bumps) for use in electronic packaging.
Localized cooling at the chip wafer, die, and package levels delivers important economic benefits. For instance, in a data center that employs hundreds and thousands of advanced microprocessors, it's far more efficient than removing heat with more expensive and bulkier air-conditioning systems.
In some devices like LEDs, a combination of passive and active cooling techniques can improve device performance and lifetime (Fig. 5). For example, using a fan inside a heatsink often will reduce thermal resistance to 0.5°C/W, which is a significant improvement over the typical 10°C/W achieved with passive cooling (heatsinking) alone.
3. An embedded thin-film thermoelectric device proposed by Advanced Thermal Solutions uses water-cooled microchannels. A forced thermal spreader that's bonded directly to the microprocessor die minimizes spreading resistance, the largest resistance in the path of heat transfer. (Source: "Cooling High-Power Packages," Kaveh Azar, Advanced Packaging)
4. Nextreme Thermal Solutions offers localized thermal management deep inside electronic components thanks to tiny thin-film thermoelectric (eTEC) structures known as thermal bumps. The thermally active material is embedded into flip-chip interconnects, such as copper pillar solder bumps, for electronic packaging. (Source: "Ensuring Optimal High-Power LED Performance With Thermal Management," by Jon Domingo, Lumex, ECN, April 11, 2011)
SIMULATE AND SIMULATE AGAIN
Thermal control has always been, and continues to be, one of the limiting factors to achieving greater IC performance. With space at a premium in these ever-smaller ICs and their packages, there's little or no room to help cool them. It has forced designers to consider exotic cooling techniques and new, evolving cooling materials.
Nonetheless, the basic premise remains: Designers must pay more attention to the science of thermodynamics for optimal cooling solutions. And the entire process should start with thermal analysis software—well before a design is put into production.
That's where simulation software tools enter the picture. Products like Mentor Graphics' Flotherm 3D V.9 software tool help 3D IC designers quantify thermal behavior, enabling them to address thermal problems as they arise. This computational fluid-dynamics (CFD) product provides images of bottleneck (Bn) and shortcut (Sc) fields. As a result, engineers can identify where and why heat-flow congestion occurs in their designs.
According to Erich Bürgel, general manager of Mentor Graphics' mechanical analysis division, innovative Bn fields show where a design's heat path is being congested as it attempts to flow from high-junction temperature points to the ambient point. The Sc fields highlight possible approaches to create a new effective heat-flow path by adding a simple element such as a gap pad or a chassis extrusion.
Flotherm 3D V.9 supports the importing of XML model and geometry data to enable the software's integration into data flows. It also has a direct interface to Mentor Graphics' Expedition PCB design platform. As a result, users can add, edit, or delete objects such as heatsinks, thermal vias, board cutouts, and electromagnetic cans for more accurate thermal modeling.
With thermal simulation, designers can accurately predict the thermal performance of the initial and subsequent designs without having to build and test a prototype. Design variables
such as the number of heatsink fins, fin thickness, heatsink
base thickness, and thermal resistance of the thermal-interface
materials should be considered.
Proper thermal models are essential for future 3D ICs that
plan to use stacked logic and memory devices consisting of
thin die, which strongly reduces lateral heat spreading. As a
die’s thickness shrinks, higher-temperature spots become more
common. Hot spots on the logic die cause local temperature
increases in the memory die, possibly reducing DRAM retention time.
Researchers at Belgium’s Interuniversity Micro Electronics
Center (IMEC) have already proven correct thermal models
for the design of next-generation 3D mixed-stack ICs. These
3D stacks, which closely resemble commercial chips of the
future, consist of IMEC proprietary logic CMOS ICs stacked
on top of commercially available DRAMs. Stacking is accomplished with through-silicon vias (TSVs) and micro-bumps.
The research was a collaborative effort between IMEC and
partners Amkor, Fujitsu, Globalfoundries, Intel, Micron, Panasonic, Qualcomm, Samsung, Sony, and TSMC.
IBM plans to use microchannel water cooling for its future
3D IC processors, such as the Power8 processor scheduled for
introduction in 2013 (Fig. 6). Bruno Michel, manager of the
Advanced Thermal Packaging Group for IBM’s Zurich, Switzerland research facility, says that energy-efficient, hot-water
cooling technology is part of IBM’s concept of a zero-emissions data center. To cool 3D chip stacks, which generate more
heat than a single processor in nearly the same space, water
rather than air was used to reduce energy consumption.
Liquid cooling of CPUs is also performed in the XLR8 GTX
580 GeForce graphics card from PNY Technologies, which
addresses challenging graphics-intensive gaming products.
PNY and Asetek, a specialist in CPU thermal management,
joined forces to produce a product for gaming enthusiasts and
their GPU/CPU cooling systems.
Engineered with a closed-loop system and built with Asetek’s
sealed water cooler already attached, the combination design
offers consumers an out-of-the-box, ready-to-install product
that costs $649.99. PNY claims the new system offers up to
30% cooler temperatures, quieter acoustics, and faster performance than the standard-reference-designed Nvidia GeForce
GTX 580 graphics card.
5. In LEDs or similar devices, a combination of passive and active cooling techniques can boost performance and lifetime. For example, using a fan inside a heatsink often reduces thermal resistance to 0.5°C/W, which is a considerable improvement over the typical 10°C/W achieved with passive cooling (heatsinking) alone.
Thermal management via water cooling also is employed in
a wide variety of power devices—thyristors, MOSFETs, and
silicon-controlled rectifiers (SCRs) are just a few. One example
is the XW180GC34A/B developed by Westcode Semiconductors Ltd., a subsidiary of Ixys Corp. The nickel-plated heatsink
has a 127-mm diameter contact plate, suiting it for press-pack
devices with electrode contacts up to 125 mm in diameter.
Typical heatsink to input water thermal resistance, for flow
rates of 10 L/min., is 4.3K/kW (two coolers plus one semiconductor device) and 5.6K/kW (three coolers plus two semiconductor devices). The heatsink comes with or without an
integral connecting bus bar.
“Typical applications for the coolers would be mini megawatt-power-level devices and high-power rectifiers, as in heavy
industrial applications, or for electric train trackside substations,
as well as in applications in electricity generation and distribution,” says Frank Wakeman, Westcode’s marketing and technical support manager. “The high-efficiency cooling provided
with these coolers enables customers to achieve high-power
density in their systems with much reduced footprint.”
6. IBM plans to use microchannel hot-water cooling for future 3D IC processors, such as the Power8 processor due to arrive in 2013.