Govt. Polytechnic for Women, Morni Hills, Panchkula
Department of Computer Engineering
Session Notes
Subject: Digital Electronics
Semester: 3rd
Topics covered, by session:
Session 1: Analog signals, digital signals, difference between analog and digital signals, digitization
Session 2: Number systems: binary, hexadecimal, decimal, octal and their conversion
Session 3: Binary addition, binary subtraction, multiplication, division, 1's and 2's complement
Session 4: Logic gates, electronic gates, OR, AND, NOT, NOR, NAND, XOR, XNOR, universal gates
Session 5: Boolean algebra, Boolean rules, laws, duality principle
Session 6: De Morgan's theorem, sum of products, K-map
Session 7: Binary adder (half, full), multiplexer, digital multiplexer
Session 8: Decoder, BCD-to-decimal decoder, LCD, flat panel LED
Session 9: Encoder, decimal-to-BCD encoder, keyboard encoder
Session 10: Flip-flops, latches, SR latch, JK latch
Session 11: Master-slave flip-flop, edge-triggered JK flip-flop, counters
Session 12: Decade counters, up/down counter, ring counter, Johnson counter
Session 13: Shift register, SIPO, PISO, PIPO, universal shift register
Session 14: Shift register counter, Johnson counter, applications of shift register
Session 15: Keyboard encoder, programmable sequential logic
SESSION 1
ANALOG SIGNALS:
An analog or analogue signal is any continuous signal for which the time varying
feature (variable) of the signal is a representation of some other time varying
quantity, i.e., analogous to another time varying signal. For example, in an analog
audio signal, the instantaneous voltage of the signal varies continuously with the
pressure of the sound waves. It differs from a digital signal, in which a continuous
quantity is represented by a discrete function which can only take on one of a finite
number of values. The term analog signal usually refers to electrical signals;
however, mechanical, pneumatic, hydraulic, and other systems may also convey
analog signals.
An analog signal uses some property of the medium to convey the signal's
information. For example, an aneroid barometer uses rotary position as the signal
to convey pressure information. In an electrical signal, the voltage, current, or
frequency of the signal may be varied to represent the information.
Any information may be conveyed by an analog signal; often such a signal is a
measured response to changes in physical phenomena, such as sound, light,
temperature, position, or pressure. The physical variable is converted to an analog
signal by a transducer. For example, in sound recording, fluctuations in air pressure
(that is to say, sound) strike the diaphragm of a microphone which induces
corresponding fluctuations in the current produced by a coil in an electromagnetic
microphone, or the voltage produced by a condenser microphone. The voltage or
the current is said to be an "analog" of the sound.
An analog signal has a theoretically infinite resolution. In practice an analog signal
is subject to electronic noise and distortion introduced by communication channels
and signal processing operations, which can progressively degrade the signal-to-noise ratio. In contrast, digital signals have a finite resolution. Converting an
analog signal to digital form introduces a constant low-level noise called
quantization noise into the signal which determines the noise floor, but once in
digital form the signal can in general be processed or transmitted without
introducing additional noise or distortion. Therefore as analog signal processing
systems become more complex, they may ultimately degrade signal resolution to
such an extent that their performance is surpassed by digital systems. This explains
the widespread use of digital signals in preference to analog in modern technology.
In analog systems, it is difficult to detect when such degradation occurs. However,
in digital systems, degradation can not only be detected but corrected as well.
Digital signals:
A digital signal is a physical signal that is a representation of a sequence of
discrete values (a quantified discrete-time signal), for example of an arbitrary bit
stream, or of a digitized (sampled and analog-to-digital converted) analog signal.
The term digital signal can refer to either of the following:
1. any continuous-time waveform signal used in digital communication,
representing a bit stream or other sequence of discrete values
2. a pulse train signal that switches between a discrete number of voltage levels
or levels of light intensity, also known as a line coded signal or baseband
transmission, for example a signal found in digital electronics or in serial
communications, or a pulse code modulation (PCM) representation of a
digitized analog signal.
A signal that is generated by means of a digital modulation method (digital pass
band transmission), to be transferred between modems, is in the first case
considered as a digital signal, and in the second case as converted to an analog
signal.
DIFFERENCE BETWEEN ANALOG AND DIGITAL SIGNALS
Analog and digital signals are used to transmit information, usually through
electric signals. In both these technologies, the information, such as any audio or
video, is transformed into electric signals. The difference between analog and
digital technologies is that in analog technology, information is translated into
electric pulses of varying amplitude. In digital technology, information is translated into binary format (zero or one), where each bit represents one of two distinct amplitudes.
Comparison chart

Technology:
  Analog: Analog technology records waveforms as they are.
  Digital: Converts analog waveforms into a set of numbers and records them. The numbers are converted into a voltage stream for representation.

Uses:
  Analog: Can be used in various computing platforms and under operating systems like Linux, Unix, Mac OS and Windows.
  Digital: Computing and electronics.

Signal:
  Analog: An analog signal is a continuous signal which transmits information as a response to changes in physical phenomena.
  Digital: Digital signals are discrete-time signals generated by digital modulation.

Representation:
  Analog: Uses a continuous range of values to represent information.
  Digital: Uses discrete or discontinuous values to represent information.

Applications:
  Analog: Thermometer.
  Digital: PCs, PDAs.

Data transmissions:
  Analog: Not of high quality.
  Digital: High quality.

Response to noise:
  Analog: More likely to get affected, reducing accuracy.
  Digital: Less affected, since noise responses are analog in nature.

Waves:
  Analog: Denoted by sine waves.
  Digital: Denoted by square waves.

Example:
  Analog: Human voice in air.
  Digital: Electronic devices.
DIGITIZATION:
Digitizing or digitization is the representation of an object, image, sound,
document or a signal (usually an analog signal) by a discrete set of its points or
samples. The result is called digital representation or, more specifically, a digital
image, for the object, and digital form, for the signal. Strictly speaking, digitizing
means simply capturing an analog signal in digital form. For a document the term
means to trace the document image or capture the "corners" where the lines end or
change direction.
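As a concrete illustration of the two steps involved, sampling and quantization, here is a minimal C sketch (the 8 kHz sampling rate, 1 kHz tone and 8-bit depth are illustrative choices, not values from the notes; link with -lm):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Digitize a 1 kHz sine wave: sample it at 8 kHz, then quantize
       each sample to 8 bits (0..255). */
    const double pi = 3.14159265358979;
    const double fs = 8000.0, f = 1000.0;
    for (int n = 0; n < 8; n++) {
        double s = sin(2.0 * pi * f * n / fs);        /* continuous value   */
        int q = (int)((s + 1.0) / 2.0 * 255.0 + 0.5); /* 8-bit quantization */
        printf("sample %d: %+.3f -> %d\n", n, s, q);
    }
    return 0;
}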
SESSION 2
Number System:
A number system is a code that uses symbols to refer to a number of items. Many
number systems are in use in digital systems. The most common are the decimal,
binary, octal and hexadecimal systems.
Decimal system:
The decimal number system contains 10 symbols and is sometimes called the base
10 system. The symbols are 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9.
Binary system:
The binary number system uses only two symbols, 0 and 1, and is sometimes called the base 2 system.
Bit and Byte:
A bit is an abbreviation for binary digit. A binary number like 1100 has 4 bits; 110011 has 6 bits.
A byte is a group of eight bits. The byte is the basic unit of binary information.
Most computers process data with a length of 8 bits or multiple of 8 bits.
MSB and LSB:
LSB: The right most bit of a number is known as LSB or least significant bit.
MSB: The left most bit of a number is known as MSB or most significant bit.
Conversion from decimal to binary:
To understand the conversion, consider the number 25.
25 = 16 + 8 + 0 + 0 + 1
   = (1 × 2^4) + (1 × 2^3) + (0 × 2^2) + (0 × 2^1) + (1 × 2^0)
   → 1 1 0 0 1
Therefore (25)10 = (11001)2.
Double-Dabble Method:
It is a way of converting any decimal number to its binary equivalent. It requires
successive division by 2, writing down each quotient and its remainder.
25 ÷ 2 = 12   remainder 1   (LSB)
12 ÷ 2 = 6    remainder 0
6 ÷ 2 = 3     remainder 0
3 ÷ 2 = 1     remainder 1
1 ÷ 2 = 0     remainder 1   (MSB)
Therefore (25)10 = (11001)2.
Prob.: Convert the binary number 11011 to decimal.
(1 × 2^4) + (1 × 2^3) + (0 × 2^2) + (1 × 2^1) + (1 × 2^0) = 16 + 8 + 0 + 2 + 1 = (27)10
Binary number with decimal point:
In general, a number with a decimal point is represented by a series of coefficients
as follows:
a5 a4 a3 a2 a1 a0 . a-1 a-2 a-3
Prob: Convert (1010.011)2 to its decimal equivalent.
(1 × 2^3) + (0 × 2^2) + (1 × 2^1) + (0 × 2^0) + (0 × 2^-1) + (1 × 2^-2) + (1 × 2^-3) = (10.375)10
Prob: Convert (5.625)10 to a binary number.
Integer part: 5 = 4 + 0 + 1 = (1 × 2^2) + (0 × 2^1) + (1 × 2^0) = 101
Fractional part (multiply by 2; the integer parts are the bits):
0.625 × 2 = 1.250   b-1 = 1
0.250 × 2 = 0.500   b-2 = 0
0.500 × 2 = 1.000   b-3 = 1
Therefore (5.625)10 = (101.101)2
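The fractional conversion can be sketched in C the same way, multiplying by 2 and peeling off integer parts (the function name and bit limit are illustrative):

#include <stdio.h>

/* Fraction-to-binary: repeatedly multiply by 2; each integer part that
   appears is the next bit after the binary point. */
void frac_to_binary(double frac, int max_bits) {
    putchar('.');
    for (int i = 0; i < max_bits && frac > 0.0; i++) {
        frac *= 2.0;
        int bit = (int)frac;   /* integer part is the next bit */
        putchar('0' + bit);
        frac -= bit;           /* keep only the remaining fraction */
    }
    putchar('\n');
}

int main(void) {
    frac_to_binary(0.625, 16);  /* prints .101, matching the worked example */
    return 0;
}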
Octal numbers:
Some older computer systems use octal numbers to represent binary information. The octal number system uses the eight symbols 0, 1, 2, 3, 4, 5, 6 and 7. Octal numbers are also referred to as base 8 numbers.
Decimal   Octal   Binary
0         0       000
1         1       001
2         2       010
3         3       011
4         4       100
5         5       101
6         6       110
7         7       111
Prob: Convert (498)10 to an octal number.
498 ÷ 8 = 62   remainder 2   (LSB)
62 ÷ 8 = 7     remainder 6
7 ÷ 8 = 0      remainder 7   (MSB)
Therefore (498)10 = (762)8
Prob: Convert (0.513)10 to octal.
0.513 × 8 = 4.104
0.104 × 8 = 0.832
0.832 × 8 = 6.656
0.656 × 8 = 5.248
0.248 × 8 = 1.984
0.984 × 8 = 7.872
Therefore (0.513)10 = (0.406517)8
Binary to Octal:
Converting from binary to octal is simply a matter of grouping the binary positions into groups of three and writing down the octal equivalent of each group.
Prob: (a) Convert (011101)2 to octal. (b) Convert (10111001)2 to octal.
Solution:
(a) 011 101 → 3 5, so (011101)2 = (35)8
(b) Add a leading zero: 010 111 001 → 2 7 1, so (10111001)2 = (271)8
Prob: (a) Convert (326)8 to decimal. (b) Convert (486)10 to octal.
Soln:
(a) 6 × 8^0 = 6; 2 × 8^1 = 16; 3 × 8^2 = 192; sum = (214)10
(b) 486 ÷ 8 = 60   remainder 6
    60 ÷ 8 = 7     remainder 4
    7 ÷ 8 = 0      remainder 7
    Therefore (486)10 = (746)8.
Hexadecimal Numbers:
Hexadecimal numbers are extensively used in microprocessor work. Hexadecimal
means 16. This system has a base 16. This means that it uses 16 digits to represent
all numbers. It uses the digits 0 through 9 plus the letters A, B, C, D, E and F. Its usefulness lies in the direct correspondence between each hexadecimal digit and a 4-bit binary group.
Hexadecimal   Decimal   Binary
0             0         0000
1             1         0001
2             2         0010
3             3         0011
4             4         0100
5             5         0101
6             6         0110
7             7         0111
8             8         1000
9             9         1001
A             10        1010
B             11        1011
C             12        1100
D             13        1101
E             14        1110
F             15        1111
Hex Conversion:
To convert from binary to hex, group the binary number in groups of four.
Prob: (a) Convert (01101101)2 to hex.
0110 1101 → 6 D, so (01101101)2 = (6D)16
(b) Convert (A9)16 to binary.
A → 1010, 9 → 1001, so (A9)16 = (10101001)2
(c) Convert (2A6)16 to decimal.
6 × 16^0 = 6
10 × 16^1 = 160
2 × 16^2 = 512
Sum: (2A6)16 = (678)10
(d) Convert (2A6)16 to binary and decimal.
2 → 0010, A → 1010, 6 → 0110, so (2A6)16 = (001010100110)2
Converting this binary value to decimal:
(1 × 2^1) + (1 × 2^2) + (1 × 2^5) + (1 × 2^7) + (1 × 2^9) = 2 + 4 + 32 + 128 + 512 = (678)10
(e) Convert (151)10 to hex.
151 ÷ 16 = 9   remainder 7   (LSB)
9 ÷ 16 = 0     remainder 9   (MSB)
Therefore (151)10 = (97)16
Check: 7 × 16^0 = 7 and 9 × 16^1 = 144; 7 + 144 = 151.
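These conversions are easy to check in C: printf supports octal and hexadecimal output directly, and strtol parses a string in any base from 2 to 36. A small sketch:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 151;
    printf("decimal %d = octal %o = hex %X\n", n, n, n);  /* 151, 227, 97 */

    /* Parse a hexadecimal string: (2A6)16 = (678)10 */
    long v = strtol("2A6", NULL, 16);
    printf("(2A6)16 = (%ld)10\n", v);
    return 0;
}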
Binary Addition:
The four basic rules for adding binary digits are as follows:
0 + 0 = 0    Sum of 0 with a carry of 0
0 + 1 = 1    Sum of 1 with a carry of 0
1 + 0 = 1    Sum of 1 with a carry of 0
1 + 1 = 10   Sum of 0 with a carry of 1
Example:
    1 1        (carries)
    0 1 1
  + 0 0 1
  -------
    1 0 0
When there is a carry of 1, you have an addition in which three bits are being added. These situations are as follows:
1 + 0 + 0 = 01   Sum of 1 with a carry of 0
1 + 1 + 0 = 10   Sum of 0 with a carry of 1
1 + 0 + 1 = 10   Sum of 0 with a carry of 1
1 + 1 + 1 = 11   Sum of 1 with a carry of 1
Prob: Add the following binary numbers:
(a) 11 + 11   (b) 100 + 10   (c) 111 + 11   (d) 110 + 100
Solution:
(a) 11 + 11 = 110      (3 + 3 = 6)
(b) 100 + 10 = 110     (4 + 2 = 6)
(c) 111 + 11 = 1010    (7 + 3 = 10)
(d) 110 + 100 = 1010   (6 + 4 = 10)
SESSION 3
Binary Subtraction:
The four basic rules for subtracting binary digits are as follows:
0 - 0 = 0
1 - 1 = 0
1 - 0 = 1
10 - 1 = 1    (0 - 1 = 1 with a borrow of 1)
Prob: Perform the following binary subtractions:
(a) 11 - 01   (b) 11 - 10   (c) 111 - 100   (d) 101 - 010
Solution:
(a) 11 - 01 = 10       (3 - 1 = 2)
(b) 11 - 10 = 01       (3 - 2 = 1)
(c) 111 - 100 = 011    (7 - 4 = 3)
(d) 101 - 010 = 011    (5 - 2 = 3)
Binary Multiplication:
The four basic rules for multiplying binary digits are as follows:
0 × 0 = 0
0 × 1 = 0
1 × 0 = 0
1 × 1 = 1
Prob: Perform the following binary multiplications:
(a) 1 × 11   (b) 11 × 11   (c) 101 × 111
Solution:
(a)     11        (3)
      ×  1
      ----
        11        (3 × 1 = 3)

(b)     11        (3)
      × 11        (3)
      ----
        11
       11
     -----
      1001        (3 × 3 = 9)

(c)    101        (5)
     × 111        (7)
     -----
       101
      101
     101
    ------
    100011        (5 × 7 = 35)

Further examples: 1001 × 111 = 111111 (9 × 7 = 63) and 1011 × 101 = 110111 (11 × 5 = 55).
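The pencil-and-paper partial products correspond to shift-and-add in C; a minimal sketch (the function name is illustrative):

#include <stdio.h>

/* Shift-and-add multiplication: for each 1 bit of the multiplier, add a
   correspondingly shifted copy of the multiplicand, exactly like the
   partial-product rows in the worked examples. */
unsigned multiply(unsigned a, unsigned b) {
    unsigned product = 0;
    while (b != 0) {
        if (b & 1)          /* current multiplier bit is 1 */
            product += a;   /* add the shifted multiplicand */
        a <<= 1;            /* next partial product is shifted left */
        b >>= 1;            /* move to the next multiplier bit */
    }
    return product;
}

int main(void) {
    printf("%u\n", multiply(5, 7));  /* 35, i.e. 101 x 111 = 100011 */
    return 0;
}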
Binary division:
Division in binary follows the same procedure as division in decimal.
Prob: Perform the following binary divisions:
(a) 110 ÷ 11   (b) 110 ÷ 10
Solution:
(a) 110 ÷ 11 = 10    (6 ÷ 3 = 2)
(b) 110 ÷ 10 = 11    (6 ÷ 2 = 3)
1’S and 2’S Complement of binary number:
1's and 2's complements are important because they permit the representation of negative numbers. The 2's complement is commonly used in computers to handle negative numbers.
The 1's complement of a binary number is found by simply changing all 1s to 0s and all 0s to 1s, as illustrated below:
1 0 1 1 0 0 1 0    binary number
0 1 0 0 1 1 0 1    1's complement
2's Complement of a Binary Number:
The 2’s complement of a binary number is found by adding 1 to the LSB of the 1’s
complement
2’S Complement = 1’s complement + 1
Prob: Find the 2's complement of the binary number 10110010.
10110010    binary number
01001101    1's complement
+       1   add 1
01001110    2's complement
Prob: Determine the 2's complement of 11001011.
11001011    binary number
00110100    1's complement
+       1   add 1
00110101    2's complement
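In C, the bitwise NOT operator gives the 1's complement directly, and adding 1 gives the 2's complement. A sketch using 8-bit values:

#include <stdio.h>

int main(void) {
    unsigned char x = 0xB2;        /* 10110010 */
    unsigned char ones = ~x;       /* 1's complement: 01001101 (0x4D) */
    unsigned char twos = ones + 1; /* 2's complement: 01001110 (0x4E) */
    printf("%02X %02X %02X\n", x, ones, twos);  /* prints B2 4D 4E */
    return 0;
}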
SESSION 4
LOGIC GATES
A logic gate is an idealized or physical device implementing a Boolean function,
that is, it performs a logical operation on one or more logic inputs and produces a
single logic output. Depending on the context, the term may refer to an ideal logic
gate, one that has for instance zero rise time and unlimited fan-out, or it may refer
to a non-ideal physical device[1] (see Ideal and real op-amps for comparison).
Logic gates are primarily implemented using diodes or transistors acting as
electronic switches, but can also be constructed using electromagnetic relays (relay
logic), fluidic logic, pneumatic logic, optics, molecules, or even mechanical
elements. With amplification, logic gates can be cascaded in the same way that
Boolean functions can be composed, allowing the construction of a physical model
of all of Boolean logic, and therefore, all of the algorithms and mathematics that
can be described with Boolean logic.
Electronic gates
To build a functionally complete logic system, relays, valves (vacuum tubes), or
transistors can be used. The simplest family of logic gates using bipolar transistors
is called resistor-transistor logic (RTL). Unlike diode logic gates, RTL gates can be
cascaded indefinitely to produce more complex logic functions. These gates were
used in early integrated circuits. For higher speed, the resistors used in RTL were
replaced by diodes, leading to diode-transistor logic (DTL). Transistor-transistor
logic (TTL) then supplanted DTL with the observation that one transistor could do
the job of two diodes even more quickly, using only half the space. In virtually
every type of contemporary chip implementation of digital systems, the bipolar
transistors have been replaced by complementary field-effect transistors
(MOSFETs) to reduce size and power consumption still further, thereby resulting
in complementary metal–oxide–semiconductor (CMOS) logic.
For small-scale logic, designers now use prefabricated logic gates from families of
devices such as the TTL 7400 series by Texas Instruments and the CMOS 4000
series by RCA, and their more recent descendants. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which
allow designers to pack a large number of mixed logic gates into a single
integrated circuit. The field-programmable nature of programmable logic devices
such as FPGAs has removed the 'hard' property of hardware; it is now possible to
change the logic design of a hardware system by reprogramming some of its
components, thus allowing the features or function of a hardware implementation
of a logic system to be changed.
Electronic logic gates differ significantly from their relay-and-switch equivalents.
They are much faster, consume much less power, and are much smaller (all by a
factor of a million or more in most cases). Also, there is a fundamental structural
difference. The switch circuit creates a continuous metallic path for current to flow
(in either direction) between its input & its output. The semiconductor logic gate,
on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current
at its input and produces a low-impedance voltage at its output. It is not possible
for current to flow between the output and the input of a semiconductor logic gate.
Symbols
A synchronous 4-bit up/down decade counter symbol (74LS192) in accordance
with ANSI/IEEE Std. 91-1984 and IEC Publication 60617-12.
There are two sets of symbols for elementary logic gates in common use, both
defined in ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991.
The "distinctive shape" set, based on traditional schematics, is used for simple
drawings, and derives from MIL-STD-806 of the 1950s and 1960s. It is sometimes
unofficially described as "military", reflecting its origin. The "rectangular shape"
set, based on IEC 60617-12 and other early industry standards, has rectangular
outlines for all types of gate and allows representation of a much wider range of
devices than is possible with the traditional symbols.[3] The IEC's system has been
adopted by other standards, such as EN 60617-12:1999 in Europe and BS EN
60617-12:1999 in the United Kingdom.
The goal of IEEE Std 91-1984 was to provide a uniform method of describing the
complex logic functions of digital circuits with schematic symbols. These
functions were more complex than simple AND and OR gates. They could be
medium scale circuits such as a 4-bit counter to a large scale circuit such as a
microprocessor. IEC 617-12 and its successor IEC 60617-12 do not explicitly
show the "distinctive shape" symbols, but do not prohibit them. [3] These are,
however, shown in ANSI/IEEE 91 (and 91a) with this note: "The distinctive-shape
symbol is, according to IEC Publication 617, Part 12, not preferred, but is not
considered to be in contradiction to that standard." This compromise was reached
between the respective IEEE and IEC working groups to permit the IEEE and IEC
standards to be in mutual compliance with one another.
A third style of symbols was in use in Europe and is still preferred by some, see the
table de:Logikgatter#Typen von Logikgattern und Symbolik in the German wiki.
In the 1980s, schematics were the predominant method to design both circuit
boards and custom ICs known as gate arrays. Today custom ICs and the field-programmable gate array are typically designed with Hardware Description
Languages (HDL) such as Verilog or VHDL.
(The standard table lists, for each gate type, its distinctive-shape symbol, its rectangular-shape symbol, and its Boolean algebra expression between A & B; the symbol drawings are omitted here, and only the truth tables follow.)
AND
INPUT OUTPUT
A B A AND B
0 0 0
0 1 0
1 0 0
1 1 1
OR
INPUT OUTPUT
A B A OR B
0 0 0
0 1 1
1 0 1
1 1 1
NOT
INPUT OUTPUT
A
NOT A
0
1
1
0
In electronics a NOT gate is more commonly called an inverter. The circle on the
symbol is called a bubble, and is used in logic diagrams to indicate a logic negation
between the external logic state and the internal logic state (1 to 0 or vice versa).
On a circuit diagram it must be accompanied by a statement asserting that the
positive logic convention or negative logic convention is being used (high voltage
level = 1 or high voltage level = 0, respectively). The wedge is used in circuit
diagrams to directly indicate an active-low (high voltage level = 0) input or output
without requiring a uniform convention throughout the circuit diagram. This is
called Direct Polarity Indication. See IEEE Std 91/91A and IEC 60617-12. Both
the bubble and the wedge can be used on distinctive-shape and rectangular-shape
symbols on circuit diagrams, depending on the logic convention used. On pure
logic diagrams, only the bubble is meaningful.
NAND
INPUT OUTPUT
A B A NAND B
0 0 1
0 1 1
1 0 1
1 1 0
NOR
INPUT OUTPUT
A B A NOR B
0 0 1
0 1 0
1 0 0
1 1 0
XOR
INPUT OUTPUT
A B A XOR B
0 0 0
0 1 1
1 0 1
1 1 0
XNOR
INPUT OUTPUT
A B A XNOR B
0 0 1
0 1 0
1 0 0
1 1 1
Two more gates are the exclusive-OR or XOR function and its inverse, exclusive-NOR or XNOR. The two-input Exclusive-OR is true only when the two input
values are different, false if they are equal, regardless of the value. If there are
more than two inputs, the gate generates a true at its output if the number of trues
at its input is odd ([1]). In practice, these gates are built from combinations of
simpler logic gates.
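All of these two-input gates map directly onto C's logical and bitwise operators, so their truth tables can be printed in a few lines (a sketch, not part of the original notes):

#include <stdio.h>

int main(void) {
    printf("A B  AND OR NAND NOR XOR XNOR\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d   %d  %d   %d    %d   %d   %d\n",
                   a, b,
                   a & b,      /* AND  */
                   a | b,      /* OR   */
                   !(a & b),   /* NAND */
                   !(a | b),   /* NOR  */
                   a ^ b,      /* XOR  */
                   !(a ^ b));  /* XNOR */
    return 0;
}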
UNIVERSAL GATES
NAND
A NAND gate is logically an inverted AND gate. It has the following truth table:
Truth Table
Input A  Input B  Output Q
0        0        1
0        1        1
1        0        1
1        1        0
Making other gates by using NAND gates
A NAND gate is a universal gate: all other gates can be represented as a
combination of NAND gates.
NOT
A NOT gate is made by joining the inputs of a NAND gate. Since a NAND gate is
equivalent to an AND gate followed by a NOT gate, joining the inputs of a NAND
gate leaves only the NOT gate.
(Diagram of the desired gate and its NAND construction omitted.)
Truth Table
Input A  Output Q
0        1
1        0
AND
An AND gate is made by following a NAND gate with a NOT gate as shown
below. This gives a NOT NAND, i.e. AND.
(Diagram of the desired gate and its NAND construction omitted.)
Truth Table
Input A  Input B  Output Q
0        0        0
0        1        0
1        0        0
1        1        1
OR
If the truth table for a NAND gate is examined or by applying De Morgan's Laws,
it can be seen that if any of the inputs are 0, then the output will be 1. To be an OR
gate, however, the output must be 1 if any input is 1. Therefore, if the inputs are
inverted, any high input will trigger a high output.
(Diagram of the desired gate and its NAND construction omitted.)
Truth Table
Input A  Input B  Output Q
0        0        0
0        1        1
1        0        1
1        1        1
NOR
A NOR gate is simply an inverted OR gate. Output is high when neither input A
nor input B is high:
(Diagram of the desired gate and its NAND construction omitted.)
Truth Table
Input A  Input B  Output Q
0        0        1
0        1        0
1        0        0
1        1        0
XOR
An XOR gate is constructed similarly to an OR gate, except with an additional
NAND gate inserted such that if both inputs are high, the inputs to the final NAND
gate will also be high, and the output will be low. This effectively represents the
formula: "NAND(A NAND (A NAND B)) NAND (B NAND (A NAND B))".
(Diagram of the desired gate and its NAND construction omitted.)
Truth Table
Input A  Input B  Output Q
0        0        0
0        1        1
1        0        1
1        1        0
XNOR
An XNOR gate is simply an XOR gate with an inverted output:
(Diagram of the desired gate and its NAND construction omitted.)
Truth Table
Input A  Input B  Output Q
0        0        1
0        1        0
1        0        0
1        1        1
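The universality of NAND can be checked by composing a single NAND function exactly as in the constructions above; a minimal C sketch (the helper names are illustrative):

#include <stdio.h>

/* One two-input NAND, from which every other gate is composed. */
static int nand(int a, int b) { return !(a && b); }

static int not_(int a)         { return nand(a, a); }             /* inputs joined      */
static int and_(int a, int b)  { return not_(nand(a, b)); }       /* NAND then NOT      */
static int or_(int a, int b)   { return nand(not_(a), not_(b)); } /* invert both inputs */
static int nor_(int a, int b)  { return not_(or_(a, b)); }        /* OR then NOT        */
static int xor_(int a, int b)  {                                  /* four-NAND XOR      */
    int m = nand(a, b);
    return nand(nand(a, m), nand(b, m));
}
static int xnor_(int a, int b) { return not_(xor_(a, b)); }       /* XOR then NOT       */

int main(void) {
    printf("A B  NOT AND OR NOR XOR XNOR\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d   %d   %d  %d   %d   %d   %d\n",
                   a, b, not_(a), and_(a, b), or_(a, b),
                   nor_(a, b), xor_(a, b), xnor_(a, b));
    return 0;
}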
SESSION 5
Boolean algebra
Boolean algebra is the algebra of truth values 0 and 1, or equivalently of subsets of
a given set. The operations are usually taken to be conjunction ∧, disjunction ∨,
and negation ¬, with constants 0 and 1. And the laws are definable as those
equations that hold for all values of their variables, for example x∨(y∧x) = x.
Applications include mathematical logic, digital logic, computer programming, set
theory, and statistics.[2] According to Huntington the moniker "Boolean algebra"
was first suggested by Sheffer in 1913.[3]
Boole's algebra predated the modern developments in abstract algebra and
mathematical logic; it is however seen as connected to the origins of both fields. [4]
In an abstract setting, Boolean algebra was perfected in the late 19th century by
Jevons, Schröder, Huntington, and others until it reached the modern conception of
an (abstract) mathematical structure.[4] For example, the empirical observation that
one can manipulate expressions in the algebra of sets by translating them into
expressions in Boole's algebra is explained in modern terms by saying that the
algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H.
Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.
In the 1930s, while studying switching circuits, Claude Shannon observed that one
could also apply the rules of Boole's algebra in this setting, and he introduced
switching algebra as a way to analyze and design circuits by algebraic means in
terms of logic gates. Shannon already had at his disposal the abstract mathematical
apparatus, thus he cast his switching algebra as the two-element Boolean algebra.
In circuit engineering settings today, there is little need to consider other Boolean
algebras, thus "switching algebra" and "Boolean algebra" are often used
interchangeably.[5][6][7] Efficient implementation of Boolean functions is a
fundamental problem in the design of combinatorial logic circuits. Modern
electronic design automation tools for VLSI circuits often rely on an efficient
representation of Boolean functions known as (reduced ordered) binary decision
diagrams (BDD) for logic synthesis and formal verification.[8]
Operations
Basic operations
Some operations of ordinary algebra, in particular multiplication xy, addition x + y,
and negation −x, have their counterparts in Boolean algebra, respectively the
Boolean operations AND, OR, and NOT, also called conjunction x∧y, or Kxy,
disjunction x∨y, or Axy, and negation or complement ¬x, Nx, or sometimes !x.
Some authors use instead the same arithmetic operations as ordinary algebra
reinterpreted for Boolean algebra, treating xy as synonymous with x∧y and x+y
with x∨y.
Conjunction x∧y behaves on 0 and 1 exactly as multiplication does for ordinary
algebra: if either x or y is 0 then x∧y is 0, but if both are 1 then x∧y is 1.
Disjunction x∨y works almost like addition, with 0∨0 = 0 and 1∨0 = 1 and 0∨1 =
1. However there is a difference: 1∨1 is not 2 but 1.
Complement resembles ordinary negation in that it exchanges values. But
whereas in ordinary algebra negation interchanges 1 and −1, 2 and −2, etc. while
leaving 0 fixed, in Boolean algebra complement interchanges 0 and 1. One can
think of ordinary negation as reflecting about 0, and Boolean complement as
reflecting about the midpoint of 0 and 1. Complement can be defined
arithmetically as ¬x = 1−x because the latter maps 0 to 1 and vice versa, the
behavior of ¬x.
In summary the three basic Boolean operations can be defined arithmetically as
follows.
x∧y = xy
x∨y = x + y − xy
¬x = 1 − x
Alternatively the values of x∧y, x∨y, and ¬x can be expressed without reference to
arithmetic operations by tabulating their values with truth tables as follows.
Figure 1. Truth tables

x  y  x∧y  x∨y
0  0   0    0
1  0   0    1
0  1   0    1
1  1   1    1

x  ¬x
0  1
1  0
For the two binary operations ∧ and ∨ the truth tables list all four possible
combinations of values for x and y, one per line. For each combination the truth
tables tabulate the values of x∧y and x∨y. The truth values of ¬x are tabulated
similarly except that only two lines are needed because there is only one variable.
Yet another way of specifying these operations is with equations explicitly giving
their values.
0∧0 = 0
0∨0 = 0
0∧1 = 0
0∨1 = 1
1∧0 = 0
1∨0 = 1
1∧1 = 1
1∨1 = 1
¬0 = 1
¬1 = 0
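The arithmetic definitions can be checked exhaustively in C, since there are only four value pairs (a sketch):

#include <stdio.h>

int main(void) {
    /* Check x AND y = xy, x OR y = x + y - xy, NOT x = 1 - x
       against the logical operators for all 0/1 values. */
    for (int x = 0; x <= 1; x++) {
        for (int y = 0; y <= 1; y++)
            printf("x=%d y=%d  AND: %d=%d  OR: %d=%d\n",
                   x, y, x && y, x * y, x || y, x + y - x * y);
        printf("NOT %d = %d\n", x, 1 - x);
    }
    return 0;
}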
Derived operations
We have so far seen three Boolean operations. We referred to these as basic,
meaning that they can be taken as a basis for other Boolean operations that can be
built up from them by composition, the manner in which operations are combined
or compounded. Here are some examples of operations composed from the basic
operations.
x → y = (¬x ∨ y)
x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y)
x ≡ y = ¬(x ⊕ y)
These definitions give rise to the following truth tables giving the values of these
operations for all four possible inputs.
x  y  x→y  x⊕y  x≡y
0  0   1    0    1
1  0   0    1    0
0  1   1    1    0
1  1   1    0    1
The first operation, x → y, or Cxy, is called material implication. If x is true then
the value of x → y is taken to be that of y. But if x is false then we ignore the value
of y; however we must return some truth value and there are only two choices, so
we choose the value that entails less, namely true. (Relevance logic addresses this
by viewing an implication with a false premise as something other than either true
or false.)
The second operation, x ⊕ y, or Jxy, is called exclusive or to distinguish it from
disjunction as the inclusive kind. It excludes the possibility of both x and y.
Defined in terms of arithmetic it is addition mod 2 where 1 + 1 = 0.
The third operation, the complement of exclusive or, is equivalence or Boolean
equality: x ≡ y, or Exy, is true just when x and y have the same value. Hence x ⊕ y
as its complement can be understood as x ≠ y, being true just when x and y are
different. Its counterpart in arithmetic mod 2 is x + y + 1.
Laws
A law of Boolean algebra is an equation such as x∨(y∨z) = (x∨y)∨z between two
Boolean terms, where a Boolean term is defined as an expression built up from
variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept
can be extended to terms involving other Boolean operations such as ⊕, →, and ≡,
but such extensions are unnecessary for the purposes to which the laws are put.
Such purposes include the definition of a Boolean algebra as any model of the
Boolean laws, and as a means for deriving new laws from old as in the derivation
of x∨(y∧z) = x∨(z∧y) from y∧z = z∧y as treated in the section on axiomatization.
Monotone laws
Boolean algebra satisfies many of the same laws as ordinary algebra when we
match up ∨ with addition and ∧ with multiplication. In particular the following
laws are common to both kinds of algebra:[13]
(Associativity of ∨)          x∨(y∨z) = (x∨y)∨z
(Associativity of ∧)          x∧(y∧z) = (x∧y)∧z
(Commutativity of ∨)          x∨y = y∨x
(Commutativity of ∧)          x∧y = y∧x
(Distributivity of ∧ over ∨)  x∧(y∨z) = (x∧y)∨(x∧z)
(Identity for ∨)              x∨0 = x
(Identity for ∧)              x∧1 = x
(Annihilator for ∧)           x∧0 = 0
Boolean algebra however obeys some additional laws, in particular the following
(Idempotence of ∨)            x∨x = x
(Idempotence of ∧)            x∧x = x
(Absorption 1)                x∧(x∨y) = x
(Absorption 2)                x∨(x∧y) = x
(Distributivity of ∨ over ∧)  x∨(y∧z) = (x∨y)∧(x∨z)
(Annihilator for ∨)           x∨1 = 1
A consequence of the first of these laws is 1∨1 = 1, which is false in ordinary
algebra, where 1+1 = 2. Taking x = 2 in the second law shows that it is not an
ordinary algebra law either, since 2×2 = 4. The remaining four laws can be
falsified in ordinary algebra by taking all variables to be 1, for example in
Absorption Law 1 the left hand side is 1(1+1) = 2 while the right hand side is 1,
and so on.
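Because a law must hold for every 0/1 assignment, any candidate law can be verified by brute force; a small C sketch checking both distributivity laws over all eight assignments:

#include <stdio.h>

int main(void) {
    int ok = 1;
    /* Exhaustively test both distributivity laws. */
    for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++)
            for (int z = 0; z <= 1; z++) {
                if ((x & (y | z)) != ((x & y) | (x & z))) ok = 0;
                if ((x | (y & z)) != ((x | y) & (x | z))) ok = 0;
            }
    printf(ok ? "both distributivity laws hold\n"
              : "counterexample found\n");
    return 0;
}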
All of the laws treated so far have been for conjunction and disjunction. These
operations have the property that changing either argument either leaves the output
unchanged or the output changes in the same way as the input. Equivalently,
changing any variable from 0 to 1 never results in the output changing from 1 to 0.
Operations with this property are said to be monotone. Thus the axioms so far
have all been for monotonic Boolean logic. Nonmonotonicity enters via
complement ¬ as follows.[2]
Nonmonotone laws
The complement operation is defined by the following two laws.
(Complementation 1)  x∧¬x = 0
(Complementation 2)  x∨¬x = 1.
All properties of negation including the laws below follow from the above two
laws alone.[2]
In both ordinary and Boolean algebra, negation works by exchanging pairs of
elements, whence in both algebras it satisfies the double negation law (also called
involution law)
(Double negation)  ¬¬x = x.
But whereas ordinary algebra satisfies the two laws
(−x)(−y) = xy
(−x) + (−y) = −(x + y),
Boolean algebra satisfies De Morgan's laws,
(De Morgan 1)  (¬x)∧(¬y) = ¬(x∨y)
(De Morgan 2)  (¬x)∨(¬y) = ¬(x∧y).
Completeness
At this point we can now claim to have defined Boolean algebra, in the sense that
the laws we have listed up to now entail the rest of the subject. The laws
Complementation 1 and 2, together with the monotone laws, suffice for this
purpose and can therefore be taken as one possible complete set of laws or
axiomatization of Boolean algebra. Every law of Boolean algebra follows logically
from these axioms. Furthermore Boolean algebras can then be defined as the
models of these axioms as treated in the section thereon.
To clarify, writing down further laws of Boolean algebra cannot give rise to any
new consequences of these axioms, nor can it rule out any model of them. Had we
stopped listing laws too soon, there would have been Boolean laws that did not
follow from those on our list, and moreover there would have been models of the
listed laws that were not Boolean algebras.
This axiomatization is by no means the only one, or even necessarily the most
natural given that we did not pay attention to whether some of the axioms followed
from others but simply chose to stop when we noticed we had enough laws, treated
further in the section on axiomatizations. Or the intermediate notion of axiom can
be sidestepped altogether by defining a Boolean law directly as any tautology,
understood as an equation that holds for all values of its variables over 0 and 1. All
these definitions of Boolean algebra can be shown to be equivalent.
Boolean algebra has the interesting property that x = y can be proved from any
non-tautology. This is because the substitution instance of any non-tautology
obtained by instantiating its variables with constants 0 or 1 so as to witness its non-tautologyhood reduces by equational reasoning to 0 = 1. For example, the non-tautologyhood of x∧y = x is witnessed by x = 1 and y = 0, and so taking this as an
axiom would allow us to infer 1∧0 = 1 as a substitution instance of the axiom and
hence 0 = 1. We can then show x = y by the reasoning x = x∧1 = x∧0 = 0 = 1 = y∨1
= y∨0 = y.
Duality principle
There is nothing magical about the choice of symbols for the values of Boolean
algebra. We could rename 0 and 1 to say α and β, and as long as we did so
consistently throughout it would still be Boolean algebra, albeit with some obvious
cosmetic differences.
But suppose we rename 0 and 1 to 1 and 0 respectively. Then it would still be
Boolean algebra, and moreover operating on the same values. However it would
not be identical to our original Boolean algebra because now we find ∨ behaving
the way ∧ used to do and vice versa. So there are still some cosmetic differences to
show that we've been fiddling with the notation, despite the fact that we're still
using 0s and 1s.
But if in addition to interchanging the names of the values we also interchange the
names of the two binary operations, now there is no trace of what we have done.
The end product is completely indistinguishable from what we started with. We
might notice that the columns for x∧y and x∨y in the truth tables had changed
places, but that switch is immaterial.
When values and operations can be paired up in a way that leaves everything
important unchanged when all pairs are switched simultaneously, we call the
members of each pair dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are
dual. The Duality Principle, also called De Morgan duality, asserts that Boolean
algebra is unchanged when all dual pairs are interchanged.
One change we did not need to make as part of this interchange was to
complement. We say that complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more
complicated example of a self-dual operation is (x∧y) ∨ (y∧z) ∨ (z∧x). It can be
shown that self-dual operations must take an odd number of arguments; thus there
can be no self-dual binary operation.
The principle of duality can be explained from a group theory perspective by the fact
that there are exactly four functions that are one-to-one mappings (automorphisms)
of the set of Boolean polynomials back to itself: the identity function, the
complement function, the dual function and the contradual function
(complemented dual). These four functions form a group under function
composition, isomorphic to the Klein four-group, acting on the set of Boolean
polynomials.
SESSION 6
DeMorgan's theorem - A logical theorem which states that the complement of a
conjunction is the disjunction of the complements or vice versa. In symbols:
not (x and y) = (not x) or (not y) not (x or y) = (not x) and (not y)
E.g. if it is not the case that I am tall and thin then I am either short or fat (or both).
The theorem can be extended to combinations of more than two terms in the
obvious
way.
The same laws also apply to sets, replacing logical complement with set
complement, conjunction ("and") with set intersection, and disjunction ("or") with
set union.
A C programmer might use this to re-write
    if (!foo && !bar) ...
as
    if (!(foo || bar)) ...
thus saving one operator application (though an optimising compiler should do the
same, leaving the programmer free to use whichever form seemed clearest).
Formal Proof of DeMorgan's Theorems
DeMorgan's Theorems:
a. (A + B)' = A' * B'
b. (A * B)' = A' + B'
Note: * = AND operation; ' = complement or NOT (the overbars of the original notation do not reproduce here).

Proof of DeMorgan's Theorem (b):
For any two expressions X and Y, if we can show that X * Y = 0 and that X + Y = 1, then by the complement postulates (A * A' = 0 and A + A' = 1) and the uniqueness of the complement, Y = X'.
Thus the proof consists of showing that (A*B) * (A' + B') = 0 and also that (A*B) + (A' + B') = 1.

Prove: (A*B) * (A' + B') = 0
(A*B) * (A' + B') = (A*B)*A' + (A*B)*B'   by the distributive postulate
= (A*A')*B + A*(B*B')                     by the associativity and commutativity postulates
= 0*B + A*0                               by the complement postulate
= 0 + 0                                   by the nullity theorem
= 0                                       by the identity theorem
Q.E.D.

Prove: (A*B) + (A' + B') = 1
(A*B) + (A' + B') = (A + A' + B') * (B + A' + B')   by distributivity: B*C + A = (B + A)*(C + A)
= (A + A' + B') * (B + B' + A')                     by the associativity postulate
= (1 + B') * (1 + A')                               by the complement postulate
= 1 * 1                                             by the nullity theorem
= 1                                                 by the identity theorem
Q.E.D.

Since (A*B) * (A' + B') = 0 and (A*B) + (A' + B') = 1, A*B is the complement of A' + B', meaning that A*B = (A' + B')'. Taking the complement of both sides gives (A*B)' = (A' + B')''. The involution theorem states that X'' = X; thus (A' + B')'' = A' + B', and so (A*B)' = A' + B'.
This proves DeMorgan's Theorem (b).
DeMorgan's Theorem (a) may be proven using a similar approach.
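Both theorems can also be confirmed exhaustively in C over the four input combinations (a sketch):

#include <stdio.h>

int main(void) {
    /* Verify (A + B)' = A'*B' and (A*B)' = A' + B' for A, B in {0, 1}. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            int thm_a = !(a | b) == (!a & !b);
            int thm_b = !(a & b) == (!a | !b);
            printf("A=%d B=%d  (a) %s  (b) %s\n", a, b,
                   thm_a ? "holds" : "FAILS",
                   thm_b ? "holds" : "FAILS");
        }
    return 0;
}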
SUM OF PRODUCTS
In Boolean algebra, any Boolean function can be expressed in a canonical form
using the dual concepts of minterms and maxterms. Minterms are called products
because they are the logical AND of a set of variables, and maxterms are called
sums because they are the logical OR of a set of variables (further definition
appears in the sections headed Minterms and Maxterms below). These concepts
are called duals because of their complementary-symmetry relationship as
expressed by De Morgan's laws, which state that AND(x,y,z,...) = NOR(x',y',z',...)
and OR(x,y,z,...) = NAND(x',y',z',...) (the apostrophe ' is an abbreviation for logical
NOT, thus " x' " represents " NOT x ", the Boolean usage " x'y + xy' " represents
the logical equation " (NOT(x) AND y) OR (x AND NOT(y)) ").
The dual canonical forms of any Boolean function are a "sum of minterms" and a
"product of maxterms." The term "Sum of Products" or "SoP" is widely used for
the canonical form that is a disjunction (OR) of minterms. Its De Morgan dual is a
"Product of Sums" or "PoS" for the canonical form that is a conjunction (AND) of
maxterms. These forms allow for greater analysis into the simplification of these
functions, which is of great importance in the minimization or other optimization
of digital circuits.
Minterms
For a boolean function of n variables x1, ..., xn, a product term in which each of the n variables appears once (in either its complemented or uncomplemented form) is called a minterm. Thus, a minterm is a logical expression of n variables that employs only the complement operator and the conjunction operator.
For example, abc, ab'c and abc' are 3 examples of the 8 minterms for a Boolean function of the three variables a, b and c. The customary reading of the last of these is a AND b AND NOT-c.
There are 2^n minterms of n variables, since a variable in the minterm expression can be in either its direct or its complemented form: two choices for each of the n variables.
Indexing minterms
In general, one assigns each minterm an index based on a conventional binary encoding of the complementation pattern of the variables (where the variables in all the minterms are written in the same order, usually alphabetical). This convention assigns the value 1 to the direct form (x) and 0 to the complemented form (x'). For example, we assign the index 6 to the minterm abc' (110) and denote that minterm as m6. Similarly, m0 of the same three variables is a'b'c' (000), and m7 is abc (111).
Functional equivalence
It is apparent that minterm n gives a true value (i.e., 1) for just one combination of
the input variables. For example, minterm 5, a b' c, is true only when a and c both
are true and b is false—the input arrangement where a = 1, b = 0, c = 1 results in 1.
If one is given a truth table of a logical function, it is possible to write the function
as a "sum of products". This is a special form of disjunctive normal form. For
example, if given the truth table for the arithmetic sum bit u of one bit position's
logic of an adder circuit, as a function of x and y from the addends and the carry in,
ci:
ci  x  y  u(ci, x, y)
0   0  0  0
0   0  1  1
0   1  0  1
0   1  1  0
1   0  0  1
1   0  1  0
1   1  0  0
1   1  1  1
Observing that the rows that have an output of 1 are the 2nd, 3rd, 5th, and 8th, we can write u as a sum of the minterms m1, m2, m4 and m7. If we wish to verify this:
u(ci, x, y) = m1 + m2 + m4 + m7 = (ci' x' y) + (ci' x y') + (ci x' y') + (ci x y)
evaluated for all 8 combinations of the three variables will match the table.
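That verification is mechanical, so here is a C sketch that evaluates the sum-of-minterms expression for all eight rows and compares it with the parity (XOR) of the inputs:

#include <stdio.h>

int main(void) {
    /* u = m1 + m2 + m4 + m7 should equal ci XOR x XOR y on every row. */
    for (int ci = 0; ci <= 1; ci++)
        for (int x = 0; x <= 1; x++)
            for (int y = 0; y <= 1; y++) {
                int sop = (!ci & !x &  y) |  /* m1: ci' x' y */
                          (!ci &  x & !y) |  /* m2: ci' x y' */
                          ( ci & !x & !y) |  /* m4: ci x' y' */
                          ( ci &  x &  y);   /* m7: ci x y   */
                printf("%d %d %d -> %d (xor %d)\n",
                       ci, x, y, sop, ci ^ x ^ y);
            }
    return 0;
}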
Maxterms
For a boolean function of n variables x1, ..., xn, a sum term in which each of the n
variables appears once (in either its complemented or uncomplemented form) is
called a maxterm. Thus, a maxterm is a logical expression of n variables that
employs only the complement operator and the disjunction operator. Maxterms are
a dual of the minterm idea (i.e., exhibiting a complementary symmetry in all
respects). Instead of using ANDs and complements, we use ORs and complements
and proceed similarly.
For example, the following are two of the eight maxterms of three variables:
a+b'+c
a'+b+c
There are again 2^n maxterms of n variables, since a variable in the maxterm expression can also be in either its direct or its complemented form: two choices for each of the n variables.
Indexing maxterms
Each maxterm is assigned an index based on the opposite conventional binary encoding used for minterms. The maxterm convention assigns the value 0 to the direct form (x) and 1 to the complemented form (x'). For example, we assign the index 6 to the maxterm a' + b' + c (110) and denote that maxterm as M6. Similarly, M0 of these three variables is a + b + c (000) and M7 is a' + b' + c' (111).
The Karnaugh map (K-map for short), Maurice Karnaugh's 1953 refinement of
Edward Veitch's 1952 Veitch diagram, is a method to simplify Boolean algebra
expressions. The Karnaugh map reduces the need for extensive calculations by
taking advantage of humans' pattern-recognition capability. It also permits the
rapid identification and elimination of potential race conditions.
The required boolean results are transferred from a truth table onto a two-dimensional grid where the cells are ordered in Gray code, and each cell represents
one combination of input conditions. Optimal groups of 1s or 0s are identified,
which represent the terms of a canonical form of the logic in the original truth
table.[1] These terms can be used to write a minimal boolean expression
representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can
be implemented using a minimum number of physical logic gates. A sum-of-products expression can always be implemented using AND gates feeding into an
OR gate, and a product-of-sums expression leads to OR gates feeding an AND
gate.[2] Karnaugh maps can also be used to simplify logic expressions in software
design. Boolean conditions, as used for example in conditional statements, can get
very complicated, which makes the code difficult to read and to maintain. Once
minimised, canonical sum-of-products and product-of-sums expressions can be
implemented directly using AND and OR logic operators.[3]
Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra
functions. Take the Boolean or binary function described by the following truth
table.
Truth table of a function
#    A B C D   f(A, B, C, D)
0    0 0 0 0   0
1    0 0 0 1   0
2    0 0 1 0   0
3    0 0 1 1   0
4    0 1 0 0   0
5    0 1 0 1   0
6    0 1 1 0   1
7    0 1 1 1   0
8    1 0 0 0   1
9    1 0 0 1   1
10   1 0 1 0   1
11   1 0 1 1   1
12   1 1 0 0   1
13   1 1 0 1   1
14   1 1 1 0   1
15   1 1 1 1   0
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables A, B, C, D and their inverses:
f(A, B, C, D) = Σ m(6, 8, 9, 10, 11, 12, 13, 14), where the values inside Σ m( ) are the minterms to map (i.e. rows which have output 1 in the truth table);
f(A, B, C, D) = A'BCD' + AB'C'D' + AB'C'D + AB'CD' + AB'CD + ABC'D' + ABC'D + ABCD'.
Karnaugh map
K-map construction.
In this case, the four input variables can be combined in 16 different ways, so the
truth table has 16 rows, and the Karnaugh map has 16 positions. The Karnaugh
map is therefore arranged in a 4 × 4 grid.
The row and column values (shown across the top, and down the left side of the
Karnaugh map) are ordered in Gray code rather than binary numerical order. Gray
code ensures that only one variable changes between each pair of adjacent cells.
Each cell of the completed Karnaugh map contains a binary digit representing the
function's output for that combination of inputs.
After the Karnaugh map has been constructed it is used to find one of the simplest
possible forms—a canonical form—for the information in the truth table. Adjacent
1s in the Karnaugh map represent opportunities to simplify the expression. The
minterms ('minimal terms') for the final expression are found by encircling groups
of 1s in the map. Minterm groups must be rectangular and must have an area that is
a power of two (i.e. 1, 2, 4, 8…). Minterm rectangles should be as large as possible
without containing any 0s. Groups may overlap in order to make each one larger.
The optimal groupings in this example are marked by the green, red and blue lines,
and the red and green groups overlap. The red group is a 2 × 2 square, the green
group is a 4 × 1 rectangle, and the overlap area is indicated in brown.
The grid is toroidally connected, which means that rectangular groups can wrap across the edges. Cells on the extreme right are actually 'adjacent' to those on the far left; similarly, so are those at the very top and those at the bottom. Therefore AD' can be a valid term (it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14), as is B'D', which includes the four corners.
Solution
K-map showing minterms as colored rectangles and squares. The brown region is
an overlap of the red 2×2 square and the green 4×1 rectangle.
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic minterms can be found by examining which variables stay the same within each box.
For the red grouping:
- The variable A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation of the red minterm.
- Variable B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
- C does not change: it is always 0, so its complement, NOT-C, should be included; thus, C'.
- D changes, so it is excluded as well.
Thus the first minterm in the Boolean sum-of-products expression is AC'.
For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before it can be included. Thus the second term is AB'.
In the same way, the blue grouping gives the term BCD'.
The solutions of each grouping are combined into the minimal sum-of-products expression:
f(A, B, C, D) = AC' + AB' + BCD'
Thus the Karnaugh map has shown that the eight-minterm expression above simplifies to AC' + AB' + BCD'. It would also have been possible to derive this simplification by carefully applying the axioms of boolean algebra, but the time it takes to find it grows exponentially with the number of terms.
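A quick way to trust the simplification is to compare it against the original truth table for all 16 rows; a C sketch:

#include <stdio.h>

int main(void) {
    /* f is 1 exactly at minterms 6, 8, 9, 10, 11, 12, 13, 14. */
    const int ones[] = {6, 8, 9, 10, 11, 12, 13, 14};
    int truth = 0;
    for (int i = 0; i < 8; i++) truth |= 1 << ones[i];

    /* Check the simplified form AC' + AB' + BCD' row by row. */
    for (int m = 0; m < 16; m++) {
        int A = (m >> 3) & 1, B = (m >> 2) & 1;
        int C = (m >> 1) & 1, D = m & 1;
        int f = (A & !C) | (A & !B) | (B & C & !D);
        int expected = (truth >> m) & 1;
        printf("%2d: f=%d expected=%d%s\n", m, f, expected,
               f == expected ? "" : "  MISMATCH");
    }
    return 0;
}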
SESSION 7
Inverse
The inverse of a function is solved in the same way by grouping the 0s instead. The three terms to cover the inverse are all shown with grey boxes with different colored borders:
- brown: A'B'
- gold: A'C'
- blue: BCD
This yields the inverse:
f(A, B, C, D)' = A'B' + A'C' + BCD
Through the use of De Morgan's laws, the product of sums can be determined:
f(A, B, C, D) = (A + B)(A + C)(B' + C' + D')
Don't cares
The value of f(A,B,C,D) for ABCD = 1111 is replaced by a "don't care". This removes the green term completely and allows the red term to be larger. It also allows the blue inverse term to shift and become larger.
Karnaugh maps also allow easy minimizations of functions whose truth tables
include "don't care" conditions. A "don't care" condition is a combination of inputs
for which the designer doesn't care what the output is. Therefore "don't care"
conditions can either be included in or excluded from any circled group, whichever
makes it larger. They are usually indicated on the map with a dash or X.
The example to the right is the same as the example above but with the value of F
for ABCD = 1111 replaced by a "don't care". This allows the red term to expand all
the way down and, thus, removes the green term completely.
This yields the new minimum equation:
f(A, B, C, D) = A + BCD'
Note that the first term is just A, not AC'. In this case, the don't care has dropped a term (the green); simplified another (the red); and removed the race hazard (the yellow, as shown in a following section).
The inverse case is simplified as follows:
f(A, B, C, D)' = A'B' + A'C' + A'D
In electronics, an adder or summer is a digital circuit that performs addition of
numbers. In many computers and other kinds of processors, adders are used not
only in the arithmetic logic unit(s), but also in other parts of the processor, where
they are used to calculate addresses, table indices, and similar operations.
Although adders can be constructed for many numerical representations, such as
binary-coded decimal or excess-3, the most common adders operate on binary
numbers. In cases where two's complement or ones' complement is being used to
represent negative numbers, it is trivial to modify an adder into an adder-subtractor. Other signed number representations require a more complex adder.
Half adder
Half Adder logic diagram
The half adder adds two single binary digits A and B. It has two outputs, sum (S)
and carry (C). The carry signal represents an overflow into the next digit of a
multi-digit addition. The value of the sum is 2C + S. The simplest half-adder
design, pictured on the right, incorporates an XOR gate for S and an AND gate for
C. With the addition of an OR gate to combine their carry outputs, two half adders
can be combined to make a full adder.[1]
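In C the half adder is one operator per output; a sketch printing its truth table:

#include <stdio.h>

int main(void) {
    /* Half adder: S = A XOR B, C = A AND B; the value is 2C + S. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("A=%d B=%d  S=%d C=%d\n", a, b, a ^ b, a & b);
    return 0;
}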
Full adder
Schematic symbol for a 1-bit full adder with Cin and Cout drawn on sides of block to
emphasize their use in a multi-bit adder
A full adder adds binary numbers and accounts for values carried in as well as out.
A one-bit full adder adds three one-bit numbers, often written as A, B, and Cin; A
and B are the operands, and Cin is a bit carried in from the next less significant
stage.[2] The full-adder is usually a component in a cascade of adders, which add 8,
16, 32, etc. binary numbers. The circuit produces a two-bit output, output carry and
sum, typically represented by the signals Cout and S, where the value is 2 × Cout + S.
The one-bit full adder's truth table is:
Full-adder logic diagram
Inputs          Outputs
A  B  Cin       Cout  S
0  0  0         0     0
1  0  0         0     1
0  1  0         0     1
1  1  0         1     0
0  0  1         0     1
1  0  1         1     0
0  1  1         1     0
1  1  1         1     1
A full adder can be implemented in many different ways such as with a custom transistor-level circuit or composed of other gates. One example implementation is with
S = A ⊕ B ⊕ Cin
and
Cout = (A · B) + (Cin · (A ⊕ B)).
In this implementation, the final OR gate before the carry-out output may be replaced by an XOR gate without altering the resulting logic. Using only two types of gates is convenient if the circuit is being implemented using simple IC chips which contain only one gate type per chip. In this light, Cout can be implemented as
Cout = (A · B) ⊕ (Cin · (A ⊕ B)).
A full adder can be constructed from two half adders by connecting A and B to the
input of one half adder, connecting the sum from that to an input to the second
adder, connecting Ci to the other input and ORing the two carry outputs. Equivalently,
S could be made the three-bit XOR of A, B, and Ci, and Cout could be made the
three-bit majority function of A, B, and Ci.
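The two-half-adder construction reads directly as C (the function name is illustrative):

#include <stdio.h>

/* Full adder built from two half adders plus an OR, as described above. */
void full_add(int a, int b, int cin, int *s, int *cout) {
    int p = a ^ b;               /* first half adder: partial sum    */
    *s = p ^ cin;                /* second half adder: final sum     */
    *cout = (a & b) | (cin & p); /* OR of the two half-adder carries */
}

int main(void) {
    int s, cout;
    full_add(1, 1, 1, &s, &cout);
    printf("1+1+1 -> Cout=%d S=%d\n", cout, s);  /* Cout=1 S=1 */
    return 0;
}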
Parallel Binary Adder
The use of one half-adder or one full-adder alone is fine for adding two binary numbers with a length of one bit each, but what happens when the computer needs to add two binary numbers of greater length? There are several ways of doing this. The fastest by far is the parallel binary adder, which uses one half-adder along with one or more full adders. The number of adders needed depends on the length of the larger of the two binary numbers to be added. For example, to add the binary numbers 1011 and 1, we would need four adders in total, because the length of the larger number is four. Keeping this in mind, here is a demonstration of how a four-bit parallel binary adder works, using 1101 and 1011 as the two numbers to add.
Just like when we add without the computer, the parallel binary adder adds from right to left. Here is a step-by-step list showing what happens in the parallel binary adder:
1. In the only half-adder, inputs of 1 and 1 give us 0 with a carry of 1.
2. In the first full-adder (going from right to left), the inputs of 1 and 0 plus the carry of 1 from the half-adder give us a 0 with a carry of 1.
3. In the second full-adder, the inputs of 0 and 1 plus the carry of 1 from the previous full-adder give us a 0 with a carry of 1.
4. In the third and final full-adder, the inputs of 1 and 1 plus the carry of 1 from the previous full-adder give us a 1 with a carry of 1.
5. Since there are no more bits to add and there is still a carry of 1, the carry becomes the most significant bit.
6. The sum of 1101 and 1011 is 11000.
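A ripple-carry chain of those adders reproduces the walkthrough above; a minimal C sketch of the four-bit example:

#include <stdio.h>

int main(void) {
    /* Add 1101 (13) and 1011 (11) with a four-stage ripple-carry adder.
       Stage 0 is effectively a half adder (carry-in 0); each later
       stage is a full adder fed by the previous stage's carry. */
    int a[4] = {1, 0, 1, 1};   /* 1101, stored LSB first */
    int b[4] = {1, 1, 0, 1};   /* 1011, stored LSB first */
    int sum[5], carry = 0;
    for (int i = 0; i < 4; i++) {
        sum[i] = a[i] ^ b[i] ^ carry;
        carry = (a[i] & b[i]) | (carry & (a[i] ^ b[i]));
    }
    sum[4] = carry;            /* the final carry becomes the MSB */
    for (int i = 4; i >= 0; i--)
        putchar('0' + sum[i]);
    putchar('\n');             /* prints 11000, i.e. 13 + 11 = 24 */
    return 0;
}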
The Multiplexer
A data selector, more commonly called a Multiplexer, shortened to "Mux" or "MPX", is a combinational logic switching device that operates like a very fast acting multiple-position rotary switch. It connects or controls multiple input lines called "channels", consisting of either 2, 4, 8 or 16 individual inputs, one at a time, to an output.
Then the job of a "multiplexer" is to allow multiple signals to share a single
common output. For example, a single 8-channel multiplexer would connect one of
its eight inputs to the single data output. Multiplexers are used as one method of
reducing the number of logic gates required in a circuit or when a single data line
is required to carry two or more different digital signals.
Digital multiplexers are constructed from individual analogue switches encased
in a single IC package, as opposed to "mechanical" selectors such as
conventional switches and relays. Generally, multiplexers have an even
number of data inputs, usually a power of two, 2^n, and a number of "control"
inputs that correspond with the number of data inputs; according to the binary
condition of these control inputs, the appropriate data input is connected directly to
the output. An example of a multiplexer configuration is shown below.
4-to-1 Channel Multiplexer
Addressing
 b  a | Input Selected
 0  0 | A
 0  1 | B
 1  0 | C
 1  1 | D
The Boolean expression for this 4-to-1 multiplexer above, with inputs A to D and
data select lines a, b, is given as:
Q = a'·b'·A + a·b'·B + a'·b·C + a·b·D (where a' denotes NOT a)
In this example at any one instant in time only ONE of the four analogue switches
is closed, connecting only one of the input lines A to D to the single output at Q.
Which switch is closed depends upon the addressing input code on lines "a"
and "b", so for this example to select input B to the output at Q, the binary input
address would need to be "a" = logic "1" and "b" = logic "0". Adding more control
address lines will allow the multiplexer to control more inputs but each control line
configuration will connect only ONE input to the output.
Then the implementation of this Boolean expression above using individual logic
gates would require the use of seven individual gates consisting of AND, OR and
NOT gates as shown.
4 Channel Multiplexer using Logic Gates
The symbol used in logic diagrams to identify a multiplexer is as follows.
Multiplexer Symbol
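As an illustration, the Boolean expression above can be modelled directly in Python (mux4 is an illustrative name; A to D are the channels, a and b the select lines):

def mux4(A, B, C, D, a, b):
    # Q = a'.b'.A + a.b'.B + a'.b.C + a.b.D
    na, nb = 1 - a, 1 - b
    return (na & nb & A) | (a & nb & B) | (na & b & C) | (a & b & D)

# Select input B: a = 1, b = 0, as in the text
print(mux4(A=0, B=1, C=0, D=0, a=1, b=0))   # -> 1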
Multiplexers are not limited to just switching a number of different input lines or
channels to one common single output. There are also types that can switch their
inputs to multiple outputs, with arrangements of 4-to-2, 8-to-3 or even 16-to-4
configurations; an example of a simple dual-channel 4-input multiplexer (4-to-2)
is given below:
Conversely, a demultiplexer (or demux) is a device taking a single input signal
and selecting one of many data-output-lines, which is connected to the single input.
A multiplexer is often used with a complementary demultiplexer on the receiving
end.
An electronic multiplexer can be considered as a multiple-input, single-output
switch, and a demultiplexer as a single-input, multiple-output switch.[3] The
schematic symbol for a multiplexer is an isosceles trapezoid with the longer
parallel side containing the input pins and the short parallel side containing the
output pin.[4] The schematic on the right shows a 2-to-1 multiplexer on the left and
an equivalent switch on the right. The select wire connects the desired input to
the output.
One use for multiplexers is cost savings by connecting a multiplexer and a
demultiplexer (or demux) together over a single channel (by connecting the
multiplexer's single output to the demultiplexer's single input). The image to the
right demonstrates this. In this case, the cost of implementing separate channels for
each data source is more expensive than the cost and inconvenience of providing
the multiplexing/demultiplexing functions.
At the receiving end of the data link a complementary demultiplexer is normally
required to break the single data stream back down into the original streams. In some
cases, the far end system may have more functionality than a simple demultiplexer
and so, while the demultiplexing still exists logically, it may never actually happen
physically. This would be typical where a multiplexer serves a number of IP
network users and then feeds directly into a router which immediately reads the
content of the entire link into its routing processor and then does the
demultiplexing in memory from where it will be converted directly into IP
sections.
Often, a multiplexer and demultiplexer are combined together into a single piece of
equipment, which is usually referred to simply as a "multiplexer". Both pieces of
equipment are needed at both ends of a transmission link because most
communications systems transmit in both directions.
In analog circuit design, a multiplexer is a special type of analog switch that
connects one signal selected from several inputs to a single output.
Digital multiplexers
In digital circuit design, the selector wires carry digital values. In the case of a
2-to-1 multiplexer, a logic value of 0 on the selector would connect the first input
to the output, while a logic value of 1 would connect the second input. In larger
multiplexers, the number of selector pins is equal to ⌈log2(n)⌉, where n is the
number of inputs.
For example, 9 to 16 inputs would require no fewer than 4 selector pins and 17 to
32 inputs would require no fewer than 5 selector pins. The binary value expressed
on these selector pins determines the selected input pin.
A 2-to-1 multiplexer has the Boolean equation Z = A·S' + B·S, where A and B are
the two inputs, S is the selector input, and Z is the output:
A 2-to-1 mux
Which can be expressed as a truth table:
 S  A  B | Z
 0  0  0 | 0
 0  0  1 | 0
 0  1  0 | 1
 0  1  1 | 1
 1  0  0 | 0
 1  0  1 | 1
 1  1  0 | 0
 1  1  1 | 1
This truth table shows that when S = 0 then Z = A, but when S = 1 then Z = B. A
straightforward realization of this 2-to-1 multiplexer would need 2 AND gates, an
OR gate, and a NOT gate.
Larger multiplexers are also common and, as stated above, require ⌈log2(n)⌉ selector
pins for n inputs. Other common sizes are 4-to-1, 8-to-1, and 16-to-1. Since digital
logic uses binary values, powers of 2 (4, 8, 16) are used to maximally control a
number of inputs for the given number of selector inputs.
• 4-to-1 mux
• 8-to-1 mux
• 16-to-1 mux
The Boolean equation for a 4-to-1 multiplexer, with inputs A to D and select lines
S1 and S0, is:
Z = A·S1'·S0' + B·S1'·S0 + C·S1·S0' + D·S1·S0
Two realizations for creating a 4-to-1 multiplexer are shown below.
These are two realizations of a 4-to-1 multiplexer (a sketch of the first follows):
• one realized from a decoder, AND gates, and an OR gate
• one realized from 3-state buffers and AND gates (the AND gates are acting as the decoder)
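The first realization in the list above (a decoder feeding AND gates and an OR gate) can be sketched behaviorally in Python; the function names are illustrative:

def decode2(s1, s0):
    # 2-to-4 line decoder: exactly one output line goes high
    return [(1 - s1) & (1 - s0), (1 - s1) & s0, s1 & (1 - s0), s1 & s0]

def mux4_from_decoder(inputs, s1, s0):
    # AND each input with its decoder line, then OR the results
    out = 0
    for line, x in zip(decode2(s1, s0), inputs):
        out |= line & x
    return out

print(mux4_from_decoder([0, 1, 0, 0], s1=0, s0=1))   # selects input 1 -> 1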
SESSION 8
DECODER
A decoder is a device which does the reverse operation of an encoder, undoing the
encoding so that the original information can be retrieved. The same method used
to encode is usually just reversed in order to decode. It is a combinational circuit
that converts binary information from n input lines to a maximum of 2^n unique
output lines.
In digital electronics, a decoder can take the form of a multiple-input, multiple-output
logic circuit that converts coded inputs into coded outputs, where the input
and output codes are different, e.g. n-to-2^n decoders, binary-coded decimal decoders. Enable
inputs must be on for the decoder to function, otherwise its outputs assume a single
"disabled" output code word. Decoding is necessary in applications such as data
multiplexing, 7 segment display and memory address decoding.
The simplest example of a decoder circuit would be an AND gate, because the output
of an AND gate is "high" (1) only when all its inputs are "high". Such an output is
called an "active-high output". If a NAND gate is connected in place of the AND
gate, the output will be "low" (0) only when all its inputs are "high". Such an output
is called an "active-low output". A slightly more complex decoder is the n-to-2^n
type binary decoder. These decoders are combinational circuits that convert binary
information from n coded inputs to a maximum of 2^n unique outputs. We say a
maximum of 2^n outputs because if the n-bit coded information has unused bit
combinations, the decoder may have fewer than 2^n outputs. We can have a 2-to-4
decoder, a 3-to-8 decoder or a 4-to-16 decoder. We can form a 3-to-8 decoder from
two 2-to-4 decoders (with enable signals).
Similarly, we can also form a 4-to-16 decoder by combining two 3-to-8 decoders.
In this type of circuit design, the enable inputs of both 3-to-8 decoders originate
from a 4th input, which acts as a selector between the two 3-to-8 decoders. This
allows the 4th input to enable either the top or bottom decoder, which produces
outputs of D(0) through D(7) for the first decoder, and D(8) through D(15) for the
second decoder.
A decoder that contains enable inputs is also known as a decoder-demultiplexer.
Thus, we have a 4-to-16 decoder produced by adding a 4th input shared among
both decoders, producing 16 outputs.
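A behavioral Python sketch of this construction (illustrative names; the 4th input a3 enables either the lower or the upper 3-to-8 decoder, exactly as described above):

def decoder3to8(a2, a1, a0, enable=1):
    # 3-to-8 decoder with enable: output i is high only when enabled
    # and i equals the binary value of a2 a1 a0
    value = 4 * a2 + 2 * a1 + a0
    return [enable & (1 if i == value else 0) for i in range(8)]

def decoder4to16(a3, a2, a1, a0):
    # Two 3-to-8 decoders: a3 selects which one is enabled
    low = decoder3to8(a2, a1, a0, enable=1 - a3)    # D0..D7
    high = decoder3to8(a2, a1, a0, enable=a3)       # D8..D15
    return low + high

print(decoder4to16(1, 0, 0, 1).index(1))   # input 1001 -> line D9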
BCD-to-Decimal Decoder
The BCD-to-decimal decoder converts each BCD code to its decimal equivalent.
The technique employed is very similar to the one used in developing the
3-line-to-8-line decoder. Again assuming active-high outputs are required, Table-3
lists the decoding functions for the BCD-to-decimal decoder.
 Decimal Digit | Binary Inputs
      0        | 0 0 0 0
      1        | 0 0 0 1
      2        | 0 0 1 0
      3        | 0 0 1 1
      4        | 0 1 0 0
      5        | 0 1 0 1
      6        | 0 1 1 0
      7        | 0 1 1 1
      8        | 1 0 0 0
      9        | 1 0 0 1
Logic function: for each digit, the decoding function is the AND of the four input bits, each complemented where the corresponding bit is 0.
Table-3: Decoding functions for the BCD-to-decimal decoder
Now, we can develop a decoder based on each logic function and implement the
SOP logic circuit. This is illustrated below in Figure-6.
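A behavioral Python sketch of the BCD-to-decimal decoder (active-high outputs, as assumed in Table-3; the function name is illustrative):

def bcd_to_decimal(b3, b2, b1, b0):
    # Exactly one of the ten output lines goes high for a valid
    # BCD input in the range 0000..1001
    value = 8 * b3 + 4 * b2 + 2 * b1 + b0
    return [1 if i == value else 0 for i in range(10)]

print(bcd_to_decimal(0, 1, 1, 1))   # line 7 is high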
BCD to Seven-Segment Decoder
You are likely familiar - very familiar - with the idea of a seven-segment indicator
for representing decimal numbers. Each segment of a seven-segment display is a
small light-emitting diode (LED) or liquid-crystal display (LCD) element, and - as is shown
below - a decimal number is indicated by lighting a particular combination of the
segments:
Binary-coded decimal (BCD) is a common way of encoding decimal numbers
with 4 binary bits, as shown below:
 Decimal digit | BCD code
      0        | 0000
      1        | 0001
      2        | 0010
      3        | 0011
      4        | 0100
      5        | 0101
      6        | 0110
      7        | 0111
      8        | 1000
      9        | 1001
Your job for this lab is to design and test a circuit to convert a 4-bit BCD signal
into a 7-bit control signal according to the following figure and table:
 b3 b2 b1 b0 | abcdefg
   0000      | 0000001
   0001      | 1001111
   0010      | 0010010
   0011      | 0000110
   0100      | 1001100
   0101      | 0100100
   0110      | 0100000
   0111      | 0001111
   1000      | 0000000
   1001      | 0000100
Notice that the truth-table corresponds to a seven-segment device whose display
elements are active low. That is, each element will be active when its
corresponding input is '0'.
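Since the conversion is a fixed mapping, it can be captured as a simple lookup; in this Python sketch the active-low segment strings are copied from the table above:

SEGMENTS = {
    0: "0000001", 1: "1001111", 2: "0010010", 3: "0000110",
    4: "1001100", 5: "0100100", 6: "0100000", 7: "0001111",
    8: "0000000", 9: "0000100",
}

def bcd_to_7seg(b3, b2, b1, b0):
    # Return the active-low a..g pattern ('0' lights a segment)
    return SEGMENTS[8 * b3 + 4 * b2 + 2 * b1 + b0]

print(bcd_to_7seg(0, 0, 1, 1))   # '0000110' -> displays '3'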
Background Reading
Before beginning this laboratory, you should read the brief overview of BCD to
seven-segment convertors that is found in Section 6.4 of the text Fundamentals of
Digital Logic with VHDL Design. You will also benefit greatly from reviewing the
digital-circuit synthesis techniques that you have been studying in Chapter 4. In
particular, review the methods for minimization of sum-of-products forms found in
Section 4.2.
A liquid crystal display (LCD) is a flat panel display, electronic visual display, or
video display that uses the light modulating properties of liquid crystals. Liquid
crystals do not emit light directly.
LCDs are available to display arbitrary images (as in a general-purpose computer
display) or fixed images which can be displayed or hidden, such as preset words,
digits, and 7-segment displays as in a digital clock. They use the same basic
technology, except that arbitrary images are made up of a large number of small
pixels, while other displays have larger elements.
LCDs are used in a wide range of applications including computer monitors,
televisions, instrument panels, aircraft cockpit displays, and signage. They are
common in consumer devices such as video players, gaming devices, clocks,
watches, calculators, and telephones, and have replaced cathode ray tube (CRT)
displays in most applications. They are available in a wider range of screen sizes
than CRT and plasma displays, and since they do not use phosphors, they do not
suffer image burn-in. LCDs are, however, susceptible to image persistence.[1]
The LCD screen is more energy efficient and can be disposed of more safely than a
CRT. Its low electrical power consumption enables it to be used in battery-powered
electronic equipment. It is an electronically modulated optical device
made up of any number of segments filled with liquid crystals and arrayed in front
of a light source (backlight) or reflector to produce images in color or
monochrome. Liquid crystals were first discovered in 1888.[2] By 2008, worldwide
sales of televisions with LCD screens exceeded annual sales of CRT units; the
CRT became obsolete for most purposes.
Each pixel of an LCD typically consists of a layer of molecules aligned between
two transparent electrodes, and two polarizing filters, the axes of transmission of
which are (in most of the cases) perpendicular to each other. With actual liquid
crystal between the polarizing filters, light passing through the first filter would be
blocked by the second (crossed) polarizer.
The surface of the electrodes that are in contact with the liquid crystal material are
treated so as to align the liquid crystal molecules in a particular direction. This
treatment typically consists of a thin polymer layer that is unidirectionally rubbed
using, for example, a cloth. The direction of the liquid crystal alignment is then
defined by the direction of rubbing. Electrodes are made of the transparent
conductor indium tin oxide (ITO). The liquid crystal display is intrinsically a
"passive" device: it is a simple light valve. The managing and control of the data to
be displayed is performed by one or more circuits commonly denoted as LCD
drivers.
Before an electric field is applied, the orientation of the liquid crystal molecules is
determined by the alignment at the surfaces of electrodes. In a twisted nematic
device (still the most common liquid crystal device), the surface alignment
directions at the two electrodes are perpendicular to each other, and so the
molecules arrange themselves in a helical structure, or twist. This induces the
rotation of the polarization of the incident light, and the device appears gray. If the
applied voltage is large enough, the liquid crystal molecules in the center of the
layer are almost completely untwisted and the polarization of the incident light is
not rotated as it passes through the liquid crystal layer. This light will then be
mainly polarized perpendicular to the second filter, and thus be blocked and the
pixel will appear black. By controlling the voltage applied across the liquid crystal
layer in each pixel, light can be allowed to pass through in varying amounts thus
constituting different levels of gray.
LCD with top polarizer removed from device and placed on top, such that the top
and bottom polarizers are parallel.
The optical effect of a twisted nematic device in the voltage-on state is far less
dependent on variations in the device thickness than that in the voltage-off state.
Because of this, these devices are usually operated between crossed polarizers such
that they appear bright with no voltage (the eye is much more sensitive to
variations in the dark state than the bright state). These devices can also be
operated between parallel polarizers, in which case the bright and dark states are
reversed. The voltage-off dark state in this configuration appears blotchy, however,
because of small variations of thickness across the device.
Both the liquid crystal material and the alignment layer material contain ionic
compounds. If an electric field of one particular polarity is applied for a long
period of time, this ionic material is attracted to the surfaces and degrades the
device performance. This is avoided either by applying an alternating current or by
reversing the polarity of the electric field as the device is addressed (the response
of the liquid crystal layer is identical, regardless of the polarity of the applied
field).
Displays for a small number of individual digits and/or fixed symbols (as in digital
watches and pocket calculators) can be implemented with independent electrodes
for each segment. In contrast full alphanumeric and/or variable graphics displays
are usually implemented with pixels arranged as a matrix consisting of electrically
connected rows on one side of the LC layer and columns on the other side, which
makes it possible to address each pixel at the intersections. The general method of
matrix addressing consists of sequentially addressing one side of the matrix, for
example by selecting the rows one-by-one and applying the picture information on
the other side at the columns row-by-row. For details on the various matrix
addressing schemes see Passive-matrix and active-matrix addressed LCDs.
An LED display is a flat panel display, which uses light-emitting diodes as a video
display. An LED panel is a small display, or a component of a larger display.
They are typically used outdoors in store signs and billboards, and in recent years
have also become commonly used in destination signs on public transport vehicles
or even as part of transparent glass areas. LED panels are sometimes used as a form
of lighting, for the purpose of general illumination, task lighting, or even stage
lighting, rather than display.
There are two types of LED panels: conventional (using discrete LEDs) and
surface-mounted device (SMD) panels. Most outdoor screens and some indoor
screens are built around discrete LEDs, also known as individually mounted LEDs.
A cluster of red, green, and blue diodes is driven together to form a full-color
pixel, usually square in shape. These pixels are spaced evenly apart and are
measured from center to center for absolute pixel resolution. The largest LED
display in the world is over 500 meters long and is located in Suzhou, China,
covering the Yuanrong Times Square. The largest LED television in the world is
the Center Hung Video Display at Cowboys Stadium, which is 160 ft × 72 ft (49 m
× 22 m), 11,520 square feet (1,070 m2).
Most indoor screens on the market are built using SMD technology— a trend that
is now extending to the outdoor market. An SMD pixel consists of red, green, and
blue diodes mounted in a single package, which is then mounted on the driver PC
board. The individual diodes are smaller than a pinhead and are set very close
together. The difference is that the maximum viewing distance is reduced by 25%
from the discrete diode screen with the same resolution.
Indoor use generally requires a screen that is based on SMD technology and has a
minimum brightness of 600 candelas per square meter (cd/m², sometimes
informally called nits). This will usually be more than sufficient for corporate and
retail applications, but under high ambient-brightness conditions, higher brightness
may be required for visibility. Fashion and auto shows are two examples of high-brightness stage lighting that may require higher LED brightness. Conversely,
when a screen may appear in a shot on a television studio set, the requirement will
often be for lower brightness levels with lower color temperatures; common
displays have a white point of 6500–9000 K, which is much bluer than the
common lighting on a television production set.
For outdoor use, at least 2,000 cd/m² is required for most situations, whereas
higher-brightness types of up to 5,000 cd/m² cope even better with direct sunlight
on the screen. (The brightness of LED panels can be reduced from the designed
maximum, if required.)
Suitable locations for large display panels are identified by factors such as line of
sight, local authority planning requirements (if the installation is to become semi-permanent), vehicular access (trucks carrying the screen, truck-mounted screens, or
cranes), cable runs for power and video (accounting for both distance and health
and safety requirements), power, suitability of the ground for the location of the
screen (if there are no pipes, shallow drains, caves, or tunnels that may not be able
to support heavy loads), and overhead obstructions.
Flat panel LED television display
The first true all-LED flat panel television screen was possibly developed,
demonstrated and documented by James P. Mitchell in 1977.[1] The modular,
scalable display was initially designed with hundreds of MV50 LEDs and a newly
available transistor-transistor logic memory addressing circuit from National
Semiconductor.[2] The ¼ in thin flat panel prototype and the scientific paper were
displayed at the 29th ISEF expo in Washington D.C. in May 1978.[3] It received
awards by NASA,[4] and General Motors Corporation.[5] A liquid crystal display
(LCD) matrix design was also cited in the LED paper as an alternative x-y scan
technology and as a future alternate television display method. The replacement of
the 70-plus-year-old high-voltage analog system (cathode-ray tube technology) with a
digital x-y scan system has been a significant achievement. Displacement of the
electromagnetic scan systems included the removal of inductive deflection,
electron beam and color convergence circuits. The digital x-y scan system has
helped the modern television to “collapse” into its current thin form factor.
The 1977 model was monochromatic by design. Efficient blue LEDs did not arrive
for another decade. Large displays now use high-brightness diodes to generate a
wide spectrum of colors. It took three decades and organic light-emitting diodes for
Sony to introduce an LED TV: the Sony XEL-1 OLED screen which was marketed
in 2009.
The largest 3D LED television display
The 2011 UEFA Champions League Final match between Manchester United and
Barcelona was broadcast live in 3D format in Gothenburg (Sweden), on an EKTA
screen. It had a refresh rate of 100 Hz, a diagonal of 7.11 m (23 ft 3.92 in) and a
display area of 6.192×3.483 m, and was listed in the Guinness Book of Records as
the largest LED 3D TV.[6][7]
SESSION 9
ENCODER
An encoder is a device, circuit, transducer, software program, algorithm or person
that converts information from one format or code to another, for the purposes of
standardization, speed, secrecy, security, or saving space by shrinking size.
DECIMAL to BCD ENCODER
Encoders are the opposite of decoders. They are used to generate a coded output
from a single active numeric input. To illustrate this in a simple manner, let’s take
a look at the simple decimal-to-BCD encoder circuit shown below. In this circuit,
normally all lines are held high by the pull-up resistors connected to +5 V. To
generate a BCD output that is equivalent to a single selected decimal input, the
switch corresponding to that decimal is closed. (The switch acts as an active-low
input.) The truth table in Fig. 12.39 explains the rest. Figure 12.40 shows a
74LS147 decimal-to-BCD (10-line-to-4-line) priority encoder IC. The 74LS147
provides the same basic function as the circuit shown in Fig. 12.39, but it has
active-low outputs. This means that instead of getting an LLHH output when “3”
is selected, as in the previous encoder, you get HHLL. The two outputs represent
the same thing (“3”); one is expressed in positive true logic, and the other (the
74LS147) is expressed in negative true logic. If you do not like negative true
logic, you can slap inverters on the outputs of the 74LS147 to get positive true
logic. The choice to use positive or negative true logic really depends on what you
are planning to drive. For example, negative true logic is useful when the device
that you wish to drive uses active-low inputs.
74LS147 decimal-to-4-bit BCD Priority Encoder IC
Another important difference between the two encoders is the term priority that is
used with the 74LS147 and not used with the encoder in Fig. 12.39. The term
priority is applied to the 74LS147 because this encoder is designed so that if two
or more inputs are selected at the same time, it will only select the larger-order
digit. For example, if 3, 5, and 8 are selected at the same time, only the 8 (negative
true BCD LHHH or 0111) will be output. The truth table in Fig. 12.40
demonstrates this; look at the "don't care" or "X" entries. With the non-priority
encoder, if two or more inputs are applied at the same time, the output will be
unpredictable. The circuit shown in Fig. 12.41 provides a simple illustration of
how an encoder and a decoder can be used together to drive an LED display via a
0-to-9 keypad. The 74LS147 encodes a keypad’s input into BCD (negative logic).
A set of inverters then converts the negative true BCD into positive true BCD. The
transformed BCD is then fed into a 7447 seven-segment LED display
decoder/driver IC. Figure 12.42 shows a 74148 octal-to-binary priority encoder IC.
It is used to transform a specified single octal input into a binary 3-bit output code.
As with the 74LS147, the 74148 comes with a priority feature, meaning, again,
that if two or more inputs are selected at the same time, only the higher-order
number is selected.
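A behavioral Python sketch of the priority rule (only the behaviour is modelled, not the 74LS147 itself, and the BCD is returned active high for readability):

def decimal_to_bcd_priority(pressed):
    # Priority encoding: of all active decimal inputs, only the
    # largest digit is encoded; no input at all encodes as 0
    digit = max(pressed, default=0)
    return [(digit >> i) & 1 for i in (3, 2, 1, 0)]

print(decimal_to_bcd_priority({3, 5, 8}))   # [1, 0, 0, 0] (only the 8)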
Keyboard Encoders
We use the I-PAC2, I-PAC4 and J-PAC Keyboard Encoders in our products, as
well as many custom applications, and find them to be an excellent device.
All 3 models simply plug into the PS/2 Port of a PC or the USB Port of a PC or
Mac and are automatically detected as a keyboard. All you do is wire up the
microswitches from your pushbuttons and joysticks and they are detected by your
computer as keystrokes.
The great thing about these encoders is that they have been purpose-built for
Home Arcading and as such they do not suffer from key ghosting/blocking, unlike
many DIY keyboard hacks that people have attempted. They are also fully
programmable, so although they come ready to run with the default key inputs
used by the popular MAME emulator, you can reprogram them to suit whatever
program you wish to use.
The encoders come with a PS/2 Keyboard Extension cable and a USB conversion
cable is available at additional cost. We also supply a CD containing the I-PAC
programming utility along with comprehensive instructions. We strongly suggest
you visit the Ultimarc website to get the full impact of what these encoders have
to offer.
I-PAC2 Keyboard Encoder
PRICE: $55.00
This encoder has 32 inputs available, which is enough to connect 2 joysticks with
8 buttons each, 2 player start buttons, 2 coin inputs and now has an additional 4
inputs for extra buttons or whatever else you may wish to include. This is more
than enough to handle most DIY arcade cabinet projects.
They come with a PS/2 keyboard extension cable.
**NEW MODEL** - Now with 32 inputs.
Options:
• Add a USB converter for an extra $2.00AUD.
I-PAC4 Keyboard Encoder
Price: $82.50AUD
If you are creating a BIG 4 player cabinet, then this is the encoder for you. With
56 inputs you can connect up to 4 joysticks with 8 buttons each, along with 4
player start buttons and 4 coin inputs. That would make a very serious looking
control panel!
They come with a PS/2 keyboard extension cable.
Options:
• Add a USB converter for an extra $2.00AUD.
A digital comparator or magnitude comparator is a hardware electronic device
that takes two numbers as input in binary form and determines whether one
number is greater than, less than or equal to the other number. Comparators are
used in central processing units (CPUs) and microcontrollers (MCUs). Examples
of digital comparators include the CMOS 4063 and 4585 and the TTL 7485 and
74682-'89.
The analog equivalent of the digital comparator is the voltage comparator. Many
microcontrollers have analog comparators on some of their inputs that can be read
or trigger an interrupt.
Comparator truth tables
The operation of a single-bit digital comparator can be expressed as a truth table:
 A  B | A>B  A=B  A<B
 0  0 |  0    1    0
 0  1 |  0    0    1
 1  0 |  1    0    0
 1  1 |  0    1    0
The operation of a two-bit digital comparator can be expressed as a truth table:
 A1 A0 B1 B0 | A<B  A=B  A>B
 0  0  0  0  |  0    1    0
 0  0  0  1  |  1    0    0
 0  0  1  0  |  1    0    0
 0  0  1  1  |  1    0    0
 0  1  0  0  |  0    0    1
 0  1  0  1  |  0    1    0
 0  1  1  0  |  1    0    0
 0  1  1  1  |  1    0    0
 1  0  0  0  |  0    0    1
 1  0  0  1  |  0    0    1
 1  0  1  0  |  0    1    0
 1  0  1  1  |  1    0    0
 1  1  0  0  |  0    0    1
 1  1  0  1  |  0    0    1
 1  1  1  0  |  0    0    1
 1  1  1  1  |  0    1    0
Implementation
Consider two 4-bit binary numbers A and B such that A = A3A2A1A0 and
B = B3B2B1B0. Here each subscript represents one of the digits in the numbers.
Equality
The binary numbers A and B will be equal if all the pairs of significant digits of
both numbers are equal, i.e., A3 = B3, A2 = B2, A1 = B1 and A0 = B0.
Since the numbers are binary, the digits are either 0 or 1, and the Boolean function
for equality of any two digits Ai and Bi can be expressed as
xi = Ai·Bi + Ai'·Bi'.
xi is 1 only if Ai and Bi are equal.
For the equality of A and B, all xi variables (for i = 0, 1, 2, 3) must be 1.
So the equality condition of A and B can be implemented using the AND operation as
(A=B) = x3·x2·x1·x0.
The binary variable (A=B) is 1 only if all pairs of digits of the two numbers are
equal.
Inequality
In order to manually determine the greater of two binary numbers, we inspect the
relative magnitudes of pairs of significant digits, starting from the most significant
bit and gradually proceeding towards lower significant bits until an inequality is
found. When an inequality is found, if the corresponding bit of A is 1 and that of
B is 0 then we conclude that A > B.
This sequential comparison can be expressed logically as:
(A>B) = A3·B3' + x3·A2·B2' + x3·x2·A1·B1' + x3·x2·x1·A0·B0'
(A<B) = A3'·B3 + x3·A2'·B2 + x3·x2·A1'·B1 + x3·x2·x1·A0'·B0
(A>B) and (A<B) are output binary variables, which are equal to 1 when A > B or
A < B respectively.
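These equations translate directly into a behavioral Python sketch of a 4-bit comparator (bits are given most-significant first; x holds the XNOR of each bit pair):

def compare4(a, b):
    # x[i] = 1 only when the i-th pair of bits (from the MSB) is equal
    x = [1 - (p ^ q) for p, q in zip(a, b)]
    eq = x[0] & x[1] & x[2] & x[3]
    gt = (a[0] & (1 - b[0])
          | x[0] & a[1] & (1 - b[1])
          | x[0] & x[1] & a[2] & (1 - b[2])
          | x[0] & x[1] & x[2] & a[3] & (1 - b[3]))
    lt = ((1 - a[0]) & b[0]
          | x[0] & (1 - a[1]) & b[1]
          | x[0] & x[1] & (1 - a[2]) & b[2]
          | x[0] & x[1] & x[2] & (1 - a[3]) & b[3])
    return gt, eq, lt   # (A>B, A=B, A<B)

print(compare4((1, 0, 1, 1), (1, 0, 0, 1)))   # A=1011, B=1001 -> (1, 0, 0)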
SESSION 10
FLIP FLOP AND LATCHES
In electronics, a flip-flop or latch is a circuit that has two stable states and can be
used to store state information; a flip-flop is a bistable multivibrator. The circuit
can be made to change state by signals applied to one or more control inputs and
will have one or two outputs. It is the basic storage element in sequential logic.
Flip-flops and latches are a fundamental building block of digital electronics
systems used in computers, communications, and many other types of systems.
Flip-flops and latches are used as data storage elements. Such data storage can be
used for storage of state, and such a circuit is described as sequential logic. When
used in a finite-state machine, the output and next state depend not only on its
current input, but also on its current state (and hence, previous inputs). It can also
be used for counting of pulses, and for synchronizing variably-timed input signals
to some reference timing signal.
The first electronic flip-flop was invented in 1918 by William Eccles and F. W.
Jordan.[3][4] It was initially called the Eccles–Jordan trigger circuit and consisted
of two active elements (vacuum tubes).[5] Such circuits and their transistorized
versions were common in computers even after the introduction of integrated
circuits, though flip-flops made from logic gates are also common now.[6][7]
Early flip-flops were known variously as trigger circuits or multivibrators. A
multivibrator is a two-state circuit; they come in several varieties, based on
whether each state is stable or not: an astable multivibrator is not stable in either
state, so it acts as a relaxation oscillator; a monostable multivibrator makes a pulse
while in the unstable state, then returns to the stable state, and is known as a one-shot; a bistable multivibrator has two stable states, and this is the one usually
known as a flip-flop. However, this terminology has been somewhat variable,
historically. For example:
• 1942 – multivibrator implies astable: "The multivibrator circuit (Fig. 7-6) is somewhat similar to the flip-flop circuit, but the coupling from the anode of one valve to the grid of the other is by a condenser only, so that the coupling is not maintained in the steady state."[8]
• 1942 – multivibrator as a particular flip-flop circuit: "Such circuits were known as 'trigger' or 'flip-flop' circuits and were of very great importance. The earliest and best known of these circuits was the multivibrator."[9]
• 1943 – flip-flop as one-shot pulse generator: "It should be noted that an essential difference between the two-valve flip-flop and the multivibrator is that the flip-flop has one of the valves biased to cutoff."[10]
• 1949 – monostable as flip-flop: "Monostable multivibrators have also been called 'flip-flops'."[11]
• 1949 – monostable as flip-flop: "... a flip-flop is a monostable multivibrator and the ordinary multivibrator is an astable multivibrator."[12]
According to P. L. Lindley, a JPL engineer, the flip-flop types discussed below
(RS, D, T, JK) were first discussed in a 1954 UCLA course on computer design
by Montgomery Phister, and then appeared in his book Logical Design of Digital
Computers.[13][14] Lindley was at the time working at Hughes Aircraft under Dr.
Eldred Nelson, who had coined the term JK for a flip-flop which changed states
when both inputs were on. The other names were coined by Phister. They differ
slightly from some of the definitions given below. Lindley explains that he heard
the story of the JK flip-flop from Dr. Eldred Nelson, who is responsible for
coining the term while working at Hughes Aircraft. Flip-flops in use at Hughes at
the time were all of the type that came to be known as J-K. In designing a logical
system, Dr. Nelson assigned letters to flip-flop inputs as follows: #1: A & B, #2: C
& D, #3: E & F, #4: G & H, #5: J & K. Nelson used the notations "j-input" and "k-input" in a patent application filed in 1953.[15]
Implementation
A traditional latch circuit based on bipolar junction transistors
Flip-flops can be either simple (transparent or asynchronous) or clocked
(synchronous); the transparent ones are commonly called latches.[1] The word
latch is mainly used for storage elements, while clocked devices are described as
flip-flops.
Simple flip-flops can be built around a pair of cross-coupled inverting elements:
vacuum tubes, bipolar transistors, field effect transistors, inverters, and inverting
logic gates have all been used in practical circuits. Clocked devices are specially
designed for synchronous systems; such devices ignore their inputs except at the
transition of a dedicated clock signal (known as clocking, pulsing, or strobing).
Clocking causes the flip-flop to either change or retain its output signal based
upon the values of the input signals at the transition. Some flip-flops change
output on the rising edge of the clock, others on the falling edge.
Since the elementary amplifying stages are inverting, two stages can be connected
in succession (as a cascade) to form the needed non-inverting amplifier. In this
configuration, each amplifier may be considered as an active inverting feedback
network for the other inverting amplifier. Thus the two stages are connected in a
non-inverting loop although the circuit diagram is usually drawn as a symmetric
cross-coupled pair (both the drawings are initially introduced in the Eccles–Jordan
patent).
Flip-flop types
Flip-flops can be divided into common types: the SR ("set-reset"), D ("data" or
"delay"[16]), T ("toggle"), and JK types are the most common. The behavior of a
particular type can be described by what is termed the characteristic equation,
which derives the "next" (i.e., after the next clock pulse) output, Qnext, in terms of
the input signal(s) and/or the current output, Q.
Simple set-reset latches
SR NOR latch
An SR latch, constructed from a pair of cross-coupled NOR gates (an animated
picture). Red and black mean logical '1' and '0', respectively.
When using static gates as building blocks, the most fundamental latch is the
simple SR latch, where S and R stand for set and reset. It can be constructed from
a pair of cross-coupled NOR logic gates. The stored bit is present on the output
marked Q.
While the S and R inputs are both low, feedback maintains the Q and Q' outputs in
a constant state, with Q' the complement of Q. If S (Set) is pulsed high while R
(Reset) is held low, then the Q output is forced high, and stays high when S returns
to low; similarly, if R is pulsed high while S is held low, then the Q output is
forced low, and stays low when R returns to low.
SR latch operation[17]
Characteristic table:
 S  R | Qnext | Action
 0  0 |  Q    | hold state
 0  1 |  0    | reset
 1  0 |  1    | set
 1  1 |  X    | not allowed
Excitation table:
 Q  Qnext | S  R
 0  0     | 0  X
 0  1     | 1  0
 1  0     | 0  1
 1  1     | X  0
The R = S = 1 combination is called a restricted combination or a forbidden
state because, as both NOR gates then output zeros, it breaks the logical equation
Q = not Q'. The combination is also inappropriate in circuits where both inputs
may go low simultaneously (i.e. a transition from restricted to keep). The output
would lock at either 1 or 0 depending on the propagation time relations between
the gates (a race condition). In certain implementations, it could also lead to
longer ringings (damped oscillations) before the output settles, and thereby result
in undetermined values (errors) in high-frequency digital circuits. Although this
condition is usually avoided, it can be useful in some applications.
To overcome the restricted combination, one can add gates to the inputs that
would convert (S,R) = (1,1) to one of the non-restricted combinations. That can be:
• Q = 1 (1,0) – referred to as an S-latch
• Q = 0 (0,1) – referred to as an R-latch
• Keep state (0,0) – referred to as an E-latch
Alternatively, the restricted combination can be made to toggle the output. The
result is the JK latch.
Characteristic: Q+ = R'Q + R'S or Q+ = R'Q + S.[18]
SR NAND latch
An SR latch
This is an alternate model of the simple SR latch which is built with NAND (not
AND) logic gates. Set and reset now become active-low signals, denoted S' and R'
respectively. Otherwise, operation is identical to that of the SR latch. Historically,
SR NAND latches have been predominant despite the notational inconvenience of
active-low inputs.
SR latch operation
 S'  R' | Action
 0   0  | Restricted combination
 0   1  | Q = 1
 1   0  | Q = 0
 1   1  | No change
Symbol for an SR NAND latch
JK latch
The JK latch is much less used than the JK flip-flop. The JK latch follows the
following state table:
JK latch truth table
 J  K | Qnext | Comment
 0  0 |  Q    | No change
 0  1 |  0    | Reset
 1  0 |  1    | Set
 1  1 |  Q'   | Toggle
Hence, the JK latch is an SR latch that is made to toggle its output when passed
the restricted combination of 11. Unlike the JK flip-flop, the 11 input combination
for the SR latch is not useful because there is no clock that directs toggling.[19]
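A behavioral Python sketch of the simple SR latch, following the characteristic table above (the restricted S = R = 1 combination is rejected rather than modelled):

def sr_latch(s, r, q):
    # Returns the next Q for the given S, R and stored Q
    if s and r:
        raise ValueError("S = R = 1 is the restricted combination")
    if s:
        return 1       # set
    if r:
        return 0       # reset
    return q           # hold

q = 0
for s, r in [(1, 0), (0, 0), (0, 1), (0, 0)]:
    q = sr_latch(s, r, q)
    print(f"S={s} R={r} -> Q={q}")   # 1, 1, 0, 0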
Gated latches and conditional transparency
Latches are designed to be transparent. That is, input signal changes cause
immediate changes in output; when several transparent latches follow each other,
using the same clock signal, signals can propagate through all of them at once.
Alternatively, additional logic can be added to a simple transparent latch to make
it non-transparent or opaque when another input (an "enable" input) is not
asserted. By following a transparent-high latch with a transparent-low (or
opaque-high) latch, a master–slave flip-flop is implemented.
Gated SR latch
A gated SR latch circuit diagram constructed from NOR gates.
A synchronous SR latch (sometimes clocked SR flip-flop) can be made by adding a
second level of NAND gates to the inverted SR latch (or a second level of AND
gates to the direct SR latch). The extra gates further invert the inputs so the simple
SR latch becomes a gated SR latch (and a simple SR latch would transform into a
gated SR latch with inverted enable).
With E high (enable true), the signals can pass through the input gates to the
encapsulated latch; all signal combinations except for (0,0) = hold then
immediately reproduce on the (Q,Q) output, i.e. the latch is transparent.
With E low (enable false) the latch is closed (opaque) and remains in the state it
was left the last time E was high.
The enable input is sometimes a clock signal, but more often a read or write
strobe.
Gated SR latch operation
 E/C | Action
 0   | No action (keep state)
 1   | The same as the non-clocked SR latch
Symbol for a gated SR latch
Gated D latch
A D-type transparent latch based on an SR NAND latch
A gated D latch based on an SR NOR latch
This latch exploits the fact that, in the two active input combinations (01 and 10)
of a gated SR latch, R is the complement of S. The input NAND stage converts the
two D input states (0 and 1) to these two input combinations for the next SR latch
by inverting the data input signal. The low state of the enable signal produces the
inactive "11" combination. Thus a gated D-latch may be considered as a one-input
synchronous SR latch. This configuration prevents application of the restricted
input combination. It is also known as transparent latch, data latch, or simply
gated latch. It has a data input and an enable signal (sometimes named clock, or
control). The word transparent comes from the fact that, when the enable input is
on, the signal propagates directly through the circuit, from the input D to the
output Q.
Transparent latches are typically used as I/O ports or in asynchronous systems, or
in synchronous two-phase systems (synchronous systems that use a two-phase
clock), where two latches operating on different clock phases prevent data
transparency as in a master–slave flip-flop.
Latches are available as integrated circuits, usually with multiple latches per chip.
For example, 74HC75 is a quadruple transparent latch in the 7400 series.
Gated D latch truth table
 E/C  D | Q      Q'     | Comment
 0    X | Qprev  Q'prev | No change
 1    0 | 0      1      | Reset
 1    1 | 1      0      | Set
Symbol for a gated D latch
The truth table shows that when the enable/clock input is 0, the D input has no
effect on the output. When E/C is high, the output equals D.
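A behavioral Python sketch of the gated D latch (transparent while the enable is high, opaque while it is low):

def gated_d_latch(enable, d, q):
    # While enabled the output follows D; otherwise the value is held
    return d if enable else q

q = 0
for e, d in [(1, 1), (0, 0), (0, 1), (1, 0)]:
    q = gated_d_latch(e, d, q)
    print(f"E={e} D={d} -> Q={q}")   # 1, 1, 1, 0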
Earle latch
Earle latch uses complementary enable inputs: enable active low (E_L) and enable
active high (E_H)
The classic gated latch designs have some undesirable characteristics. [20] They
require double-rail logic or an inverter. The input-to-output propagation may take
up to three gate delays. The input-to-output propagation is not constant – some
outputs take two gate delays while others take three.
Designers looked for alternatives.[21] A successful alternative is the Earle latch.[22]
It requires only a single data input, and its output takes a constant two gate delays.
In addition, the two gate levels of the Earle latch can be merged with the last two
gate levels of the circuits driving the latch.[clarification needed] Merging the latch
function can implement the latch with no additional gate delays.[20]
The Earle latch is hazard free.[23] If the middle NAND gate is omitted, then one
gets the polarity hold latch, which is commonly used because it demands less
logic.[23][24] However, it is susceptible to logic hazard. Intentionally skewing the
clock signal can avoid the hazard.[24]
D flip-flop
D flip-flop symbol
The D flip-flop is widely used. It is also known as a data or delay flip-flop.
The D flip-flop captures the value of the D-input at a definite portion of the clock
cycle (such as the rising edge of the clock). That captured value becomes the Q
output. At other times, the output Q does not change.[25][26] The D flip-flop can be
viewed as a memory cell, a zero-order hold, or a delay line.
Truth table:
 Clock       | D | Qnext
 Rising edge | 0 | 0
 Rising edge | 1 | 1
 Non-rising  | X | Q
('X' denotes a don't care condition, meaning the signal is irrelevant)
Most D-type flip-flops in ICs have the capability to be forced to the set or reset
state (which ignores the D and clock inputs), much like an SR flip-flop. Usually,
the illegal S = R = 1 condition is resolved in D-type flip-flops. By setting S = R =
0, the flip-flop can be used as described above. Here is the truth table for the
others S and R possible configurations:
Inputs         | Outputs
 S  R  D  >   | Q  Q'
 0  1  X  X   | 0  1
 1  0  X  X   | 1  0
 1  1  X  X   | 1  1
4-bit serial-in, parallel-out (SIPO) shift register
These flip-flops are very useful, as they form the basis for shift registers, which
are an essential part of many electronic devices. The advantage of the D flip-flop
over the D-type "transparent latch" is that the signal on the D input pin is captured
the moment the flip-flop is clocked, and subsequent changes on the D input will
be ignored until the next clock event. An exception is that some flip-flops have a
"reset" signal input, which will reset Q (to zero), and may be either asynchronous
or synchronous with the clock.
The above circuit shifts the contents of the register to the right, one bit position on
each active transition of the clock. The input X is shifted into the leftmost bit
position.
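A behavioral Python sketch of this 4-bit SIPO register (X enters the leftmost bit position on each clock, and everything else moves one place right):

def shift_right(register, x):
    return [x] + register[:-1]

reg = [0, 0, 0, 0]
for x in [1, 0, 1, 1]:       # serial input, one bit per clock
    reg = shift_right(reg, x)
    print(reg)
# final state [1, 1, 0, 1] is available on the parallel outputs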
Classical positive-edge-triggered D flip-flop
A positive-edge-triggered D flip-flop
This clever circuit[27] consists of two stages implemented by SR NAND latches.
The input stage (the two latches on the left) processes the clock and data signals to
ensure correct input signals for the output stage (the single latch on the right). If
the clock is low, both the output signals of the input stage are high regardless of
the data input; the output latch is unaffected and it stores the previous state. When
the clock signal changes from low to high, only one of the output voltages
(depending on the data signal) goes low and sets/resets the output latch: if D = 0,
the lower output becomes low; if D = 1, the upper output becomes low. If the
clock signal continues staying high, the outputs keep their states regardless of the
data input and force the output latch to stay in the corresponding state as the input
logical zero remains active while the clock is high. Hence the role of the output
latch is to store the data only while the clock is low.
The circuit is closely related to the gated D latch as both the circuits convert the
two D input states (0 and 1) to two input combinations (01 and 10) for the output
SR latch by inverting the data input signal (both the circuits split the single D
signal in two complementary S and R signals). The difference is that in the gated
D latch simple NAND logical gates are used while in the positive-edge-triggered
D flip-flop SR NAND latches are used for this purpose. The role of these latches
is to "lock" the active output producing low voltage (a logical zero); thus the
positive-edge-triggered D flip-flop can be thought of as a gated D latch with
latched input gates.
SESSION 11
Master–slave edge-triggered D flip-flop
A master–slave D flip-flop is created by connecting two gated D latches in series,
and inverting the enable input to one of them. It is called master–slave because the
second latch in the series only changes in response to a change in the first (master)
latch.
A master–slave D flip-flop. It responds on the negative edge of the enable input
(usually a clock)
An implementation of a master–slave D flip-flop that is triggered on the positive
edge of the clock
For a positive-edge triggered master–slave D flip-flop, when the clock signal is
low (logical 0) the "enable" seen by the first or "master" D latch (the inverted
clock signal) is high (logical 1). This allows the "master" latch to store the input
value when the clock signal transitions from low to high. As the clock signal goes
high (0 to 1) the inverted "enable" of the first latch goes low (1 to 0) and the value
seen at the input to the master latch is "locked". Nearly simultaneously, the twice
inverted "enable" of the second or "slave" D latch transitions from low to high (0
to 1) with the clock signal. This allows the signal captured at the rising edge of the
clock by the now "locked" master latch to pass through the "slave" latch. When
the clock signal returns to low (1 to 0), the output of the "slave" latch is "locked",
and the value seen at the last rising edge of the clock is held while the "master"
latch begins to accept new values in preparation for the next rising clock edge.
By removing the leftmost inverter in the circuit at side, a D-type flip-flop that
strobes on the falling edge of a clock signal can be obtained. This has a truth table
like this:
 D  Q  >       | Qnext
 0  X  Falling | 0
 1  X  Falling | 1
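A behavioral Python sketch of the positive-edge master–slave arrangement described above (the master latch is transparent while the clock is low; the slave copies the locked value when the clock goes high):

class MasterSlaveDFF:
    def __init__(self):
        self.master = 0
        self.q = 0

    def tick(self, clk, d):
        if clk == 0:
            self.master = d        # master transparent while clock is low
        else:
            self.q = self.master   # slave passes the locked value
        return self.q

ff = MasterSlaveDFF()
for clk, d in [(0, 1), (1, 0), (0, 0), (1, 1)]:
    print(ff.tick(clk, d))   # 0, 1, 1, 0; Q changes only on rising edges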
A CMOS IC implementation of a "true single-phase edge-triggered flip-flop with
reset"
Edge-triggered dynamic D storage element
An efficient functional alternative to a D flip-flop can be made with dynamic
circuits as long as it is clocked often enough; while not a true flip-flop, it is still
called a flip-flop for its functional role. While the master–slave D element is
triggered on the edge of a clock, its components are each triggered by clock levels.
The "edge-triggered D flip-flop", as it is called even though it is not a true flipflop, does not have the master–slave properties.
Edge-triggered D flip-flops are often implemented in integrated high-speed
operations using dynamic logic. This means that the digital output is stored on
parasitic device capacitance while the device is not transitioning. This design of
dynamic flip flops also enables simple resetting since the reset operation can be
performed by simply discharging one or more internal nodes. A common dynamic
flip-flop variety is the true single-phase clock (TSPC) type which performs the
flip-flop operation with little power and at high speeds. However, dynamic
flip-flops will typically not work at static or low clock speeds: given enough time,
leakage paths may discharge the parasitic capacitance enough to cause the
flip-flop to enter invalid states.
T flip-flop
A circuit symbol for a T-type flip-flop
If the T input is high, the T flip-flop changes state ("toggles") whenever the clock
input is strobed. If the T input is low, the flip-flop holds the previous value. This
behavior is described by the characteristic equation
Qnext = T ⊕ Q = T·Q' + T'·Q (expanding the XOR operator)
and can be described in a truth table:
T flip-flop operation[28]
Characteristic table:
 T  Q | Qnext | Comment
 0  0 |  0    | hold state (no clk)
 0  1 |  1    | hold state (no clk)
 1  0 |  1    | toggle
 1  1 |  0    | toggle
Excitation table:
 Q  Qnext | T | Comment
 0  0     | 0 | No change
 1  1     | 0 | No change
 0  1     | 1 | Complement
 1  0     | 1 | Complement
When T is held high, the toggle flip-flop divides the clock frequency by two; that
is, if clock frequency is 4 MHz, the output frequency obtained from the flip-flop
will be 2 MHz. This "divide by" feature has application in various types of digital
counters. A T flip-flop can also be built using a JK flip-flop (J & K pins are
connected together and act as T) or D flip-flop (T input and Qprevious is connected
to the D input through an XOR gate).
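A Python sketch of the characteristic equation, showing the divide-by-two behaviour when T is held high:

def t_flip_flop(t, q):
    # Characteristic equation: Qnext = T xor Q
    return t ^ q

q, out = 0, []
for _ in range(8):           # T held high on every clock
    q = t_flip_flop(1, q)
    out.append(q)
print(out)   # [1, 0, 1, 0, 1, 0, 1, 0] (half the clock frequency)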
JK flip-flop
A circuit symbol for a positive-edge-triggered JK flip-flop
JK flip-flop timing diagram
The JK flip-flop augments the behavior of the SR flip-flop (J=Set, K=Reset) by
interpreting the S = R = 1 condition as a "flip" or toggle command. Specifically,
the combination J = 1, K = 0 is a command to set the flip-flop; the combination J =
0, K = 1 is a command to reset the flip-flop; and the combination J = K = 1 is a
command to toggle the flip-flop, i.e., change its output to the logical complement
of its current value. Setting J = K = 0 does NOT result in a D flip-flop, but rather,
will hold the current state. To synthesize a D flip-flop, simply set K equal to the
complement of J. Similarly, to synthesize a T flip-flop, set K equal to J. The JK
flip-flop is therefore a universal flip-flop, because it can be configured to work as
an SR flip-flop, a D flip-flop, or a T flip-flop.
The characteristic equation of the JK flip-flop is
Qnext = J·Q' + K'·Q
and the corresponding truth table is:
JK flip-flop operation[28]
Characteristic table:
 J  K | Qnext | Comment
 0  0 |  Q    | hold state
 0  1 |  0    | reset
 1  0 |  1    | set
 1  1 |  Q'   | toggle
Excitation table:
 Q  Qnext | J  K | Comment
 0  0     | 0  X | No change
 0  1     | 1  X | Set
 1  0     | X  1 | Reset
 1  1     | X  0 | No change
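A Python sketch of the characteristic equation, stepped through set, hold, toggle and reset commands:

def jk_flip_flop(j, k, q):
    # Characteristic equation: Qnext = J.Q' + K'.Q
    return (j & (1 - q)) | ((1 - k) & q)

q = 0
for j, k in [(1, 0), (0, 0), (1, 1), (1, 1), (0, 1)]:
    q = jk_flip_flop(j, k, q)
    print(q)   # set, hold, toggle, toggle, reset -> 1, 1, 0, 1, 0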
Metastability
Flip-flops are subject to a problem called metastability, which can happen when
two inputs, such as data and clock or clock and reset, are changing at about the
same time. When the order is not clear, within appropriate timing constraints, the
result is that the output may behave unpredictably, taking many times longer than
normal to settle to one state or the other, or even oscillating several times before
settling. Theoretically, the time to settle down is not bounded. In a computer
system, this metastability can cause corruption of data or a program crash, if the
state is not stable before another circuit uses its value; in particular, if two
different logical paths use the output of a flip-flop, one path can interpret it as a 0
and the other as a 1 when it has not resolved to stable state, putting the machine
into an inconsistent state.[29]
Timing considerations
Setup, hold, recovery, removal times
Flip-flop setup, hold and clock-to-output timing parameters
Setup time is the minimum amount of time the data signal should be held steady
before the clock event so that the data are reliably sampled by the clock. This
applies to synchronous input signals to the flip-flop.
Hold time is the minimum amount of time the data signal should be held steady
after the clock event so that the data are reliably sampled. This applies to
synchronous input signals to the flip-flop.
Synchronous signals (like Data) should be held steady from the set-up time to the
hold time, where both times are relative to the clock signal.
Recovery time is like setup time for asynchronous ports (set, reset). It is the time
available between the asynchronous signals going inactive and the active clock
edge.
Removal time is like hold time for asynchronous ports (set, reset). It is the time
between active clock edge and asynchronous signal going inactive.[30]
Short impulses applied to asynchronous inputs (set, reset) should not be applied
completely within the recovery-removal period, or else it becomes entirely
indeterminable whether the flip-flop will transition to the appropriate state. In
another case, where an asynchronous signal simply makes one transition that
happens to fall between the recovery/removal time, eventually the asynchronous
signal will be applied, but in that case it is also possible that a very short glitch
may appear on the output, dependent on the synchronous input signal. This second
situation may or may not have significance to a circuit design.
Set and Reset (and other) signals may be either synchronous or asynchronous and
therefore may be characterized with either Setup/Hold or Recovery/Removal
times, and synchronicity is very dependent on the TTL design of the flip-flop.
Differentiation between Setup/Hold and Recovery/Removal times is often
necessary when verifying the timing of larger circuits because asynchronous
signals may be found to be less critical than synchronous signals. The
differentiation offers circuit designers the ability to define the verification
conditions for these types of signals independently.
The metastability in flip-flops can be avoided by ensuring that the data and control
inputs are held valid and constant for specified periods before and after the clock
pulse, called the setup time (tsu) and the hold time (th) respectively. These times
are specified in the data sheet for the device, and are typically between a few
nanoseconds and a few hundred picoseconds for modern devices.
Unfortunately, it is not always possible to meet the setup and hold criteria,
because the flip-flop may be connected to a real-time signal that could change at
any time, outside the control of the designer. In this case, the best the designer can
do is to reduce the probability of error to a certain level, depending on the required
reliability of the circuit. One technique for suppressing metastability is to connect
two or more flip-flops in a chain, so that the output of each one feeds the data
input of the next, and all devices share a common clock. With this method, the
probability of a metastable event can be reduced to a negligible value, but never to
zero. The probability of metastability gets closer and closer to zero as the number
of flip-flops connected in series is increased.
So-called metastable-hardened flip-flops are available, which work by reducing
the setup and hold times as much as possible, but even these cannot eliminate the
problem entirely. This is because metastability is more than simply a matter of
circuit design. When the transitions in the clock and the data are close together in
time, the flip-flop is forced to decide which event happened first. However fast we
make the device, there is always the possibility that the input events will be so
close together that it cannot detect which one happened first. It is therefore
logically impossible to build a perfectly metastable-proof flip-flop.
COUNTERS
In digital logic and computing, a counter is a device which stores (and
sometimes displays) the number of times a particular event or process has
occurred, often in relationship to a clock signal. In electronics, counters can be
implemented quite easily using register-type circuits such as the flip-flop, and a
wide variety of classifications exist:
• Asynchronous (ripple) counter – changing state bits are used as clocks to subsequent state flip-flops
• Synchronous counter – all state bits change under control of a single clock
• Decade counter – counts through ten states per stage
• Up/down counter – counts both up and down, under command of a control input
• Ring counter – formed by a shift register with feedback connection in a ring
• Johnson counter – a twisted ring counter
• Cascaded counter
• Modulus counter
Each is useful for different applications. Usually, counter circuits are digital in
nature, and count in natural binary. Many types of counter circuits are available as
digital building blocks, for example a number of chips in the 4000 series
implement different counters.
Occasionally there are advantages to using a counting sequence other than the
natural binary sequence—such as the binary coded decimal counter, a linear
feedback shift register counter, or a Gray-code counter.
Counters are useful for digital clocks and timers, and in oven timers, VCR clocks,
etc.[1]
Asynchronous (ripple) counter
Asynchronous counter created from two JK flip-flops
An asynchronous (ripple) counter is, at its simplest, a single D-type flip-flop with
its D (data) input fed from its own inverted output (a JK flip-flop with both J and
K held high toggles in the same way). This circuit can store one bit, and hence
can count from zero to one before it overflows (starts over from 0). This counter
will increment once for every clock cycle and takes two clock cycles to overflow,
so every cycle it will alternate between a transition from 0 to 1 and a transition
from 1 to 0. Notice that this creates a new clock with a 50% duty cycle at exactly
half the frequency of the input clock. If this output is then used as the clock signal
for a similarly arranged D flip-flop (remembering to invert the output to the input),
one will get another 1 bit counter that counts half as fast. Putting them together
yields a two-bit counter:
Cycle   Q1   Q0   (Q1:Q0)dec
0       0    0    0
1       0    1    1
2       1    0    2
3       1    1    3
4       0    0    0
You can continue to add additional flip-flops, always inverting the output to its
own input, and using the output from the previous flip-flop as the clock signal.
The result is called a ripple counter, which can count to 2^n − 1, where n is the
number of bits (flip-flop stages) in the counter. Ripple counters suffer from
unstable outputs as the overflows "ripple" from stage to stage, but they do find
frequent application as dividers for clock signals, where the instantaneous count is
unimportant, but the division ratio overall is (to clarify this, a 1-bit counter is
exactly equivalent to a divide by two circuit; the output frequency is exactly half
that of the input when fed with a regular train of clock pulses).
The use of flip-flop outputs as clocks leads to timing skew between the count data
bits, making this ripple technique incompatible with normal synchronous circuit
design styles.
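As a minimal behavioural sketch (Python; the model and names are ours, not a
circuit netlist), each stage can be treated as a toggle flip-flop clocked by the
falling edge of the stage before it:

# Minimal ripple-counter model: each stage is a toggle flip-flop
# clocked by the falling (1 -> 0) edge of the stage before it.
def ripple_counter(n_bits, n_clocks):
    q = [0] * n_bits          # flip-flop outputs, q[0] is the LSB
    history = []
    for _ in range(n_clocks):
        q[0] ^= 1             # the external clock pulse toggles stage 0
        carry = (q[0] == 0)   # a 1 -> 0 transition ripples onward
        for i in range(1, n_bits):
            if not carry:
                break
            q[i] ^= 1
            carry = (q[i] == 0)
        history.append(list(reversed(q)))   # MSB first, e.g. [Q1, Q0]
    return history

for cycle, bits in enumerate(ripple_counter(2, 4), start=1):
    print(cycle, bits)   # reproduces the two-bit sequence 01, 10, 11, 00

Running this for four clocks reproduces the two-bit table above; with n stages the
count simply wraps modulo 2^n.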
Synchronous counter
A 4-bit synchronous counter using JK flip-flops
A simple way of implementing the logic for each bit of an ascending counter
(which is what is depicted in the image to the right) is for each bit to toggle when
all of the less significant bits are at a logic high state. For example, bit 1 toggles
when bit 0 is logic high; bit 2 toggles when both bit 1 and bit 0 are logic high; bit
3 toggles when bit 2, bit 1 and bit 0 are all high; and so on.
Synchronous counters can also be implemented with hardware finite state
machines, which are more complex but allow for smoother, more stable
transitions.
Most hardware-based counters in modern designs are of this synchronous type.
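A hedged sketch of that toggle rule (Python; purely illustrative): every bit sees the
same clock, and bit i toggles exactly when all less significant bits are 1:

# Synchronous counter model: on every clock edge, bit i toggles
# when all less significant bits are currently 1.
def sync_counter_step(bits):
    """bits[0] is the LSB; returns the next state."""
    nxt = bits[:]
    all_lower_high = True            # vacuously true for bit 0
    for i in range(len(bits)):
        if all_lower_high:
            nxt[i] ^= 1              # toggle condition met
        all_lower_high = all_lower_high and (bits[i] == 1)
    return nxt

state = [0, 0, 0, 0]
for _ in range(6):
    state = sync_counter_step(state)
print(state)   # [0, 1, 1, 0] LSB first, i.e. binary 0110 = 6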
SESSION 12
Decade counter
A circuit diagram of a decade counter using JK flip-flops (74LS112D)
A decade counter is one that counts in decimal digits, rather than binary. A decade
counter may have each digit binary encoded (that is, it may count in binary-coded
decimal, as the 7490 integrated circuit did) or other binary encodings (such as the
bi-quinary encoding of the 7490 integrated circuit). Alternatively, it may have a
"fully decoded" or one-hot output code in which each output goes high in turn (the
4017 is such a circuit). The latter type of circuit finds applications in multiplexers
and demultiplexers, or wherever a scanning type of behavior is useful. Similar
counters with different numbers of outputs are also common.
The decade counter is also known as a mod-10 counter because it counts through
ten states (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). In general, a modulus counter with 64 states
counts from 0 to 63, because 0 counts as a valid state.
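As a minimal illustration (Python; purely behavioural), a decade counter is a binary
count forced back to 0 after 9, that is, a count modulo 10:

# Decade (mod-10) counter behaviour: the count resets after 9,
# as each digit of a BCD counter such as the 7490 does.
state = 0
for pulse in range(23):
    state = (state + 1) % 10
print(state)   # 23 pulses starting from 0 leave the count at 3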
Up/down counter
A counter that can change state in either direction, under the control of an up or
down selector input, is known as an up/down counter. When the selector is in the
up state, the counter increments its value. When the selector is in the down state,
the counter decrements the count.
Ring counter
A ring counter is a circular shift register which is initialized such that only one of
its flip-flops is in the one state while the others are in the zero state.
A ring counter is a Shift Register (a cascade connection of flip-flops) with the
output of the last one connected to the input of the first, that is, in a ring.
Typically, a pattern consisting of a single bit is circulated so the state repeats
every n clock cycles if n flip-flops are used. It can be used as a cycle counter of n
states.
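A minimal sketch (Python; illustrative) of a 4-bit ring counter circulating a single 1:

# 4-bit ring counter: one flip-flop holds a 1; the last output
# feeds the first input, so the 1 circulates around the ring.
state = [1, 0, 0, 0]
for clock in range(8):
    state = [state[-1]] + state[:-1]   # shift right, wrap around
    print(state)
# The sequence repeats every 4 clocks: 0100, 0010, 0001, 1000, ...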
Johnson counter
A Johnson counter (or switchtail ring counter, twisted-ring counter, walking-ring
counter, or Moebius counter) is a modified ring counter, where the output from the
last stage is inverted and fed back as input to the first stage.[2][3][4] The register
cycles through a sequence of bit-patterns, whose length is equal to twice the length
of the shift register, continuing indefinitely. These counters find specialist
applications, including those similar to the decade counter, digital-to-analog
conversion, etc. They can be implemented easily using D- or JK-type flip-flops.
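The same sketch with the fed-back bit inverted gives a Johnson counter (Python;
illustrative), and the cycle length doubles to 2n:

# 4-bit Johnson (twisted-ring) counter: the INVERTED last output
# is fed back to the first stage, doubling the cycle length to 2n.
state = [0, 0, 0, 0]
seen = []
for clock in range(8):
    state = [1 - state[-1]] + state[:-1]   # invert and feed back
    seen.append(''.join(map(str, state)))
print(seen)
# ['1000', '1100', '1110', '1111', '0111', '0011', '0001', '0000']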
Computer science counters
In computability theory, a counter is considered a type of memory. A counter
stores a single natural number (initially zero) and can be arbitrarily long. A
counter is usually considered in conjunction with a finite-state machine (FSM),
which can perform the following operations on the counter:
 Check whether the counter is zero
 Increment the counter by one
 Decrement the counter by one (if it is already zero, this leaves it unchanged)
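A minimal software model of such a counter (Python; the class name is ours):

# Minimal model of the counter used with a finite-state machine:
# a single unbounded natural number with three operations.
class Counter:
    def __init__(self):
        self.value = 0                # initially zero

    def is_zero(self):
        return self.value == 0

    def increment(self):
        self.value += 1

    def decrement(self):
        if self.value > 0:            # decrementing zero leaves it unchanged
            self.value -= 1

c = Counter()
c.increment(); c.increment(); c.decrement()
print(c.is_zero(), c.value)   # False 1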
The following machines are listed in order of power, with each one being strictly
more powerful than the one below it:
1. Deterministic or non-deterministic FSM plus two counters
2. Non-deterministic FSM plus one stack
3. Non-deterministic FSM plus one counter
4. Deterministic FSM plus one counter
5. Deterministic or non-deterministic FSM
For the first and last, it doesn't matter whether the FSM is a deterministic finite
automaton or a nondeterministic finite automaton; they have equivalent power. The
first two and the last one correspond to levels of the Chomsky hierarchy.
The first machine, an FSM plus two counters, is equivalent in power to a Turing
machine. See the article on counter machines for a proof.
Mechanical counters
Mechanical counter wheels showing both sides. The bump on the wheel shown at
the top engages the ratchet on the wheel below every turn.
Several mechanical counters
Long before electronics became common, mechanical devices were used to count
events. These are known as tally counters. They typically consist of a series of
disks mounted on an axle, with the digits 0 through 9 marked on their edge. The
rightmost disk moves one increment with each event. Each disk except the leftmost
has a protrusion that, after the completion of one revolution, moves the next
disk to the left one increment. Such counters were originally used to control
manufacturing processes, but were later used as odometers for bicycles and cars
and in fuel dispensers. One of the largest manufacturers was the Veeder-Root
company, and their name was often used for this type of counter.[5]
SESSION 13
SHIFT REGISTER
In digital circuits, a shift register is a cascade of flip-flops sharing the same
clock, in which the output of each flip-flop is connected to the "data" input of the
next flip-flop in the chain. The result is a circuit that shifts the "bit array" stored
in it by one position at each transition of the clock input, shifting in the data
present at its input and shifting out the last bit in the array. More generally, a shift
register may be multidimensional, such that its "data in" and stage outputs are
themselves bit arrays: this is implemented simply by running several shift registers
of the same bit-length in parallel.
Shift registers can have both parallel and serial inputs and outputs. These are often
configured as serial-in, parallel-out (SIPO) or as parallel-in, serial-out (PISO).
There are also types that have both serial and parallel input and types with serial
and parallel output. There are also bi-directional shift registers which allow
shifting in both directions: L→R or R→L. The serial input and last output of a
shift register can also be connected to create a circular shift register.
Serial-in, serial-out (SISO)
Destructive readout
These are the simplest kind of shift registers. The data string is presented at
'Data In', and is shifted right one stage each time 'Data Advance' is brought high.
At each advance, the bit on the far left (i.e. 'Data In') is shifted into the first
flip-flop's output. The bit on the far right (i.e. 'Data Out') is shifted out and lost.
The data are stored after each flip-flop on the 'Q' output, so there are four
storage 'slots' available in this arrangement, hence it is a 4-bit register.
To give an idea of the shifting pattern, imagine that the register holds 0000 (so all
storage slots are empty). As 'Data In' presents 1,0,1,1,0,0,0,0 (in that order, with a
pulse at 'Data Advance' each time; this is called clocking or strobing), this is the
result. Each row shows the register contents after one advance; the left-hand
column corresponds to the left-most flip-flop's output pin, and so on.
0 0 0 0
1 0 0 0
0 1 0 0
1 0 1 0
1 1 0 1
0 1 1 0
0 0 1 1
0 0 0 1
0 0 0 0
So the serial output of the entire register is 10110000. If we were to continue to
input data, we would get exactly what was put in, but offset by four 'Data Advance'
cycles. This arrangement is the hardware equivalent of a queue. Also, at any time,
the whole register can be set to zero by bringing the reset (R) pins high.
This arrangement performs destructive readout - each datum is lost once it has
been shifted out of the right-most bit.
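A hedged behavioural sketch of the 4-bit SISO register (Python; function names are
ours): feeding in 1,0,1,1,0,0,0,0 reproduces the table above, and the serial output is
the input delayed by four clocks:

# 4-bit serial-in serial-out register: each clock shifts the array
# right by one; the right-most bit is read out and lost.
def siso_clock(register, data_in):
    data_out = register[-1]
    return [data_in] + register[:-1], data_out

register = [0, 0, 0, 0]
out_stream = []
for bit in [1, 0, 1, 1, 0, 0, 0, 0]:
    register, out = siso_clock(register, bit)
    out_stream.append(out)
print(out_stream)   # [0, 0, 0, 0, 1, 0, 1, 1] - the input, four clocks late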
Serial-in, parallel-out (SIPO)
This configuration allows conversion from serial to parallel format. Data is input
serially, as described in the SISO section above. Once the data has been input, it
may be either read off at each output simultaneously, or it can be shifted out and
replaced.
Parallel-in, Serial-out (PISO)
This configuration has the data input on lines D1 through D4 in parallel format. To
write the data to the register, the Write/Shift control line must be held LOW. To
shift the data, the W/S control line is brought HIGH and the register is clocked.
The arrangement now acts as a SISO shift register, with D1 as the Data Input.
However, as long as the number of clock cycles is not more than the length of the
data string, the Data Output, Q, will be the parallel data read off in order.
4-Bit PISO Shift Register
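A minimal behavioural sketch of the PISO operation just described (Python; names
ours): load the word in parallel while W/S is low, then shift it out one bit per clock:

# Parallel-in serial-out register: load D1..D4 at once, then shift
# the bits out through the single serial output Q.
def piso_read(parallel_word):
    register = list(parallel_word)           # parallel load (W/S low)
    serial_out = []
    for _ in range(len(register)):           # now shift (W/S high)
        serial_out.append(register.pop(0))   # Q presents the next bit
        register.append(0)
    return serial_out

print(piso_read([1, 0, 1, 1]))   # [1, 0, 1, 1] read off in order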
Uses
One of the most common uses of a shift register is to convert between serial and
parallel interfaces. This is useful as many circuits work on groups of bits in
parallel, but serial interfaces are simpler to construct. Shift registers can be used as
simple delay circuits. Several bidirectional shift registers could also be connected
in parallel for a hardware implementation of a stack.
SIPO registers are commonly attached to the output of microprocessors when more
output pins are required than are available. This allows several binary devices to be
controlled using only two or three pins - the devices in question are attached to the
parallel outputs of the shift register, then the desired state of all those devices can
be sent out of the microprocessor using a single serial connection. Similarly, PISO
configurations are commonly used to add more binary inputs to a microprocessor
than are available - each binary input (i.e. a switch or button, or more complicated
circuitry designed to output high when active) is attached to a parallel input of the
shift register, then the data is sent back via serial to the microprocessor using
several fewer lines than originally required.
Shift registers can also be used as pulse extenders. Compared to monostable
multivibrators, the timing has no dependency on component values; however, it
requires an external clock, and the timing accuracy is limited by the granularity of
this clock. Example: the Ronja Twister, where five 74164 shift registers create the
core of the timing logic this way.
In early computers, shift registers were used to handle data processing: two
numbers to be added were stored in two shift registers and clocked out into an
arithmetic and logic unit (ALU) with the result being fed back to the input of one
of the shift registers (the accumulator) which was one bit longer since binary
addition can only result in an answer that is the same size or one bit longer.
Many computer languages include instructions to 'shift right' and 'shift left' the data
in a register, effectively dividing by two or multiplying by two for each place
shifted.
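For example (Python; the values are arbitrary):

x = 52
print(x << 1)   # 104: one place left multiplies by two
print(x >> 2)   # 13:  two places right divides by four (integer division)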
Very large serial-in serial-out shift registers (thousands of bits in size) were used in
a similar manner to the earlier delay line memory in some devices built in the early
1970s. Such memories were sometimes called circulating memory. For example,
the Datapoint 3300 terminal stored its display of 25 rows of 72 columns of uppercase characters using fifty-four 200-bit shift registers, arranged in six tracks of nine
packs each, providing storage for 1800 six-bit characters. The shift register design
meant that scrolling the terminal display could be accomplished by simply pausing
the display output to skip one line of characters.[1]
Universal Shift Register
Today, high speed bi-directional "universal" type shift registers such as the TTL
74LS194 and 74LS195, or the CMOS 4035, are available as 4-bit multi-function
devices that can be used for serial-to-serial, left-shifting, right-shifting,
serial-to-parallel, parallel-to-serial, and parallel-to-parallel data transfers, hence
the name "Universal". These devices can perform any combination of parallel and
serial input-to-output operations, but require additional inputs to specify the
desired function and to pre-load and reset the device.
4-bit Universal Shift Register 74LS194
Universal shift registers are very useful digital devices. They can be configured
for operations that require some form of temporary memory, can delay information
(as in the SISO or PIPO configuration modes), or can transfer data from one point
to another in either a serial or parallel format. Universal shift registers are
frequently used in arithmetic operations to shift data to the left or right for
multiplication or division.
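A hedged behavioural sketch (Python; this models the four modes, not the 74LS194's
actual pins and select codes):

# Behavioural model of a 4-bit universal shift register.
# mode: 'hold', 'left', 'right', or 'load' (select inputs on a real part).
def universal_step(register, mode, serial_in=0, parallel_in=None):
    if mode == 'hold':
        return register[:]
    if mode == 'right':
        return [serial_in] + register[:-1]
    if mode == 'left':
        return register[1:] + [serial_in]
    if mode == 'load':
        return list(parallel_in)
    raise ValueError('unknown mode')

reg = [0, 0, 0, 0]
reg = universal_step(reg, 'load', parallel_in=[1, 0, 1, 1])
reg = universal_step(reg, 'right', serial_in=0)
print(reg)   # [0, 1, 0, 1]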
Summary of Shift Registers
To summarise:
 A simple shift register can be made using only D-type flip-flops, one flip-flop for each data bit.
 The output from each flip-flop is connected to the D input of the flip-flop at its right.
 Shift registers hold the data in their memory, which is moved or "shifted" to the required positions on each clock pulse.
 Each clock pulse shifts the contents of the register one bit position to either the left or the right.
 The data bits can be loaded one bit at a time in a series input (SI) configuration or loaded simultaneously in a parallel (PI) configuration.
 Data may be removed from the register one bit at a time for a series output (SO) or removed all at the same time from a parallel output (PO).
 One application of shift registers is converting between serial and parallel data.
 Shift registers are identified as SIPO, SISO, PISO, PIPO, or universal shift registers.
SESSION 14
Shift Register Counters
Two of the most common types of shift register counters are introduced here:
the Ring counter and the Johnson counter. They are basically shift registers
with the serial outputs connected back to the serial inputs in order to produce
particular sequences. These registers are classified as counters because they
exhibit a specified sequence of states.
Ring Counters
A ring counter is basically a circulating shift register in which the output of
the most significant stage is fed back to the input of the least significant stage.
The following is a 4-bit ring counter constructed from D flip-flops. The
output of each stage is shifted into the next stage on the positive edge of a
clock pulse. If the CLEAR signal is high, all the flip-flops except the first one,
FF0, are reset to 0; FF0 is preset to 1 instead.
Since the count sequence has 4 distinct states, the counter can be considered as a
mod-4 counter. Only 4 of the maximum 16 states are used, making ring counters
very inefficient in terms of state usage. But the major advantage of a ring counter
over a binary counter is that it is self-decoding: no extra decoding circuit is
needed to determine what state the counter is in.
Johnson Counters
A Johnson counter, as described in Session 12, is a twisted ring counter in which
the inverted output of the last stage is fed back to the input of the first stage, so
that n flip-flops cycle through 2n states.
APPLICATIONS OF SHIFT REGISTERS
The major application of a shift register is to convert between parallel and serial
data. Shift registers are also used as keyboard encoders. These two applications
are discussed below.
1. Serial-to-Parallel Converter
Earlier, Multiplexer and Demultiplexer based Parallel-to-Serial and Serial-to-
Parallel converters were discussed. The Multiplexer and Demultiplexer require
registers to store the parallel data that is to be converted into serial data, and the
parallel data that is obtained after converting the incoming serial data. A Parallel
In/Serial Out shift register offers a better solution than a Multiplexer-Register
combination for converting parallel data into serial data. Similarly, a Serial
In/Parallel Out shift register replaces a Demultiplexer-Register combination.
In the Asynchronous Serial data transmission mode, a character consisting of
8 bits (which can include a parity bit) is transmitted. To separate one character
from another, and to indicate when data is being transmitted and when the serial
transmission line is idle (no data is being transmitted), a start bit and stop bits
are appended at the two ends of the 8-bit character. A character is preceded by a
logic low start bit. When the line is idle it is set to logic high; when a character is
about to be transmitted, the start bit sets the line to logic low. The logic low start
bit is an indication that 8 character bits are to follow and that the transmission
line is no longer in an idle state. After the 8 character bits have been transmitted,
the end of the character is indicated by two stop bits that are at logic high. The
two stop bits mark the end of the character and also return the transmission line
to the idle state. Therefore, a total of 11 bits are transmitted to send one character
from one end to the other. The logic low start bit is also a signal for the receiver
circuit to start receiving the 8 character bits that follow the start bit. The 11-bit
serial character format is shown in Figure 35.1.
Start bit: 0 | Data bits: 0/1 (8 bits) | Stop bits: 1 1
Figure 35.1: 11-bit Serial Data format
A Serial-to-Parallel converter circuit based on shift registers is shown in Figure
35.2. The serial data is preceded by a logic low start bit which triggers the J-K
flip-flop. The output of the flip-flop is set to logic high, which enables the clock
generator. The clock pulses generated are connected to the clock input of a Serial
In/Parallel Out shift register and also to the clock input of an 8-bit counter. On
each clock transition, the Serial In/Parallel Out shift register shifts in one data bit.
When the counter reaches its terminal count 111, the terminal count output signal
along with the clock signal triggers the One-Shot and also allows the Parallel
In/Parallel Out register to latch in the parallel data at the output of the Serial
In/Parallel Out shift register. The One-Shot resets the J-K flip-flop output Q to
logic 0, disabling the clock generator, and also clears the counter to count 000.
Figure 35.2: Series-to-Parallel Converter
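A hedged software analogue of this converter (Python; the hardware uses a
flip-flop, clock generator, counter and One-Shot, whereas here the framing logic is
written directly): wait for the logic low start bit, take the next 8 bits as data, and
check the two stop bits:

# Software analogue of the serial-to-parallel converter:
# an idle-high line, a low start bit, 8 data bits, two high stop bits.
def receive_character(line_samples):
    """line_samples: one logic level per bit time, idle = 1."""
    i = 0
    while line_samples[i] == 1:          # wait while the line is idle
        i += 1
    start = i                            # the 0 here is the start bit
    data = line_samples[start + 1 : start + 9]        # 8 character bits
    stop = line_samples[start + 9 : start + 11]       # two stop bits
    if stop != [1, 1]:
        raise ValueError('framing error')
    return data

frame = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # idle, then one 11-bit character
print(receive_character(frame))   # [1, 0, 1, 1, 0, 0, 1, 0]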
SESSION 15
2. Keyboard Encoder
Earlier, a simple keypad encoder circuit was discussed in which the 0 to 9 digit
keypad was connected through a decimal-to-BCD encoder. Pressing any keypad key
enables the corresponding input of the encoder circuit, which encodes the input as
a 4-bit BCD output.
Computer keyboards, which have more keys, employ a keyboard encoder circuit
that regularly scans the keyboard to check for any key press (Figure 35.3). The
scanning is done by organizing the keys in the form of rows and columns. With the
help of a shift register based ring counter, one row is selected at a time. The two
counters are connected as an 8-bit ring counter which sequences through a bit
pattern having all 1's and a single 0. The 8-state sequence selects one row at a time
by setting it to logic 0. If a key is pressed, the corresponding column also becomes
logic 0, as it is connected to the selected row. The selected row and column are
encoded by the row and column encoders. When a key is pressed, the selected
column, which is set to logic 0, sets the output of the NAND gate to logic 1, which
triggers two One-Shots. The first One-Shot inhibits the clock signal to the ring
counter for a short interval until the Key Code is stored. It also triggers the second
One-Shot, which sends a pulse to the clock input of the Key Code register. The Key
Code register stores the key ID, represented as a 3-bit column code and a 3-bit
row code.
Figure 35.3: Keyboard Encoder circuit (a 5 kHz clock drives two 74HC195
registers connected as the ring counter; 74HC147 row and column encoders feed
the 74HC174A Key Code register through the two One-Shots)
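A hedged software rendering of the scanning idea (Python; the hardware does this
with the ring counter and 74HC147 priority encoders): drive one row at a time to
logic 0 and test whether a column follows it low:

# Software analogue of the keyboard scan: select one row at a time
# (the ring counter's single 0) and test each column for a pressed key.
def scan_keyboard(key_pressed, n_rows=8, n_cols=8):
    """key_pressed(row, col) -> True if that switch is closed."""
    for row in range(n_rows):            # one row driven low per step
        for col in range(n_cols):
            if key_pressed(row, col):    # column pulled low via the switch
                return (row, col)        # the 3-bit row and column codes
    return None

# Example: only the key at row 2, column 5 is held down.
print(scan_keyboard(lambda r, c: (r, c) == (2, 5)))   # (2, 5)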
Programmable Sequential Logic
Earlier, PLD devices and their combinational modes were discussed. PLD devices
can also be programmed to implement sequential circuits. The AND-OR gate array
of a PLD device is used to implement the excitation inputs for the memory
element. The memory element is implemented in the form of a flip-flop in each
OLMC module of the PLD device. The present state output of the memory element
is connected back to the AND gate array to form the feedback path required by a
sequential circuit.