Chapter 1: The Big Picture
The Layers of a Computing System
• Communications: Sharing information via networks. Examples: Internet, World Wide Web, Wireless.
• Applications: High-level software to solve real-world problems. Examples: Spreadsheets, Games, Word Processing.
• Operating Systems: Manage how hardware and software interact; special programs to ensure proper flow of data and instructions.
• Programming: Coded software instructions, designed to solve specific problems via the computer.
• Hardware: Circuitry to move the data where it needs to go. Processors, memory, display monitors, etc.
• Information: Digital data; binary representation of text, numerics, images, sounds, etc.
History of Computer Science
The Abacus
Originally developed by the Babylonians around
2400 BC, this arithmetic calculating tool was
also used by ancient Egyptians, Greeks,
Romans, Indians, and Chinese.
The Algorithm
In the year 825, the Persian mathematician
Muhammad ibn Mūsā al-Khwārizmī developed
the concept of performing a series of steps in
order to accomplish a task, such as the
systematic application of arithmetic to algebra.
The Analytical Engine
Designed by British mathematician
Charles Babbage in the mid-19th
century, this steam-powered
mechanical device (never successfully
built) had the functionality of today’s
modern computers.
Binary Logic
Also in the mid-1800s, British
mathematician George Boole
developed a complete algebraic
system that allowed computational
processes to be mathematically
modeled with zeros and ones
(representing true/false, on/off, etc.).
Computability
In the early 20th century, American mathematician
Alonzo Church and British mathematician Alan Turing
independently developed the thesis that a mathematical method is
"effective" if it can be set out as a list of instructions that a human
clerk (a "computer") could follow with paper and pencil, for as long
as necessary, and without ingenuity or insight.
Turing Machine
In 1936, Turing developed a mathematical
model for an extremely basic abstract
symbol-manipulating device which, despite
its simplicity, could be adapted to simulate
the logic of any computer that could
possibly be constructed.
Digital Circuit Design
In 1937, Claude Shannon, an American electrical engineer,
recognized that Boolean algebra could be used to arrange
electromechanical relays, which were then used in telephone
routing switches, to solve logic problems: the basic concept
underlying all electronic digital computers.
Cybernetics
During World War II, American
mathematician Norbert Wiener
experimented with anti-aircraft
systems that automatically
interpreted radar images to
detect enemy planes. This
approach of developing
artificial systems by examining
real systems became known as
cybernetics.
Transistor
The fundamental building block of the circuitry in modern electronic
devices was developed in the early 1950s. Because of its fast response
and accuracy, the transistor is used in a wide variety of digital and analog
functions, including switching, amplification, voltage regulation, and signal
modulation.
Programming Languages
In 1957, IBM released the Fortran programming language (the IBM
Mathematical Formula Translating System), designed to facilitate
numerical computation and scientific computing.
In 1958, a committee of European and American scientists developed
ALGOL, the Algorithmic Language, which pioneered the language
design features that characterize most modern languages.
In 1959, under the supervision of the U.S. Department of Defense, a
consortium of technology companies (IBM, RCA, Sylvania, Honeywell,
Burroughs, and Sperry-Rand) developed COBOL, the Common
Business-Oriented Language, to help develop business, financial, and
administrative systems for companies and governments.
Operating Systems
In 1964, IBM's System/360 mainframe computers utilized a
single operating system (rather than using separate ad hoc
systems for each machine) to schedule and manage the
execution of different jobs on the computer.
Mouse
In 1967, Douglas Engelbart of the Stanford Research Institute
employed a wooden case and two metal wheels to invent his
"X-Y Position Indicator for a Display System".
Relational Databases
In 1969, IBM’s Edgar Codd developed a table-based model for organizing
data in large systems so it could be easily accessed.
Computational Complexity
In 1971, American computer scientist Stephen Cook pioneered research
into NP-completeness, the notion that some problems may not be
solvable on a computer in a “reasonable” amount of time.
Supercomputers
In 1976, Seymour Cray developed the first computer to utilize
multiple processors in order to vastly accelerate the computation
of extremely complex scientific calculations.
Personal Computers
In 1976, Steve Jobs and Steve Wozniak formed Apple
Computer, Inc., making it practical for consumers to purchase a
computer for home use.
Internet
In 1969, DARPA (the Defense Advanced Research Projects Agency) established
ARPANET as a computer communication network that did not require dedicated lines
between every pair of communicating terminals.
By 1977, ARPANET had grown from its initial four nodes in California and Utah to over
100 nodes nationwide.
In 1988, the National Science Foundation established five supercomputer centers and
connected them via ARPANET in order to provide supercomputer access to academic
researchers nationwide.
By 1995, private sector entities had begun to find it profitable to build and expand the
Internet’s infrastructure, so NSFNET was retired and the Internet backbone was
officially privatized.
Microsoft
In 1975, Bill Gates and Paul Allen founded the software company that would ultimately
achieve numerous milestones in the history of computer science:
• 1981: Contracted with IBM to produce DOS (Disk Operating System) for use in IBM’s
new line of personal computers.
• 1985: Introduced Microsoft Windows, providing PC users with a graphical user
interface, which promoted ease of use in PCs. (Resulted in “look-and-feel” lawsuit
from Apple.)
• 1989: Released Microsoft Office, a suite of office productivity applications, including
Microsoft Word and Microsoft Excel. (Accused of unfairly exploiting its knowledge
of underlying operating systems by office suite competitors.)
• 1995: Entered Web browser market with Internet Explorer. (Criticized for security
flaws and lack of compliance with many Web standards.)
Chapter 2: Binary Values and
Number Systems
• Information may be reduced to its fundamental state by means of binary
numbers (e.g., on/off, true/false, yes/no, high/low, positive/negative).
• “Bits” (binary digits) are used to accomplish this. Normally, we consider a
binary value of 1 to represent a “high” state, while a binary value of 0
represents a “low” state.
• In machines, these values are represented electronically by high and low
voltages, and magnetically by positive and negative polarities.
Binary Numerical Expressions
• Binary expressions with multiple digits may be viewed in the same way that
multi-digit decimal numbers are viewed, except in base 2 instead of base 10.
• For example, just as the decimal number 275 is viewed as 5 ones, 7 tens, and 2
hundreds combined, the binary number 01010110 can be viewed in right-to-left
fashion as...
01010110, reading right to left:
• 0 ones
• 1 two
• 1 four
• 0 eights
• 1 sixteen
• 0 thirty-twos
• 1 sixty-four
• 0 one hundred twenty-eights
So, 01010110 is equivalent to the decimal number 2 + 4 + 16 + 64 = 86.
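This right-to-left weighting can be sketched in Python (an illustrative sketch, not part of the original slides):

```python
# Interpret a binary string by weighting each digit with a power of two,
# reading from the right (ones place) to the left, as described above.
def binary_to_decimal(bits: str) -> int:
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * (2 ** position)
    return total

print(binary_to_decimal("01010110"))  # 2 + 4 + 16 + 64 = 86
```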
Hexadecimal (Base-16) Notation
• As a shorthand way of writing lengthy binary codes, computer scientists
often use hexadecimal notation.
Binary Code   Hexadecimal Notation
0000          0
0001          1
0010          2
0011          3
0100          4
0101          5
0110          6
0111          7
1000          8
1001          9
1010          A
1011          B
1100          C
1101          D
1110          E
1111          F

For example, the binary expression 1011001011101000 may be written in
hexadecimal notation as B2E8. The two expressions mean the same thing,
but they are in different notations.
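A short Python sketch of the four-bits-per-hex-digit grouping (illustrative only, not from the original slides):

```python
# Convert between binary and hexadecimal by grouping the bits in fours.
binary = "1011001011101000"
hex_digits = "".join(f"{int(binary[i:i+4], 2):X}" for i in range(0, len(binary), 4))
print(hex_digits)                  # B2E8
print(f"{int('B2E8', 16):016b}")   # back to 1011001011101000
```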
Chapter 3: Data Representation
Computers use bits to represent all types of data, including text, numerical values,
sounds, images, and animation.
How many bits does it take to represent a piece of data that could have one of,
say, 1000 values?
• If only one bit is used, then there are only two possible values: 0 and 1.
• If two bits are used, then there are four possible values: 00, 01, 10, and 11.
• Three bits produce eight possible values: 000, 001, 010, 011, 100, 101, 110, and 111.
• Four bits produce 16 values; five bits produce 32; six produce 64; ...
• Continuing in this fashion, we see that k bits would produce 2^k possible values.
Since 2^9 is 512 and 2^10 is 1024, we would need ten bits to represent a piece of
data that could have one of 1000 values.
• Mathematically, this is the "ceiling" of the base-two logarithm, i.e., the count
of how many times you could divide by two until you get to the value one
(a sketch follows this list):

1000/2=500  500/2=250  250/2=125  125/2=63  63/2=32  32/2=16  16/2=8  8/2=4  4/2=2  2/2=1
   (1)         (2)        (3)       (4)      (5)      (6)      (7)     (8)    (9)    (10)
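The sketch promised above (illustrative Python; math.log2 gives the base-two logarithm):

```python
import math

# Number of bits needed to distinguish n values: the ceiling of log2(n).
def bits_needed(n: int) -> int:
    return math.ceil(math.log2(n))

print(bits_needed(1000))  # 10, since 2**9 = 512 < 1000 <= 1024 = 2**10
```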
Representing Integers with Bits
Two's complement notation was established so that addition between positive
and negative integers follows the same logical pattern as ordinary binary
addition.

4-Bit Pattern   Integer Value       4-Bit Pattern   Integer Value
0000             0                  1000             -8
0001             1                  1001             -7
0010             2                  1010             -6
0011             3                  1011             -5
0100             4                  1100             -4
0101             5                  1101             -3
0110             6                  1110             -2
0111             7                  1111             -1

Examples:
• 1101 + 0011 = 0000 (carries: 1111, final carry discarded): -3 + 3 = 0
• 0011 + 0010 = 0101 (carry: 1): 3 + 2 = 5
• 1100 + 1101 = 1001 (carries: 11, final carry discarded): -4 + -3 = -7
• 0110 + 0011 = 1001 (carries: 11): 6 + 3 = -7??? OVERFLOW!
• 1001 + 1110 = 0111 (final carry discarded): -7 + -2 = 7??? OVERFLOW!
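The same five examples can be simulated in Python (an illustrative sketch, not from the original slides; the mask keeps only four bits, and overflow means the true sum falls outside the 4-bit range -8..7):

```python
# Simulate 4-bit two's complement addition, flagging overflow when the
# true sum cannot be represented in four bits.
def add_4bit(a: int, b: int) -> None:
    result = (a + b) & 0b1111                       # keep only the low 4 bits
    signed = result - 16 if result & 0b1000 else result  # decode the pattern
    overflow = not (-8 <= a + b <= 7)
    print(f"{a} + {b} -> {result:04b} = {signed}" + (" OVERFLOW!" if overflow else ""))

add_4bit(-3, 3)    # 0000 = 0
add_4bit(3, 2)     # 0101 = 5
add_4bit(-4, -3)   # 1001 = -7
add_4bit(6, 3)     # 1001 = -7  OVERFLOW!
add_4bit(-7, -2)   # 0111 = 7   OVERFLOW!
```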
Two’s Complement Coding & Decoding
How do we code –44 in two’s complement notation using 8 bits?
• First, write the value 44 in binary using 8 bits:
00101100
• Starting on the right side, skip over all zeros and the first one:
00101100
• Continue moving left, complementing each bit:
11010100
• The result is -44 in 8-bit two’s complement notation:
11010100
How do we decode 10110100 from two’s complement into an integer?
• Starting on the right side, skip over all zeros and the first one:
10110100
• Continue moving left, complementing each bit:
01001100
• Finally, convert the resulting positive bit code into an integer:
76
• So, the original negative bit code must have represented:
–76
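The same rule in a short Python sketch (illustrative, not from the original slides; Python's bit masking does the complement-and-add-one arithmetic for us):

```python
# Encode and decode 8-bit two's complement values.
def encode(value: int, width: int = 8) -> str:
    return f"{value & ((1 << width) - 1):0{width}b}"   # e.g., -44 -> 11010100

def decode(bits: str) -> int:
    value = int(bits, 2)
    if bits[0] == "1":            # a leading 1 marks a negative value
        value -= 1 << len(bits)
    return value

print(encode(-44))          # 11010100
print(decode("10110100"))   # -76
```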
Representing Real Numbers with Bits
• When representing a real number like 17.15 in binary form, a rather
complicated approach is taken.
• Using only powers of two, we note that 17 is 2^4 + 2^0 and .15 is 2^-3 + 2^-6 + 2^-7 +
2^-10 + 2^-11 + 2^-14 + 2^-15 + 2^-18 + 2^-19 + 2^-22 + …
• So, in pure binary form, 17.15 would be 10001.0010011001100110011001…
• In "scientific notation", this would be 1.0001001001100110011001… × 2^4
• The standard for floating-point notation is to use 32 bits. The first bit is a sign
bit (0 for positive, 1 for negative). The next eight are a bias-127 exponent (i.e.,
127 + the actual exponent). And the last 23 bits are the mantissa (i.e., the
exponent-less scientific notation value, without the leading 1).
• So, 17.15 would have the following floating-point notation:
0 10000011 00010010011001100110011
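This can be checked with Python's struct module (an illustrative sketch, not part of the original slides):

```python
import struct

# Pack 17.15 as a 32-bit IEEE float and print its sign, exponent, and mantissa.
bits = struct.unpack(">I", struct.pack(">f", 17.15))[0]
pattern = f"{bits:032b}"
print(pattern[0], pattern[1:9], pattern[9:])
# 0 10000011 00010010011001100110011
```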
Representing Text with Bits
ASCII: American Standard Code for Information Interchange
• ASCII code was developed as a means of converting text into a binary notation.
• Each character has a 7-bit representation.
• For example, CAT would be represented by the bits: 1000011 1000001 1010100
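A quick Python check of these codes (an illustrative sketch, not from the original slides):

```python
# Spell out the 7-bit ASCII code for each character of "CAT".
for ch in "CAT":
    print(ch, f"{ord(ch):07b}")
# C 1000011
# A 1000001
# T 1010100
```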
Fax Machines
In order to transmit a facsimile of a document over telephone lines,
fax machines were developed to essentially convert the document
into a grid of tiny black and white rectangles.
A standard 8.5″ × 11″ page is divided into
1145 rows and 1728 columns, producing
approximately 2 million 0.005″ × 0.01″
rectangles.
Each rectangle is scanned by the
transmitting fax machine and
determined to be either predominantly
white or predominantly black.
We could just use the binary nature
of this black/white approach (e.g., 1
for black, 0 for white) to fax the
document, but that would require 2
million bits per page!
CCITT Fax Conversion Code
length   white code      black code
0        00110101        0000110111
1        000111          010
2        0111            11
3        1000            10
4        1011            011
5        1100            0011
6        1110            0010
7        1111            00011
8        10011           000101
9        10100           000100
10       00111           0000100
11       01000           0000101
12       001000          0000111
13       000011          00000100
14       110100          00000111
15       110101          000011000
16       101010          0000010111
17       101011          0000011000
18       0100111         0000001000
19       0001100         00001100111
20       0001000         00001101000
21       0010111         00001101100
22       0000011         00000110111
23       0000100         00000101000
24       0101000         00000010111
25       0101011         00000011000
26       0010011         000011001010
27       0100100         000011001011
28       0011000         000011001100
29       00000010        000011001101
30       00000011        000001101000
31       00011010        000001101001
32       00011011        000001101010
33       00010010        000001101011
34       00010011        000011010010
35       00010100        000011010011
36       00010101        000011010100
37       00010110        000011010101
38       00010111        000011010110
39       00101000        000011010111
40       00101001        000001101100
41       00101010        000001101101
42       00101011        000011011010
43       00101100        000011011011
44       00101101        000001010100
45       00000100        000001010101
46       00000101        000001010110
47       00001010        000001010111
48       00001011        000001100100
49       01010010        000001100101
50       01010011        000001010010
51       01010100        000001010011
52       01010101        000000100100
53       00100100        000000110111
54       00100101        000000111000
55       01011000        000000100111
56       01011001        000000101000
57       01011010        000001011000
58       01011011        000001011001
59       01001010        000000101011
60       01001011        000000101100
61       00110010        000001011010
62       00110011        000001100110
63       00110100        000001100111
64       11011           0000001111
128      10010           000011001000
192      010111          000011001001
256      0110111         000001011011
320      00110110        000000110011
384      00110111        000000110100
448      01100100        000000110101
512      01100101        0000001101100
576      01101000        0000001101101
640      01100111        0000001001010
704      011001100       0000001001011
768      011001101       0000001001100
832      011010010       0000001001101
896      011010011       0000001110010
960      011010100       0000001110011
1024     011010101       0000001110100
1088     011010110       0000001110101
1152     011010111       0000001110110
1216     011011000       0000001110111
1280     011011001       0000001010010
1344     011011010       0000001010011
1408     011011011       0000001010100
1472     010011000       0000001010101
1536     010011001       0000001011010
1600     010011010       0000001011011
1664     011000          0000001100100
1728     010011011       0000001100101
1792     00000001000     00000001000
1856     00000001100     00000001100
1920     00000001101     00000001101
1984     000000010010    000000010010
2048     000000010011    000000010011
2112     000000010100    000000010100
2176     000000010101    000000010101
2240     000000010110    000000010110
2304     000000010111    000000010111
2368     000000011100    000000011100
2432     000000011101    000000011101
2496     000000011110    000000011110
2560     000000011111    000000011111

By using one sequence of bits to represent a long run of a single color
(either black or white), the fax code can be compressed to a fraction of
the two-million-bit code that would otherwise
be needed.
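The run-length idea can be sketched in a few lines of Python (illustrative only; the slides themselves include no code). Each scan row is collapsed into (color, length) pairs, which the CCITT table then maps to short bit patterns:

```python
from itertools import groupby

# Collapse a row of pixels into (color, run-length) pairs -- the idea behind
# the CCITT code, which assigns the shortest bit patterns to common run lengths.
row = "0" * 9 + "1" * 2 + "0" * 78 + "1" * 7   # 9 white, 2 black, 78 white, 7 black
runs = [(color, len(list(group))) for color, group in groupby(row)]
print(runs)   # [('0', 9), ('1', 2), ('0', 78), ('1', 7)]
```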
Binary Code Interpretation
How is the following binary code interpreted?
10100111101111010000011001011100001111001111110010101110

In "programmer's shorthand" (hexadecimal notation), grouping the bits in fours…
1010 0111 1011 1101 0000 0110 0101 1100 0011 1100 1111 1100 1010 1110
  A    7    B    D    0    6    5    C    3    C    F    C    A    E
…the code reads A7BD065C3CFCAE.

As a two's complement integer, it is the negation of
01011000010000101111100110100011110000110000001101010010
(2^1 + 2^4 + 2^6 + 2^8 + 2^9 + 2^16 + 2^17 + 2^22 + 2^23 + 2^24 + 2^25 + 2^29 +
2^31 + 2^32 + 2^35 + 2^36 + 2^37 + 2^38 + 2^39 + 2^41 + 2^46 + 2^51 + 2^52 + 2^54),
i.e., -24,843,437,912,294,226.

As ASCII text, grouping the bits in sevens…
1010011 1101111 0100000 1100101 1100001 1110011 1111001 0101110
   S       o    (space)    e       a       s       y       .

As CCITT fax conversion code…
10100 = 9 white; 11 = 2 black; 11011 110100 = 64 + 14 = 78 white; 00011 = 7 black;
0010111 = 21 white; 0000111 = 12 black; 10011 = 8 white; 11 = 2 black;
1100 = 5 white; 10 = 3 black; 1011 = 4 white; 10 = 3 black.
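A quick Python check of the hexadecimal and ASCII readings above (an illustrative sketch, not part of the original slides):

```python
bits = "10100111101111010000011001011100001111001111110010101110"

# As hexadecimal shorthand:
print(f"{int(bits, 2):X}")                        # A7BD065C3CFCAE

# As 7-bit ASCII text:
chars = [chr(int(bits[i:i+7], 2)) for i in range(0, len(bits), 7)]
print("".join(chars))                             # So easy.
```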
Representing Audio Data with Bits
Audio files are digitized by sampling the audio signal thousands
of times per second and then “quantizing” each sample (i.e.,
rounding off to one of several discrete values).
The ability to recreate the original analog audio
depends on the resolution (i.e., the number of
quantization levels used) and the sampling rate.
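A minimal sketch of quantization in Python (illustrative; it assumes samples normalized to the range -1.0 to 1.0 and 3-bit resolution, i.e., eight levels):

```python
# Round each sample off to one of 8 discrete quantization levels.
def quantize(sample: float, levels: int = 8) -> int:
    step = 2.0 / levels                                 # width of each level
    return min(int((sample + 1.0) / step), levels - 1)  # clamp the top sample

for s in (-1.0, -0.3, 0.0, 0.42, 0.99):
    print(s, "->", quantize(s))   # level numbers 0 through 7
```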
Representing Still Images with Bits
Digital images are composed of three fields of color intensity
measurements, separated into a grid of thousands of pixels
(picture elements). The size of the grid (the image's resolution)
determines how clearly the image can be displayed.
[Figure: the same image displayed at grid resolutions ranging from 512×512
down to 4×4, illustrating the effect of resolution on clarity.]
RGB Color Representation
In digital display systems, each pixel in an image is
represented as an additive combination of the three
primary color components: red, green, and blue.
TrueColor Examples

Red   Green   Blue
255   185     0
255   0       185
255   125     125
185   255     0
0     255     185
125   255     125
185   0       255
0     185     255
125   125     255

(The Result column of the original slide showed the color swatch produced
by each row.)
Printers, however, use a
subtractive color system, in
which the complementary
colors of red, green, and
blue (cyan, magenta, and
yellow) are applied in inks
and toners in order to
subtract colors from a
viewer’s perception.
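As a small illustration of the additive/subtractive relationship (a Python sketch; the complement rule assumes 8-bit TrueColor components):

```python
# Complementary subtractive (CMY) values are 255 minus each additive RGB
# component -- e.g., the orange-yellow (255, 185, 0) from the table above.
def rgb_to_cmy(r: int, g: int, b: int) -> tuple:
    return (255 - r, 255 - g, 255 - b)

print(rgb_to_cmy(255, 185, 0))   # (0, 70, 255): no cyan, some magenta, full yellow
```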
Compressing Images with JPEG
The Joint Photographic Experts Group developed an
elaborate procedure for compressing color image files:
First, the original image is split into 8×8 squares of pixels. Each square
is then split into three 8×8 grids indicating the levels of lighting and
blue and red coloration the square contains.

After rounding off the values in the three grids in order to reduce the
number of bits needed, each grid is traversed in a zig-zag pattern to
maximize the chances that consecutive values will be equal, which, as
occurred in fax machines, reduces the bit requirement even further.

Depending on how severely the values were rounded, the restored image will
either be a good representation of the original (with a high bit count) or
a bad representation (with a low bit count).
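The zig-zag traversal can be sketched in Python (illustrative, not from the original slides; it walks the anti-diagonals of the 8×8 grid, alternating direction on each diagonal):

```python
# Visit an 8x8 grid in zig-zag order: walk each anti-diagonal (cells whose
# coordinates sum to s), reversing direction on alternate diagonals.
def zigzag(n: int = 8) -> list:
    order = []
    for s in range(2 * n - 1):
        diagonal = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diagonal if s % 2 else reversed(diagonal))
    return order

print(zigzag()[:6])   # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```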
Representing Video with Bits
Video images are merely a sequence of still images, shown in rapid succession.
One means of compressing such a vast
amount of data is to use the JPEG
technique on each frame, thus
exploiting each image’s spatial
redundancy. The resulting image frames
are called intra-frames.
Video also possesses temporal redundancy, i.e., consecutive frames
are usually nearly identical, with only a small percentage of the
pixels changing color significantly. So video can be compressed
further by periodically replacing several I-frames with predictive
frames, which only contain the differences between the predictive
frame and the last I-frame in the sequence. P-frames are generally
about one-third the size of corresponding I-frames.
The Moving Picture Experts Group (MPEG) went even
further by using bidirectional frames sandwiched between I-frames and P-frames (and between consecutive P-frames).
Each B-frame includes just enough information to allow the
original frame to be recreated by blending the previous and
next I/P-frames. B-frames are generally about half as big as
the corresponding P-frames (i.e., one-sixth the size of the
corresponding I-frames).
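A rough worked example of these size ratios (a sketch assuming a nine-frame sequence IBBPBBPBB and the rules of thumb above):

```python
# Relative size of a 9-frame sequence IBBPBBPBB, using the slide's rules of
# thumb: P-frames ~ 1/3 the size of an I-frame, B-frames ~ 1/6.
I, P, B = 1.0, 1 / 3, 1 / 6
sequence = [I, B, B, P, B, B, P, B, B]
print(sum(sequence) / (9 * I))   # ~0.30: about 30% of nine full I-frames
```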
Chapter 4: Gates and Circuits
The following Boolean operations are easy to incorporate into
circuitry and can form the building blocks of many more
sophisticated operations…
The NOT Operation (i.e., what’s the opposite of the operand’s value?)
NOT 1 = 0
NOT 0 = 1
NOT 10101001 = 01010110
NOT 00001111 = 11110000
The AND Operation (i.e., are both operands "true"?)

1 AND 1 = 1    1 AND 0 = 0    0 AND 1 = 0    0 AND 0 = 0

    10101001        00001111
AND 10011100    AND 10110101
    10001000        00000101

The OR Operation (i.e., is either operand "true"?)

1 OR 1 = 1    1 OR 0 = 1    0 OR 1 = 1    0 OR 0 = 0

   10101001       00001111
OR 10011100    OR 10110101
   10111101       10111111
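Python's bitwise operators mirror these operations on whole bit patterns (an illustrative sketch, not part of the original slides):

```python
# Apply NOT, AND, and OR to entire 8-bit patterns at once.
a, b = 0b10101001, 0b10011100
print(f"{~a & 0xFF:08b}")   # NOT a: 01010110 (masked to 8 bits)
print(f"{a & b:08b}")       # a AND b: 10001000
print(f"{a | b:08b}")       # a OR b:  10111101
```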
More Boolean Operators
The NAND Operation ("NOT AND")

1 NAND 1 = 0    1 NAND 0 = 1    0 NAND 1 = 1    0 NAND 0 = 1

     10101001         00001111
NAND 10011100    NAND 10110101
     01110111         11111010

The NOR Operation ("NOT OR")

1 NOR 1 = 0    1 NOR 0 = 0    0 NOR 1 = 0    0 NOR 0 = 1

    10101001        00001111
NOR 10011100    NOR 10110101
    01000010        01000000

The XOR Operation ("Exclusive OR", i.e., either but not both is "true")

1 XOR 1 = 0    1 XOR 0 = 1    0 XOR 1 = 1    0 XOR 0 = 0

    10101001        00001111
XOR 10011100    XOR 10110101
    00110101        10111010
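Each of these operators can be composed from AND, OR, and NOT, as in this Python sketch over 8-bit patterns (illustrative only, not from the original slides):

```python
# NAND, NOR, and XOR built from AND, OR, and NOT.
def nand(a, b): return ~(a & b) & 0xFF
def nor(a, b):  return ~(a | b) & 0xFF
def xor(a, b):  return (a | b) & ~(a & b) & 0xFF   # either, but not both

a, b = 0b10101001, 0b10011100
print(f"{nand(a, b):08b}")   # 01110111
print(f"{nor(a, b):08b}")    # 01000010
print(f"{xor(a, b):08b}")    # 00110101
```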
Transistors
Transistors are relatively inexpensive mechanisms for
implementing the Boolean operators.
In addition to the input connection (the base),
transistors are connected to both a power source
and a voltage dissipating ground.
Essentially, when the input voltage is high, an electric path is
formed within the transistor that causes the power source to be
drained to ground.
When the input voltage is low, the path is not created, so
the power source is not drained.
Using Transistors to Create Logic Gates
A NOT gate is essentially implemented
by a transistor all by itself.
A NAND gate uses a slightly more complex setup
in which both inputs would have to be high to
force the power source to be grounded.
Use the output of a NAND gate as the input to a
NOT gate to produce an AND gate.
A NOR gate grounds the power source if
either or both of the inputs are high.
Use the output of a NOR gate as the input to a
NOT gate to produce an OR gate.
How to Use Logic Gates for Arithmetic
ANDs and ORs are all well and good, but how can they be used to
produce binary arithmetic?
Let’s start with simple one-bit addition (with a “carry” bit just in
case someone tries to add 1 + 1!).
A + B    Sum Bit   Carry Bit      A XOR B = Result   A AND B = Result
0 + 0    0         0              0 XOR 0 = 0        0 AND 0 = 0
0 + 1    1         0              0 XOR 1 = 1        0 AND 1 = 0
1 + 0    1         0              1 XOR 0 = 1        1 AND 0 = 0
1 + 1    0         1              1 XOR 1 = 0        1 AND 1 = 1
Notice that the sum bit always yields the same result as the XOR
operation, and the carry bit always yields the same result as the
AND operation!
By combining the right circuitry, then, multiple-bit addition
can be implemented, as well as the other arithmetic
operations.
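A minimal sketch of this idea in Python (illustrative, not from the original slides; bit lists are written most significant bit first):

```python
# Half adder: sum = XOR, carry = AND. Chain full adders for multi-bit addition.
def half_adder(a: int, b: int):
    return a ^ b, a & b                      # (sum bit, carry bit)

def full_adder(a: int, b: int, carry_in: int):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_bits(x, y):
    carry, out = 0, []
    for a, b in zip(reversed(x), reversed(y)):   # work from the ones place up
        s, carry = full_adder(a, b, carry)
        out.insert(0, s)
    return out

print(add_bits([0, 0, 1, 1], [0, 0, 1, 0]))  # [0, 1, 0, 1]: 3 + 2 = 5
```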
Memory Circuitry
With voltages constantly on the move, how can a piece of
circuitry be used to retain a piece of information?
In the S-R latch, as long as the S and R
inputs remain at one, the value of the Q
output will never change, i.e., the circuit
serves as memory!
To set the stored value to one, merely set the S input to zero (for
just an instant!) while leaving the R input at one.
To set the stored value to zero, merely set the R input to zero (for
just an instant!) while leaving the S input at one.
Question: What goes wrong if both inputs
are set to zero simultaneously?
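An illustrative Python simulation of a NAND-based S-R latch (a sketch; a real latch settles continuously rather than in discrete rounds):

```python
# Simulate an S-R latch built from two cross-coupled NAND gates.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def latch(s: int, r: int, q: int, not_q: int):
    for _ in range(4):                        # iterate until the outputs settle
        q, not_q = nand(s, not_q), nand(r, q)
    return q, not_q

q, nq = latch(s=0, r=1, q=0, not_q=1)    # pulse S to zero: store a one
print(q)                                 # 1
q, nq = latch(s=1, r=1, q=q, not_q=nq)   # S and R back to one: value held
print(q)                                 # still 1
```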
Chapter 5: Computing Components
In the 1940s and 1950s, John von
Neumann helped develop the
architecture that continues to
be used in the design of most
modern computer systems.
Its two central components: the control unit, coordinating CPU activity,
and the arithmetic/logic unit, processing data.
Central Processing Unit (CPU)
Code Cache: Storage for instructions for deciphering data.
Branch Predictor Unit: Decides which ALU can best handle specific data and divides the tasks.
Bus Interface Unit: Information from the RAM enters the CPU here, and then it is sent to separate storage units or caches.
Instruction Prefetch & Decoding Unit: Translates data into simple instructions for the ALUs to process.
Arithmetic Logic Unit: Whole-number cruncher.
Floating Point Unit: Floating-point number cruncher.
Data Cache: Sends data from the ALUs to the Bus Interface Unit, and then back to RAM.
Instruction Register: Provides the ALUs with processing instructions from the data cache.
Simplified View of the CPU
The CPU contains:
• ALU: circuitry that manipulates the data.
• Registers: special memory cells to temporarily store the data being manipulated.
• Control Unit: circuitry to coordinate the operation of the computer.
A bus connects the CPU to RAM.
The Processing Cycle
The control unit and arithmetic/logic unit repeatedly cycle through four steps:
1. FETCH the next instruction from main memory (control unit).
2. DECODE the instruction to determine what to do (control unit).
3. EXECUTE the decoded instruction (arithmetic/logic unit).
4. STORE the result in main memory.
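A toy Python sketch of the cycle (illustrative; the instruction set and memory layout here are invented for the example):

```python
# A minimal fetch-decode-execute loop: each instruction is (opcode, operand).
memory = [("LOAD", 7), ("ADD", 5), ("STORE", 0), ("HALT", 0)]
accumulator, pc, running = 0, 0, True

while running:
    opcode, operand = memory[pc]         # FETCH (and, trivially, DECODE)
    pc += 1
    if opcode == "LOAD":
        accumulator = operand            # EXECUTE
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "STORE":
        print("stored", accumulator)     # STORE (here: just display the result)
    elif opcode == "HALT":
        running = False
```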
Sample Machine Architecture
The CPU contains sixteen registers (numbered 0 through F) and an ALU, along
with a control unit holding a program counter (which keeps track of the
address of the next instruction to be executed) and an instruction register
(which contains a copy of the 2-byte instruction currently being executed).
A bus connects the CPU to 256 main memory cells (addresses 00 through FF).
Random Access Memory (RAM)
• Whenever a computer accesses
information (e.g., a program that’s being
executed, data that’s being examined),
that information is stored as electronic
pulses within main memory.
• Main memory is a system of electronic
circuits known as random access memory
(RAM), the idea being that the user can
randomly access any part of memory (as
long as the location of what’s being
accessed is known).
• The circuitry in main memory is usually
dynamic RAM, meaning that the binary
values must be continuously refreshed
(thousands of times per second) or the
charge will dissipate and the values will
be lost.
Cache Memory
• Due to the need for continuous refreshing, dynamic RAM is rather slow. An
alternative approach is static RAM, which uses "flip-flop" circuitry that
doesn't waste time refreshing the stored binary values.
• Static RAM is much faster than dynamic RAM, but much more expensive.
Consequently, it is used more sparingly in most machines.
• Cache memory uses static RAM as the first
place to look for information and as the
place to store the information that was
most recently accessed (e.g., the current
program being executed).
Magnetic Memory
• When the power is turned off, a computer's electronic memory devices
immediately lose their data. In order to store information on a computer
when it's turned off, some non-volatile storage capability is required.
• Most computers contain hard drives, a system of magnetic platters and
read-write heads that detect the polarity of the magnetic filaments beneath
them (i.e., "reading" the bit values) and induce a magnetic field onto the
filaments (i.e., "writing" the bit values).
Disk Tracks and Sectors
• Each platter is divided into
concentric circles, called tracks,
and each track is divided into
wedges, called sectors.
• The read-write head moves radially
towards and away from the center
of the platter until it reaches the
right track.
• The disk spins around until the
read-write head reaches the
appropriate sector.
Optical Memory
• Compact Disks – Read-Only Memory (CD-ROMs) use pitted disks and lasers to
store binary information.
• When the laser hits an unpitted "land", light is reflected to a sensor and
interpreted as a 1-bit; when the laser hits a pit, light isn't reflected
back, so it's interpreted as a 0-bit.
• Digital Versatile Disks (DVDs) use the same pits-and-lands approach as
CD-ROMs, but with finer gaps between tracks and pits, resulting in over
four times the storage capacity of CD-ROMs.
Flash Memory
• Recent advances in memory circuitry have made it possible to develop
portable electronic devices with large memory capacities.
• Flash memory is Electrically Erasable Programmable Read-Only Memory
(EEPROM):
• Read-Only Memory: Non-volatile (retains data even after power is shut
off), but difficult to alter.
• Programmable: Programs aren't added until after the device is
manufactured, by "blowing" all fuses for which a 1-value is desired.
• Electrically Erasable: Erasing is possible by applying high electric
fields.
[Figure: a USB flash drive, showing its USB connector to the host computer,
USB mass storage controller, flash memory chip, crystal oscillator to
produce the clock signal, LEDs to indicate data transfers, test points for
verifying proper loading, write-protect switch, and space for a second
flash memory chip.]
Input Device: Keyboard
One of the principal devices for
providing input to a computer is
the keyboard.
When a key is pressed, a plunger
on the bottom of the key pushes
down against a rubber dome…
…the center of which completes a
circuit within the keyboard,
resulting in the CPU being signaled
regarding which key (or keys) has
been pressed.
Input Device: Mouse
The other primary input device is the computer mouse.
Mechanical Mouse: Moving the mouse turns the ball. X and Y rollers grip the
ball and transfer movement to optical encoding disks that include light
holes. Infrared LEDs shine through the disks, and sensors gather the light
pulses to convert them to X and Y velocities. The mouse driver software
processes the X and Y data and transfers it to the operating system.

Optical Mouse: Optical mice use red LEDs (or lasers) to illuminate the
surface beneath the mouse, and sensors detect the subtle changes that
indicate how much and in what direction the mouse is being moved.
Output Device: Cathode Ray Tube (CRT)
Electron Guns: A heating filament releases electrons from a cathode, which
flow through a control grid (controlling brightness).
Anode Connection: The positive charge on the anode attracts the electrons
and accelerates them forward.
Focusing Coil: The magnetic coil forces the electron flows to focus into
tight beams.
Deflection Coils: These magnetic plates deflect the beams horizontally and
vertically to particular screen coordinates.
Shadow Mask: A perforated metal sheet halts stray electrons and ensures
that beams focus upon target phosphors.
Phosphor-Coated Screen: Each pixel is composed of a triad of RGB phosphors
that are illuminated by the three electron beams.
Output Device: Liquid Crystal Display
Light Source
Horizontal Polarizer: Converts light into horizontal shafts.
Thin Film Transistor: Applies charge to an individual subpixel.
Twisted Nematic Liquid Crystals: Twist a shaft of light 90º when uncharged,
0º when fully charged.
Color Filter: Provides red, green, or blue color to the resulting light.
Vertical Polarizer: The amount of light permitted to pass is proportional
to how close to vertical its shafts are.
Output Device: Plasma Display
Front Plate Glass / Rear Plate Glass: Sandwich the display's internal layers.
Dielectric Layers: One contains transparent display electrodes, arranged in
long vertical columns; the other contains transparent address electrodes,
arranged in long horizontal rows.
Plasma Cells: A phosphor coating is excited by plasma ionization and photon
release.
Pixel: Composed of three plasma cells, one with each RGB phosphor coating.
Input/Output Device: Touch Screen
Resistive: The glass layer has an outer coating of conductive material, and
insulating dots separate it from a flexible membrane with an inner
conductive coating. When the screen is touched, the two conductive
materials meet, producing a locatable voltage.

Capacitive: Small amounts of voltage are applied to the four corners of the
screen. Touching the screen draws current from each corner, and a
controller measures the ratio of the four currents to determine the touch
location.

Infrared: A small frame is placed around the display, with infrared LEDs
and photoreceptors on opposite sides. Touching the screen breaks beams that
identify the specific X and Y coordinates.

Acoustic: Four ultrasonic devices are placed around the display. When the
screen is touched, an acoustic pattern is produced and compared to the
patterns corresponding to each screen position.
Parallel Processing
Traditional computers have a single processor. They execute one instruction at
a time and can deal with only one piece of data at a time. These machines are
said to have SISD (Single Instruction, Single Data) architectures.
When multiple processors are applied within a single computer, parallel
processing can take place. There are two basic approaches used in these
“supercomputers”:
SIMD (Single Instruction, Multiple Data) Architectures
• Each processor does the same thing at the same time to its own portion of
the data.
• Example: Have the processors perform the graphics rendering for different
sectors of the viewscreen:
MIMD (Multiple Instruction, Multiple Data) Architectures
• At any given moment, each processor does its own task to its own portion
of the data.
• Example: Have some processors retrieve data, some perform calculations,
and some render the resulting images.
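A minimal SIMD-flavored sketch in Python (illustrative, not from the original slides; render is a stand-in for real per-sector work):

```python
from concurrent.futures import ProcessPoolExecutor

# SIMD-style parallelism: every worker applies the same operation to its
# own portion of the data (here, one sector of a "viewscreen").
def render(sector):
    return [pixel * 2 for pixel in sector]       # stand-in for real rendering

if __name__ == "__main__":
    sectors = [[1, 2], [3, 4], [5, 6], [7, 8]]
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(render, sectors)))   # [[2, 4], [6, 8], [10, 12], [14, 16]]
```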