Fundamentals
• Terms for magnitudes
  – logarithms and logarithmic graphs
• Digital representations
  – Binary numbers
  – Text
  – Analog information
• Boolean algebra
• Logical expressions and circuits
Information Technology Magnitude Terms
• Large
  – kilo  = 1,000
  – mega  = 1,000,000
  – giga  = 1,000,000,000
  – tera  = 1,000,000,000,000
  – peta  = 1,000,000,000,000,000
• Small
  – milli = 1/1,000
  – micro = 1/1,000,000
  – nano  = 1/1,000,000,000
  – pico  = 1/1,000,000,000,000
  – femto = 1/1,000,000,000,000,000
Logarithms
• Because of these great differences in magnitude, we often use logarithms to represent values
  – a logarithm is the power to which some base must be raised to get a particular value
  – for example, the base-10 logarithm of 1000 (written log₁₀ 1000) is 3, since 10³ = 1000
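As a quick check of the definition, using functions from Python's standard math module:

```python
import math

# The base-10 logarithm of 1000 is 3, since 10**3 == 1000
print(math.log10(1000))        # 3.0
print(10 ** math.log10(1000))  # back to 1000.0

# The same idea works for any base, e.g. base 2
print(math.log2(1024))         # 10.0, since 2**10 == 1024
```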
Logarithm Scale Graphs
• Graphs often use a logarithmic scale on one axis so that the data fit on a reasonable size graph
[Figure: Intel microprocessors – transistors per chip vs. year (1971–1997), vertical axis logarithmic from 1 to 10,000,000]
Logarithm Scale Graphs (continued)
• The problem with this is that such graphs lose the impact of how rapidly the magnitudes change
[Figure: the same Intel microprocessor data on a linear vertical axis, 0 to 8,000,000 transistors, 1971–1997]
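As a rough sketch of how such a log-scale plot is produced with matplotlib (the values below are illustrative only, roughly doubling every two years, not the actual Intel transistor counts):

```python
import matplotlib.pyplot as plt

# Illustrative values only, not real data
years = list(range(1971, 1999, 2))
transistors = [2300 * 2 ** i for i in range(len(years))]

plt.plot(years, transistors, marker="o")
plt.yscale("log")            # logarithmic vertical axis
plt.xlabel("Year")
plt.ylabel("Transistors")
plt.title("Growth on a logarithmic scale")
plt.show()
```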
Binary Numbers
• Digital systems operate using the binary number system
  – only two digits, 0 and 1
  – can be represented in a computer several ways
    » voltage high or low
    » magnetized one direction or another
  – each digit is a binary digit, or bit
  – referred to as being in base 2
• Magnitudes of binary numbers are determined using positional notation, just like decimal
  – 2691₁₀ = 1×10⁰ + 9×10¹ + 6×10² + 2×10³
  – 100101₂ = 1×2⁰ + 0×2¹ + 1×2² + 0×2³ + 0×2⁴ + 1×2⁵
Converting Between Number Systems
• To convert binary to decimal, simply perform the arithmetic in base 10
  – 100101₂ = 1×2⁰ + 0×2¹ + 1×2² + 0×2³ + 0×2⁴ + 1×2⁵ = 1 + 4 + 32 = 37
• To convert decimal to binary, divide the decimal value by 2 (see the code sketch below)
  – the remainder is the rightmost digit of the binary number
  – repeat on the quotient

    37/2 = 18   remainder 1
    18/2 =  9   remainder 0
     9/2 =  4   remainder 1
     4/2 =  2   remainder 0
     2/2 =  1   remainder 0
     1/2 =  0   remainder 1

  Reading the remainders from last to first, the binary number is 100101.
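A minimal Python sketch of the repeated-division procedure above (the function name to_binary is mine, not from the slides):

```python
def to_binary(n):
    """Convert a non-negative decimal integer to a binary digit string
    by repeatedly dividing by 2 and collecting the remainders."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, 2)    # quotient and remainder
        digits.append(str(remainder))  # remainder is the next digit, rightmost first
    return "".join(reversed(digits))   # read remainders from last to first

print(to_binary(37))       # 100101
print(int("100101", 2))    # 37, the built-in conversion in the other direction
```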
Converting Between Number Systems
(continued)
• Alternatively, build a table of powers of 2, write a 1 by the largest power less than or equal to the value being converted, then subtract that power from the value and repeat until you get to 0 (a second sketch below shows this method)
  – produces the number most significant digit first

    Power of 2:           5    4    3    2    1    0
    Decimal equivalent:  32   16    8    4    2    1
    Binary number:        1    0    0    1    0    1
    New decimal value:   37 - 32 = 5,  5 - 4 = 1,  1 - 1 = 0
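A sketch of this subtract-the-largest-power method in Python (again, the helper name and the bits parameter are my own):

```python
def to_binary_msd_first(n, bits=6):
    """Build the binary digits most significant first by subtracting powers of 2."""
    digits = []
    for power in range(bits - 1, -1, -1):   # e.g. 32, 16, 8, 4, 2, 1 for bits=6
        if 2 ** power <= n:
            digits.append("1")
            n -= 2 ** power                 # subtract the power just used
        else:
            digits.append("0")
    return "".join(digits)

print(to_binary_msd_first(37))   # 100101
```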
Binary Arithmetic
• What happens if you add two digits in base 10 and get a result greater than 9?
  – generate a carry

      5
    + 5
    ---
     10

• Same thing happens if you add two binary digits and get a result greater than 1

      0       0       1       1
    + 0     + 1     + 0     + 1
    ---     ---     ---     ---
      0       1       1      10
Binary Arithmetic (continued)
• To do addition, we need just one more piece of information:

     10
    + 1
    ---
     11

• Then we can add two binary numbers by using the four cases on the previous slide and the identity above (illustrated in code below)

    carry    1111000
    addend    1011010
    addend   + 111100
             --------
    result   10010110
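A minimal Python sketch of this column-by-column addition with carries (the helper name add_binary is mine):

```python
def add_binary(a, b):
    """Add two binary digit strings column by column, propagating carries."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)   # pad to the same length
    result, carry = [], 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):   # rightmost column first
        total = int(bit_a) + int(bit_b) + carry
        result.append(str(total % 2))   # digit written in this column
        carry = total // 2              # carry into the next column left
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("1011010", "111100"))   # 10010110
print(bin(0b1011010 + 0b111100))         # 0b10010110, cross-check with built-ins
```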
Binary Arithmetic (continued)
• Subtraction uses a similar idea, that of a borrow from the next column left when we’re trying to subtract a larger digit from a smaller

     16
    - 9
    ---
      7

• With binary digits, the same thing holds

      0       1       1      10
    - 0     - 0     - 1     - 1
    ---     ---     ---     ----
      0       1       0       1
Binary Arithmetic (continued)
• Consider a couple of examples

      1000        1001010
    -    1      -    1110
    ------      ---------
       111         111100
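A matching Python sketch for subtraction with borrows (the helper name is mine; it assumes the first operand is at least as large as the second):

```python
def subtract_binary(a, b):
    """Subtract binary string b from a (a >= b), borrowing from the next column left."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, borrow = [], 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):   # rightmost column first
        diff = int(bit_a) - int(bit_b) - borrow
        if diff < 0:
            diff += 2      # borrow 10 (two) from the column to the left
            borrow = 1
        else:
            borrow = 0
        result.append(str(diff))
    return "".join(reversed(result)).lstrip("0") or "0"

print(subtract_binary("1000", "1"))         # 111
print(subtract_binary("1001010", "1110"))   # 111100
```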
Binary Arithmetic (continued)
• Multiplication is simple
  – 0 times anything is 0
  – 1 times anything is that thing again
• For example (sketched in code below)

              1011001
            x  100101
            ---------
              1011001
             0000000
            1011001
           0000000
          0000000
         1011001
         ------------
         110011011101
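A short Python sketch of this shift-and-add view of multiplication (the helper name is mine; int(..., 2) and bin handle the individual conversions):

```python
def multiply_binary(a, b):
    """Multiply two binary strings by adding shifted copies of a,
    one copy for each 1 digit in b (0 digits contribute nothing)."""
    total = 0
    for shift, digit in enumerate(reversed(b)):   # rightmost digit of b first
        if digit == "1":
            total += int(a + "0" * shift, 2)      # a shifted left by 'shift' places
    return bin(total)[2:]

print(multiply_binary("1011001", "100101"))   # 110011011101
```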
Binary Arithmetic (continued)
• For division, the divisor is either
  – less than or equal to what it's dividing into, so the quotient digit is 1
  – greater than what it's dividing into, so the quotient digit is 0
• For example (a code sketch follows the worked example)

          101110        quotient
         -------
    10 ) 1011101
         10
         --
         011
          10
          --
           11
           10
           --
            10
            10
            --
             01         remainder
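A Python sketch of binary long division following the same rule (the helper name is mine):

```python
def divide_binary(dividend, divisor):
    """Binary long division: at each step the divisor either fits into the
    current partial remainder (quotient digit 1) or it doesn't (digit 0)."""
    quotient, remainder = [], 0
    d = int(divisor, 2)
    for digit in dividend:                 # bring down one dividend digit at a time
        remainder = remainder * 2 + int(digit)
        if d <= remainder:                 # divisor fits: quotient digit is 1
            quotient.append("1")
            remainder -= d
        else:                              # divisor doesn't fit: quotient digit is 0
            quotient.append("0")
    return "".join(quotient).lstrip("0") or "0", bin(remainder)[2:]

print(divide_binary("1011101", "10"))   # ('101110', '1')
```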
Octal and Hexadecimal
• Reading and writing binary numbers can be confusing, so we often use octal (base 8) or hexadecimal (base 16) numbers
  – group the binary number into sets of 3 (octal) or 4 (hex) bits, then replace each group by its corresponding digit from the tables
  – to convert back to binary, just replace each octal or hex digit with its binary equivalent

    binary  octal        binary  hex        binary  hex
    000     0            0000    0          1000    8
    001     1            0001    1          1001    9
    010     2            0010    2          1010    A
    011     3            0011    3          1011    B
    100     4            0100    4          1100    C
    101     5            0101    5          1101    D
    110     6            0110    6          1110    E
    111     7            0111    7          1111    F
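A small Python sketch of the grouping idea, with the built-in conversions as a cross-check (the helper name is mine):

```python
def binary_to_hex(bits):
    """Group a binary string into 4-bit chunks (left-padded) and map each
    chunk to its hexadecimal digit."""
    digits = "0123456789ABCDEF"
    bits = bits.zfill((len(bits) + 3) // 4 * 4)    # pad on the left to a multiple of 4
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(digits[int(group, 2)] for group in groups)

print(binary_to_hex("10010110"))    # 96
print(format(0b10010110, "X"))      # 96, same answer via the built-in
print(format(0b10010110, "o"))      # 226 in octal (groups of 3 bits)
```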
Real Numbers
• Previous numeric values were all integers
• We commonly use real numbers (with a decimal point and fractional part) as well
  – 24.125 = 2×10¹ + 4×10⁰ + 1×10⁻¹ + 2×10⁻² + 5×10⁻³
• The same idea holds for binary numbers
  – 11000.001 = 1×2⁴ + 1×2³ + 0×2² + 0×2¹ + 0×2⁰ + 0×2⁻¹ + 0×2⁻² + 1×2⁻³
• Can also write these in scientific notation
  – 0.24125 × 10²
  – 0.11000001 × 2⁵
• Referred to as floating point numbers in “computer speak”
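A quick Python check that the binary example above equals the decimal one (11000.001 in base 2 is exactly 24.125):

```python
# 11000.001 in binary, expanded digit by digit as on the slide
value = (1 * 2**4 + 1 * 2**3 + 0 * 2**2 + 0 * 2**1 + 0 * 2**0
         + 0 * 2**-1 + 0 * 2**-2 + 1 * 2**-3)
print(value)              # 24.125

# Equivalent view: the digits 11000001 with the binary point moved 3 places left
print(0b11000001 / 2**3)  # 24.125
```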
Holding Binary Numbers in a Computer
• Computer memory is organized into chunks of 8 bits, called bytes
• The range of values that an integer can hold depends on how many bytes of memory are used
  – 1 byte:  0 to 255
  – 2 bytes: -32,768 to 32,767
  – 4 bytes: -2,147,483,648 to 2,147,483,647
• Floating point numbers usually have 4 or 8 byte representations
  – separate exponent and magnitude
[Diagram: 4-byte layout with a sign bit (bit 0), an exponent field, and a magnitude field through bit 31; 8-byte layout likewise, through bit 63]
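To make these sizes concrete, a small sketch using Python's standard struct module (the format codes B, h, i, f, d denote 1-, 2-, 4-, 4- and 8-byte representations; the values are just examples):

```python
import struct

# Number of bytes used by common fixed-size representations
print(struct.calcsize("<B"))   # 1  (unsigned byte:  0 to 255)
print(struct.calcsize("<h"))   # 2  (signed short:  -32,768 to 32,767)
print(struct.calcsize("<i"))   # 4  (signed int:    -2,147,483,648 to 2,147,483,647)
print(struct.calcsize("<f"))   # 4  (single-precision float)
print(struct.calcsize("<d"))   # 8  (double-precision float)

# Packing a value too large for the chosen size fails
struct.pack("<h", 32767)       # fits in 2 bytes
# struct.pack("<h", 32768)     # would raise struct.error
```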
Representing Text
• Text is an example of discrete information
  – like integers, there are only certain values that are allowed
• Representing text in a computer is simply a matter of defining a correspondence between each character and a unique binary number
  – called a code
  – need different numbers for upper and lower case representation of same letter
  – need representation for digits 0 - 9 as characters
  – want A to be less than B so it's possible to alphabetize character information
Representing Text (continued)
• The American Standard Code for Information Interchange (ASCII) code is the standard for most computers
  – 7-bit code (128 possible characters)
  – stored in memory as a single byte
• Won't represent non-Roman characters easily
  – the newer 16-bit Unicode will
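A small illustration using Python's ord and chr, which expose the character code directly:

```python
# Each character corresponds to a unique number
print(ord("A"), ord("B"), ord("a"))   # 65 66 97: upper and lower case differ
print(ord("0"), ord("9"))             # 48 57: digits as characters have their own codes
print(chr(65))                        # 'A': from number back to character

# Because 'A' < 'B' numerically, character data can be alphabetized
print(sorted(["Banana", "Apple", "Cherry"]))   # ['Apple', 'Banana', 'Cherry']
```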
Representing Analog Information
• If the data we want to represent in a computer is not discrete but continuous, we need to turn it into a sequence of numerical values by sampling
  – examples are sound, pressures, images, video
  – the sequence of samples approximates the original signal
Representing Analog Information
(continued)
• Values used for the samples determine precision of measurement
  – too coarse a division of the range of possible input values yields a poor approximation
  – too fine a division wastes storage space (since more bits needed for each sample)
    » 8 bits, 256 levels; 16 bits, 65,536 levels
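A small Python sketch of quantization at two different precisions; the sine-wave samples are made up purely for illustration:

```python
import math

def quantize(x, bits):
    """Map a sample in the range [-1.0, 1.0] onto one of 2**bits discrete levels."""
    step = 2.0 / (2 ** bits)                  # width of one level
    return round((x + 1.0) / step) * step - 1.0

samples = [math.sin(2 * math.pi * t / 16) for t in range(16)]   # 16 samples of one cycle
coarse = [quantize(s, 3) for s in samples]    # 3 bits: only 8 levels, poor approximation
fine = [quantize(s, 16) for s in samples]     # 16 bits: 65,536 levels, close to original

print(max(abs(a - b) for a, b in zip(samples, coarse)))   # relatively large error
print(max(abs(a - b) for a, b in zip(samples, fine)))     # tiny error
```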
Representing Analog Information
(continued)
• Number of samples in a given time period is called the frequency or sample rate
  – defined by the number of measurements per second (Hz)
  – the sample rate needed depends on how rapidly the input signal changes
Representing Analog Information
(continued)
• Need to trade off sampling rate and precision to achieve an acceptable approximation without letting the resulting digital data get too large
• Audio CD (the arithmetic is checked below)
  – 44.1 kHz sampling rate per channel
  – 16-bit precision
  – 1 minute of CD-quality stereo is almost 10.6 Mbytes
• For images
  – resolution (number of samples in the horizontal and vertical directions) takes the role of sampling rate
  – precision is measured by the number of bits per sample (samples are called pixels)
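A quick check of that CD figure (44,100 samples per second, 2 bytes per sample, 2 channels, 60 seconds):

```python
sample_rate = 44_100        # samples per second, per channel
bytes_per_sample = 16 // 8  # 16-bit precision = 2 bytes
channels = 2                # stereo
seconds = 60

total_bytes = sample_rate * bytes_per_sample * channels * seconds
print(total_bytes)               # 10584000 bytes
print(total_bytes / 1_000_000)   # about 10.6 Mbytes, as stated on the slide
```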
[Image: Original – 1600 x 800 pixels, 24 bits per pixel]
[Image: Lower resolution – 300 x 150 pixels, 24 bits]
[Image: Lower resolution – 150 x 75 pixels, 24 bits]
[Image: Lower resolution – 50 x 25 pixels, 24 bits]
[Image: Lower resolution – 25 x 12 pixels, 24 bits]
[Image: Base image – 300 x 150, 24 bits]
[Image: Lower precision – 300 x 200 pixels, 8 bits]
[Image: Lower precision – 300 x 200 pixels, 4 bits]
[Image: Lower precision – 300 x 200 pixels, 1 bit]
Boolean Algebra
• Developed in 1854 by English mathematician George Boole
  – logical algebra in which all quantities are either true or false
  – fits well with binary representations (1 = true, 0 = false)
• Foundation of all computer hardware design
• Three fundamental logical operations

    A  B | not A | A or B | A and B
    0  0 |   1   |   0    |    0
    0  1 |   1   |   1    |    0
    1  0 |   0   |   1    |    0
    1  1 |   0   |   1    |    1
    (an example of a truth table)
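These three operations map directly onto Python's not, or and and operators; a tiny sketch that regenerates the truth table above:

```python
print("A  B  not A  A or B  A and B")
for A in (0, 1):
    for B in (0, 1):
        # not returns True/False, so int() turns it back into 0 or 1 for display
        print(f"{A}  {B}    {int(not A)}       {A or B}        {A and B}")
```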
Boolean Algebra (continued)
• It's important that the possible values for A and B are assigned so they cover all the possible combinations
  – assign methodically, as shown on the preceding slide
Boolean Algebra (continued)
• Two other logical operations (combinations of the fundamental ones) are important
  – not or (nor): think of it as or followed by not
  – not and (nand): and followed by not
  – any logic function that can be expressed using and, or, not can also be expressed using just one of nand, nor (checked in the sketch below)

    A  B | A or B | not (A or B) | A nor B | A and B | not (A and B) | A nand B
    0  0 |   0    |      1       |    1    |    0    |       1       |    1
    0  1 |   1    |      0       |    0    |    0    |       1       |    1
    1  0 |   1    |      0       |    0    |    0    |       1       |    1
    1  1 |   1    |      0       |    0    |    1    |       0       |    0
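A short Python sketch checking that last claim for nand (the helper names nand and nor are mine):

```python
def nand(a, b):
    return int(not (a and b))

def nor(a, b):
    return int(not (a or b))

# and, or, not can each be rebuilt from nand alone (similarly for nor)
for A in (0, 1):
    for B in (0, 1):
        assert int(not A) == nand(A, A)                       # not from nand
        assert int(A and B) == nand(nand(A, B), nand(A, B))   # and from nand
        assert int(A or B) == nand(nand(A, A), nand(B, B))    # or from nand
        assert int(not A) == nor(A, A)                        # not from nor
print("and, or, not are all expressible using only nand (or only nor)")
```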
Logical Expressions
• Can combine these logical operations, just as we combine arithmetic expressions, to produce logical expressions
  – order of operations is: not first, then and, then or
  – do equal-precedence operations left to right
  – change the order with parentheses
    A  B  C | B or C | not C | not C and (B or C) | A and B | A and B or (not C and (B or C))
    0  0  0 |   0    |   1   |         0          |    0    |               0
    0  0  1 |   1    |   0   |         0          |    0    |               0
    0  1  0 |   1    |   1   |         1          |    0    |               1
    0  1  1 |   1    |   0   |         0          |    0    |               0
    1  0  0 |   0    |   1   |         0          |    0    |               0
    1  0  1 |   1    |   0   |         0          |    0    |               0
    1  1  0 |   1    |   1   |         1          |    1    |               1
    1  1  1 |   1    |   0   |         0          |    1    |               1
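A Python sketch that evaluates the same expression for every input combination, reproducing the final column of the table:

```python
from itertools import product

# Evaluate A and B or (not C and (B or C)) for all 8 input combinations.
# Python's precedence matches the slide: not, then and, then or.
for A, B, C in product((0, 1), repeat=3):
    result = A and B or (not C and (B or C))
    print(A, B, C, int(result), sep="  ")
```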
Implementing Logical Expressions
• To convert the logical expression to a circuit that calculates the equivalent logical value, simply provide a circuit for each of the terms of the logical expression (a gate-level sketch follows)
[Figure: circuit built from and, or, and not gates with inputs A, B, and C]
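The same idea sketched as code, one small function per gate, wired together the way the circuit would be (all names are mine):

```python
def NOT(a):        # inverter
    return 1 - a

def AND(a, b):     # and gate
    return a & b

def OR(a, b):      # or gate
    return a | b

def circuit(A, B, C):
    """One gate per term of A and B or (not C and (B or C))."""
    return OR(AND(A, B), AND(NOT(C), OR(B, C)))

print(circuit(0, 1, 0))   # 1, matching the truth table row A=0, B=1, C=0
```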
Implementing Logical Expressions
(continued)
• Of course, it's not really as simple as this
  – there may be many possible logical expressions that produce the same output of 0s and 1s
  – the hardware designer must choose the optimal one based on one or more criteria
    » minimum number of logic functions
    » fewest different types of logic functions
    » fewest levels of logic functions between inputs and outputs
Remembering the Past
• The previous logic circuit is an example of a combinational circuit
  – the output at any given time depends solely on the current values of the inputs
• Another kind of logic circuit is a sequential circuit
  – the output at any given time depends on the current values of the inputs and the current value of the output
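The slides don't name a concrete sequential circuit, but a classic example is an SR (set/reset) latch built from two cross-coupled nor gates. A very rough Python sketch of the feedback idea follows; all names are mine, and a real latch would settle over repeated updates rather than a single step:

```python
def nor(a, b):
    return int(not (a or b))

def sr_latch(S, R, Q):
    """One update step of an SR latch: the new output depends on the
    inputs S, R and on the current output Q (the remembered value)."""
    Q_bar = nor(S, Q)      # feedback: each gate's input includes the other's output
    return nor(R, Q_bar)

Q = 0
Q = sr_latch(S=1, R=0, Q=Q)   # set:   Q becomes 1
Q = sr_latch(S=0, R=0, Q=Q)   # hold:  Q stays 1 (the circuit remembers the past)
print(Q)                      # 1
Q = sr_latch(S=0, R=1, Q=Q)   # reset: Q becomes 0
print(Q)                      # 0
```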