ECE472
Computer Architecture
Lecture #2—Sep. 26, 2007
Patrick Chiang
TA: Kang-Min Hu
Department of Electrical Engineering
Oregon State University
http://eecs.oregonstate.edu/~pchiang
P. Chiang, with slide help from C. Kozyrakis (Stanford)
Chapter 1
Introduction
• This course is all about how computers work
• But what do we mean by a computer?
– Different types: desktop, servers, embedded devices
– Different uses: automobiles, graphics, finance, genomics…
– Different manufacturers: Intel, Apple, IBM, Microsoft, Sun…
– Different underlying technologies and different costs!
• Analogy: Consider a course on “automotive vehicles”
– Many similarities from vehicle to vehicle (e.g., wheels)
– Huge differences from vehicle to vehicle (e.g., gas vs. electric)
• Best way to learn:
– Focus on a specific instance and learn how it works
– While learning general principles and historical perspectives
Why learn this stuff?
• You want to call yourself a “computer scientist”
• You want to build software people use (need performance)
• You need to make a purchasing decision or offer “expert” advice
• Both Hardware and Software affect performance:
– Algorithm determines number of source-level statements
– Language/Compiler/Architecture determine machine instructions
(Chapter 2 and 3)
– Processor/Memory determine how fast instructions are executed
(Chapter 5, 6, and 7)
• Assessing and Understanding Performance in Chapter 4
What is a computer?
• Components:
– input (mouse, keyboard)
– output (display, printer)
– memory (disk drives, DRAM, SRAM, CD)
– network
• Our primary focus: the processor (datapath and control)
– implemented using millions of transistors
– Impossible to understand by looking at each transistor
– We need...
Abstraction
• Delving into the depths
reveals more information
• An abstraction omits unneeded detail,
helps us cope with complexity
What are some of the details that
appear in these familiar abstractions?
How do computers work?
• Need to understand abstractions such as:
– Applications software
– Systems software
– Assembly Language
– Machine Language
– Architectural Issues: i.e., Caches, Virtual Memory, Pipelining
– Sequential logic, finite state machines
– Combinational logic, arithmetic circuits
– Boolean logic, 1s and 0s
– Transistors used to build logic gates (CMOS)
– Semiconductors/Silicon used to build transistors
– Properties of atoms, electrons, and quantum dynamics
• So much to learn!
Instruction Set Architecture
• A very important abstraction
– interface between hardware and low-level software
– standardizes instructions, machine language bit patterns, etc.
– advantage: different implementations of the same architecture
– disadvantage: sometimes prevents using new innovations
True or False: Binary compatibility is extraordinarily important?
• Modern instruction set architectures:
– IA-32, PowerPC, MIPS, SPARC, ARM, and others
Historical Perspective
• ENIAC built in World War II was the first general purpose computer
– Used for computing artillery firing tables
– 80 feet long by 8.5 feet high and several feet wide
– Each of the twenty 10-digit registers was 2 feet long
– Used 18,000 vacuum tubes
– Performed 1900 additions per second
• Since then, Moore's Law: transistor capacity doubles every 18-24 months
• Lecture #2 – Sep. 27, 2007
• Notes: Course notes at eecs.oregonstate.edu/~pchiang/ under ECE472
Today’s Lecture
• Review of Tuesday’s class
– Computer architecture is on the brink of major upheaval
• Multi-core computing
– Computer Systems Performance Metrics
• Execution Time
• Power
• Cost
• Today’s lecture material
– Benchmarks – how to evaluate computer performance
– MIPS Assembly Language
Examples
• Latency metric: program execution time in seconds
CPU time = Seconds / Program
         = (Cycles / Program) × (Seconds / Cycle)
         = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle)
         = IC × CPI × CCT
– Your system architecture can affect all of them
• CPI: memory latency, IO latency, …
• CCT: cache organization, …
• IC: OS overhead, …
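For a quick worked illustration (numbers assumed here, not from the slides): a program that executes IC = 2 × 10^9 instructions with CPI = 1.5 on a 1 GHz clock (CCT = 1 ns) takes CPU time = 2 × 10^9 × 1.5 × 1 ns = 3 seconds; halving CPI or doubling the clock rate would halve that time.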
A is Faster than B?
• Given the CPU time for machines A and B, "A is X times faster than B" means:
X = CPUTimeB / CPUTimeA
• Example: CPUtimeA = 3.4 sec and CPUtimeB = 5.3 sec, then
– A is 5.3/3.4 ≈ 1.56 times faster than B, or about 56% faster
• If you start with bandwidth metrics of performance, use the inverse ratio:
X = BandwidthA / BandwidthB
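For example (illustrative numbers): if machine A sustains 200 MB/s on some transfer and machine B sustains 100 MB/s, then X = 200/100 = 2, so A is 2 times faster than B.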
Speedup and Amdahl’s Law
• Speedup = CPUtimeold / CPUtimenew
• Given an optimization x that accelerates fraction fx of program by a
factor of Sx, how much is the overall speedup?
Speedup = CPUTimeold / CPUTimenew
        = CPUTimeold / (CPUTimeold × [(1 − fx) + fx / Sx])
        = 1 / [(1 − fx) + fx / Sx]
• Lessons from Amdahl's law
– Make common cases fast: as fx → 1, speedup → Sx
– But don't over-optimize the common case: as Sx → ∞, speedup → 1 / (1 − fx)
• Speedup is limited by the fraction of the code that can be accelerated
• Uncommon case will eventually become the common one
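A quick worked case (numbers chosen for illustration): if fx = 0.8 and Sx = 10, speedup = 1 / (0.2 + 0.8/10) = 1 / 0.28 ≈ 3.6, far less than the factor of 10 applied to the optimized portion.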
Amdahl’s Law Example
• If Sx=100, what is the overall speedup as a function of fx?
[Plot: Speedup vs. Fraction of Code Optimized for Sx = 100; the y-axis runs from 0 to 100 and the x-axis from 0 to 1. Speedup stays modest across most of the range and rises steeply toward 100 only as the optimized fraction approaches 1.]
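Some points on that curve, computed from the speedup formula above: fx = 0.5 gives 1/(0.5 + 0.005) ≈ 2, fx = 0.9 gives ≈ 9.2, fx = 0.99 gives ≈ 50, and only fx = 1 reaches the full factor of 100.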
Cost of Integrated Circuits
IC cost = (Die cost + Testing cost + Packaging cost) / Final test yield

Die cost = Wafer cost / (Dies per wafer × Die yield)

Die yield = Wafer yield × (1 + (Defect density × Die area) / α)^(−α)
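A hedged numeric sketch (all values assumed for illustration): with a $5000 wafer that yields 200 dies, a defect density of 0.5 defects/cm², a die area of 1 cm², α = 4, and wafer yield of 1, die yield = (1 + 0.5 × 1 / 4)^(−4) ≈ 0.62, so die cost ≈ 5000 / (200 × 0.62) ≈ $40 before testing and packaging.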
Power
• Power = C (capacitance) × Vdd² × f (frequency)
• Execution Time = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle)
• Conflicting goals:
– Execution time goes down but power goes up!
– Really exponential power increase
• Ways to solve this problem?
• Operate on N instructions in parallel
– Clock Frequency => f/N
– Keep clock frequency the same or reduce it
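A rough illustration with assumed numbers: one datapath at frequency f and supply Vdd burns P = C × Vdd² × f. Two datapaths running at f/2 have roughly 2C of switching capacitance, so power ≈ 2C × Vdd² × (f/2) = P for about the same instruction throughput; if the lower frequency also lets Vdd drop by 15%, power falls to roughly 0.85² ≈ 0.72 of P.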
Evaluating Performance
• What do we mean by “performance?”
• How do we select benchmark programs?
• How do we summarize performance across a suite of programs?
– When to use the different types of means
– Statistics for architects
What is Performance?
• Unlike cost, depends on the program you run. Can be stated in terms
of execution time or bandwidth.
• Given execution time for machines A and B, “A is X times faster than B” means:
X = CPUTimeB / CPUTimeA
X is called the speedup of A over B.
• Example: time(A) = 3.4 sec and time(B) = 5.3 sec for some program.
– A is 5.3/3.4 ≈ 1.56 times faster than B, or about 56% faster
• For bandwidth metrics of performance, use the inverse ratio:
X = BandwidthA / BandwidthB
Choosing Benchmark Programs
• Criteria
– Representative of real workloads in some way
– Hard to “cheat” (i.e. get deceptively good performance that will never be
seen in real life)
• Best solution: run substantial, real-world programs
– Representative because real
– Improvements on these programs = improvements in the real world
– …but require more effort than “toy benchmarks”
• Examples:
– SPEC CPU integer/floating-point suites
– TPC transaction processing benchmarks
Benchmarks
• Scientific computing: Linpack, SpecOMP, SpecHPC, …
• Embedded benchmarks: EEMBC, Dhrystone, …
• Enterprise computing
– TPC-C, TPC-W, TPC-H
– SpecJbb, SpecSFS, SpecMail, Streams, …
– MinuteSort, PennySort, …
• Other
– 3Dmark, ScienceMark, Winstone, iBench, AquaMark, …
• Caveats:
– Your results will be as good as your benchmarks
– Make sure you know what the benchmark is designed to measure
– Performance is not the only metric for computing systems
• Cost, power consumption, reliability, real-time performance, …
– Predicting the real-world programs/datasets of 3 years from now is hard
How do you summarize performance?
• Combining different benchmark results into 1 number: sometimes
misleading, always controversial…and inevitable
• 3 types of means
– Arithmetic: for times
– Harmonic: for rates
– Geometric: for ratios
• Statistics for architects: benchmark suites as samples of a population
– Distributions
– Confidence intervals
(Weighted) Arithmetic Mean
AM = (1/n) × Σ (i = 1 to n) Timei; with weights, WAM = Σ (i = 1 to n) Weighti × Timei

                 Machine A    Machine B    Speedup (B over A)
Prog. 1 (sec)        1           10              0.1
Prog. 2 (sec)     1000          100             10
Mean (50/50)       500.5         55              9.1
Mean (75/25)       250.75        32.5            7.7
• If you know your exact workload (benchmarks & relative
frequencies), this is the right way to summarize performance.
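For instance, the 50/50 entry for Machine A is 0.5 × 1 + 0.5 × 1000 = 500.5 sec, and the 75/25 entry is 0.75 × 1 + 0.25 × 1000 = 250.75 sec; the speedup column is the ratio of the two machines' means.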
(Weighted) Harmonic Mean
HM = n / Σ (i = 1 to n) (1 / Ratei); with weights, WHM = 1 / Σ (i = 1 to n) (Weighti / Ratei)
• Exactly analogous, but for averaging rates (work / unit time).
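For example (illustrative rates): averaging 10 MB/s and 40 MB/s harmonically gives 2 / (1/10 + 1/40) = 16 MB/s, the single rate that reproduces the correct total time for equal amounts of work at each rate.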
Geometric mean: used for ratios
1
 
n
n

GM    Ratio i 

 i 1

• Used by SPEC CPU suite. To avoid questions of how to weight
benchmarks, normalize Machine A’s performance on each benchmark i
to the performance of some reference machine Ref:
Timei , MachineA
SPECRatioi 
Timei , Ref
and report GM of ratios as final result.
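For example (made-up ratios): ratios of 0.5, 2.0, and 1.0 on three benchmarks give GM = (0.5 × 2.0 × 1.0)^(1/3) = 1.0; and since GM(X)/GM(Y) = GM(X/Y) (next slide), the resulting comparison between two machines does not depend on which reference machine was chosen.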
Pros and Cons of Geometric Mean
• Pros: Ratio of means = mean of ratios
GM(Xi / Yi) = GM(Xi) / GM(Yi)
• Cons:
– No intuitive physical meaning
– Can’t be related back to execution time
Chapter 2
Instructions:
• Language of the Machine
• We’ll be working with the MIPS instruction set architecture
– similar to other architectures developed since the 1980's
– Almost 100 million MIPS processors manufactured in 2002
– used by NEC, Nintendo, Cisco, Silicon Graphics, Sony, …
[Chart: processors shipped per year, 1998–2002, broken out by architecture: ARM, IA-32, MIPS, Motorola 68K, PowerPC, Hitachi SH, SPARC, and Other; the y-axis runs from 0 to 1400.]
MIPS arithmetic
• All instructions have 3 operands
• Operand order is fixed (destination first)
Example:
C code:
a = b + c
MIPS ‘code’:
add a, b, c
(we’ll talk about registers in a bit)
“The natural number of operands for an operation like addition is
three…requiring every instruction to have exactly three operands, no
more and no less, conforms to the philosophy of keeping the hardware
simple”
MIPS arithmetic
• Design Principle: simplicity favors regularity.
• Of course this complicates some things...
C code:
a = b + c + d;
MIPS code:
add a, b, c
add a, a, d
• Operands must be registers, only 32 registers provided
• Each register contains 32 bits
• Design Principle: smaller is faster. Why?
Registers vs. Memory
• Arithmetic instructions' operands must be registers,
— only 32 registers provided
• Compiler associates variables with registers
• What about programs with lots of variables?
[Diagram: the five classic components of a computer: input, output, memory, and a processor made up of datapath and control, connected through I/O.]
Memory Organization
• Viewed as a large, single-dimension array, with an address.
• A memory address is an index into the array
• "Byte addressing" means that the index points to a byte of memory.
[Diagram: memory viewed as an array of bytes; addresses 0, 1, 2, 3, … each select 8 bits of data.]
Memory Organization
• Bytes are nice, but most data items use larger "words"
• For MIPS, a word is 32 bits or 4 bytes.
[Diagram: memory viewed as an array of words; addresses 0, 4, 8, 12, … each select 32 bits of data. Registers also hold 32 bits of data.]
• 2^32 bytes with byte addresses from 0 to 2^32 − 1
• 2^30 words with byte addresses 0, 4, 8, ..., 2^32 − 4
• Words are aligned
i.e., what are the 2 least significant bits of a word address?
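(Answer: for an aligned word address the two least significant bits are always 00; e.g., 12 is 1100 in binary.)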
Instructions
• Load and store instructions
• Example:
C code:
A[12] = h + A[8];
MIPS code:
lw $t0, 32($s3)
add $t0, $s2, $t0
sw $t0, 48($s3)
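(The offsets follow from 4-byte words: with the base address of A in $s3 and h in $s2, A[8] is at byte offset 8 × 4 = 32 and A[12] at 12 × 4 = 48.)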
• Can refer to registers by name (e.g., $s2, $t2) instead of number
• Store word has destination last
• Remember arithmetic operands are registers, not memory!
Can't write: add 48($s3), $s2, 32($s3)
So far we’ve learned:
• MIPS
— loading words but addressing bytes
— arithmetic on registers only
• Instruction            Meaning
  add $s1, $s2, $s3      $s1 = $s2 + $s3
  sub $s1, $s2, $s3      $s1 = $s2 – $s3
  lw $s1, 100($s2)       $s1 = Memory[$s2 + 100]
  sw $s1, 100($s2)       Memory[$s2 + 100] = $s1
Machine Language
• Instructions, like registers and words of data, are also 32 bits long
– Example: add $t1, $s1, $s2
– registers have numbers, $t1=9, $s1=17, $s2=18
• Instruction Format:
  000000   10001   10010   01001   00000   100000
  op       rs      rt      rd      shamt   funct
• Can you guess what the field names stand for?
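(For reference: op is the opcode, rs and rt are the source registers, rd is the destination register, shamt is the shift amount, and funct selects the specific operation, 100000 for add.)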
Machine Language
• Consider the load-word and store-word instructions,
– What would the regularity principle have us do?
– New principle: Good design demands a compromise
• Introduce a new type of instruction format
– I-type for data transfer instructions
– other format was R-type for register
• Example: lw $t0, 32($s2)
  35    18    8     32
  op    rs    rt    16-bit number
• Where's the compromise?
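(One answer: the instruction formats now differ, which complicates decoding, but all instructions remain 32 bits and the first three fields keep the same positions across formats.)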
Stored Program Concept
• Instructions are bits
• Programs are stored in memory
— to be read or written just like data
[Diagram: processor connected to memory; memory holds data, programs, compilers, editors, etc.]
• Fetch & Execute Cycle
– Instructions are fetched and put into a special register
Lecture 1 - 38
EE472 – Fall 2007
P. Chiang with slides from C.
Kozyrakis (Stanford)
Control
• Decision making instructions
– alter the control flow,
– i.e., change the "next" instruction to be executed
• MIPS conditional branch instructions:
bne $t0, $t1, Label
beq $t0, $t1, Label
• Example:
if (i==j) h = i + j;
bne $s0, $s1, Label
add $s3, $s0, $s1
Label:
....
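(Here i, j, and h are assumed to live in $s0, $s1, and $s3; the bne skips the add when i ≠ j.)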
Control
• MIPS unconditional branch instructions:
j label
• Example:
if (i!=j)
h=i+j;
else
h=i-j;
beq $s4, $s5, Lab1
add $s3, $s4, $s5
j Lab2
Lab1: sub $s3, $s4, $s5
Lab2: ...
• Can you build a simple for loop?
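One possible sketch (register assignments are assumed, and it uses slt and addi, which appear a few slides later): a loop that adds i = 0 … 9 into a running sum:
      add  $s2, $zero, $zero     # sum = 0
      add  $s0, $zero, $zero     # i = 0
      addi $t1, $zero, 10        # loop limit
Loop: slt  $t0, $s0, $t1         # $t0 = 1 if i < 10
      beq  $t0, $zero, Exit      # leave the loop once i >= 10
      add  $s2, $s2, $s0         # sum = sum + i
      addi $s0, $s0, 1           # i = i + 1
      j    Loop                  # back to the test
Exit: ...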
So far:
• Instruction            Meaning
  add $s1, $s2, $s3      $s1 = $s2 + $s3
  sub $s1, $s2, $s3      $s1 = $s2 – $s3
  lw $s1, 100($s2)       $s1 = Memory[$s2 + 100]
  sw $s1, 100($s2)       Memory[$s2 + 100] = $s1
  bne $s4, $s5, L        Next instr. is at L if $s4 ≠ $s5
  beq $s4, $s5, L        Next instr. is at L if $s4 = $s5
  j Label                Next instr. is at Label
• Formats:
  R    op    rs    rt    rd    shamt    funct
  I    op    rs    rt    16-bit address
  J    op    26-bit address
Control Flow
• We have: beq, bne, what about Branch-if-less-than?
• New instruction: slt $t0, $s1, $s2
  if $s1 < $s2 then $t0 = 1 else $t0 = 0
• Can use this instruction to build "blt $s1, $s2, Label"
— can now build general control structures
• Note that the assembler needs a register to do this,
— there are policy of use conventions for registers
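A sketch of how the assembler might expand blt (the use of $at, register 1, follows the convention on the next slide that $at is reserved for the assembler):
  slt $at, $s1, $s2        # $at = 1 if $s1 < $s2
  bne $at, $zero, Label    # branch when $s1 < $s2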
Policy of Use Conventions
Name       Register number   Usage
$zero      0                 the constant value 0
$v0-$v1    2-3               values for results and expression evaluation
$a0-$a3    4-7               arguments
$t0-$t7    8-15              temporaries
$s0-$s7    16-23             saved
$t8-$t9    24-25             more temporaries
$gp        28                global pointer
$sp        29                stack pointer
$fp        30                frame pointer
$ra        31                return address

Register 1 ($at) is reserved for the assembler; registers 26-27 are reserved for the operating system.
Constants
• Small constants are used quite frequently (50% of operands)
e.g.,
A = A + 5;
B = B + 1;
C = C - 18;
• Solutions? Why not?
– put 'typical constants' in memory and load them.
– create hard-wired registers (like $zero) for constants like one.
• MIPS Instructions:
addi $29, $29, 4
slti $8, $18, 10
andi $29, $29, 6
ori $29, $29, 4
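(These immediate forms carry the constant inside the instruction itself; e.g., addi $29, $29, 4 adds the constant 4 to register 29, the stack pointer.)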
How about larger constants?
• We'd like to be able to load a 32-bit constant into a register
• Must use two instructions; first a new "load upper immediate" instruction:
  lui $t0, 1010101010101010
  which leaves $t0 = 1010101010101010 0000000000000000 (lower 16 bits filled with zeros)
• Then must get the lower order bits right, i.e.,
  ori $t0, $t0, 1010101010101010
  which ORs in 0000000000000000 1010101010101010, giving $t0 = 1010101010101010 1010101010101010
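With hexadecimal constants (values chosen for illustration), the same two-instruction pattern loads 0x12345678:
  lui $t0, 0x1234          # $t0 = 0x12340000
  ori $t0, $t0, 0x5678     # $t0 = 0x12345678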
Assembly Language vs. Machine Language
• Assembly provides convenient symbolic representation
– much easier than writing down numbers
– e.g., destination first
• Machine language is the underlying reality
– e.g., destination is no longer first
• Assembly can provide 'pseudoinstructions'
– e.g., “move $t0, $t1” exists only in Assembly
– would be implemented using “add $t0,$t1,$zero”
• When considering performance you should count real instructions
Other Issues
• Discussed in your assembly language programming lab:
  support for procedures
  linkers, loaders, memory layout
  stacks, frames, recursion
  manipulating strings and pointers
  interrupts and exceptions
  system calls and conventions
• Some of these we'll talk more about later
• We’ll talk about compiler optimizations when we hit chapter 4.
Overview of MIPS
• simple instructions all 32 bits wide
• very structured, no unnecessary baggage
• only three instruction formats
  R    op    rs    rt    rd    shamt    funct
  I    op    rs    rt    16-bit address
  J    op    26-bit address
• rely on compiler to achieve performance
— what are the compiler's goals?
• help compiler where we can
Addresses in Branches and Jumps
• Instructions:
  bne $t4, $t5, Label    Next instruction is at Label if $t4 ≠ $t5
  beq $t4, $t5, Label    Next instruction is at Label if $t4 = $t5
  j Label                Next instruction is at Label
• Formats:
  I    op    rs    rt    16-bit address
  J    op    26-bit address
• Addresses are not 32 bits
— How do we handle this with load and store instructions?
Addresses in Branches
• Instructions:
  bne $t4, $t5, Label    Next instruction is at Label if $t4 ≠ $t5
  beq $t4, $t5, Label    Next instruction is at Label if $t4 = $t5
• Formats:
  I    op    rs    rt    16-bit address
• Could specify a register (like lw and sw) and add it to address
– use Instruction Address Register (PC = program counter)
– most branches are local (principle of locality)
• Jump instructions just use high order bits of PC
– address boundaries of 256 MB
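(Concretely, as in the summary table that follows: a conditional branch with a 16-bit field of 25 goes to PC + 4 + 25 × 4 = PC + 104, while a jump replaces the low 28 bits of the PC with the 26-bit field shifted left by 2, which is why jump targets must stay within the same 256 MB region.)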
To summarize:
MIPS operands
• 32 registers ($s0-$s7, $t0-$t9, $zero, $a0-$a3, $v0-$v1, $gp, $fp, $sp, $ra, $at): fast locations for data. In MIPS, data must be in registers to perform arithmetic. MIPS register $zero always equals 0. Register $at is reserved for the assembler to handle large constants.
• 2^30 memory words (Memory[0], Memory[4], ..., Memory[4294967292]): accessed only by data transfer instructions. MIPS uses byte addresses, so sequential words differ by 4. Memory holds data structures, such as arrays, and spilled registers, such as those saved on procedure calls.

MIPS assembly language
Arithmetic
• add: add $s1, $s2, $s3 → $s1 = $s2 + $s3 (three operands; data in registers)
• subtract: sub $s1, $s2, $s3 → $s1 = $s2 - $s3 (three operands; data in registers)
• add immediate: addi $s1, $s2, 100 → $s1 = $s2 + 100 (used to add constants)
Data transfer
• load word: lw $s1, 100($s2) → $s1 = Memory[$s2 + 100] (word from memory to register)
• store word: sw $s1, 100($s2) → Memory[$s2 + 100] = $s1 (word from register to memory)
• load byte: lb $s1, 100($s2) → $s1 = Memory[$s2 + 100] (byte from memory to register)
• store byte: sb $s1, 100($s2) → Memory[$s2 + 100] = $s1 (byte from register to memory)
• load upper immediate: lui $s1, 100 → $s1 = 100 × 2^16 (loads constant in upper 16 bits)
Conditional branch
• branch on equal: beq $s1, $s2, 25 → if ($s1 == $s2) go to PC + 4 + 100 (equal test; PC-relative branch)
• branch on not equal: bne $s1, $s2, 25 → if ($s1 != $s2) go to PC + 4 + 100 (not equal test; PC-relative)
• set on less than: slt $s1, $s2, $s3 → if ($s2 < $s3) $s1 = 1; else $s1 = 0 (compare less than; for beq, bne)
• set less than immediate: slti $s1, $s2, 100 → if ($s2 < 100) $s1 = 1; else $s1 = 0 (compare less than constant)
Unconditional jump
• jump: j 2500 → go to 10000 (jump to target address)
• jump and link: jal 2500 → $ra = PC + 4; go to 10000 (for procedure call)
• jump register: jr $ra → go to $ra (for switch, procedure return)
MIPS addressing modes:
1. Immediate addressing: the operand is a constant within the instruction itself (fields: op, rs, rt, immediate).
2. Register addressing: the operand is a register (fields: op, rs, rt, rd, ..., funct).
3. Base addressing: the operand is in memory, at the address formed by adding a register to the constant in the instruction (fields: op, rs, rt, address); the access can be a byte, halfword, or word.
4. PC-relative addressing: the target word address is the sum of the PC and the constant in the instruction (fields: op, rs, rt, address).
5. Pseudodirect addressing: the target word address is the 26-bit field in the instruction concatenated with the upper bits of the PC (fields: op, address).