Chapter 1
Instructions: Language of the Computer
Rabie A. Ramadan
http://www.rabieramadan.org/
[email protected]
http://www.rabieramadan.org/classes/2014-2015/CA/
Class Style

  Do not think of the exam
    Just think of the class materials and how much you learn from them
    I do not care how much I teach in class as long as you understand what I am saying
  Feel free to stop me at any time
    When the time is up, just let me know….
  There will be interactive sessions in class
    You solve some of the problems with my help

Chapter 2 — Instructions: Language of the Computer — 2
Chapter 2 — Instructions: Language of the Computer — 3
Course Objectives

  To evaluate the issues involved in choosing and designing an instruction set.
  To learn the concepts behind advanced pipelining techniques.
  To understand the "hitting the memory wall" problem and the current state of the art in memory system design.
  To understand the qualitative and quantitative tradeoffs in the design of modern computer systems.
Before you start…

  Why do we need an architecture?
  Why do we have different architectures?
Chapter 2 — Instructions: Language of the Computer — 5
What is Computer Architecture?

  Functional operation of the individual HW units within a computer system, and the flow of information and control among them.

  [Figure: computer architecture as the interface design (ISA) sitting between applications, the OS, and the programming language interface on one side, and technology, hardware organization, parallelism, and measurement & evaluation on the other.]
Computer Architecture Topics

  [Figure: topics in a single node: input/output and storage (disks, WORM, tape, RAID, emerging technologies), DRAM and interleaved memories, the memory hierarchy (L1 and L2 caches; coherence, bandwidth, latency), VLSI, the instruction set architecture (addressing, protection, exception handling), and pipelining and instruction-level parallelism (pipelining, hazard resolution, superscalar, reordering, prediction, speculation, vector, DSP).]
Computer Architecture Topics

  [Figure: multiprocessors built from processor-memory nodes connected by an interconnection network (processor-memory-switch): shared memory, message passing, data parallelism; network interfaces; topologies, routing, bandwidth, latency, reliability.]
Measurement and Evaluation

  Architecture is an iterative process:
    Searching the space of possible designs
    At all levels of computer systems

  [Figure: the design cycle: creativity feeds design, analysis, and cost/performance evaluation, which sorts good ideas from mediocre and bad ones and feeds back into new designs.]
Enjoy the Video
Computer Architecture

  [Figure: the CPU, RAM, and input/output devices connected by a bus.]

Computer Architecture

  [Figure: the CPU, RAM, keyboard, display, hard disk, and CD-ROM connected by a bus.]
The Bus
  • What is a bus?
  • It is a simplified way for many devices to communicate with each other.
  • It looks like a "highway" for information.
  • Actually, it is more like a "basket" that they all share.

The Bus
  • Suppose the CPU needs to check whether the user typed anything.

The Bus
  • The CPU puts "Keyboard, did the user type anything?" (represented in some way) on the Bus.

The Bus
  • Each device (except the CPU) is a State Machine that constantly checks to see what's on the Bus.

The Bus
  • The Keyboard notices that its name is on the Bus, and reads the info. Other devices ignore the info.

The Bus
  • The Keyboard then writes "CPU: Yes, user typed 'a'." to the Bus.

The Bus
  • At some point, the CPU reads the Bus, and gets the Keyboard's response.
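  The request/response exchange above can be sketched in a few lines of C. This is purely illustrative: the single shared message slot and the message strings are invented for this sketch, not part of any real bus protocol.

    #include <stdio.h>
    #include <string.h>

    /* A toy shared bus: one message slot that every device can read. */
    static char bus[80];

    /* The keyboard "state machine": it reacts only to messages addressed to it. */
    static void keyboard_check_bus(void) {
        if (strncmp(bus, "Keyboard:", 9) == 0)
            strcpy(bus, "CPU: Yes, user typed 'a'.");
    }

    int main(void) {
        strcpy(bus, "Keyboard: did the user type anything?"); /* CPU writes request */
        keyboard_check_bus();                                  /* keyboard responds  */
        printf("%s\n", bus);                                   /* CPU reads response */
        return 0;
    }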
Computer Architecture

  [Figure: the CPU, RAM, keyboard, display, hard disk, and CD-ROM connected by a bus, repeated for orientation.]
Inside the CPU
  • The CPU is the brain of the computer.
  • It is the part that actually executes the instructions.
  • Let's take a look inside.
Inside the CPU (cont.)
  Memory Registers (Register 0 … Register 3): temporary memory. The computer "loads" data from RAM to registers, performs operations on data in registers, and "stores" results from registers back to RAM.
  Remember our initial example: "read value of A from memory; read value of B from memory; add values of A and B; put result in memory in variable C." The reads are done to registers, the addition is done in registers, and the result is written to memory from a register.

Inside the CPU (cont.)
  Arithmetic / Logic Unit (ALU): for doing basic arithmetic / logic operations on values stored in the registers.

Inside the CPU (cont.)
  Instruction Register: holds the current instruction.

Inside the CPU (cont.)
  Instruction Pointer (IP): holds the address of the current instruction in RAM.

Inside the CPU (cont.)
  Control Unit (State Machine): completes the picture alongside the registers, Instruction Register, IP, and ALU.
The Control Unit
• It all comes down to the Control Unit.
• This is just a State Machine.
• How does it work?
The Control Unit
  • The Control Unit State Machine has a very simple structure:
  • 1) Fetch:   Ask the RAM for the instruction whose address is stored in IP.
  • 2) Execute: There are only a small number of possible instructions. Depending on which it is, do what is necessary to execute it.
  • 3) Repeat:  Add 1 to the address stored in IP, and go back to Step 1!
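  A minimal sketch of that fetch/execute/repeat cycle in C. The four-instruction machine, its opcodes, and the memory layout are invented here for illustration only; they are not the slide's machine.

    #include <stdio.h>

    enum { LOAD_A, LOAD_B, ADD, STORE_C, HALT };   /* toy opcodes */

    int main(void) {
        int ram[16] = { LOAD_A, LOAD_B, ADD, STORE_C, HALT,
                        0, 0, 0, 0, 0, 2, 3, 0 };  /* program, then data: a=2, b=3, c at index 12 */
        int r1 = 0, r2 = 0, r3 = 0, ip = 0, ir;

        for (;;) {
            ir = ram[ip];                /* 1) Fetch the instruction at IP */
            if (ir == HALT) break;
            switch (ir) {                /* 2) Execute it                  */
                case LOAD_A:  r1 = ram[10]; break;
                case LOAD_B:  r3 = ram[11]; break;
                case ADD:     r2 = r1 + r3; break;
                case STORE_C: ram[12] = r2; break;
            }
            ip = ip + 1;                 /* 3) Add 1 to IP and repeat      */
        }
        printf("c = %d\n", ram[12]);     /* prints 5 */
        return 0;
    }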
The Control Unit is a State Machine

  [Figure: the control-unit state machine: a Fetch state, an "Add 1 to IP" state, and separate Execute states for Add, Load, Store, Goto, ….]
A Simple Program

  Want to add the values of variables a and b (assumed to be in memory), and put the result in variable c in memory, i.e. c ← a + b
  Instructions in the program:
    Load a into register r1
    Load b into register r3
    r2 ← r1 + r3
    Store r2 in c
Running the Program

  [Figure sequence: the CPU registers (r1–r4, IR, IP) and memory. Memory initially holds a = 2, b = 3, c = 1, and the program "Load a into r1; Load b into r3; r2 ← r1 + r3; Store r2 into c" at addresses 2005–2008.
   Step 1 (IP = 2005): "Load a into r1" puts 2 in r1.
   Step 2 (IP = 2006): "Load b into r3" puts 3 in r3.
   Step 3 (IP = 2007): "r2 ← r1 + r3" puts 5 in r2.
   Step 4 (IP = 2008): "Store r2 into c" writes 5 to c in memory.]
Putting it all together
  • The computer has many parts, connected by a Bus: CPU, RAM, keyboard, display, hard disk, CD-ROM.

Putting it all together
  • The RAM is the computer's main memory.
  • This is where programs and data are stored.

Putting it all together
  • The CPU goes in a never-ending cycle, reading instructions from RAM and executing them.

Putting it all together
  • This cycle is orchestrated by the Control Unit in the CPU.

Back to the Control Unit
  [Figure: the CPU internals: memory registers (Register 0–3), Instruction Register, Instruction Pointer (IP), Arithmetic/Logic Unit, and the Control Unit (a state machine).]

Putting it all together
  • To execute an instruction, the Control Unit uses the ALU plus Memory and/or the Registers.
Microprocessors in the Market
What’s the difference?
Areas of Development

  Below are technologies which can be improved in CPU design:
    System bus speed
    Internal and external clock frequency
    Casing
    Cooling system
    Instruction set
    Material used for the die
  End result: enhance the speed of the CPU and the system in general

  How do you evaluate an architecture?
    Performance
Chapter 2 — Instructions: Language of the Computer — 47
Architecture Development and Styles

  Performance is the main goal of any architecture
  Complex instructions
    Reduce the number of instructions needed: a small number of instructions performs a job.
    Use different addressing modes that fit the required task.
  Examples:
    Complex Instruction Set Computers (CISC), such as the Intel Pentium™, the Motorola MC68000™, and the IBM & Macintosh PowerPC™.
Architecture Development and Styles (Cont.)

  Speeding up the most frequently executed instructions
    More than 80% of the instructions executed use assignment statements, conditional branching, or procedure calls.
    Simple assignment statements constitute almost 50% of those operations.
    Optimizing such instructions enhances the performance.
  Reduced Instruction Set Computers (RISC)
    Few types of instructions
    Examples: Sun SPARC™ and MIPS machines.
Amdahl's Law and Performance Measure

  Speedup:
    A measure of how a machine performs after some enhancement relative to its original performance.
Amdahl's Law and Performance Measure (Cont.)

  Not all of a program's execution time can be enhanced
    Maybe only part of it
  Speedup due to the enhancement for a fraction of the time
  For more than
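  The slide's formula appears only as an image in this transcript; the standard definitions it refers to are, assuming a fraction f of the time benefits from an enhancement with speedup SU_f:

    \mathrm{Speedup} = \frac{\text{execution time before enhancement}}{\text{execution time after enhancement}}

    \mathrm{Speedup_{overall}} = \frac{1}{(1 - f) + f / SU_f}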
Group Activity

  A machine for which a speedup of 30 is possible after applying an enhancement. If under certain conditions the enhancement was only possible for 30% of the time, what is the speedup due to this partial application of the enhancement?

Answer
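  The answer slide's working is not reproduced in the transcript; a sketch of the standard computation, using the Amdahl's Law form above with f = 0.3 and SU_f = 30:

    \mathrm{Speedup_{overall}} = \frac{1}{(1 - 0.3) + 0.3/30} = \frac{1}{0.71} \approx 1.41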
Performance Measures

Performance Measures

  Performance analysis:
    User point of view
      How fast can a program be executed using a given computer?
      Time taken to execute a given job (program)
    Lab engineer
      Total amount of work done in a given time.
Performance Measures (Cont.)

  Definitions
    Clock Cycle Time
      The time between two consecutive rising (trailing) edges of a periodic clock signal
    Cycle Count (CC)
      The number of CPU clock cycles for executing a job
Performance Measures (Cont.)

  Cycle Time (CT)
    Amount of time taken by a cycle
  Clock Frequency (f) / Clock Rate
    f = 1 / CT
First Performance Measure

  The CPU time to execute a job is:
    CPU time = Cycle Count (CC) × Cycle Time (CT)
Performance Measures (Cont.)

  The average number of Clock Cycles Per Instruction (CPI)
    Not all instructions take the same number of clock cycles
  In case the CPI of each instruction class is known:
    I_i is the number of times instruction class i is repeated in the given job
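  The slide's formula is only an image; the usual weighted-average form it describes is:

    \mathrm{CPI} = \frac{\sum_{i=1}^{n} \mathrm{CPI}_i \times I_i}{\mathrm{Instruction\ count}}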
Performance Measures (Cont.)

  The relation between the CPI and CPU time:
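  Again the slide's equation is not reproduced; the standard relation is:

    \mathrm{CPU\ time} = \mathrm{Instruction\ count} \times \mathrm{CPI} \times \mathrm{Cycle\ time}
                       = \frac{\mathrm{Instruction\ count} \times \mathrm{CPI}}{\mathrm{Clock\ rate}}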
Performance Measures (Cont.)

  Another measure is the rate of instruction execution per unit time:
    Million Instructions Per Second (MIPS)
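  The slide's formula is not shown in the transcript; the usual definition is:

    \mathrm{MIPS\ rate} = \frac{\mathrm{Instruction\ count}}{\mathrm{Execution\ time} \times 10^{6}}
                        = \frac{\mathrm{Clock\ rate}}{\mathrm{CPI} \times 10^{6}}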
Group Activity

  Derive CPU time and MIPS in terms of the clock rate.
Group Activity

  Consider computing the overall CPI for a machine A for which the following performance measures were recorded when executing a set of benchmark programs. Assume that the clock rate of the CPU is 200 MHz.
  [Table of instruction categories, their percentages, and CPI values: not reproduced in this transcript.]

Answer

  Assuming the execution of 100 instructions, the overall CPI can be computed as:
  [Worked computation not reproduced.]
  Repeat the same problem without the "others" instructions.
  Note: 85 instructions only
Group Activity

  Suppose that the same set of benchmark programs considered above were executed on another machine, call it machine B, for which the following measures were recorded. What is the MIPS rating for the machine considered in the previous slide (machine A) and machine B, assuming a clock rate of 200 MHz?
  [Tables for Machine A and Machine B: not reproduced.]

Answer

Group Activity

  Given the following benchmarks, compute the CPU time and MIPS.
  [Benchmark table not reproduced.]

Solution

  Be careful using MIPS as a performance measure: it does not consider the execution time.
More Performance Measures

Performance Measure

  Rate of floating-point instruction execution per unit time
    Million floating-point instructions per second (MFLOPS)
    Defined only for the subset of instructions where floating point is used
Performance Measure (Cont.)

  Arithmetic Mean
    Gives a clear picture of the expected behavior of the system
    Used to compare different systems based on benchmarks
    T_i is the execution time for the i-th program
    n is the total number of programs in the set of benchmarks.
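  The mean formula on the slide is an image; in the notation just described it is:

    \mathrm{Arithmetic\ mean} = \frac{1}{n} \sum_{i=1}^{n} T_i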
Performance Measure (Cont.)

  Example

    Program    System A          System B          System C
               Execution Time    Execution Time    Execution Time
    v          50                100               500
    w          200               400               600
    x          250               500               500
    y          400               800               800
    z          5000              4100              3500
    Average    1180              1180              1180

  Note: we could not decide on which system to use, because the arithmetic mean hides the great deal of variability among the individual programs.
Performance Measure (Cont.)

  Geometric Mean
    Gives a consistent measure with which to perform comparisons regardless of the distribution of the data.
    T_i is the execution time for the i-th program
    n is the total number of programs in the set of benchmarks.
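  As above, the slide's formula is an image; the standard definition is:

    \mathrm{Geometric\ mean} = \left( \prod_{i=1}^{n} T_i \right)^{1/n}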
Performance Measure (Cont.)

  Example

    Program          System A          System B          System C
                     Execution Time    Execution Time    Execution Time
    v                50                100               500
    w                200               400               600
    x                250               500               500
    y                400               800               800
    z                5000              4100              3500
    Geometric mean   346.6             580               840.7
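  As a quick check of the two tables, here is a small C program (written for this transcript, not part of the original slides) that recomputes both means for the three systems; small rounding differences from the slide's printed figures are possible.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const char *name[] = { "System A", "System B", "System C" };
        double t[3][5] = { {  50, 200, 250, 400, 5000 },   /* System A */
                           { 100, 400, 500, 800, 4100 },   /* System B */
                           { 500, 600, 500, 800, 3500 } }; /* System C */
        for (int s = 0; s < 3; s++) {
            double sum = 0.0, prod = 1.0;
            for (int i = 0; i < 5; i++) {
                sum  += t[s][i];
                prod *= t[s][i];
            }
            printf("%s: arithmetic mean = %.1f, geometric mean = %.1f\n",
                   name[s], sum / 5.0, pow(prod, 1.0 / 5.0));
        }
        return 0;
    }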
Different Architectures

  What makes the architecture programmable?

§2.1 Introduction

Instruction Set

  The repertoire/collection of instructions of a computer
  Different computers have different instruction sets
    But with many aspects in common
  Early computers had very simple instruction sets
    Simplified implementation
  Many modern computers also have simple instruction sets
Chapter 2 — Instructions: Language of the Computer — 78
The MIPS Instruction Set

  Used as the example throughout the book
  Stanford MIPS commercialized by MIPS Technologies (www.mips.com)
  Large share of embedded core market
    Applications in consumer electronics, network/storage equipment, cameras, printers, …
  Typical of many modern ISAs
Chapter 2 — Instructions: Language of the Computer — 79
§2.2 Operations of the Computer Hardware

Arithmetic Operations

  Add and subtract, three operands
    Two sources and one destination
      add a, b, c   # a gets b + c
  All arithmetic operations have this form
  Design Principle 1: Simplicity favours regularity
    Regularity makes implementation simpler
    Simplicity enables higher performance at lower cost
Chapter 2 — Instructions: Language of the Computer — 80
Arithmetic Example

  C code:
    f = (g + h) - (i + j);

  Compiled MIPS code:
    add t0, g, h    # temp t0 = g + h
    add t1, i, j    # temp t1 = i + j
    sub f, t0, t1   # f = t0 - t1
Chapter 2 — Instructions: Language of the Computer — 81
§2.3 Operands of the Computer Hardware

Register Operands

  Arithmetic instructions use register operands
  MIPS has a 32 × 32-bit register file
    Use for frequently accessed data
    Numbered 0 to 31
    32-bit data called a "word"
  Assembler names
    $t0, $t1, …, $t9 for temporary values
    $s0, $s1, …, $s7 for saved variables
  Design Principle 2: Smaller is faster
    c.f. main memory: millions of locations
Chapter 2 — Instructions: Language of the Computer — 82
Register Operand Example

  C code:
    f = (g + h) - (i + j);
      f, …, j in $s0, …, $s4

  Compiled MIPS code:
    add $t0, $s1, $s2
    add $t1, $s3, $s4
    sub $s0, $t0, $t1
Chapter 2 — Instructions: Language of the Computer — 83
Memory Operands

  Main memory used for composite data
    Arrays, structures, dynamic data
  To apply arithmetic operations
    Load values from memory into registers
    Store result from register to memory
  Memory is byte addressed
    Each address identifies an 8-bit byte
  Words are aligned in memory
    Address must be a multiple of 4
  MIPS is Big Endian
    Most-significant byte at least address of a word
    c.f. Little Endian: least-significant byte at least address
Chapter 2 — Instructions: Language of the Computer — 84
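  A small C check of byte order on whatever machine you run it on (illustrative only; MIPS hardware has in fact been built in both big- and little-endian configurations):

    #include <stdio.h>

    int main(void) {
        unsigned int word = 0x01020304;               /* one 4-byte word   */
        unsigned char *p = (unsigned char *)&word;    /* view it byte-wise */
        /* Big endian stores 0x01 at the lowest address, little endian 0x04. */
        printf("byte at lowest address: %02x -> %s endian\n",
               p[0], p[0] == 0x01 ? "big" : "little");
        return 0;
    }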
Memory Operand Example 1

  C code:
    g = h + A[8];
      g in $s1, h in $s2, base address of A in $s3

  Compiled MIPS code:
    Index 8 requires offset of 32
      4 bytes per word
    lw  $t0, 32($s3)    # load word (offset 32, base register $s3)
    add $s1, $s2, $t0
Chapter 2 — Instructions: Language of the Computer — 85
Memory Operand Example 2

  C code:
    A[12] = h + A[8];
      h in $s2, base address of A in $s3

  Compiled MIPS code:
    Index 8 requires offset of 32
    lw  $t0, 32($s3)    # load word
    add $t0, $s2, $t0
    sw  $t0, 48($s3)    # store word
Chapter 2 — Instructions: Language of the Computer — 86
Registers vs. Memory

  Registers are faster to access than memory
  Operating on memory data requires loads and stores
    More instructions to be executed
  Compiler must use registers for variables as much as possible
    Only spill to memory for less frequently used variables
    Register optimization is important!
Chapter 2 — Instructions: Language of the Computer — 87
Immediate Operands

  Constant data specified in an instruction
    addi $s3, $s3, 4
  No subtract immediate instruction
    Just use a negative constant
    addi $s2, $s1, -1
  Design Principle 3: Make the common case fast
    Small constants are common
    Immediate operand avoids a load instruction
Chapter 2 — Instructions: Language of the Computer — 88
The Constant Zero

  MIPS register 0 ($zero) is the constant 0
    Cannot be overwritten
  Useful for common operations
    E.g., move between registers
    add $t2, $s1, $zero
Chapter 2 — Instructions: Language of the Computer — 89
§2.4 Signed and Unsigned Numbers

Unsigned Binary Integers

  Given an n-bit number
    x = x_{n-1} 2^{n-1} + x_{n-2} 2^{n-2} + … + x_1 2^1 + x_0 2^0
  Range: 0 to +2^n – 1
  Example
    0000 0000 0000 0000 0000 0000 0000 1011₂
    = 0 + … + 1×2³ + 0×2² + 1×2¹ + 1×2⁰
    = 0 + … + 8 + 0 + 2 + 1 = 11₁₀
  Using 32 bits
    0 to +4,294,967,295
Chapter 2 — Instructions: Language of the Computer — 90
2s-Complement Signed Integers

  Given an n-bit number
    x = –x_{n-1} 2^{n-1} + x_{n-2} 2^{n-2} + … + x_1 2^1 + x_0 2^0
  Range: –2^{n-1} to +2^{n-1} – 1
  Example
    1111 1111 1111 1111 1111 1111 1111 1100₂
    = –1×2³¹ + 1×2³⁰ + … + 1×2² + 0×2¹ + 0×2⁰
    = –2,147,483,648 + 2,147,483,644 = –4₁₀
  Using 32 bits
    –2,147,483,648 to +2,147,483,647
Chapter 2 — Instructions: Language of the Computer — 91
Signed Negation

  Complement and add 1
    Complement means 1 → 0, 0 → 1
    x + ~x = 1111…111₂ = –1
    so ~x + 1 = –x
  Example: negate +2
    +2 = 0000 0000 … 0010₂
    –2 = 1111 1111 … 1101₂ + 1
       = 1111 1111 … 1110₂
Chapter 2 — Instructions: Language of the Computer — 92
§2.5 Representing Instructions in the Computer

Representing Instructions

  Instructions are encoded in binary
    Called machine code
  MIPS instructions
    Encoded as 32-bit instruction words
    Small number of formats encoding operation code (opcode), register numbers, …
    Regularity!
  Register numbers
    $t0 – $t7 are reg's 8 – 15
    $t8 – $t9 are reg's 24 – 25
    $s0 – $s7 are reg's 16 – 23
Chapter 2 — Instructions: Language of the Computer — 93
MIPS R-format Instructions

    op      rs      rt      rd      shamt   funct
    6 bits  5 bits  5 bits  5 bits  5 bits  6 bits

  Instruction fields
    op: operation code (opcode)
    rs: first source register number
    rt: second source register number
    rd: destination register number
    shamt: shift amount (00000 for now)
    funct: function code (extends opcode)
Chapter 2 — Instructions: Language of the Computer — 94
R-format Example

    op      rs      rt      rd      shamt   funct
    6 bits  5 bits  5 bits  5 bits  5 bits  6 bits

  add $t0, $s1, $s2

    special  $s1    $s2    $t0    0      add
    0        17     18     8      0      32
    000000   10001  10010  01000  00000  100000

  0000 0010 0011 0010 0100 0000 0010 0000₂ = 02324020₁₆
Chapter 2 — Instructions: Language of the Computer — 95
Hexadecimal

  Base 16
    Compact representation of bit strings
    4 bits per hex digit

    0  0000    4  0100    8  1000    c  1100
    1  0001    5  0101    9  1001    d  1101
    2  0010    6  0110    a  1010    e  1110
    3  0011    7  0111    b  1011    f  1111

  Example: eca8 6420
    1110 1100 1010 1000 0110 0100 0010 0000
Chapter 2 — Instructions: Language of the Computer — 96
MIPS I-format Instructions

    op      rs      rt      constant or address
    6 bits  5 bits  5 bits  16 bits

  Immediate arithmetic and load/store instructions
    rt: destination or source register number
    Constant: –2¹⁵ to +2¹⁵ – 1
    Address: offset added to base address in rs
  Design Principle 4: Good design demands good compromises
    Different formats complicate decoding, but allow 32-bit instructions uniformly
    Keep formats as similar as possible
Chapter 2 — Instructions: Language of the Computer — 97
Stored Program Computers

  The BIG Picture

  Instructions represented in binary, just like data
  Instructions and data stored in memory
  Programs can operate on programs
    e.g., compilers, linkers, …
  Binary compatibility allows compiled programs to work on different computers
    Standardized ISAs
Chapter 2 — Instructions: Language of the Computer — 98
§2.6 Logical Operations

Logical Operations

  Instructions for bitwise manipulation

    Operation     C    Java   MIPS
    Shift left    <<   <<     sll
    Shift right   >>   >>>    srl
    Bitwise AND   &    &      and, andi
    Bitwise OR    |    |      or, ori
    Bitwise NOT   ~    ~      nor

  Useful for extracting and inserting groups of bits in a word
Chapter 2 — Instructions: Language of the Computer — 99
Shift Operations

    op      rs      rt      rd      shamt   funct
    6 bits  5 bits  5 bits  5 bits  5 bits  6 bits

  shamt: how many positions to shift
  Shift left logical
    Shift left and fill with 0 bits
    sll by i bits multiplies by 2^i
  Shift right logical
    Shift right and fill with 0 bits
    srl by i bits divides by 2^i (unsigned only)
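  A two-line C illustration of those identities (illustrative only; they hold for unsigned values, with the division truncated):

    #include <stdio.h>

    int main(void) {
        unsigned int x = 13;
        printf("%u\n", x << 2);   /* shift left by 2:  13 * 2^2 = 52 */
        printf("%u\n", x >> 2);   /* shift right by 2: 13 / 2^2 = 3  */
        return 0;
    }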
Chapter 2 — Instructions: Language of the Computer — 100
AND Operations

  Useful to mask bits in a word
    Select some bits, clear others to 0

  and $t0, $t1, $t2

    $t2   0000 0000 0000 0000 0000 1101 1100 0000
    $t1   0000 0000 0000 0000 0011 1100 0000 0000
    $t0   0000 0000 0000 0000 0000 1100 0000 0000
Chapter 2 — Instructions: Language of the Computer — 101
OR Operations

  Useful to include bits in a word
    Set some bits to 1, leave others unchanged

  or $t0, $t1, $t2

    $t2   0000 0000 0000 0000 0000 1101 1100 0000
    $t1   0000 0000 0000 0000 0011 1100 0000 0000
    $t0   0000 0000 0000 0000 0011 1101 1100 0000
Chapter 2 — Instructions: Language of the Computer — 102
NOT Operations

  Useful to invert bits in a word
    Change 0 to 1, and 1 to 0
  MIPS has a 3-operand NOR instruction
    a NOR b == NOT ( a OR b )

  nor $t0, $t1, $zero    # Register 0: always read as zero

    $t1   0000 0000 0000 0000 0011 1100 0000 0000
    $t0   1111 1111 1111 1111 1100 0011 1111 1111
Chapter 2 — Instructions: Language of the Computer — 103
Chapter 2 — Instructions: Language of the Computer — 104
§2.7 Instructions for Making Decisions

Conditional Operations

  Branch to a labeled instruction if a condition is true
    Otherwise, continue sequentially
  beq rs, rt, L1
    if (rs == rt) branch to instruction labeled L1;
  bne rs, rt, L1
    if (rs != rt) branch to instruction labeled L1;
  j L1
    unconditional jump to instruction labeled L1
Chapter 2 — Instructions: Language of the Computer — 105
Compiling If Statements

  C code:
    if (i==j) f = g+h;
    else f = g-h;
      f, g, … in $s0, $s1, …

  Compiled MIPS code:
          bne $s3, $s4, Else
          add $s0, $s1, $s2
          j   Exit
    Else: sub $s0, $s1, $s2
    Exit: …

  Assembler calculates addresses
Chapter 2 — Instructions: Language of the Computer — 106
Compiling Loop Statements

  C code:
    while (save[i] == k) i += 1;
      i in $s3, k in $s5, address of save in $s6

  Compiled MIPS code:
    Loop: sll  $t1, $s3, 2
          add  $t1, $t1, $s6
          lw   $t0, 0($t1)
          bne  $t0, $s5, Exit
          addi $s3, $s3, 1
          j    Loop
    Exit: …
Chapter 2 — Instructions: Language of the Computer — 107
Basic Blocks

  A basic block is a sequence of instructions with
    No embedded branches (except at end)
    No branch targets (except at beginning)
  A compiler identifies basic blocks for optimization
  An advanced processor can accelerate execution of basic blocks
Chapter 2 — Instructions: Language of the Computer — 108
More Conditional Operations

  Set result to 1 if a condition is true
    Otherwise, set to 0
  slt rd, rs, rt
    if (rs < rt) rd = 1; else rd = 0;
  slti rt, rs, constant
    if (rs < constant) rt = 1; else rt = 0;
  Use in combination with beq, bne
    slt $t0, $s1, $s2    # if ($s1 < $s2)
    bne $t0, $zero, L    #   branch to L
Chapter 2 — Instructions: Language of the Computer — 109
Branch Instruction Design

  Why not blt, bge, etc.?
  Hardware for <, ≥, … is slower than for =, ≠
    Combining with branch involves more work per instruction, requiring a slower clock
    All instructions penalized!
  beq and bne are the common case
  This is a good design compromise
Chapter 2 — Instructions: Language of the Computer — 110
§2.8 Supporting Procedures in Computer Hardware

Procedure Calling

  Steps required
    1. Place parameters in registers
    2. Transfer control to procedure
    3. Acquire storage for procedure
    4. Perform procedure's operations
    5. Place result in register for caller
    6. Return to place of call
Chapter 2 — Instructions: Language of the Computer — 111
Register Usage

  $a0 – $a3: arguments (reg's 4 – 7)
  $v0, $v1: result values (reg's 2 and 3)
  $t0 – $t9: temporaries
    Can be overwritten by callee
  $s0 – $s7: saved
    Must be saved/restored by callee
  $gp: global pointer for static data (reg 28)
  $sp: stack pointer (reg 29)
  $fp: frame pointer (reg 30)
  $ra: return address (reg 31)
Chapter 2 — Instructions: Language of the Computer — 112
Procedure Call Instructions

  Procedure call: jump and link
    jal ProcedureLabel
      Address of following instruction put in $ra
      Jumps to target address
  Procedure return: jump register
    jr $ra
      Copies $ra to program counter
      Can also be used for computed jumps
        e.g., for case/switch statements
Chapter 2 — Instructions: Language of the Computer — 113
Leaf Procedure Example

  C code:
    int leaf_example (int g, int h, int i, int j)
    { int f;
      f = (g + h) - (i + j);
      return f;
    }
  Arguments g, …, j in $a0, …, $a3
  f in $s0 (hence, need to save $s0 on stack)
  Result in $v0
Chapter 2 — Instructions: Language of the Computer — 114
Leaf Procedure Example

  MIPS code:
    leaf_example:
        addi $sp, $sp, -4      # save $s0 on stack
        sw   $s0, 0($sp)       #   (store word)
        add  $t0, $a0, $a1     # procedure body
        add  $t1, $a2, $a3
        sub  $s0, $t0, $t1
        add  $v0, $s0, $zero   # result
        lw   $s0, 0($sp)       # restore $s0
        addi $sp, $sp, 4
        jr   $ra               # return
Chapter 2 — Instructions: Language of the Computer — 115
Explain the function of each line.
Chapter 2 — Instructions: Language of the Computer — 116
Non-Leaf Procedures

  Procedures that call other procedures
  For a nested call, the caller needs to save on the stack:
    Its return address
    Any arguments and temporaries needed after the call
  Restore from the stack after the call
Chapter 2 — Instructions: Language of the Computer — 117
Non-Leaf Procedure Example

  C code:
    int fact (int n)
    {
      if (n < 1) return 1;
      else return n * fact(n - 1);
    }
  Argument n in $a0
  Result in $v0
Chapter 2 — Instructions: Language of the Computer — 118
Non-Leaf Procedure Example

  MIPS code:
    fact:
        addi $sp, $sp, -8      # adjust stack for 2 items
        sw   $ra, 4($sp)       # save return address
        sw   $a0, 0($sp)       # save argument
        slti $t0, $a0, 1       # test for n < 1
        beq  $t0, $zero, L1
        addi $v0, $zero, 1     # if so, result is 1
        addi $sp, $sp, 8       # pop 2 items from stack
        jr   $ra               # and return
    L1: addi $a0, $a0, -1      # else decrement n
        jal  fact              # recursive call
        lw   $a0, 0($sp)       # restore original n
        lw   $ra, 4($sp)       # and return address
        addi $sp, $sp, 8       # pop 2 items from stack
        mul  $v0, $a0, $v0     # multiply to get result
        jr   $ra               # and return
Chapter 2 — Instructions: Language of the Computer — 119
Local Data on the Stack

  Local data allocated by callee
    e.g., C automatic variables
  Procedure frame (activation record)
    Used by some compilers to manage stack storage
Chapter 2 — Instructions: Language of the Computer — 120
Memory Layout

  Text: program code
  Static data: global variables
    e.g., static variables in C, constant arrays and strings
    $gp initialized to an address allowing ±offsets into this segment
  Dynamic data: heap
    e.g., malloc in C, new in Java
  Stack: automatic storage
Chapter 2 — Instructions: Language of the Computer — 121
§2.9 Communicating with People

Character Data

  Byte-encoded character sets
    ASCII: 128 characters
      95 graphic, 33 control
    Latin-1: 256 characters
      ASCII, +96 more graphic characters
  Unicode: 32-bit character set
    Used in Java, C++ wide characters, …
    Most of the world's alphabets, plus symbols
    UTF-8, UTF-16: variable-length encodings
Chapter 2 — Instructions: Language of the Computer — 122
Byte/Halfword Operations

  Could use bitwise operations
  MIPS byte/halfword load/store
    String processing is a common case
  lb rt, offset(rs)     lh rt, offset(rs)
    Sign extend to 32 bits in rt
  lbu rt, offset(rs)    lhu rt, offset(rs)
    Zero extend to 32 bits in rt
  sb rt, offset(rs)     sh rt, offset(rs)
    Store just rightmost byte/halfword
Chapter 2 — Instructions: Language of the Computer — 123
String Copy Example - skip

  C code (naïve):
    Null-terminated string
    void strcpy (char x[], char y[])
    { int i;
      i = 0;
      while ((x[i]=y[i])!='\0')
        i += 1;
    }
  Addresses of x, y in $a0, $a1
  i in $s0
Chapter 2 — Instructions: Language of the Computer — 124
String Copy Example - skip

  MIPS code:
    strcpy:
        addi $sp, $sp, -4       # adjust stack for 1 item
        sw   $s0, 0($sp)        # save $s0
        add  $s0, $zero, $zero  # i = 0
    L1: add  $t1, $s0, $a1      # addr of y[i] in $t1
        lbu  $t2, 0($t1)        # $t2 = y[i]
        add  $t3, $s0, $a0      # addr of x[i] in $t3
        sb   $t2, 0($t3)        # x[i] = y[i]
        beq  $t2, $zero, L2     # exit loop if y[i] == 0
        addi $s0, $s0, 1        # i = i + 1
        j    L1                 # next iteration of loop
    L2: lw   $s0, 0($sp)        # restore saved $s0
        addi $sp, $sp, 4        # pop 1 item from stack
        jr   $ra                # and return
Chapter 2 — Instructions: Language of the Computer — 125
§2.10 MIPS Addressing for 32-Bit Immediates and Addresses

32-bit Constants

  Most constants are small
    16-bit immediate is sufficient
  For the occasional 32-bit constant
    lui rt, constant
      Copies 16-bit constant to left 16 bits of rt
      Clears right 16 bits of rt to 0

    lui $s0, 61           0000 0000 0011 1101 0000 0000 0000 0000
    ori $s0, $s0, 2304    0000 0000 0011 1101 0000 1001 0000 0000
Chapter 2 — Instructions: Language of the Computer — 126
Branch Addressing

  Branch instructions specify
    Opcode, two registers, target address

    op      rs      rt      constant or address
    6 bits  5 bits  5 bits  16 bits

  Most branch targets are near the branch
    Forward or backward
  PC-relative addressing
    Target address = PC + offset × 4
    PC already incremented by 4 by this time
Chapter 2 — Instructions: Language of the Computer — 127
Jump Addressing

  Jump (j and jal) targets could be anywhere in the text segment
    Encode full address in instruction

    op      address
    6 bits  26 bits

  (Pseudo)Direct jump addressing
    Target address = PC₃₁…₂₈ : (address × 4)
Chapter 2 — Instructions: Language of the Computer — 128
Target Addressing Example

  Loop code from earlier example
    Assume Loop at location 80000

    Loop: sll  $t1, $s3, 2      80000:   0    0   19    9    2    0
          add  $t1, $t1, $s6    80004:   0    9   22    9    0   32
          lw   $t0, 0($t1)      80008:  35    9    8              0
          bne  $t0, $s5, Exit   80012:   5    8   21              2
          addi $s3, $s3, 1      80016:   8   19   19              1
          j    Loop             80020:   2             20000
    Exit: …                     80024
Chapter 2 — Instructions: Language of the Computer — 129
Branching Far Away

  If the branch target is too far to encode with a 16-bit offset, the assembler rewrites the code
  Example
        beq $s0, $s1, L1
      ↓
        bne $s0, $s1, L2
        j   L1
    L2: …
Chapter 2 — Instructions: Language of the Computer — 130
Addressing Mode Summary
Chapter 2 — Instructions: Language of the Computer — 131
§2.11 Parallelism and Instructions: Synchronization

Synchronization

  Two processors sharing an area of memory
    P1 writes, then P2 reads
    Data race if P1 and P2 don't synchronize
      Result depends on order of accesses
  Hardware support required
    Atomic read/write memory operation
    No other access to the location allowed between the read and write
  Could be a single instruction
    E.g., atomic swap of register ↔ memory
    Or an atomic pair of instructions
Chapter 2 — Instructions: Language of the Computer — 132
Synchronization in MIPS

  Load linked: ll rt, offset(rs)
  Store conditional: sc rt, offset(rs)
    Succeeds if location not changed since the ll
      Returns 1 in rt
    Fails if location is changed
      Returns 0 in rt
  Example: atomic swap (to test/set lock variable)
    try: add $t0, $zero, $s4    ;copy exchange value
         ll  $t1, 0($s1)        ;load linked
         sc  $t0, 0($s1)        ;store conditional
         beq $t0, $zero, try    ;branch store fails
         add $s4, $zero, $t1    ;put load value in $s4
Chapter 2 — Instructions: Language of the Computer — 133
§2.12 Translating and Starting a Program

Translation and Startup

  [Figure: the translation hierarchy. Many compilers produce object modules directly; static linking combines them into an executable.]
Chapter 2 — Instructions: Language of the Computer — 134
Assembler Pseudoinstructions

  Most assembler instructions represent machine instructions one-to-one
  Pseudoinstructions: figments of the assembler's imagination
    move $t0, $t1      → add $t0, $zero, $t1
    Branch on less than, built from set on less than:
    blt $t0, $t1, L    → slt $at, $t0, $t1
                         bne $at, $zero, L
  $at (register 1): assembler temporary
Chapter 2 — Instructions: Language of the Computer — 135
Producing an Object Module

  Assembler (or compiler) translates program into machine instructions
  Provides information for building a complete program from the pieces
    Header: describes contents of object module
    Text segment: translated instructions
    Static data segment: data allocated for the life of the program
    Relocation info: for contents that depend on absolute location of loaded program
    Symbol table: global definitions and external refs
    Debug info: for associating with source code
Chapter 2 — Instructions: Language of the Computer — 136
Linking Object Modules

  Produces an executable image
    1. Merges segments
    2. Resolves labels (determines their addresses)
    3. Patches location-dependent and external refs
  Could leave location dependencies for fixing by a relocating loader
    But with virtual memory, no need to do this
    Program can be loaded into an absolute location in virtual memory space
Chapter 2 — Instructions: Language of the Computer — 137
Loading a Program

  Load from image file on disk into memory
    1. Read header to determine segment sizes
    2. Create virtual address space
    3. Copy text and initialized data into memory
       Or set page table entries so they can be faulted in
    4. Set up arguments on stack
    5. Initialize registers (including $sp, $fp, $gp)
    6. Jump to startup routine
       Copies arguments to $a0, … and calls main
       When main returns, do exit syscall
Chapter 2 — Instructions: Language of the Computer — 138
Dynamic Linking

  Only link/load a library procedure when it is called
    Requires procedure code to be relocatable
    Avoids the image growth caused by static linking of all (transitively) referenced libraries
    Automatically picks up new library versions
Chapter 2 — Instructions: Language of the Computer — 139
Lazy Linkage

  [Figure: lazy linkage via an indirection table. A stub loads the routine ID and jumps to the linker/loader; the linker/loader code maps in the routine, so later calls go straight to the dynamically mapped code.]
Chapter 2 — Instructions: Language of the Computer — 140
Starting Java Applications

  [Figure: Java compiles to a simple, portable instruction set for the JVM. An interpreter interprets bytecodes, while a JIT compiler compiles bytecodes of "hot" methods into native code for the host machine.]
Chapter 2 — Instructions: Language of the Computer — 141
§2.13 A C Sort Example to Put It All Together

C Sort Example

  Illustrates use of assembly instructions for a C bubble sort function
  Swap procedure (leaf)
    void swap(int v[], int k)
    {
      int temp;
      temp = v[k];
      v[k] = v[k+1];
      v[k+1] = temp;
    }
  v in $a0, k in $a1, temp in $t0
Chapter 2 — Instructions: Language of the Computer — 142
The Procedure Swap

    swap: sll $t1, $a1, 2      # $t1 = k * 4
          add $t1, $a0, $t1    # $t1 = v + (k*4)
                               #   (address of v[k])
          lw  $t0, 0($t1)      # $t0 (temp) = v[k]
          lw  $t2, 4($t1)      # $t2 = v[k+1]
          sw  $t2, 0($t1)      # v[k] = $t2 (v[k+1])
          sw  $t0, 4($t1)      # v[k+1] = $t0 (temp)
          jr  $ra              # return to calling routine
Chapter 2 — Instructions: Language of the Computer — 143
The Sort Procedure in C

  Non-leaf (calls swap)
    void sort (int v[], int n)
    {
      int i, j;
      for (i = 0; i < n; i += 1) {
        for (j = i - 1;
             j >= 0 && v[j] > v[j + 1];
             j -= 1) {
          swap(v, j);
        }
      }
    }
  v in $a0, n in $a1, i in $s0, j in $s1
Chapter 2 — Instructions: Language of the Computer — 144
The Procedure Body

             # Move params
             move $s2, $a0            # save $a0 into $s2
             move $s3, $a1            # save $a1 into $s3
             # Outer loop
             move $s0, $zero          # i = 0
    for1tst: slt  $t0, $s0, $s3       # $t0 = 0 if $s0 ≥ $s3 (i ≥ n)
             beq  $t0, $zero, exit1   # go to exit1 if $s0 ≥ $s3 (i ≥ n)
             # Inner loop
             addi $s1, $s0, -1        # j = i - 1
    for2tst: slti $t0, $s1, 0         # $t0 = 1 if $s1 < 0 (j < 0)
             bne  $t0, $zero, exit2   # go to exit2 if $s1 < 0 (j < 0)
             sll  $t1, $s1, 2         # $t1 = j * 4
             add  $t2, $s2, $t1       # $t2 = v + (j * 4)
             lw   $t3, 0($t2)         # $t3 = v[j]
             lw   $t4, 4($t2)         # $t4 = v[j + 1]
             slt  $t0, $t4, $t3       # $t0 = 0 if $t4 ≥ $t3
             beq  $t0, $zero, exit2   # go to exit2 if $t4 ≥ $t3
             # Pass params & call
             move $a0, $s2            # 1st param of swap is v (old $a0)
             move $a1, $s1            # 2nd param of swap is j
             jal  swap                # call swap procedure
             # Inner loop
             addi $s1, $s1, -1        # j -= 1
             j    for2tst             # jump to test of inner loop
             # Outer loop
    exit2:   addi $s0, $s0, 1         # i += 1
             j    for1tst             # jump to test of outer loop
Chapter 2 — Instructions: Language of the Computer — 145
The Full Procedure

    sort:  addi $sp, $sp, -20    # make room on stack for 5 registers
           sw   $ra, 16($sp)     # save $ra on stack
           sw   $s3, 12($sp)     # save $s3 on stack
           sw   $s2,  8($sp)     # save $s2 on stack
           sw   $s1,  4($sp)     # save $s1 on stack
           sw   $s0,  0($sp)     # save $s0 on stack
           …                     # procedure body
           …
    exit1: lw   $s0,  0($sp)     # restore $s0 from stack
           lw   $s1,  4($sp)     # restore $s1 from stack
           lw   $s2,  8($sp)     # restore $s2 from stack
           lw   $s3, 12($sp)     # restore $s3 from stack
           lw   $ra, 16($sp)     # restore $ra from stack
           addi $sp, $sp, 20     # restore stack pointer
           jr   $ra              # return to calling routine
Chapter 2 — Instructions: Language of the Computer — 146
Effect of Compiler Optimization

  Compiled with gcc for Pentium 4 under Linux

  [Charts, values not reproduced: relative performance, instruction count, clock cycles, and CPI of the bubble sort, compiled with no optimization and with O1, O2, and O3.]
Chapter 2 — Instructions: Language of the Computer — 147
Effect of Language and Algorithm

  [Charts, values not reproduced: Bubblesort relative performance, Quicksort relative performance, and Quicksort vs. Bubblesort speedup, each for C with no optimization, O1, O2, O3, interpreted Java, and JIT-compiled Java.]
Chapter 2 — Instructions: Language of the Computer — 148
Lessons Learnt

  Instruction count and CPI are not good performance indicators in isolation
  Compiler optimizations are sensitive to the algorithm
  Java/JIT compiled code is significantly faster than JVM-interpreted code
    Comparable to optimized C in some cases
  Nothing can fix a dumb algorithm!
Chapter 2 — Instructions: Language of the Computer — 149
§2.14 Arrays versus Pointers

Arrays vs. Pointers

  Array indexing involves
    Multiplying index by element size
    Adding to array base address
  Pointers correspond directly to memory addresses
    Can avoid indexing complexity
Chapter 2 — Instructions: Language of the Computer — 150
  We studied enough in this chapter
Chapter 2 — Instructions: Language of the Computer — 151
Example: Clearing an Array

  clear1(int array[], int size) {
    int i;
    for (i = 0; i < size; i += 1)
      array[i] = 0;
  }

    # MIPS code for clear1
            move $t0, $zero          # i = 0
    loop1:  sll  $t1, $t0, 2         # $t1 = i * 4
            add  $t2, $a0, $t1       # $t2 = &array[i]
            sw   $zero, 0($t2)       # array[i] = 0
            addi $t0, $t0, 1         # i = i + 1
            slt  $t3, $t0, $a1       # $t3 = (i < size)
            bne  $t3, $zero, loop1   # if (…) goto loop1

  clear2(int *array, int size) {
    int *p;
    for (p = &array[0]; p < &array[size];
         p = p + 1)
      *p = 0;
  }

    # MIPS code for clear2
            move $t0, $a0            # p = &array[0]
            sll  $t1, $a1, 2         # $t1 = size * 4
            add  $t2, $a0, $t1       # $t2 = &array[size]
    loop2:  sw   $zero, 0($t0)       # Memory[p] = 0
            addi $t0, $t0, 4         # p = p + 4
            slt  $t3, $t0, $t2       # $t3 = (p < &array[size])
            bne  $t3, $zero, loop2   # if (…) goto loop2
Chapter 2 — Instructions: Language of the Computer — 152
Comparison of Array vs. Ptr

  Multiply "strength reduced" to shift
  Array version requires shift to be inside loop
    Part of index calculation for incremented i
    c.f. incrementing pointer
  Compiler can achieve same effect as manual use of pointers
    Induction variable elimination
    Better to make program clearer and safer
Chapter 2 — Instructions: Language of the Computer — 153
§2.16 Real Stuff: ARM Instructions

ARM & MIPS Similarities

  ARM: the most popular embedded core
  Similar basic set of instructions to MIPS

                             ARM              MIPS
    Date announced           1985             1985
    Instruction size         32 bits          32 bits
    Address space            32-bit flat      32-bit flat
    Data alignment           Aligned          Aligned
    Data addressing modes    9                3
    Registers                15 × 32-bit      31 × 32-bit
    Input/output             Memory mapped    Memory mapped
Chapter 2 — Instructions: Language of the Computer — 154
Compare and Branch in ARM

  Uses condition codes for result of an arithmetic/logical instruction
    Negative, zero, carry, overflow
    Compare instructions set condition codes without keeping the result
  Each instruction can be conditional
    Top 4 bits of instruction word: condition value
    Can avoid branches over single instructions
Chapter 2 — Instructions: Language of the Computer — 155
Instruction Encoding
Chapter 2 — Instructions: Language of the Computer — 156
§2.17 Real Stuff: x86 Instructions

The Intel x86 ISA

  Evolution with backward compatibility
    8080 (1974): 8-bit microprocessor
      Accumulator, plus 3 index-register pairs
    8086 (1978): 16-bit extension to 8080
      Complex instruction set (CISC)
    8087 (1980): floating-point coprocessor
      Adds FP instructions and register stack
    80286 (1982): 24-bit addresses, MMU
      Segmented memory mapping and protection
    80386 (1985): 32-bit extension (now IA-32)
      Additional addressing modes and operations
      Paged memory mapping as well as segments
Chapter 2 — Instructions: Language of the Computer — 157
The Intel x86 ISA

  Further evolution…
    i486 (1989): pipelined, on-chip caches and FPU
      Compatible competitors: AMD, Cyrix, …
    Pentium (1993): superscalar, 64-bit datapath
      Later versions added MMX (Multi-Media eXtension) instructions
      The infamous FDIV bug
    Pentium Pro (1995), Pentium II (1997)
      New microarchitecture (see Colwell, The Pentium Chronicles)
    Pentium III (1999)
      Added SSE (Streaming SIMD Extensions) and associated registers
    Pentium 4 (2001)
      New microarchitecture
      Added SSE2 instructions
Chapter 2 — Instructions: Language of the Computer — 158
The Intel x86 ISA

  And further…
    AMD64 (2003): extended architecture to 64 bits
    EM64T – Extended Memory 64 Technology (2004)
      AMD64 adopted by Intel (with refinements)
      Added SSE3 instructions
    Intel Core (2006)
      Added SSE4 instructions, virtual machine support
    AMD64 (announced 2007): SSE5 instructions
      Intel declined to follow, instead…
    Advanced Vector Extension (announced 2008)
      Longer SSE registers, more instructions
  If Intel didn't extend with compatibility, its competitors would!
    Technical elegance ≠ market success
Chapter 2 — Instructions: Language of the Computer — 159
Basic x86 Registers
Chapter 2 — Instructions: Language of the Computer — 160
Basic x86 Addressing Modes

  Two operands per instruction

    Source/dest operand    Second source operand
    Register               Register
    Register               Immediate
    Register               Memory
    Memory                 Register
    Memory                 Immediate

  Memory addressing modes
    Address in register
    Address = R_base + displacement
    Address = R_base + 2^scale × R_index               (scale = 0, 1, 2, or 3)
    Address = R_base + 2^scale × R_index + displacement
Chapter 2 — Instructions: Language of the Computer — 161
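  A small C sketch of how the scaled-index mode's effective address is formed (illustrative arithmetic only, not x86 code; the function and variable names are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Effective address for: base + 2^scale * index + displacement */
    static uint64_t effective_address(uint64_t base, uint64_t index,
                                      unsigned scale, int64_t disp) {
        return base + (index << scale) + (uint64_t)disp;
    }

    int main(void) {
        /* e.g. element 5 of an array of 4-byte ints at 0x1000, past an 8-byte header */
        printf("0x%llx\n",
               (unsigned long long)effective_address(0x1000, 5, 2, 8)); /* 0x101c */
        return 0;
    }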
x86 Instruction Encoding

  Variable length encoding
    Postfix bytes specify addressing mode
    Prefix bytes modify operation
      Operand length, repetition, locking, …
Chapter 2 — Instructions: Language of the Computer — 162
Implementing IA-32

  Complex instruction set makes implementation difficult
    Hardware translates instructions to simpler microoperations
      Simple instructions: 1–1
      Complex instructions: 1–many
    Microengine similar to RISC
    Market share makes this economically viable
  Comparable performance to RISC
    Compilers avoid complex instructions
Chapter 2 — Instructions: Language of the Computer — 163
§2.18 Fallacies and Pitfalls

Fallacies

  Powerful instruction → higher performance
    Fewer instructions required
    But complex instructions are hard to implement
      May slow down all instructions, including simple ones
    Compilers are good at making fast code from simple instructions
  Use assembly code for high performance
    But modern compilers are better at dealing with modern processors
    More lines of code → more errors and less productivity
Chapter 2 — Instructions: Language of the Computer — 164
Fallacies

  Backward compatibility → instruction set doesn't change
    But they do accrete more instructions
    [Chart: growth of the x86 instruction set over time, values not reproduced.]
Chapter 2 — Instructions: Language of the Computer — 165
Pitfalls

  Sequential words are not at sequential addresses
    Increment by 4, not by 1!
  Keeping a pointer to an automatic variable after the procedure returns
    e.g., passing pointer back via an argument
    Pointer becomes invalid when stack popped
Chapter 2 — Instructions: Language of the Computer — 166
§2.19 Concluding Remarks

Concluding Remarks

  Design principles
    1. Simplicity favors regularity
    2. Smaller is faster
    3. Make the common case fast
    4. Good design demands good compromises
  Layers of software/hardware
    Compiler, assembler, hardware
  MIPS: typical of RISC ISAs
    c.f. x86
Chapter 2 — Instructions: Language of the Computer — 167
Concluding Remarks

  Measure MIPS instruction executions in benchmark programs
    Consider making the common case fast
    Consider compromises

    Instruction class   MIPS examples                       SPEC2006 Int   SPEC2006 FP
    Arithmetic          add, sub, addi                      16%            48%
    Data transfer       lw, sw, lb, lbu, lh, lhu, sb, lui   35%            36%
    Logical             and, or, nor, andi, ori, sll, srl   12%            4%
    Cond. branch        beq, bne, slt, slti, sltiu          34%            8%
    Jump                j, jr, jal                          2%             0%
Chapter 2 — Instructions: Language of the Computer — 168