CSE 420
Computer Architecture
Today’s Lecture
• Course organization
• Computing environment
• Semester project
• Overview of course topics

Course Organization
• Course website: http://www.cse.msu.edu/~cse420/
• Syllabus and calendar
• Enrollment

Computing Environment
• Operating system: Linux
• Remote access:
  - arctic
  - xserver2
  - cse410
• Accounts

Semester Project
• Simulation of a 32-bit processor (a rough sketch of the core loop appears below)
  - based on the MIPS microprocessor
  - subset of instructions
• Series of 10 milestones
  - full credit if a milestone is completed by the due date
  - 10% deducted for each day late

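To make the project concrete, here is a minimal sketch of the fetch-decode-execute loop at the heart of such a simulator. The three-instruction subset, the struct-based "pre-decoded" instruction format, and the test program are illustrative assumptions, not the project's actual specification.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical pre-decoded instruction format for a tiny MIPS-like subset. */
typedef enum { OP_ADD, OP_ADDI, OP_HALT } Op;

typedef struct {
    Op      op;
    int     rd, rs, rt;   /* register numbers */
    int32_t imm;          /* immediate operand for OP_ADDI */
} Instr;

int main(void) {
    int32_t reg[32] = {0};            /* 32-entry register file */

    Instr program[] = {               /* tiny test program */
        { OP_ADDI,  8, 0, 0, 5 },     /* $8  = $0 + 5  */
        { OP_ADDI,  9, 0, 0, 7 },     /* $9  = $0 + 7  */
        { OP_ADD,  10, 8, 9, 0 },     /* $10 = $8 + $9 */
        { OP_HALT,  0, 0, 0, 0 }
    };

    for (int pc = 0; ; pc++) {
        Instr i = program[pc];        /* fetch (already "decoded") */
        if (i.op == OP_HALT) break;
        switch (i.op) {               /* execute */
        case OP_ADD:  reg[i.rd] = reg[i.rs] + reg[i.rt]; break;
        case OP_ADDI: reg[i.rd] = reg[i.rs] + i.imm;     break;
        default:      break;
        }
        reg[0] = 0;                   /* $0 is hard-wired to 0 in MIPS */
    }

    printf("$10 = %d\n", reg[10]);    /* prints $10 = 12 */
    return 0;
}
```

A full simulator would decode real 32-bit MIPS instruction words and add memory and branch instructions, but the loop structure stays the same.
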
The Information Revolution
Computers have led to a third revolution for civilization, with the information revolution taking its place alongside the agricultural and industrial revolutions. This race to innovate has led to unprecedented progress since the inception of electronic computing in the late 1940s. Had the transportation industry kept pace with the computer industry, for example, today we could travel from New York to London in a second for a penny. (Patterson and Hennessy)

The Information Revolution
• Driven by rapid innovation in technology
• Complex applications now feasible
  - Computers in automobiles
  - Cell phones
  - Human genome project
  - World Wide Web

Classes of Computers
• Personal computers
  - General purpose, variety of software
  - Subject to cost/performance tradeoffs
• Server computers
  - Network based
  - High capacity, performance, reliability
  - Range of sizes

Classes of Computers
• Supercomputers
  - High-end scientific applications
  - Highest capability, but represent a small fraction of the overall computer market
• Embedded computers
  - Hidden as components of systems
  - Stringent power/performance/cost constraints

The Post PC Era
• Personal Mobile Device (PMD)
  - Smart phones, tablets (touch screen)
  - Battery operated, wireless connection to net
• Cloud computing
  - Warehouse Scale Computers (WSC)
  - Software as a Service (SaaS)
  - Part of software runs on a PMD and part runs in the Cloud

Focus of the Course
• The interface between software and hardware: how programs are translated into machine language and how the hardware executes them
• What determines program performance
• What hardware designers can do to improve performance

Eight Great Ideas in Architecture
• Design for Moore’s Law
• Use abstraction to simplify design
• Make the common case fast
• Performance via parallelism

Eight Great Ideas in Architecture
• Performance via pipelining
• Performance via prediction
• Hierarchy of memories
• Dependability via redundancy

Moore’s Law
Observation made in 1965: the number of transistors on an integrated circuit doubles every 18-24 months (exponential growth).
A transistor's linear dimensions shrink by about 30% (to 0.7x) each generation, so each transistor requires only about half the area (0.7 × 0.7 = 0.49). In other words, transistor density doubles.

Moore’s Law
The voltage applied to a transistor is also reduced by about 30%, so its power consumption drops by about 50% (power scales with the square of the voltage). Signals also have to travel only 70% as far, so clock speeds can increase by about 40%.
The observation has held for roughly 50 years.

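The scaling arithmetic behind these claims, written out with the 0.7 linear factor quoted above:

\[
\text{area per transistor: } 0.7 \times 0.7 = 0.49 \approx 1/2
\]
\[
\text{power per transistor: } P \propto V^2 \;\Rightarrow\; 0.7^2 \approx 1/2
\]
\[
\text{clock rate: } \frac{1}{0.7} \approx 1.4 \;\;(\text{about } 40\% \text{ faster})
\]
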
Moore’s Law: Summary
Every 18-24 months:
• Transistor density doubles (each transistor only requires half the area)
• Power consumption remains the same (twice as many transistors, each requires only half the power)
• Circuits 40% faster (shorter distances)

Abstraction
Productivity technique used for hardware and software: use abstractions to represent the design at different levels.
Lower-level details hidden to offer a simpler model at higher levels.

Make the Common Case Fast
Improve overall performance by making the common case fast (rather than optimizing for rare special cases).
Use experimentation and measurement to decide which cases are common and which are rare.

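One standard way to quantify the benefit of speeding up the common case is Amdahl's Law: if an enhancement speeds up a fraction f of execution time by a factor s, the overall speedup is

\[
\text{speedup} = \frac{1}{(1 - f) + \frac{f}{s}}
\]

For example, doubling the speed of a case that covers 90% of execution time (f = 0.9, s = 2) yields a speedup of 1 / (0.1 + 0.45) ≈ 1.8, while even an infinite speedup of a case covering only 10% is capped at about 1.1.
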
Performance via Parallelism
Accomplish more work in a given amount of time by doing tasks in parallel.
• Instruction-level parallelism: overlap work at the level of individual instructions (see the short example below)
• Processor-level parallelism: overlap work at the level of the processor

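As a small illustration of instruction-level parallelism, the fragment below (a made-up example, not course material) contains two additions with no dependence on each other, so a pipelined or superscalar processor can overlap them; the third addition must wait for both results.

```c
#include <stdio.h>

int main(void) {
    int a = 1, b = 2, d = 3, e = 4;

    /* These two statements are independent: the hardware can
     * overlap (or reorder) them without changing the result. */
    int c = a + b;
    int f = d + e;

    /* This statement depends on both results above, so it cannot
     * begin executing until c and f are available. */
    int g = c + f;

    printf("g = %d\n", g);   /* prints g = 10 */
    return 0;
}
```
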
Performance via Pipelining
Particular pattern of parallelism: subdivide task into smaller pieces, overlap steps.

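To see the payoff, assume an idealized pipeline that splits each task into k equal stages of latency t with no stalls (an illustrative model). Then n tasks take

\[
T_{\text{unpipelined}} = n\,k\,t, \qquad T_{\text{pipelined}} = (k + n - 1)\,t,
\]

so the speedup n k / (k + n - 1) approaches k, the number of stages, for large n.
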
Performance via Prediction
Faster (on average) to guess and start working immediately, rather than waiting until all of the information is available, as long as the mechanism to recover from a wrong guess isn't too expensive.

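One common prediction mechanism is branch prediction with a 2-bit saturating counter: the counter leans toward recent outcomes, so a single wrong guess costs only a recovery, not an immediate change of prediction. A minimal sketch (the outcome sequence and starting state are made up for illustration):

```c
#include <stdio.h>

static int counter = 2;              /* 0-1 predict "not taken", 2-3 "taken" */

int predict(void) { return counter >= 2; }   /* 1 = predict taken */

void update(int taken) {             /* train on the actual outcome */
    if (taken  && counter < 3) counter++;
    if (!taken && counter > 0) counter--;
}

int main(void) {
    int outcomes[] = {1, 1, 0, 1, 1, 1, 0, 1};   /* a mostly-taken branch */
    int n = sizeof outcomes / sizeof outcomes[0];
    int correct = 0;

    for (int i = 0; i < n; i++) {
        correct += (predict() == outcomes[i]);
        update(outcomes[i]);
    }
    printf("correct predictions: %d of %d\n", correct, n);
    return 0;
}
```

On the mostly-taken pattern above, the predictor is wrong only on the two not-taken outcomes.
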
Hierarchy of Memories
Systems contain a hierarchy of different kinds of "memory" (storage):
• top: fast, expensive and small
• bottom: slow, inexpensive and large
Provides the illusion that memory is nearly as fast as the top of the hierarchy and nearly as large (and cheap) as the bottom.

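For a two-level slice of the hierarchy (say, a cache in front of main memory), the quality of the illusion can be measured by the average memory access time; the cycle counts below are hypothetical, chosen only to illustrate the formula.

\[
\text{AMAT} = \text{hit time} + \text{miss rate} \times \text{miss penalty}
\]

With a 1-cycle hit time, a 5% miss rate, and a 100-cycle miss penalty, AMAT = 1 + 0.05 × 100 = 6 cycles, far closer to the fast level than to the slow one.
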
Dependability via Redundancy
Make systems more dependable by including redundant components (a small example is sketched below):
• used when failures occur
• help detect failures

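One classic form of redundancy is triple modular redundancy (TMR): run three copies of a computation and take a bitwise majority vote, so a fault in any single copy is masked and can also be detected. A minimal sketch (the values and the single-bit fault are made up for illustration):

```c
#include <stdio.h>

/* Bitwise majority of three words: each result bit agrees with at
 * least two of the three inputs. */
unsigned majority(unsigned a, unsigned b, unsigned c) {
    return (a & b) | (b & c) | (a & c);
}

int main(void) {
    unsigned good   = 0x1234;          /* result from two healthy copies */
    unsigned faulty = good ^ 0x0040;   /* third copy with one flipped bit */

    unsigned voted = majority(good, good, faulty);
    printf("voted = 0x%04x, fault detected: %s\n",
           voted, (voted != faulty) ? "yes" : "no");
    return 0;
}
```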