Design Partitioning
Chapter 3
H/W – S/W Partitioning
• Why so much concern over the H/W –
S/W partitioning decision?
• Because the lines between H/W and S/W are blurring, making the decision seem “obvious”…
• …which can lead to grave mistakes
H/W – S/W Partitioning
• The partitioning decision has a significant
impact on various aspects of the project
– Overall cost (development and production)
– Overall time (development and production)
– Overall success/failure (risk)
H/W – S/W Duality
• Anything that can be done in digital hardware
can be done in software
• Anything that can be done in software can be
done in digital hardware
• Examples? (a sketch follows at the end of this slide)
• This “duality” is why the partitioning decision
is such an issue
• To make matters worse(?), it’s getting harder
and harder to define the line between
hardware and software
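As a minimal sketch of the duality (module and expression names are illustrative, not from the slides): computing the parity of a byte is one line of C executed as instructions, or one line of Verilog synthesized into a gate network.

// "Software" version, as a C expression folded down by shifts:
//   p = x; p ^= p >> 4; p ^= p >> 2; p ^= p >> 1; p &= 1;
// "Hardware" version, as a combinational circuit:
module parity8 (input [7:0] x, output p);
  assign p = ^x;  // XOR-reduce all 8 bits; synthesizes to a tree of XOR gates
endmodule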
H/W – S/W Partitioning
• The decision space is multi-dimensional
– Microprocessor architecture
– Algorithm complexity
– Physical space
– Development resources
  • Available expertise
  • Available time
  • Available money
  • Available tools
– etc.
• “In practice, the analysis of trade-offs for partitioning
a system is most often an informal process done with
pencil and paper or spreadsheets.”
H/W – S/W Partitioning
• If it’s so important, why is it delegated to an
informal process?
• Because it’s too hard (impossible?) to
formalize/automate the traversal of the multi-dimensional space
– There’s just too much data affecting the decision
– It’s an optimization problem, and optimization
problems are [in general] difficult to solve
– Artificial Intelligence researchers have struggled
with this type of problem for over 40 years!
H/W – S/W Partitioning
• Generally accepted approach is to put
off the final decision as long as possible
• This way you will have gathered as
much information as possible
• “Path of least commitment”
• This is the approach adopted by AI
researchers
– Consider the chess playing machines
Hardware Trends
• H/W – S/W partitioning used to be easy
– If you had a complex algorithm, you wrote
software
• Hardware design/implementation just took too
long, and designs were error-prone
• Silicon real estate was just too expensive
Hardware Trends
• Then smart people like Carver Mead
(Caltech) came along and spoiled things
– His seminal book Introduction to VLSI Systems
[1980] made it look “simple”
• Then along came Application Specific
Integrated Circuit (ASIC) technology
– Generic term but here we’ll use it to mean
“programmable gate arrays”
– Suddenly, you didn’t need a silicon foundry to
create custom chips – you could do it in your office
Hardware Trends
• Then along came Hardware Design
Languages (Verilog and VHDL)
– Programmers can be hardware designers with no
additional training
• And finally, the foundries (fab-houses) keep
shrinking the “technology” (size of gates) on
the silicon and increasing the wafer sizes
– Higher density (transistors/unit area) provides
smaller devices and faster circuits
– More devices per wafer reduce processing costs
Programmers Can Be Hardware Designers?
• Verilog HDL
module simple;
  reg [0:7] A, B; // -- declare two 8-bit registers
  reg C;          // -- declare one 1-bit register

  initial begin: stop_at
    #20 $stop;    // -- stop after 20 time units
  end

  initial begin: Init
    A = 0;
    $display("Time A        B        C");      // -- debug output
    $monitor(" %0d %b %b %b", $time, A, B, C); // -- debug output
  end

  always begin: main_process
    #1 A = A + 1;
    #1 B[0:3] = ~A[4:7];
    #1 C = &A[6:7];
  end
endmodule
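If you want to run this yourself, any Verilog simulator should work; for example, with the open-source Icarus Verilog tools (the tool choice is our assumption, not part of the original slides):

iverilog -o simple simple.v
vvp simple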
Verilog HDL
• The two initial blocks and the always
block run concurrently
– The initial blocks run once
– The always block loops until the simulation stops
• The variables/functions preceded by $ are
simulation variables/functions (not part of the
circuit)
• The #1’s in the always block are 1 time-unit
delays
Verilog HDL
• Simulation output:
Time  A        B        C
   0  00000000 xxxxxxxx x
   1  00000001 xxxxxxxx x
   2  00000001 1110xxxx x
   3  00000001 1110xxxx 0
   4  00000010 1110xxxx 0
   5  00000010 1101xxxx 0
   7  00000011 1101xxxx 0
   8  00000011 1100xxxx 0
   9  00000011 1100xxxx 1
  10  00000100 1100xxxx 1
  11  00000100 1011xxxx 1
  12  00000100 1011xxxx 0
  13  00000101 1011xxxx 0
  14  00000101 1010xxxx 0
  16  00000110 1010xxxx 0
  17  00000110 1001xxxx 0
  19  00000111 1001xxxx 0
Stop at simulation time 20
• Note: the $monitor only produces output when a register value changes
Verilog HDL
• Two key points here:
– First [and most obvious] the “hardware
design” looks an awful lot like “C” code
• And it’s run through a compiler that acts an
awful lot like a “C” compiler
– Whereas a “C” compiler generates assembly
language statements targeted for a specific computer
architecture, a Verilog compiler generates
commands used in creating a circuit on the target
device
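To make that distinction concrete, here is a minimal sketch (module name is ours, not from the slides): the same C-like syntax describes gates and a flip-flop, not instructions.

module and_reg (input clk, input a, input b, output reg q);
  // A synthesis tool maps this to a 2-input AND gate feeding a
  // D flip-flop clocked by clk -- a circuit, not machine code
  always @(posedge clk)
    q <= a & b;
endmodule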
Verilog HDL
• Two key points here:
– Second, the design was run through a
hardware simulator
– This means that the design can be
debugged prior to spending time and
money fabricating the device!
How Does It Help?
• Since the H/W and S/W development tasks are starting to look
similar, and their designs are developed/simulated on the same
workstation platform, bugs can be caught earlier
• Rather than write S/W to simulate the hardware (as a
means of testing the software), we can link the actual
software to the hardware (Verilog) simulation (a sketch follows below)
– This also eliminates the possibility of bugs in the test
software
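One hedged sketch of what that linkage can look like (the mechanism shown, SystemVerilog’s DPI-C, postdates classic Verilog; older flows used the Verilog PLI, and filter_sample is a hypothetical function from the product’s C sources):

module tb_link;
  // Call the real product software directly from the simulation
  import "DPI-C" function int filter_sample(int raw); // implemented in the shipping C code

  int raw, cooked;
  initial begin
    raw = 42;                    // illustrative stimulus
    cooked = filter_sample(raw); // the actual software runs in-process
    $display("software result: %0d", cooked);
  end
endmodule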
Goal
• The goal is to find design bugs early!!!
[Figure: the cost to fix a bug rises sharply across the design cycle, from system specification and design, through hardware and software design/debug, to prototype debug and system test]
The Moral of the Story
• Even though hardware is developed by
writing software, the cost to fix it as time goes
on is still tremendous
– Eventually, it ends up on a piece of silicon that is
manufactured by someone else
• Within another part of your company
• By an outside vendor in the case of “fab-less design
houses”
– Especially true as systems get more and more
highly integrated (system on chip)
– Many design teams will budget resources for
multiple “chip spins” to alleviate the pain of bugs
What’s the Catch?
• But all of these worries go away since we
now have hardware and software
environments that can be merged together,
right?
• Wrong!
– Even the simplest of Verilog hardware simulations
can take hours to run
– Simulation tools are expensive
– Exhaustive testing is very difficult
What’s the Payoff?
• If you can afford the tools…
• and you can tolerate the simulation
time…
• and everything works as planned…
• then the hardware/software
integration phase is [almost] trivial
• This is referred to as “co-verification”
– The hardware and software are being
verified concurrently
The Two Co’s
• Co-design
– The process of developing the hardware
and software simultaneously
– We almost always do this, although the
software is often untested w.r.t. the actual
hardware
• Co-verification
– The process of verifying the correctness of
the complete hardware/software system as
a single unit including actual interfaces
between the two
Risk Management
• Hardware is the biggest risk
• Hardware testing as a means of risk
management
– Test vectors – literally vectors of 0’s and 1’s that
exercise all of the functionality of the hardware
– Designers specify the input and expected-output
pairs
Risk Management (cont.)
– Testers then run the inputs, the simulation
generates the outputs, and then compares them
to the expected outputs (a minimal sketch follows at the end of this slide)
• Testers may be part of the design team
• Testers may be employees of the fabrication
facility
– The fabrication facility requires this because they want
your business!
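A minimal self-checking sketch of this input/expected-output flow, reusing the parity8 module sketched earlier (the vector values are illustrative):

module tb_vectors;
  reg  [7:0] x;
  wire       p;
  integer    i, errors;
  reg  [7:0] in_vec  [0:3]; // test inputs
  reg        exp_vec [0:3]; // expected outputs

  parity8 dut (.x(x), .p(p));

  initial begin
    in_vec[0] = 8'h00; exp_vec[0] = 0; // extreme case: all zeros
    in_vec[1] = 8'hFF; exp_vec[1] = 0; // extreme case: all ones (even parity)
    in_vec[2] = 8'h01; exp_vec[2] = 1;
    in_vec[3] = 8'hFE; exp_vec[3] = 1;
    errors = 0;
    for (i = 0; i < 4; i = i + 1) begin
      x = in_vec[i];
      #1; // let the combinational logic settle
      if (p !== exp_vec[i]) begin
        errors = errors + 1;
        $display("FAIL: x=%b expected p=%b got %b", x, exp_vec[i], p);
      end
    end
    $display("%0d error(s) in 4 vectors", errors);
    $stop;
  end
endmodule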
Risk Management
• Realistically, you won’t exhaustively test
the hardware
• Realistically, the parts you don’t test are
the parts that fail
Test Vectors
• In some cases you may be able to generate
these automatically
– The HDL compiler can be used to generate test
vectors by creating I/O functions
– You just write a little bit of code that exercises the
circuit code
• In some cases you may only have time to
check the “extreme cases”
– You assume that if those work then all the cases
in between will also work
Co-design/Co-verification
• As useful (mandatory?) as these techniques seem,
there are still issues
– Cost of the tools
– Time to learn the tools
– Execution time of the tools
• While we programmers can now design hardware
using the tools, the bottom line is that if you want
“good” hardware, have a hardware designer design it
• An analogous statement goes for software
development
• Hardware/software integration remains a problem
– “We’ll fix it in software” is a common industrial cry
And now…
• …on to the lab
• Interfacing to the outside world