Circuit Simulation using Matrix Exponential Method
Shih-Hung Weng, Quan Chen and Chung-Kuan Cheng
CSE Department, UC San Diego, CA 92130
Contact: [email protected]
Outline
• Introduction
• Computation of Matrix Exponential Method
– Krylov Subspace Approximation
– Adaptive Time Step Control
• Experimental Results
• Conclusions
Circuit Simulation
• Numerical integration
– Approximate with rational functions
– Explicit: simplified computation vs. small time steps
– Implicit: linear system derivation vs. large time steps
– Trade-off between stability and performance
• Time steps of both methods are still limited by accuracy
– Truncation error from low-order rational approximation
• Method beyond low-order approximation?
– Requirement: scalable and accurate for modern designs
Statement of Problem
• Linear circuit formulation
$$C\dot{x}(t) = -G\,x(t) + u(t)$$
• Let A = -C⁻¹G and b = C⁻¹u; the analytical solution is
$$x(t+h) = e^{Ah}\,x(t) + \int_{t}^{t+h} e^{A(t+h-\tau)}\, b(\tau)\, d\tau$$
• Let the input be piecewise linear; then
$$x(t+h) = x(t) + \frac{e^{Ah} - I}{A}\,\bigl(A\,x(t) + b(t)\bigr) + \frac{e^{Ah} - (Ah + I)}{A^{2}h}\,\bigl(b(t+h) - b(t)\bigr)$$
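To make the update above concrete, here is a minimal Python/NumPy sketch (illustrative only, not the authors' MATLAB implementation). The function name exp_step and the 2-node RC values are assumptions, and the explicit solves against A and A² are only practical for small dense systems:

```python
# Minimal sketch of the piecewise-linear exponential update (small dense systems only).
import numpy as np
from scipy.linalg import expm

def exp_step(A, x, b_t, b_th, h):
    """x(t+h) = x(t) + (e^{Ah}-I)/A (Ax+b(t)) + (e^{Ah}-(Ah+I))/(A^2 h) (b(t+h)-b(t))."""
    I = np.eye(A.shape[0])
    E = expm(A * h)
    term1 = np.linalg.solve(A, E - I)                  # (e^{Ah} - I) A^{-1}
    term2 = np.linalg.solve(A @ A, E - A * h - I) / h  # (e^{Ah} - Ah - I) A^{-2} / h
    return x + term1 @ (A @ x + b_t) + term2 @ (b_th - b_t)

# Toy 2-node RC example (assumed values): A = -C^{-1} G, b = C^{-1} u.
C = np.diag([1e-12, 1e-12])
G = np.array([[2e-3, -1e-3], [-1e-3, 2e-3]])
A = -np.linalg.solve(C, G)
b_t = np.linalg.solve(C, np.array([1e-3, 0.0]))   # input at t
b_th = np.linalg.solve(C, np.array([1e-3, 0.0]))  # input at t+h
x_next = exp_step(A, np.zeros(2), b_t, b_th, h=1e-10)
```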
Statement of Problem
• Integration Methods
– Explicit (Forward Euler): e^{Ah} => (I + Ah)
  "Simpler" computation but smaller time steps
– Implicit (Backward Euler): e^{Ah} => (I - Ah)⁻¹
  Direct matrix solver (LU decomposition) with complexity O(n^{1.4}), where n = #nodes
– Error derived from Taylor’s expansion
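As a quick numerical aside (the stiff 2×2 test matrix is an assumption, not one of the slide benchmarks), the sketch below compares both first-order approximations of e^{Ah} against SciPy's expm; one forward-Euler step already amplifies the stiff mode when |λh| is large, while backward Euler stays bounded but loses accuracy:

```python
# Compare forward Euler (I + Ah) and backward Euler (I - Ah)^{-1} with expm(Ah).
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 0.0], [0.0, -1000.0]])      # widely spread time constants (stiff)
for h in (1e-4, 1e-2):
    exact = expm(A * h)
    fe = np.eye(2) + A * h                       # explicit: 1st-order polynomial
    be = np.linalg.inv(np.eye(2) - A * h)        # implicit: 1st-order rational
    print(h, np.linalg.norm(fe - exact), np.linalg.norm(be - exact))
```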
Statement of Problem
• Integration Methods
[Figure: error of the low-order polynomial approximation (local truncation error between t_n and t_{n+1}; voltage vs. time)]
Approach
• Parallel Processing: Avoid LU decomposition matrix solver
• Matrix Exponential Operator:
– Stability: Good
– Operation: Matrix vector multiplications
• Assumptions
– C⁻¹v exists and is easy to derive
– Regularization when C is singular
Matrix Exponential Method
• Krylov subspace approximation
– Orthogonalization: Better conditions
– High order polynomial
• Adaptive time step control
– Dynamic order adjustment
– Optimal tuning of parameters
• Better convergence with coefficient 1/k! at the kth term
$$e^{A} = I + A + \tfrac{1}{2}A^{2} + \dots + \tfrac{1}{k!}A^{k} + \dots$$
$$(I - A)^{-1} = I + A + A^{2} + \dots + A^{k} + \dots$$
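A quick scalar illustration of why the 1/k! coefficients help (the value a below is an assumed stand-in for the norm of A): the Taylor terms a^k/k! eventually shrink for any a, while the terms a^k of the resolvent series only shrink when a < 1.

```python
# Term-size comparison: k-th Taylor term of e^a vs. k-th Neumann term of (1-a)^{-1}.
from math import factorial

a = 2.0  # assumed stand-in for ||A||; the Neumann series diverges here
for k in range(0, 12, 2):
    print(k, a**k / factorial(k), a**k)
```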
Krylov Subspace Approximation (1/2)
• Krylov subspace
– K(A, v, m) = {v, Av, A²v, …, A^m v}
– Matrix-vector multiplication: Av = -C⁻¹(Gv)
– Orthogonalization (Arnoldi process): V_m = [v₁ v₂ … v_m], with
$$A V_m = V_m H_m + H(m+1, m)\, v_{m+1}\, e_m^{T}$$
• Matrix exponential operator
$$e^{A} v \approx \|v\|_2\, V_m\, e^{H_m}\, e_1$$
– Size of H_m is about 10~30, while the size of A can be millions
– Ease of computation of e^{H_m}
• A Posteriori Error Estimation
– Evaluated without extra overhead:
$$err = \|v\|_2\, H(m+1, m)\, e_m^{T}\, e^{H_m}\, e_1$$
[Figure: RC circuit of 500 nodes, random capacitances ranging 1e-11~1e-16, h = 1e-13]
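The Arnoldi construction and the error estimate above can be written compactly; the sketch below is an assumed dense-NumPy implementation (the name krylov_expm and the random test system are illustrative, not the authors' code), approximating e^{Ah}v by ‖v‖₂ V_m e^{hH_m} e₁:

```python
# Arnoldi (modified Gram-Schmidt) Krylov approximation of e^{Ah} v, with the
# a posteriori estimate ||v|| H(m+1,m) |e_m^T e^{hH_m} e_1| from the slide.
import numpy as np
from scipy.linalg import expm

def krylov_expm(A, v, h, m):
    beta = np.linalg.norm(v)
    V = np.zeros((v.shape[0], m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # orthogonalize against previous basis vectors
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12 * beta:          # "happy breakdown": subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    eH = expm(h * H[:m, :m])
    x_approx = beta * V[:, :m] @ eH[:, 0]       # ||v||_2 V_m e^{h H_m} e_1
    err = beta * H[m, m - 1] * abs(eH[m - 1, 0])
    return x_approx, err

# Illustrative use on a random stable diagonal system (not a slide test case):
rng = np.random.default_rng(0)
A = -np.diag(rng.uniform(1.0, 10.0, 200))
x, err = krylov_expm(A, rng.standard_normal(200), h=0.1, m=20)
```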
Krylov Subspace Approximation (2/2)
• Matrix exponential method
$$x(t+h) = x(t) + \frac{e^{Ah} - I}{A}\,\bigl(A\,x(t) + b(t)\bigr) + \frac{e^{Ah} - (Ah + I)}{A^{2}h}\,\bigl(b(t+h) - b(t)\bigr)$$
• Krylov space approximation of the two terms, with v₁ = A x(t) + b(t) and v₂ = b(t+h) - b(t):
$$x(t+h) \approx x(t) + \|v_1\|_2\, V_{m_1}\, \frac{e^{H_{m_1}h} - I}{H_{m_1}h}\, e_1 + \|v_2\|_2\, V_{m_2}\, \frac{e^{H_{m_2}h} - (H_{m_2}h + I)}{(H_{m_2}h)^{2}}\, e_1$$
• Error estimation for the matrix exponential method
$$err_1 = \|v_1\|_2\, H_{m_1}(m+1, m)\; e_m^{T}\, \frac{e^{H_{m_1}h} - I}{H_{m_1}h}\, e_1$$
$$err_2 = \|v_2\|_2\, H_{m_2}(m+1, m)\; e_m^{T}\, \frac{e^{H_{m_2}h} - (H_{m_2}h + I)}{(H_{m_2}h)^{2}}\, e_1$$
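The small-matrix factors (e^{Hh} - I)/(Hh) e₁ and (e^{Hh} - (Hh + I))/(Hh)² e₁ above are the φ₁ and φ₂ functions of Hh applied to e₁. One standard way to evaluate both with a single small expm, without inverting H, is the augmented-matrix trick sketched below (an assumed helper, not from the slides):

```python
# phi1(hH) e1 = (e^{hH}-I)/(hH) e1 and phi2(hH) e1 = (e^{hH}-hH-I)/(hH)^2 e1,
# read off from one expm of a slightly enlarged matrix.
import numpy as np
from scipy.linalg import expm

def phi1_phi2_e1(H, h):
    m = H.shape[0]
    M = np.zeros((m + 2, m + 2))
    M[:m, :m] = h * H
    M[0, m] = 1.0        # couples the top block to e1
    M[m, m + 1] = 1.0    # nilpotent tail that generates the extra 1/k! factor
    E = expm(M)
    return E[:m, m], E[:m, m + 1]
```

Calling this once per subspace gives the φ₁ factor used in the v₁ term (first return value for H_{m₁}) and the φ₂ factor used in the v₂ term (second return value for H_{m₂}).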
Adaptive Time Step Control
• Strategy:
– Maximize step size with a given error budget
– Errors are from the Krylov space method and the nonlinear component:
$$f(h) = err_1 + err_2 \le h\,\frac{tol_{l}}{T}, \qquad residual \le h\,\frac{tol_{nl}}{T}$$
• Step size adjustment
– Krylov subspace approximation
• Requires only scaling H_m: αA → αH_m
– Backward Euler
• (C + hG)⁻¹ changes as h changes
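A sketch of the accept/reject loop this strategy implies, using the enlarge-by-1.25 / shrink-by-0.8 factors quoted on the adaptive-step result slides; the step() callback, the budget check err ≤ h·tol/T, and the toy scalar example are assumptions for illustration:

```python
# Adaptive time-step loop: accept and enlarge when the per-step error fits the
# budget h*tol/T, otherwise shrink the step and re-evaluate.
import math

def adaptive_transient(step, x0, t_end, h0, tol, grow=1.25, shrink=0.8, h_min=1e-18):
    """step(x, t, h) -> (x_next, err): advance state x by h and return an error estimate."""
    x, t, h, trace = x0, 0.0, h0, [(0.0, x0)]
    while t < t_end:
        h = min(h, t_end - t)
        x_next, err = step(x, t, h)
        if err <= h * tol / t_end or h <= h_min:
            x, t = x_next, t + h          # accept the step
            trace.append((t, x))
            h *= grow                     # try a larger step next time
        else:
            h = max(h * shrink, h_min)    # reject: shrink and redo the same step
    return trace

# Illustrative use with a scalar stand-in for the circuit state:
trace = adaptive_transient(
    step=lambda x, t, h: (x * math.exp(-1e9 * h), 1e-3 * h * h),  # toy model + toy error
    x0=1.0, t_end=1e-6, h0=1e-12, tol=1e-6)
```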
Experimental Results
• EXP (matrix exp.) and BE (Backward Euler) in MATLAB
• Machine
– Linux Platform
– Xeon 3.0 GHz and 16GB memory
• Test cases
Circuit (L)   Description    #nodes      Circuit (NL)   Description    #nodes
D1            trans. line    5.6K        D5             inv. chain     82
D2            power grid     160K        D6             power amp      342
D3            power grid     1.6M        D7             16-bit adder   572
D4            power grid     4M          D8             ALU            10K
Stability and Accuracy
• BE requires smaller time steps
• EXP can leap large steps
Test case: D2
Performance at fixed time step sizes
• Reference: BE with small step size h_ref
• EXP runs faster under the same error tol.
• D2: 20x
• D3: 4x
• D4: inf
• Scalable for large cases
• Case D4: BE runs out of memory (4M nodes)
Adaptive Time Step – Linear Circuits
• Strategy:
– Enlarge by 1.25
– Shrink by 0.8
• Adaptive EXP
– Speedup by large step
– Efficient re-evaluation
• Adaptive BE
– Smaller steps for accuracy
– Slowed down by re-solving the linear system
• 10X speedup for D2
Test case: D2
Adaptive Time Step – Nonlinear
• Strategy:
– Enlarge by 1.25
– Shrink by 0.8
• Adaptive BE
– Multiple Newton iterations for convergence
Test case: D7
• Up to 7X speedup
Summary
Method        Equation               Stability (passive)   Matrix inverse   Major Oper.             Memory¹       Adaptive Parameters²   Adaption Cost³   Error
Implicit      Rational, order < 10   High                  C + hG           LU decomp               N_{C+G}^1.4   Time step h            High             Taylor series
Explicit      Polynom., order < 10   Weak                  C                Mat-vec product         N_C*          Time step h            Low              Taylor series
Matrix Exp.   Poly.                  High                  C                Arnoldi + matrix exp.   N_C* + mN     Step h, order m        Low              Analytical

¹ N_C* for C⁻¹; ² variable-order BDF is not considered here; ³ cost of re-evaluation for a new step size
Summary
• Matrix exponential method is scalable
– Stability: Good
– Accuracy: comparable to SPICE
• Krylov subspace approximation
– Reduce the complexity
• Preliminary results
– Up to 10X and 7X for linear and nonlinear, respectively
• Limitations of matrix exponential method
– Singularity of C
– Stiffness of C-1G
Future Work
• Scalable Parallel Processing
– Integration
– Matrix Operations
• Applications
– Power Ground Network Analysis
– Substrate Noises
– Memory Analysis
– Terahertz Circuit Simulation
Thank You!