Claims
What is claimed is:
1. A system for simulating operation of a VLSI interconnect structure having capacitive and inductive coupling between nodes thereof, comprising:
a processor; and
a memory, said processor configured to:
obtain a matrix X and a matrix Y containing different combinations of passive circuit element values for said interconnect structure, said element values for each matrix including inductance L and inverse capacitance P,
obtain an adjacency matrix A associated with said interconnect structure,
store said matrices X, Y, and A in said memory, and
perform numerical integration to solve first and second equations each including as a factor a product of the inverse of said matrix X (X^{-1}) and at least one other matrix, said first equation including X^{-1}Y, X^{-1}A, and X^{-1}P, and said second equation including X^{-1}A and X^{-1}P.
2. The system of claim 1 , wherein said matrices X and Y each contain resistance values in addition to L and P.
3. The system of claim 2 , wherein said first equation is substantially of the form,
i_l^{k+1} = X^{-1}Y i_l^k + X^{-1}A v_n^k + (h/4) X^{-1}A P (I_s^{k+1} + I_s^k),

where v_n and i_l are node voltages and inductor currents, respectively, A is an adjacency matrix for the circuit, and I_s is a current source vector, and wherein said second equation is substantially of the form

X^{-1}A v_n^{k+1} = X^{-1}A v_n^k − (h/2) X^{-1}A P A^T (i_l^{k+1} + i_l^k) + (h/2) X^{-1}A P (I_s^{k+1} + I_s^k).
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation application of patent application Ser. No. 12/852,942, filed on Aug. 9, 2010, now U.S. Pat. No. 8,336,014 to Jain et al., issued Dec. 18, 2012, which is a divisional application of patent application Ser. No. 11/593,465, filed on Nov. 6, 2006, now U.S. Pat. No. 7,774,725 to Jain et al., issued Aug. 10, 2010, which claims the benefit of Provisional Patent Application No. 60/733,460, filed Nov. 4, 2005, and Provisional Patent Application No. 60/740,990, filed Nov. 30, 2005, which applications are hereby incorporated by reference along with all references cited therein.
GOVERNMENT RIGHTS
This invention was made with government support under Contract/Grant No. NCC 21363 awarded by the National Aeronautics and Space Administration (NASA), under Contract/Grant Nos. CCR9984553 and CCR0203362 awarded by the National Science Foundation, and under Contract/Grant No. USAFFA865004D2409 awarded by the United States Air Force Research Laboratories. The government has certain rights in the invention.
TECHNICAL FIELD OF THE INVENTION
The present invention relates generally to electrical circuit modeling and simulation techniques and, more particularly, to methods for simulating interconnect effects in very large scale integrated circuits.
BACKGROUND OF THE INVENTION
With aggressive technology scaling, the accurate and efficient modeling and simulation of interconnect effects has become (and continues to be) a problem of central importance. In a three-dimensional interconnect structure there can be significant amounts of coupling, both inductive and capacitive, between interconnects. Models that capture these effects tend to involve large matrices, resulting in extraordinary demands on memory. Simulation with these models requires prohibitive amounts of computation.
While all coupling effects in theory extend without bound, it is well recognized that, in practice, the effects of capacitive coupling, and to some extent those of inductive coupling, can be assumed to be local without much sacrifice in accuracy. Practical modeling and simulation techniques exploit this localization to significantly reduce storage and computational costs. For practical interconnect structures, the capacitance matrix C and the inverse of the inductance matrix K = L^{-1} turn out to be (approximately) sparse. A number of techniques exploit the sparsity in K at the extraction level. Exploiting the sparsity of C and K in simulation, however, is much less straightforward. The main problem is that simulation requires terms that not only involve the sparsified matrices C and K, but also inverses of terms that involve them; these inverses are dense in general.
The Modified Nodal Analysis (MNA) of interconnect structures such as the one shown in FIG. 1 yields equations of the form
G̃ x + C̃ ẋ = b,

where

G̃ = [ 𝒢  A_l^T ; −A_l  0 ],  C̃ = [ 𝒞  0 ; 0  L ],  x = [ v_n ; i_l ],  b = [ A_i^T I_s ; 0 ],  𝒢 = A_g^T R^{-1} A_g,  and  𝒞 = A_c^T C A_c.  (1)
R is the resistance matrix. The matrices 𝒢, L and C are the conductance, inductance and capacitance matrices respectively, with corresponding adjacency matrices A_g, A_l and A_c. I_s is the current source vector with adjacency matrix A_i, and v_n and i_l are the node voltages and inductor currents respectively. With n denoting the number of inductors, we note that

L, C, R ∈ R^{n×n},  𝒞, 𝒢 ∈ R^{2n×2n}.
A standard algorithm for the numerical integration of differential equations such as (1) is the trapezoidal method. Consider a uniform discretization of the time axis with resolution h. Then, using the notation x^k = x(kh), and the approximations

(d/dt) x(t) |_{t=kh} ≈ (x^{k+1} − x^k)/h  and  x^k ≈ (x^{k+1} + x^k)/2

over the interval [kh, (k+1)h], we may solve for x^{k+1} in terms of x^k by solving the equation

( G̃/2 + C̃/h ) x^{k+1} = −( G̃/2 − C̃/h ) x^k + (b^{k+1} + b^k)/2.  (2)
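As a concrete illustration of the trapezoidal update (2), consider the scalar analogue g·x + c·ẋ = b(t), for which the matrix solve collapses to a division. The sketch below is ours, for intuition only (the names g, c, h and the RC values are illustrative, not from the patent); the matrix case replaces the scalar division by a linear solve at each step.

```python
# Trapezoidal integration of g*x + c*dx/dt = b(t), the scalar analogue of (2):
# (g/2 + c/h) x_{k+1} = -(g/2 - c/h) x_k + (b_{k+1} + b_k)/2.
import math

def trapezoidal(g, c, b, h, x0, steps):
    """Advance x with the trapezoidal rule; b is the source as a function of t."""
    xs = [x0]
    for k in range(steps):
        lhs = g / 2.0 + c / h
        rhs = -(g / 2.0 - c / h) * xs[-1] + (b((k + 1) * h) + b(k * h)) / 2.0
        xs.append(rhs / lhs)
    return xs

# Zero-input RC decay: g*x + c*x' = 0 has exact solution x(t) = x0 * exp(-g*t/c).
g, c, h = 1.0, 1.0, 0.01
xs = trapezoidal(g, c, lambda t: 0.0, h, 1.0, 100)
exact = math.exp(-g * 1.0 / c)   # exact value at t = 1.0
```

For this step size the trapezoidal result at t = 1.0 agrees with the exact exponential to well below the discretization error one would expect from the O(h^2) accuracy of the method.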
A direct implementation of this algorithm requires O(n^3 + pn^2) operations, where p is the number of time steps. The direct implementation ignores the structure of the matrices G̃ and C̃ that is evident in (1); explicitly recognizing this structure yields the so-called Nodal Analysis (NA) equations, used in INDUCTWISE:
U v_n^{k+1} = V v_n^k − 2 A_l^T i_l^k + A_i^T (I_s^{k+1} + I_s^k),  where U = 𝒢 + (2/h)𝒞 + (h/2)S and V = −𝒢 + (2/h)𝒞 − (h/2)S,  (3)

and

2 A_l^T i_l^{k+1} = 2 A_l^T i_l^k + h S (v_n^{k+1} + v_n^k),  (4)
where S = A_l K A_l^T (recall that K = L^{-1}, L being the inductance matrix, with corresponding adjacency matrix A_l, and A_l^T being the transpose of A_l).
The NA equations (3) and (4) enjoy several advantages over the MNA equations (1). The first advantage is that the solution of equations (1), a problem of size 3n, has been divided into two subproblems of sizes 2n and 2n, which yields computational savings with polynomial-time algorithms. Next, it has been observed that with typical VLSI interconnect structures, the matrices K, C and 𝒢 exhibit sparsity. This can be used at the extraction stage to write down (3) and (4) with fewer parameters. Finally, at the simulation stage, the structure of the matrix U defined in (3) (symmetry, positive-definiteness and sparsity) lends itself to the use of fast and sound numerical techniques such as Cholesky factorizations. These advantages have been extensively used to develop INDUCTWISE. For future reference, we note that the computation with INDUCTWISE requires O(n^3 + pn^2) operations, and is usually dominated by O(pn^2).
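The Cholesky-based solve mentioned above can be sketched in a few lines. This is an illustrative dense pure-Python version (production codes use sparse Cholesky): factor a symmetric positive-definite U as U = LL^T, then solve U v = r by forward and back substitution.

```python
# Solve a symmetric positive-definite system U v = r (as in (3)) via Cholesky.
# Dense, illustrative sketch; the matrix and right-hand side are made up.
import math

def cholesky(U):
    """Lower-triangular L with U = L L^T."""
    n = len(U)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(U[i][i] - s)
            else:
                L[i][j] = (U[i][j] - s) / L[j][j]
    return L

def solve_chol(U, r):
    L = cholesky(U)
    n = len(U)
    y = [0.0] * n                       # forward substitution: L y = r
    for i in range(n):
        y[i] = (r[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    v = [0.0] * n                       # back substitution: L^T v = y
    for i in reversed(range(n)):
        v[i] = (y[i] - sum(L[k][i] * v[k] for k in range(i + 1, n))) / L[i][i]
    return v

U = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
r = [1.0, 2.0, 3.0]
v = solve_chol(U, r)
```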
SUMMARY OF THE INVENTION
The approach that is employed is to sparsify the various matrices that underlie the model of interconnects; the resulting approximate models can be represented by far fewer parameters, leading to savings in storage.
The present invention presents methods that systematically take advantage of sparsity in C and K in simulation, achieving a very significant reduction in computation with very little sacrifice in simulation accuracy. The first idea underlying our approach is that if the sparsity pattern of the inverse of a dense matrix is known, the (sparse) inverse can be computed very efficiently. We take advantage of this fact by writing the simulation equations in terms of L and P = C^{-1}. The most computationally intensive step in the simulation of a system formulated in this fashion reduces to matrix-vector multiplication involving a sparse matrix. We also show that savings with sparse matrix-vector multiplication can be obtained with simulation using K = L^{-1} and C as well, but to a lesser extent.
The RLP formulation is extended to include nonlinear devices, without sacrificing the computational benefits achieved due to the sparsity of the linear system. It should be noted that the A matrix involved in the solution of the linear system is constant throughout the simulation. In contrast, the A matrix involved in solving the nonlinear system changes in each simulation step. However, that A matrix is sparse. Due to the sparse and time-varying nature of the problem at hand, Krylov-subspace-based iterative methods can be used for efficient simulation. Our second contribution is to introduce a novel preconditioner constructed based on the sparsity structure of the nonlinear system. The inverse of the preconditioner has a compact representation in the form of a Hadamard product, which facilitates not only the fast computation of the inverse, but also fast dense matrix-vector products.
The objects and advantages of the present invention will be more apparent upon reading the following detailed description in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a distributed model of a typical three-dimensional VLSI interconnect structure that may be simulated with the methods of the present invention.
FIG. 2 shows the average sparsity index of the matrices U^{-1}V, U^{-1}A_l^T, U^{-1}A_i^T and U^{-1}S, for a structure, as a function of h for various values of ε.
FIG. 3 shows the average sparsity index of the matrices X^{-1}Y, X^{-1}A and X^{-1}AP, for a structure, as a function of h for a sparsity threshold of ε = 0.001, as compared with the average sparsity index of the matrices encountered in the GKC-algorithm.
FIG. 4(a) shows the significant entries (shown darker) of the adjacency matrix A for a structure with 1500 conductors.
FIGS. 4(b) and 4(c) show the significant entries (shown darker) of W^{-1} and X^{-1}, respectively, for the structure in FIG. 4(a).
FIG. 5 shows the voltage waveforms, obtained from SPICE and ExactRLP, of the active line and the seventh line of a 100-conductor circuit.
FIG. 6 shows the voltage waveforms, obtained through INDUCTWISE, ExactRLP, RLP, and GKC, of the active line and the seventh line of a 600-conductor circuit.
FIGS. 7 and 8 show plots of the RMSE for the active and the seventh lines as a function of threshold value for a 600-conductor circuit.
FIG. 9 shows the sparsity structure (nonzero entries shown darker) of the A matrix for an exemplary circuit of parallel wires driving a bank of inverters.
FIG. 10 shows an exemplary preconditioner matrix that may be used with the exemplary circuit of FIG. 9 .
FIG. 11 shows the sparsity pattern (nonzero entries shown darker) of matrix A of a circuit having only nonlinear devices and no interconnects.
FIG. 12 shows average sparsity versus circuit size.
FIG. 13 shows the voltage waveforms obtained through SPICE, ExactRLP, and SASIMI.
FIG. 14 shows a two-dimensional nonuniform spatial grid for a nanotransistor.
FIG. 15 shows the ratio of memory consumption of the algorithm in [9] as compared to ours for varying division sizes (N y /D).
FIG. 16 shows the ratio of memory consumption of the algorithm in [9] as compared to ours for a varying number of divisions (D).
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated device and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
While significant storage and computational advantages accrue with INDUCTWISE, we note that the sparsity of U has not been fully taken advantage of at the level of linear algebra (beyond the possible use of sparse Cholesky factorizations) in the numerical solution of (3). In particular, with the formulation used by INDUCTWISE, while the matrix U is sparse, its inverse is dense. Thus, trapezoidal numerical integration, at first glance, entails matrix-vector multiplies with a dense matrix at each time step. However, it has been observed that the matrices U^{-1}V (where V is defined in (3)), U^{-1}A_l^T, U^{-1}A_i^T and U^{-1}S are approximately sparse, and this information can be used to significantly reduce the computation as follows. Rewrite (3) and (4) as
v_n^{k+1} = U^{-1}V v_n^k − 2 U^{-1}A_l^T i_l^k + U^{-1}A_i^T (I_s^{k+1} + I_s^k),

2 U^{-1}A_l^T i_l^{k+1} = 2 U^{-1}A_l^T i_l^k + h U^{-1}S (v_n^{k+1} + v_n^k).
Precompute and store the sparsified matrices U^{-1}V, U^{-1}A_l^T, U^{-1}A_i^T and U^{-1}S. Then, every time step in the trapezoidal integration scheme requires only sparse matrix-vector multiplies. We will henceforth refer to this technique as the GKC-algorithm (as the computations are done with the conductance, the inverse of the inductance and the capacitance as the parameters).
In order to quantify the computational savings obtained with the GKC-algorithm over INDUCTWISE, we define the "sparsity index" μ_ε(A) of a matrix A as the ratio of the number of entries of A with absolute value less than ε to the total number of entries. Then, the computation required for each iteration with the GKC-algorithm, with some appropriate value of ε, is O((1−ν)n^2), where ν is the minimum of the sparsity indices of the matrices U^{-1}V, U^{-1}A_l^T, U^{-1}A_i^T and U^{-1}S. The value of ν can be expected to depend on the threshold ε for detecting sparsity, as well as on the time step size h. FIG. 2 shows the average sparsity index of the matrices U^{-1}V, U^{-1}A_l^T, U^{-1}A_i^T and U^{-1}S, for a structure with three parallel planes consisting of 600 conductors, as a function of h for various values of ε. The typical value of h used in solving the MNA equations for VLSI interconnects is 0.1 picoseconds. With such values of h and ε = 0.001, it can be seen that ν ≈ 0.8. Thus the total computation time with the GKC-algorithm is approximately a fifth of that required by INDUCTWISE.
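The sparsity index defined above is a direct count and can be transcribed into a few lines. The sketch below is a literal implementation of the definition; the small example matrix is ours.

```python
# "Sparsity index" mu_eps(A): fraction of entries of A with absolute value < eps,
# transcribed directly from the definition in the text.
def sparsity_index(A, eps):
    total = sum(len(row) for row in A)
    small = sum(1 for row in A for a in row if abs(a) < eps)
    return small / total

A = [[1.0, 1e-5, 0.0],
     [2e-4, 3.0, 1e-2],
     [0.0, 0.0, 5.0]]
mu = sparsity_index(A, 0.001)   # 5 of the 9 entries fall below the threshold
```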
We now explore an alternative formulation of the MNA equations that uses the resistance, the inductance and the inverse of the capacitance matrix. For typical interconnect structures, such as the one shown in FIG. 1, we can manipulate the MNA equations (2) to obtain
X i_l^{k+1} = Y i_l^k + A v_n^k + (h/4) A P (I_s^{k+1} + I_s^k),  where X = L/h + R/2 + (h/4) A P A^T and Y = L/h − R/2 − (h/4) A P A^T,  (5)

and

v_n^{k+1} = v_n^k − (h/2) P A^T (i_l^{k+1} + i_l^k) + (h/2) P (I_s^{k+1} + I_s^k),  (6)
where P is the inverse capacitance matrix, i.e., P = C^{-1}, and A is the adjacency matrix of the circuit, obtained by first adding A_g and A_l and then removing zero columns (these correspond to intermediate nodes, representing the connection of a resistance to an inductance). When compared with the NA equations (3) and (4), we see that the number of state variables has been halved. Compared to INDUCTWISE, this represents immediate savings. For future reference, we will term the technique of directly solving (5) and (6) the "ExactRLP" algorithm.
In contrast with the GKC-algorithm, it turns out here that X is dense, but with an inverse that is approximately sparse. Thus, windowing techniques such as those employed by INDUCTWISE during the extraction stage to obtain a sparsified matrix K can be employed here to quickly compute a sparsified X^{-1}. (Details of the windowing techniques will be described below.) Moreover, the matrices X^{-1}Y, X^{-1}A and X^{-1}AP turn out to be approximately sparse. Thus, paralleling the development of the GKC-algorithm, we have the following RLP-algorithm:
Rewrite (5) and (6) as
i_l^{k+1} = X^{-1}Y i_l^k + X^{-1}A v_n^k + (h/4) X^{-1}A P (I_s^{k+1} + I_s^k),

X^{-1}A v_n^{k+1} = X^{-1}A v_n^k − (h/2) X^{-1}A P A^T (i_l^{k+1} + i_l^k) + (h/2) X^{-1}A P (I_s^{k+1} + I_s^k).
Precompute and store the sparsified matrices X^{-1}Y, X^{-1}A and X^{-1}AP. Again, every time step in the trapezoidal integration scheme requires only sparse matrix-vector multiplies. As with the GKC-algorithm, the total computation with the RLP-algorithm is dominated by O((1−γ)n^2), where γ is the minimum of the sparsity indices of the matrices X^{-1}Y, X^{-1}A and X^{-1}AP.
FIG. 3 shows the average sparsity index of the matrices X^{-1}Y, X^{-1}A and X^{-1}AP, for a structure with three parallel planes consisting of 600 conductors, as a function of h for a sparsity threshold of ε = 0.001, as compared with the average sparsity index of the matrices encountered in the GKC-algorithm. It is clear that the matrices encountered in the RLP-algorithm exhibit much higher sparsity over a wide range of time steps. In particular, for h = 0.1 ps, it can be seen that γ ≈ 0.9. Thus the total computation time with the above RLP-algorithm is approximately one-tenth of that required by the RLP formulation that does not use sparsity information. When compared to the GKC-algorithm and INDUCTWISE, which use twice as many state variables, the amount of computation required by the RLP-algorithm is approximately one-eighth and one-fiftieth, respectively.
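The precompute-then-multiply pattern described above can be sketched compactly: sparsify each precomputed matrix once (keeping only entries at or above the threshold, stored row-compressed), then each time step costs only sparse matrix-vector products. The representation, names and threshold below are illustrative, not the patent's data structures.

```python
# Per-timestep work in the RLP-style scheme: matrices such as X^{-1}Y are
# sparsified once before the time loop, and each step is a sparse mat-vec.
def sparsify(A, eps):
    """Compressed rows: keep only entries with |entry| >= eps as (col, val) pairs."""
    return [[(j, a) for j, a in enumerate(row) if abs(a) >= eps] for row in A]

def spmv(S, x):
    """Sparse matrix-vector product with compressed rows S."""
    return [sum(val * x[j] for j, val in row) for row in S]

A = [[2.0, 1e-9, 0.0],
     [0.0, 3.0, 1e-8],
     [1.0, 0.0, 4.0]]
S = sparsify(A, 1e-6)        # done once, before the time loop
y = spmv(S, [1.0, 2.0, 3.0])  # done at every time step
```

The cost of each `spmv` call is proportional to the number of retained entries, i.e. O((1−γ)n^2) in the notation of the text.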
We now provide the details on the fast inversion of X. Assume for simplicity that the sparsity pattern of X^{-1} is known, deferring for later the problem of detecting this sparsity pattern. Then, manipulations of only a subset of the entries of the X matrix (rather than the entire X matrix) can be used to compute the inverse matrix. To briefly illustrate the idea, consider the example when X ∈ R^{11×11} and the 5th row of X^{-1} has the following form:

[0 0 ★ 0 ★ 0 0 ★ 0 0 0],

where ★ denotes the nonzero entries. Then, it can be shown that these nonzero entries can be computed exactly from the second row of the inverse of the following 3×3 matrix obtained from X:
[ X_33  X_35  X_38
  X_53  X_55  X_58
  X_83  X_85  X_88 ].
More generally, suppose that there are α_i nonzero entries in the ith row of X^{-1}. By following a procedure as above, the ith row of X^{-1} can be computed by inverting an α_i × α_i matrix. Thus, the overall computation in determining X^{-1} is O(Σ_i α_i^3). It is typical with VLSI interconnects that α_i is a small constant. Thus, if X^{-1} is exactly sparse with a known sparsity pattern, it can be computed in O(n) operations from X. Table 1 gives the time taken for inversion for different circuit sizes.
TABLE 1
Inversion time in MATLAB (in seconds)

No. of conductors     500    1000    2000     5000
Direct Inversion     0.29    2.18   16.87   260.68
Fast Inversion       0.79    1.48    2.93    10.17
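The row-wise inversion scheme above can be sketched as follows. If row i of X^{-1} is nonzero only on an index set S, then that row restricted to S equals the corresponding row of the inverse of the submatrix of X indexed by S. The helper names are ours, and for the demonstration X is built as the inverse of a known sparse matrix M, so that X^{-1} = M exactly and the recovered row can be checked.

```python
# Recover one row of X^{-1} from its known sparsity pattern by inverting only a
# small submatrix of X, as described in the text. Illustrative sketch.
def gauss_inverse(A):
    """Dense inverse by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

def inverse_row_from_pattern(X, i, support):
    """Row i of X^{-1}, assuming its nonzeros lie in `support` (sorted indices)."""
    sub = [[X[r][c] for c in support] for r in support]
    sub_inv = gauss_inverse(sub)
    pos = support.index(i)
    row = [0.0] * len(X)
    for k, c in enumerate(support):
        row[c] = sub_inv[pos][k]
    return row

# Sparse M with a known pattern; X = M^{-1} is dense while X^{-1} = M is sparse.
M = [[4.0, -1.0, 0.0, 0.0],
     [-1.0, 4.0, -1.0, 0.0],
     [0.0, -1.0, 4.0, -1.0],
     [0.0, 0.0, -1.0, 4.0]]
X = gauss_inverse(M)
row1 = inverse_row_from_pattern(X, 1, [0, 1, 2])  # support of row 1 of M
```

Only an α_i × α_i inverse is formed per row, which is the source of the O(Σ_i α_i^3) cost quoted in the text.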
Thus, there remains the problem of determining the sparsity pattern of X^{-1}. Recall that
X = L/h + R/2 + (h/4) A P A^T.
Let
W = L/h + R/2  and  Z = (h/4) A P A^T.
Then
X^{-1} = W^{-1} − W^{-1} (W^{-1} + Z^{-1})^{-1} W^{-1}.  (7)
For the values of R, L, C and h under consideration, it turns out that
X^{-1} ≈ W^{-1} − W^{-1} Z W^{-1}.  (8)
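A quick numeric sanity check of (7) and (8) can be done in the scalar case, where W and Z are numbers. The values below are made up; they only illustrate that (7) is an exact identity while (8) is a good approximation when Z is small relative to W, as with the R, L, C and h values under consideration.

```python
# Scalar check of (7) and (8) with X = W + Z.
w, z = 2.0, 0.01
x_inv = 1.0 / (w + z)
exact = 1.0 / w - (1.0 / w) * (1.0 / (1.0 / w + 1.0 / z)) * (1.0 / w)   # (7)
approx = 1.0 / w - (1.0 / w) * z * (1.0 / w)                             # (8)
```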
Thus, the significant entries of X^{-1} can be obtained by superposing the significant entries of W^{-1} and the significant entries of W^{-1}ZW^{-1}. The sparsity pattern of W^{-1} can be efficiently determined using techniques available in the literature. Turning next to
W^{-1} Z W^{-1} = (h/4) W^{-1} A P A^T W^{-1},
note that the significant entries of W^{-1}A are obtained by distributing the significant entries of W^{-1} into locations determined by the adjacency matrix A. In summary, we have the following heuristic for predicting the sparsity pattern of X^{-1}: first, determine the significant entries of W^{-1} by determining the set of segments that are inductively coupled with a given segment; in addition, spread the nonzero entries of W^{-1} to locations suggested by the adjacency matrix to find the remaining significant entries.
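The pattern-prediction heuristic above can be sketched with boolean matrices: the predicted pattern of X^{-1} is the union of the pattern of W^{-1} and the pattern of W^{-1} A P A^T W^{-1}, where P is treated as structurally full within A's support so that only boolean products are needed. The patterns below are illustrative, not extracted from any circuit.

```python
# Boolean-pattern sketch of the heuristic: union of pattern(W^{-1}) with the
# pattern of W^{-1} A (A^T W^{-1}), i.e. "spreading" W^{-1} through A.
def bool_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[any(A[i][k] and B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def bool_or(A, B):
    return [[a or b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def predict_pattern(Winv, A):
    WA = bool_mul(Winv, A)                  # pattern of W^{-1} A
    AT = [list(col) for col in zip(*A)]     # pattern of A^T
    spread = bool_mul(WA, bool_mul(AT, Winv))
    return bool_or(Winv, spread)

# Tiny example: banded W^{-1} pattern and a 4x2 adjacency pattern.
Winv = [[True, True, False, False],
        [True, True, True, False],
        [False, True, True, True],
        [False, False, True, True]]
A = [[True, False],
     [False, False],
     [False, True],
     [False, False]]
pattern = predict_pattern(Winv, A)
```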
These ideas are illustrated via a three-dimensional interconnect structure of three parallel planes with 1500 conductors. In FIG. 4(a), the significant entries of the adjacency matrix A are shown darker. FIGS. 4(b) and 4(c) show the entries of W^{-1} and X^{-1}, respectively, again with the significant entries shown darker.
We emphasize that the actual computation of the significant entries of X^{-1} proceeds via the technique wherein, given knowledge of the sparsity pattern resident in X^{-1}, the actual entries can be directly and efficiently computed. Thus, (7) and (8) are not used for computation, but only to motivate the heuristic for efficiently determining the sparsity pattern of X^{-1}.
We implemented the INDUCTWISE, RLP and GKC algorithms in MATLAB on a PC with an Intel Pentium IV 2.4 GHz processor. In order to quantify the simulation accuracy of the various methods, we used as the benchmark the ExactRLP simulation (recall that this is the direct simulation of equations (5) and (6)). (While SPICE simulations would have been more natural to use as the benchmark, we found that the computation time quickly grew to make them impractical; for a modest-size circuit comprising 100 parallel conductors, SPICE simulation took 350 seconds as compared to 1.08 seconds with the ExactRLP algorithm, with no detectable simulation error, as shown in FIG. 5.)
Simulations were done on a three-dimensional structure of three parallel planes, with each plane consisting of busses with parallel conductors, with wire lengths of 1 mm and a cross section of 1 μm × 1 μm. The wire separation was taken to be 1 μm; each wire was divided into ten segments. A periodic 1 V square wave with rise and fall times of 6 ps each was applied to the first signal on the lowest plane, with a time period of 240 ps. All the other lines were assumed to be quiet. For each wire, the drive resistance was 10 Ω and the load capacitance was 20 fF. A time step of 0.15 ps was taken and the simulation was performed over 330 ps (or 2200 time steps).
As expected, with all methods, there is an inherent tradeoff between simulation accuracy and cost (CPU time and memory). We first present results comparing the accuracy in simulating the voltage waveforms at the far end of the first (active) and the seventh (victim or quiet) lines. The metric for comparing the simulations is the relative mean square error (RMSE) defined as
( Σ_i (v_i − ṽ_i)^2 ) / ( Σ_i v_i^2 ),
where v and ṽ denote the waveforms obtained from ExactRLP and the algorithm under consideration, respectively. A threshold value of 0.001 was chosen for the sparsification in the RLP and GKC algorithms, as well as for the sparsification of L^{-1} in INDUCTWISE. Table 2 presents a summary of the results from the study of simulation accuracy.
TABLE 2
RMSE comparison.

               Active Line                        7th line
Size    INDUCTWISE    RLP      GKC       INDUCTWISE    RLP      GKC
 300      .0013      .0010    .0017       .1622       .1266    .1960
 600      .0014      .0011    .0014       .4381       .3452    .4651
 900      .0006      .0007    .0008       .3222       .3076    .4078
1200      .0004      .0004    .0004       .2382       .2656    .2992
1500      .0003      .0003    .0004       .2021       .2200    .2336
It can be seen that the simulation accuracy of the RLP and GKC algorithms is comparable to that of INDUCTWISE, with a marginally inferior performance as measured by the RMSE. A plot of the voltage waveforms at the far end of the active line and the 7th line, obtained by the INDUCTWISE, RLP, and GKC algorithms, is shown in FIG. 6.
We briefly explore the influence of the choice of the threshold for determining sparsity. A higher threshold can be expected to decrease the computational and memory requirements, though with some loss in simulation accuracy. FIGS. 7 and 8 show plots of the RMSE for the active and seventh lines as a function of the threshold value, again for a circuit of 600 conductors. Any value of the threshold below 0.001 appears to be a reasonable choice.
We now turn to a comparison of the computational and memory requirements of the INDUCTWISE, RLP and GKC algorithms. Table 3 summarizes the findings.
TABLE 3
Run time and memory comparisons

                    Time (in sec)                            Memory (in MB)
Size   ExactRLP   INDUCTWISE     RLP      GKC      ExactRLP   INDUCTWISE    RLP     GKC
 300     14.30       74.34       4.09     18.99       2.95       11.61      1.02     6.61
 600     76.21      422.00      16.28     77.32      11.61       46.20      2.36    15.38
 900    244.14     1133.40      33.21    162.08      26.03      103.84      4.09    31.68
1200    513.08     3051.10      60.53    312.93      46.20      184.56      6.16    52.22
1500    827.50     4682.00      92.16    813.00      72.14      288.24      7.60    86.00
It can be seen that for a circuit consisting of 1200 conductors, RLP is about nine times faster than ExactRLP, and fifty times faster than INDUCTWISE. The GKC algorithm is about twice as fast as ExactRLP, and ten times faster than INDUCTWISE. ExactRLP is about six times as fast as INDUCTWISE. With larger circuit sizes, the advantage of RLP over INDUCTWISE continues to grow, while the advantage of the ExactRLP and GKC algorithms over INDUCTWISE grows slightly. An explanation for the slower performance of INDUCTWISE compared to ExactRLP is that the number of variables with the latter algorithm is one-half of that with the former. The same trends are observed with the memory requirements.
VLSI interconnect structures with nonlinear devices can also be analyzed using the Modified Nodal Analysis (MNA) formulation, yielding equations of the form
G̃ x + C̃ ẋ = b,

where

G̃ = [ 𝒢  A_l^T ; −A_l  0 ],  C̃ = [ 𝒞  0 ; 0  L ],  x = [ v_n ; i_l ],  b = [ A_i^T I_s + I_nl ; 0 ],  𝒢 = A_g^T R^{-1} A_g,  and  𝒞 = A_c^T C A_c.  (9)
R denotes the resistance matrix. The matrices 𝒢, L and C are the conductance, inductance and capacitance matrices respectively, with corresponding adjacency matrices A_g, A_l and A_c. I_s is the current source vector with adjacency matrix A_i, and v_n and i_l are the node voltages and inductor currents respectively.
The vector I_nl captures the effect of the nonlinear loads and depends on the node voltages as I_nl = f(v_n), where f is a function that varies depending on the load characteristics and in general can be a nonlinear function.
Utilizing the trapezoidal method to numerically solve (9) requires the solution of a set of linear and nonlinear equations:
( G̃/2 + C̃/h ) x^{k+1} = −( G̃/2 − C̃/h ) x^k + (b^{k+1} + b^k)/2  (10)

and

I_nl^{k+1} = f(v_n^{k+1}).  (11)
The nonlinearity in the above set of equations can be handled by the standard Newton-Raphson technique of linearizing (11) and iterating until convergence: Equation (10) is a linear equation of the form L(x) = 0, where we have omitted the iteration index k for simplicity. Equation (11) is a nonlinear equation of the form g(x) = 0. Let g(x) ≈ G(x) be a linear approximation of g(x), linearized around some x = x_0. Then, simultaneously solving L(x) = 0 and G(x) = 0 yields numerical values for x and hence v_n. These values are then used to obtain a new linear approximation g(x) ≈ G_new(x), and the process is repeated until convergence. A good choice of the point x_0 for the initial linearization at the kth time step is given by the value of v_n from the previous time step.
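The linearize-and-iterate loop just described can be sketched on a hypothetical scalar circuit of our own (not from the patent): a source Vs through a resistance R drives a diode-like nonlinear load, so the node voltage v satisfies (Vs − v)/R − Is·(exp(v/Vt) − 1) = 0. Each pass linearizes the nonlinear term around the current iterate, exactly in the spirit of handling (10) and (11) together.

```python
# Newton-Raphson on a scalar node equation with a diode-like nonlinear load.
# All component values are illustrative.
import math

def newton_node_voltage(Vs, R, Is, Vt, v0=0.0, tol=1e-12, max_iter=100):
    v = v0
    for _ in range(max_iter):
        f = (Vs - v) / R - Is * (math.exp(v / Vt) - 1.0)   # residual L(v)=0
        df = -1.0 / R - (Is / Vt) * math.exp(v / Vt)       # its derivative
        v_new = v - f / df                                  # linearize and solve
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

v = newton_node_voltage(Vs=1.0, R=1000.0, Is=1e-12, Vt=0.025)
residual = (1.0 - v) / 1000.0 - 1e-12 * (math.exp(v / 0.025) - 1.0)
```

In the full method the scalar division by `df` becomes the solution of the structured linear system discussed below.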
A direct implementation of this algorithm requires O(pq n_1^3) operations, where p is the number of time steps, q is the maximum number of Newton-Raphson iterations in each time step, and n_1 = 3N.
We begin by decomposing C, A, A_i, I_nl, and v_n as:
C = [ C_cc  C_cv ; C_vc  C_vv ],  A^T = [ A_1^T  A_2^T ],  A_i^T = [ A_i1^T  A_i2^T ],  I_nl = [ 0 ; I_v ],  v_n = [ v_c ; v_v ].
Here C_vv denotes the submatrix of the capacitance matrix that changes during the simulation, while all other submatrices remain constant. The matrix C_vv captures the drain, gate and bulk capacitances of all devices, which are voltage-dependent, while C_cc and C_cv are the capacitance matrices that arise from interconnects and are hence constant.
For typical interconnect structures, the above decomposition allows us to manipulate the MNA equations (10) and (11):
X i_l^{k+1} = Y i_l^k + A_1 v_c^k + (h/4) A_1 P_cc A_i1^T (I_s^{k+1} + I_s^k) − A_1 P_cc C_cv (v_v^{k+1} − v_v^k) + (A_2/2)(v_v^{k+1} + v_v^k),  where X = L/h + R/2 + (h/4) A_1 P_cc A_1^T and Y = L/h − R/2 − (h/4) A_1 P_cc A_1^T,  (12)

v_c^{k+1} = v_c^k − (h/2) P_cc A_2^T (i_l^{k+1} + i_l^k) + (h/2) P_cc A_i1^T (I_s^{k+1} + I_s^k) − P_cc C_cv (v_v^{k+1} − v_v^k),  (13)

C_vv v_v^{k+1} = C_vv v_v^k − (h/2) A_2^T (i_l^{k+1} + i_l^k) + (h/2) A_i2^T (I_s^{k+1} + I_s^k) − C_vc (v_c^{k+1} − v_c^k) + (h/2)(I_v^{k+1} + I_v^k),  (14)

I_v^{k+1} = f(v_v^{k+1}).  (15)
Here r denotes the size of the interconnect structure connected directly to the nonlinear circuit, and with l = N − r we note that

C_cc ∈ R^{l×l},  C_vv ∈ R^{r×r}.
P_cc = C_cc^{-1} is the inverse capacitance matrix, and A is the adjacency matrix of the circuit. A is obtained by first adding A_g and A_l and then removing zero columns (these correspond to intermediate nodes, representing the connection of a resistance to an inductance).
Thus far, the analysis is similar to that of the linear-element structures described above, with the major difference being the addition of (14) and (15), which account for the nonlinear elements. We will show here how the linear techniques can be extended to handle the case when nonlinear elements are present.
For future reference, we will call the technique of directly solving (12), (13), (14), and (15) the "ExactRLP" algorithm. It can be shown that the computational complexity of the ExactRLP algorithm is O(l^3 + pq(l^2 + r^3)). For large VLSI interconnect structures we have l >> r, reducing the complexity to O(l^3 + pql^2).
We now turn to the fast solution of equations (12) through (15). Recall that the nonlinear equation (15) is handled via the Newton-Raphson technique. This requires, at each time step, linearizing (15) and substituting it into (14). The resulting set of linear equations has a very specific structure:
Equations (12) and (13) are of the form Ax = b, where A is fixed (does not change with the time step). Moreover, A^{-1} is typically approximately sparse. Equation (14) (after the substitution of the linearized (15)) is again of the form Ax = b, where the matrix A is obtained by adding C_vv and the coefficients of the first-order terms in the linearized equation (15). Recall that the matrix C_vv captures the drain, gate and bulk capacitances of all devices. It also contains the interconnect coupling capacitances between the gates and drains of different nonlinear devices in the circuit. As each nonlinear device is connected to only a few nodes and the capacitive effects of interconnects are localized, the A matrix is observed to be sparse in practice. Note that A changes with each Newton-Raphson iteration and with the time step.
Thus the key computational problem is the solution of a sparse, time-varying set of linear equations, coupled with a large fixed system of linear equations Ax = b with A^{-1} sparse.
Krylov subspace methods have been shown to work extremely well for sparse, time-varying linear equations. Specifically, the GMRES (Generalized Minimum Residual) method of Saad and Schultz allows the efficient solution of a sparse, possibly nonsymmetric, linear system to within a prespecified tolerance. This method performs a directional search along the orthogonal Arnoldi vectors which span the Krylov subspace of A. That is, given an initial guess x_0 and corresponding residual r_0 = b − Ax_0, orthogonal vectors {q_1, q_2, . . . , q_m} are generated with the property that they span S_m, the solution search space at iteration m.
S_m = x_0 + span{r_0, Ar_0, . . . , A^{m−1}r_0} = x_0 + κ(A, r_0, m) ⊆ span{q_1, q_2, . . . , q_m}.  (16)
These vectors are chosen according to the Arnoldi iteration: AQ_m = Q_{m+1}H_m, where Q_m = {q_1, q_2, . . . , q_m} is orthogonal and H_m ∈ R^{(m+1)×m} is an upper Hessenberg matrix.
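The Arnoldi iteration referenced above can be sketched directly: build orthonormal vectors q_1, . . . , q_m spanning the Krylov subspace κ(A, r_0, m) together with the upper Hessenberg H satisfying AQ_m = Q_{m+1}H_m. The pure-Python version below is ours and only shows the basis construction; a full GMRES would follow with a small least-squares solve in this basis. The test matrix is illustrative.

```python
# Arnoldi iteration with modified Gram-Schmidt: A Q_m = Q_{m+1} H_m.
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def arnoldi(A, r0, m):
    beta = math.sqrt(sum(v * v for v in r0))
    Q = [[v / beta for v in r0]]          # q_1 = r0 / ||r0||
    H = [[0.0] * m for _ in range(m + 1)]
    for j in range(m):
        w = matvec(A, Q[j])
        for i in range(j + 1):            # orthogonalize against q_1..q_{j+1}
            H[i][j] = sum(wi * qi for wi, qi in zip(w, Q[i]))
            w = [wi - H[i][j] * qi for wi, qi in zip(w, Q[i])]
        H[j + 1][j] = math.sqrt(sum(v * v for v in w))
        Q.append([v / H[j + 1][j] for v in w])   # assumes no breakdown
    return Q, H

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 1.0]]
Q, H = arnoldi(A, [1.0, 0.0, 0.0, 0.0], 3)
```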
For these methods the choice of a preconditioner matrix M, which is an approximation of A, can greatly affect the convergence. A good preconditioner should have the following two properties: first, M^{-1}A ≈ I; second, it must accommodate a fast solution of an equation of the form Mz = c for a general c.
FIG. 9 depicts the sparsity structure of the A matrix for an example circuit of parallel wires driving a bank of inverters. For such a sparsity structure, an appropriate choice of preconditioner could be of the form shown in FIG. 10. Although we have chosen a circuit with only inverters for simplicity, a more complicated circuit structure would simply distribute the entries around the diagonal and off-diagonal bands and lead to possibly more off-diagonal bands. To see this, consider an extreme case where the circuit under consideration has only nonlinear devices and does not comprise interconnects. In this case the sparsity pattern of the A matrix is as shown in FIG. 11. Therefore, the chosen preconditioner would encompass not only the sparsity structure shown in FIG. 9 but also other sparsity patterns that might arise with the analysis of more complicated nonlinear devices. Correspondingly, the structure of the preconditioner (see FIG. 10) would have additional bands.
Matrices of the form shown in FIG. 10 have the following two properties, which make them an ideal choice of preconditioner.
The inverse of the preconditioner matrix can be computed efficiently in linear time, O(r), where r denotes the size of the interconnect structure directly connected to nonlinear devices, by exploiting the Hadamard product formulation. It can also be shown that this formulation facilitates fast matrix-vector products, again in linear time O(r), which arise while solving linear systems of equations with the preconditioner matrix.
A simple example which best illustrates these advantages is a symmetric tridiagonal matrix.
B = \begin{pmatrix} a_1 & -b_1 & & & \\ -b_1 & a_2 & -b_2 & & \\ & \ddots & \ddots & \ddots & \\ & & -b_{n-2} & a_{n-1} & -b_{n-1} \\ & & & -b_{n-1} & a_n \end{pmatrix}   (17)
The inverse of B can be represented compactly as a Hadamard product of two matrices, which are defined as follows:
B^{-1} = \underbrace{\begin{pmatrix} u_1 & u_1 & \cdots & u_1 \\ u_1 & u_2 & \cdots & u_2 \\ \vdots & \vdots & \ddots & \vdots \\ u_1 & u_2 & \cdots & u_n \end{pmatrix}}_{U} \circ \underbrace{\begin{pmatrix} v_1 & v_2 & \cdots & v_n \\ v_2 & v_2 & \cdots & v_n \\ \vdots & \vdots & \ddots & \vdots \\ v_n & v_n & \cdots & v_n \end{pmatrix}}_{V}   (18)
There exists an explicit formula to compute the sequences {u}, {v} efficiently in O(n) operations. In this case, if we are interested in solving a linear system of equations B y = c, we only need to concern ourselves with the matrix-vector product B^{-1} c = y. This computation can also be performed efficiently in O(n) operations as outlined below:
P_u^i = \sum_{j=1}^{i} u_j c_j, \qquad P_v^i = \sum_{j=i}^{n} v_j c_j, \qquad i = 1, \ldots, n,
y_1 = u_1 P_v^1,
y_i = v_i P_u^{i-1} + u_i P_v^i, \qquad i = 2, \ldots, n.   (19)
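As an illustration, the O(n) solve of equation (19) can be sketched for the scalar symmetric tridiagonal case. The recursions used below to generate the sequences {u}, {v} are the scalar form of the block parameterization given later in (32); the function names and test values are ours, purely for illustration.

```python
# Sketch: solve B y = c for a symmetric tridiagonal B (diagonal a[i],
# off-diagonal entries -b[i]) using the Hadamard-product inverse
# (B^-1)_ij = u[min(i,j)] * v[max(i,j)] and the O(n) sums of equation (19).

def hadamard_uv(a, b):
    n = len(a)
    u = [0.0] * n
    u[0] = 1.0
    u[1] = a[0] / b[0]
    for i in range(1, n - 1):
        u[i + 1] = (a[i] * u[i] - b[i - 1] * u[i - 1]) / b[i]
    v = [0.0] * n
    v[n - 1] = 1.0 / (a[n - 1] * u[n - 1] - b[n - 2] * u[n - 2])
    v[n - 2] = v[n - 1] * a[n - 1] / b[n - 2]
    for i in range(n - 3, -1, -1):
        v[i] = (v[i + 1] * a[i + 1] - v[i + 2] * b[i + 1]) / b[i]
    return u, v

def solve(a, b, c):
    n = len(a)
    u, v = hadamard_uv(a, b)
    pu = [0.0] * n                 # P_u^i = sum_{j<=i} u_j c_j (prefix sums)
    run = 0.0
    for i in range(n):
        run += u[i] * c[i]
        pu[i] = run
    pv = [0.0] * n                 # P_v^i = sum_{j>=i} v_j c_j (suffix sums)
    run = 0.0
    for i in range(n - 1, -1, -1):
        run += v[i] * c[i]
        pv[i] = run
    y = [u[0] * pv[0]]             # y_1 = u_1 P_v^1
    for i in range(1, n):
        y.append(v[i] * pu[i - 1] + u[i] * pv[i])
    return y
```

Each pass is a single sweep over the data, so the whole solve is linear in n, matching the O(n) claim above.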
The above formulation for a tridiagonal matrix can easily be extended to handle the more general case where the preconditioner matrix is a zero-padded block tridiagonal matrix (a matrix with zero diagonals inserted between the main diagonal and the nonzero superdiagonal and subdiagonal of a tridiagonal matrix) as in FIG. 10. Elementary row and column block permutations can be performed on such a matrix to reduce it to a block tridiagonal matrix. This is shown with a small example below.
B = \begin{pmatrix} a_1 & 0 & -b_1 & 0 \\ 0 & a_2 & 0 & -b_2 \\ -b_1 & 0 & a_3 & 0 \\ 0 & -b_2 & 0 & a_4 \end{pmatrix} = P \underbrace{\begin{pmatrix} a_1 & -b_1 & 0 & 0 \\ -b_1 & a_3 & 0 & 0 \\ 0 & 0 & a_2 & -b_2 \\ 0 & 0 & -b_2 & a_4 \end{pmatrix}}_{X} P^T,   (20)

where

P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.   (21)

Hence,

B^{-1} = P X^{-1} P^T = \underbrace{\begin{pmatrix} u_1 & 0 & u_1 & 0 \\ 0 & u_2 & 0 & u_2 \\ u_1 & 0 & u_3 & 0 \\ 0 & u_2 & 0 & u_4 \end{pmatrix}}_{U} \circ \underbrace{\begin{pmatrix} v_1 & 0 & v_3 & 0 \\ 0 & v_2 & 0 & v_4 \\ v_3 & 0 & v_3 & 0 \\ 0 & v_4 & 0 & v_4 \end{pmatrix}}_{V}.   (22)
We have not included block matrices for simplicity of presentation; however, the zero-padded block tridiagonal case is a natural extension of the above example. All the entries in the U, V matrices now have to be replaced by blocks, and accordingly the row and column permutations would be replaced by their block counterparts, with an identity matrix of appropriate size replacing the 'ones' in the P matrix. Table 4 gives the comparison between the Incomplete-LU preconditioner and the zero-padded (ZPad) preconditioner. Simulations were done on circuits consisting of busses with parallel conductors driving banks of inverters. 'Size' denotes the number of nonlinear devices. All the results are reported as a ratio of the run time and iteration count (number of iterations for the solution to converge to within a tolerance of 1e-10) of ZPad to those of the Incomplete-LU preconditioner. As can be seen from Table 4, ZPad offers a substantial improvement in run time as compared to the Incomplete-LU preconditioner.
TABLE 4
Preconditioner comparison.

Size          400     800     1600    3200
Run time      0.44    0.42    0.42    0.43
Iterations    5/10    5/10    5/10    5/10
We now turn to the solution of equations (12) and (13). As mentioned earlier, these equations reduce to the form Ax = b with a constant, approximately sparse A^{-1}. A (corresponding to X in (12)) is composed of L, R and P. Each of these matrices has a sparse inverse for typical VLSI interconnects, which then leads to an approximately sparse A^{-1} (note that this argument is used to motivate the sparsity inherent in A^{-1} and cannot be used as a theoretical proof of the same). In addition, this sparsity has a regular pattern, which can be explained on the basis of how the inductance and capacitance matrices are extracted. The distributed RLC effects of VLSI interconnects can be modeled by dividing conductors into small subsets of segments, each of which is aligned. Each of these subsets leads to a sparsity pattern (corresponding to a band in A^{-1}). All the effects, when summed up, lead to an A^{-1} matrix that has a regular sparsity pattern. A window selection algorithm can then be employed to find the sparsity pattern in A^{-1}. It has been recognized in earlier work that this sparsity yields enormous computational savings; it has been shown that an approximate implementation of the ExactRLP algorithm, referred to simply as the "RLP algorithm," provides an order-of-magnitude computational savings with little sacrifice in simulation accuracy.
To proceed, we rewrite (12) and (13) as
i_l^{k+1} = X^{-1} Y i_l^k + X^{-1} A_1 v_c^k + \frac{h}{4} X^{-1} A_1 P_{cc} A_{i1}^T (I_s^{k+1} + I_s^k) - X^{-1} A_1 P_{cc} C_{cv} (v_v^{k+1} - v_v^k) + \frac{A_2}{2} (v_v^{k+1} + v_v^k),   (23)

X^{-1} A_1 v_c^{k+1} = X^{-1} A_1 v_c^k - \frac{h}{2} X^{-1} A_1 P_{cc} A_2^T (i_l^{k+1} + i_l^k) + \frac{h}{2} X^{-1} A_1 P_{cc} A_{i1}^T (I_s^{k+1} + I_s^k) - X^{-1} A_1 P_{cc} C_{cv} (v_v^{k+1} - v_v^k).   (24)
Although X is a dense matrix, X^{-1} turns out to be an approximately sparse matrix. Moreover, the matrices X^{-1}Y, X^{-1}A_1, X^{-1}A_1 P_{cc} A_{i1}^T, and X^{-1}A_1 P_{cc} C_{cv} are also approximately sparse. This information can be used to reduce the computation significantly by noting that each step of trapezoidal integration now requires only sparse vector multiplications. Solving sparse (23) and (24) along with (15) and (16) is termed the RLP algorithm (SASIMI). To analyze the computational saving of the approximate algorithm over the ExactRLP algorithm, we define the "sparsity index" of a matrix A as the ratio of the number of entries of A with absolute value less than ε to the total number of entries. The computation required for each iteration of (23) and (24) is then O((1-ν)l^2), where ν is the minimum of the sparsity indices of the matrices X^{-1}Y, X^{-1}A_1, X^{-1}A_1 P_{cc} A_{i1}^T, and X^{-1}A_1 P_{cc} C_{cv}. FIG. 12 provides the average sparsity of these matrices for a system with parallel conductors driving a bank of inverters. The sizes in consideration are 100, 200, 500 and 1000. On top of this, the computation time can be reduced to O(l) by using windowing techniques (details in [16]). Hence the computational complexity of RLP is O(pq(1-ν)l^2) as compared to O(pq n_1^3) for the MNA approach.
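To make the sparsity index concrete, a minimal sketch follows (the helper names and threshold handling are our own illustration, not the patent's code): the index ν is the fraction of entries of a matrix with magnitude below ε, and dropping those entries turns each dense matrix-vector product into a sparse one costing O((1-ν)l^2).

```python
# Sketch: compute the sparsity index of a matrix (fraction of entries with
# |entry| < eps) and perform a mat-vec that skips the dropped entries.

def sparsity_index(M, eps):
    total = sum(len(row) for row in M)
    small = sum(1 for row in M for x in row if abs(x) < eps)
    return small / total

def drop_small(M, eps):
    # Per row, keep only (column, value) pairs at or above the threshold.
    return [[(j, x) for j, x in enumerate(row) if abs(x) >= eps] for row in M]

def sparse_matvec(S, v):
    return [sum(x * v[j] for j, x in row) for row in S]
```

The work per product is proportional to the number of retained entries, which is (1-ν) times the total.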
We implemented the ExactRLP and RLP (SASIMI) algorithms in C++. A commercially available version of SPICE with significant speedup over the public-domain SPICE has been used for reporting all results with SPICE. Simulations were done on circuits consisting of busses with parallel conductors driving banks of inverters, with wires of length 1 mm, cross section 1 μm×1 μm, and with a wire separation of 1 μm. A periodic 1V square wave with rise and fall times of 6 ps each was applied to the first signal with a time period of 240 ps. All the other lines were assumed to be quiet. For each wire, the drive resistance was 10Ω. A time step of 0.15 ps was taken and the simulation was performed over 30 ps (or 200 time steps). For the inverters, the W/L ratios of the NMOS and PMOS were taken to be 0.42 μm/0.25 μm and 1.26 μm/0.25 μm respectively.
In order to explore the effect of the number of nonlinear elements relative to the total, three cases were considered. With ρ denoting the ratio of the number of linear elements to that of nonlinear elements, the experiments were performed for ρ equaling 5, 20 and 50. The number of linear elements in the following results is denoted by σ.
We first present results comparing the accuracy in simulating the voltage waveforms at the far end of the first line (after the inverter load). The RMSE is again used for comparing the simulations, defined here as

\sqrt{ \frac{\sum_i (v_i - \tilde{v}_i)^2}{\sum_i v_i^2} },

where v and \tilde{v} denote the waveforms obtained from ExactRLP and SASIMI respectively.
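A direct transcription of this error measure (the function name is ours):

```python
import math

def relative_rmse(v, v_tilde):
    """Relative RMSE between a reference waveform v and an approximation."""
    num = sum((x - y) ** 2 for x, y in zip(v, v_tilde))
    den = sum(x ** 2 for x in v)
    return math.sqrt(num / den)
```

Identical waveforms give 0; the values in Table 5 are of order 10^-3 to 10^-2 on this scale.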
Table 5 presents a summary of the results from the study of simulation accuracy. It can be seen that the simulation accuracy of the ExactRLP algorithm is almost identical to that of SPICE, while SASIMI has a marginally inferior performance as measured by the RMSE. The error values for SASIMI are compared simply with ExactRLP, as it had the same accuracy as SPICE for all the experiments run. A plot of the voltage waveforms at the far end of the active line, obtained from the SPICE, ExactRLP and SASIMI algorithms, is shown in FIG. 13. (The number of conductors in this simulation example is 200.) There is almost no detectable simulation error between the SASIMI, ExactRLP and SPICE waveforms over 200 time steps. To give a better picture, the accuracy results reported are for a larger simulation time of 2200 time steps.
TABLE 5
RMSE comparison.

σ        ρ = 5    ρ = 20   ρ = 50
100      .0054    .0053    .0088
200      .0078    .0052    .0071
500      .0006    .0022    .0001
1000     .0003    .0005    .0004
2000     .0003    .0004    .0004
We now turn to a comparison of the computational requirements of ExactRLP, SASIMI and SPICE. Table 6 summarizes the findings. For a fair comparison, our total simulation time is compared against the transient simulation time for SPICE (i.e., we have not included any of the error check or setup time for SPICE). As can be seen from the table, SASIMI outperforms both the ExactRLP algorithm and SPICE. For the case of 500 conductors with ρ = 50, the ExactRLP algorithm is 390 times as fast as SPICE. SASIMI is about 1400 times faster than SPICE, and more than three times faster than ExactRLP. As can be seen, the computational savings increase as the ratio of linear to nonlinear elements is increased from 5 to 50. The savings also increase with the size of the problem considered. The computational efficiency of SASIMI can be explained by the use of sparsity-aware algorithms for both the linear and nonlinear parts of the problem.
TABLE 6
Run time (in seconds) comparisons.

               ρ = 5                         ρ = 20                        ρ = 50
σ       SPICE      ExactRLP  SASIMI   SPICE      ExactRLP  SASIMI   SPICE      ExactRLP  SASIMI
100     11.96      1.34      1.26     13.73      .27       .21      13.54      .15       .12
200     100.25     3.28      2.68     68.72      .64       .28      67.68      .55       .22
500     3590.12    17.13     4.872    1919.21    13.47     3.01     1790.67    4.58      1.30
1000    >12 hrs    87.75     22.71    >10 hrs    79.07     16.49    >10 hrs    77.56     15.20
2000    >1 day     545.6     78.06    >1 day     526.23    59.33    >1 day     408.54    56.05
The existing methods for finding the inverse of a block tridiagonal matrix suffer from being either numerically unstable or heavily memory intensive, and hence are impractical for problems of very large size (e.g., 10^6 × 10^6).
Consider the twodimensional model of a nanoscale transistor, shown in FIG. 14 . The body of the transistor is projected onto a twodimensional nonuniform spatial grid of dimension N x ×N y , where N x (N y ) denote the number of grid points along the depth (length) of the device. A brief review of the governing physics of this device is provided here.
The Hamiltonian of a valley b for electrons, associated with the device under consideration, is as follows:
H^b(\mathbf{r}) = -\frac{\hbar^2}{2} \left[ \frac{d}{dx}\!\left(\frac{1}{m_x^b}\frac{d}{dx}\right) + \frac{d}{dy}\!\left(\frac{1}{m_y^b}\frac{d}{dy}\right) + \frac{d}{dz}\!\left(\frac{1}{m_z^b}\frac{d}{dz}\right) \right] + V(\mathbf{r}),   (25)
where (m_x^b, m_y^b, m_z^b) are the components of the effective mass in valley b. The equations of motion for the retarded Green's function (G^r) and the less-than Green's function (G^<) are:
\left[ E - \frac{\hbar^2 k_z^2}{2 m_z} - H^b(\mathbf{r}_1) \right] G_b^r(\mathbf{r}_1, \mathbf{r}_2, k_z, E) - \int d\mathbf{r}\, \Sigma_b^r(\mathbf{r}_1, \mathbf{r}, k_z, E)\, G_b^r(\mathbf{r}, \mathbf{r}_2, k_z, E) = \delta(\mathbf{r}_1 - \mathbf{r}_2),   (26)

\left[ E - \frac{\hbar^2 k_z^2}{2 m_z} - H^b(\mathbf{r}_1) \right] G_b^<(\mathbf{r}_1, \mathbf{r}_2, k_z, E) - \int d\mathbf{r}\, \Sigma_b^r(\mathbf{r}_1, \mathbf{r}, k_z, E)\, G_b^<(\mathbf{r}, \mathbf{r}_2, k_z, E) = \int d\mathbf{r}\, \Sigma_b^<(\mathbf{r}_1, \mathbf{r}, k_z, E)\, G_b^a(\mathbf{r}, \mathbf{r}_2, k_z, E).   (27)
Given G r and G < , the density of states and the charge density can be written as a sum of contributions from the individual valleys,
N(\mathbf{r}, k_z, E) = \sum_b N_b(\mathbf{r}, k_z, E) = -\frac{1}{\pi} \sum_b \operatorname{Im}\left[ G_b^r(\mathbf{r}, \mathbf{r}, k_z, E) \right],   (28)

\rho(\mathbf{r}, k_z, E) = \sum_b \rho_b(\mathbf{r}, k_z, E) = -i \sum_b \left[ G_b^<(\mathbf{r}, \mathbf{r}, k_z, E) \right].   (29)
The self-consistent solution of the Green's function is often the most time-intensive step in the simulation of electron density. It was shown by Svizhenko et al. in "Two-dimensional quantum mechanical modeling of nanotransistors," Journal of Applied Physics, 91(4):2343-2354, 2002, hereby incorporated by reference, that the approximate block tridiagonal structure of the left-hand side in (26) and (27) facilitates the efficient calculation of the electron density. Specifically, it was demonstrated that only the diagonal entries of G^r need to be computed for such a simulation.
Svizhenko et al. showed that the problem of computing electron densities in a nanotransistor can be reduced to finding the diagonal blocks of G r , where AG r =I and A is a block tridiagonal matrix of the form
A = \begin{pmatrix} A_1 & -B_1 & & & \\ -B_1^T & A_2 & -B_2 & & \\ & \ddots & \ddots & \ddots & \\ & & -B_{N_y-2}^T & A_{N_y-1} & -B_{N_y-1} \\ & & & -B_{N_y-1}^T & A_{N_y} \end{pmatrix},   (30)
where each A_i, B_i \in \mathbb{R}^{N_x \times N_x}. Thus A \in \mathbb{R}^{N_y N_x \times N_y N_x}, with N_y diagonal blocks of size N_x each. We will denote A compactly as A = trid(A_i, B_i).
The algorithm in Svizhenko et al. calculates the diagonal blocks (D i ) of A −1 in the following manner.
G_1 = A_1^{-1},
G_{i+1} = (A_{i+1} - B_i^T G_i B_i)^{-1}, \quad i = 1, 2, \ldots, N_y - 1,
D_{N_y} = G_{N_y},
D_i = G_i + G_i B_i D_{i+1} B_i^T G_i, \quad i = N_y - 1, N_y - 2, \ldots, 1.   (31)
The time complexity of this algorithm was shown to be O(N x 3 N y ), with a memory requirement of O(N x 3 N y ).
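For intuition, recursion (31) can be exercised in the scalar case N_x = 1, where each block is a number and the matrix is plain tridiagonal (diagonal a[i], off-diagonal entries -b[i]). This is a sketch with our own naming, not the reference implementation:

```python
# Scalar (N_x = 1) sketch of recursion (31): compute the diagonal of the
# inverse of a tridiagonal matrix without forming the full inverse.

def inverse_diagonal(a, b):
    n = len(a)
    g = [0.0] * n
    g[0] = 1.0 / a[0]
    for i in range(n - 1):                    # forward sweep: G_{i+1}
        g[i + 1] = 1.0 / (a[i + 1] - b[i] * g[i] * b[i])
    d = [0.0] * n
    d[n - 1] = g[n - 1]                       # D_{N_y} = G_{N_y}
    for i in range(n - 2, -1, -1):            # backward sweep: D_i
        d[i] = g[i] + g[i] * b[i] * d[i + 1] * b[i] * g[i]
    return d
```

For the 3×3 matrix with a = [2, 2, 2] and b = [1, 1], the exact inverse is (1/4)[[3, 2, 1], [2, 4, 2], [1, 2, 3]], so the routine returns the diagonal [0.75, 1.0, 0.75].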
Alternatively, the inverse of a block tridiagonal matrix can be computed explicitly by exploiting the Hadamard product formulation. When the {B_i} are nonsingular, A is said to be proper. In this case there exist two (non-unique) sequences of matrices {U_i}, {V_i} such that for j ≥ i, (A^{-1})_{ij} = U_i V_j^T.
Hence, A −1 can be written as
A^{-1} = \begin{pmatrix}
U_1 V_1^T & U_1 V_2^T & U_1 V_3^T & \cdots & U_1 V_{N_y}^T \\
V_2 U_1^T & U_2 V_2^T & U_2 V_3^T & \cdots & U_2 V_{N_y}^T \\
V_3 U_1^T & V_3 U_2^T & U_3 V_3^T & \cdots & U_3 V_{N_y}^T \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
V_{N_y} U_1^T & V_{N_y} U_2^T & V_{N_y} U_3^T & \cdots & U_{N_y} V_{N_y}^T
\end{pmatrix}.
The U and V sequences can be efficiently computed in O(N y N x 3 ) operations in the following manner:
U_1 = I, \qquad U_2 = B_1^{-1} A_1,
U_{i+1} = B_i^{-1} (A_i U_i - B_{i-1}^T U_{i-1}), \quad i = 2, \ldots, N_y - 1,
V_{N_y}^T = (A_{N_y} U_{N_y} - B_{N_y-1}^T U_{N_y-1})^{-1},
V_{N_y-1}^T = V_{N_y}^T A_{N_y} B_{N_y-1}^{-1},
V_i^T = (V_{i+1}^T A_{i+1} - V_{i+2}^T B_{i+1}^T) B_i^{-1}, \quad i = N_y - 2, \ldots, 2, 1,   (32)
where I denotes the identity matrix of appropriate size.
The matrices encountered in the simulation of nanotransistors enjoy the property that the diagonal blocks {A_i} are tridiagonal while the off-diagonal blocks {B_i} are diagonal (this is due to the tight-binding approximation of the Hamiltonian). The complexity of the above parameterization is then reduced to O(N_y N_x^2 + N_x^3). It is worthwhile to note here that the complexity of the algorithm in (31), which was given to be O(N_y N_x^3), does not change based upon these properties (although the actual run time is reduced). Therefore, the reduced complexity of the Hadamard formulation makes it an ideal choice for the solution of this problem. However, it has been shown that the above recursions can be numerically unstable for large problem sizes, which precludes them from being directly implemented to solve these problems. Alternatively, the divide and conquer algorithm described below avoids these numerical issues by handling only manageable problem sizes at each step.
In the simplest case, the block tridiagonal matrix A can be decomposed into two submatrices connected by what we will call a bridge matrix. This concept is illustrated below:
A = \tilde{A} + XY, \qquad \tilde{A} = \begin{pmatrix} S_1 & \\ & S_2 \end{pmatrix},

S_1 = \begin{pmatrix} A_1 & -B_1 & & \\ -B_1^T & A_2 & -B_2 & \\ & \ddots & \ddots & \ddots \\ & -B_{i-2}^T & A_{i-1} & -B_{i-1} \\ & & -B_{i-1}^T & A_i \end{pmatrix}, \qquad
S_2 = \begin{pmatrix} A_{i+1} & -B_{i+1} & & \\ -B_{i+1}^T & A_{i+2} & -B_{i+2} & \\ & \ddots & \ddots & \ddots \\ & -B_{N_y-2}^T & A_{N_y-1} & -B_{N_y-1} \\ & & -B_{N_y-1}^T & A_{N_y} \end{pmatrix},

X = \begin{pmatrix} 0 & 0 \\ \vdots & \vdots \\ -B_i & 0 \\ 0 & -B_i^T \\ \vdots & \vdots \\ 0 & 0 \end{pmatrix}, \qquad
Y = \begin{pmatrix} 0 & \cdots & 0 & I & \cdots & 0 \\ 0 & \cdots & I & 0 & \cdots & 0 \end{pmatrix}.
Notice that the block tridiagonal structure is preserved in each subproblem, and the subproblems are joined by the N_x × N_x bridge matrix B_i. The structure of S_1 and S_2 facilitates the use of the Hadamard-based methods (32) for determining the solution of each subproblem. However, the question then becomes how to determine the global solution from each subproblem solution and their corresponding bridge. This problem can be resolved by the use of a fundamental result of linear algebra. The matrix inversion lemma describes the effect of a low-rank correction term on the inverse of a matrix, and can be used in the following way:
A^{-1} = (\tilde{A} + XY)^{-1} = \tilde{A}^{-1} - (\tilde{A}^{-1} X)(I + Y \tilde{A}^{-1} X)^{-1} (Y \tilde{A}^{-1}),

where

\tilde{A}^{-1} X = \begin{pmatrix} (-C_1 B_i) & 0 \\ 0 & (-C_2 B_i^T) \end{pmatrix}, \qquad
(I + Y \tilde{A}^{-1} X)^{-1} = \begin{pmatrix} I & -S_2^{-1}(1,1) B_i^T \\ -S_1^{-1}(i,i) B_i & I \end{pmatrix}^{-1}, \qquad
Y \tilde{A}^{-1} = \begin{pmatrix} 0 & (C_2^T) \\ (C_1^T) & 0 \end{pmatrix}.   (33)
Here, C_1 = S_1^{-1}(:, i) and C_2 = S_2^{-1}(:, 1) denote the last block column of S_1^{-1} and the first block column of S_2^{-1}, respectively.
Although (33) can be used directly to solve the problem described above, its lack of computational efficiency precludes its use in this case. However, it is important to note that the adjustment term, (I + YÃ^{-1}X)^{-1}, depends only on the corner blocks of the inverses of the subproblems. This observation provides the basis for a formulation of the solution for the more general case. Specifically, the divide and conquer algorithm of the present invention addresses the issue of how to combine multiple subproblem solutions in both a memory- and computationally efficient manner.
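The mechanics of (33) can be checked numerically on a small scalar example: a 4×4 tridiagonal matrix split into two 2×2 subproblems joined by one bridge entry. The helper routines and the specific numbers below are our own, purely for illustration:

```python
# Sketch: recover A^-1 from two decoupled subproblems plus a rank-2 bridge
# correction, A^-1 = H - (H X)(I + Y H X)^-1 (Y H) with H = Atilde^-1.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inverse(A):
    # Gauss-Jordan elimination with partial pivoting (fine at this size).
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

a, b = 4.0, 1.0
A = [[a, -b, 0.0, 0.0], [-b, a, -b, 0.0], [0.0, -b, a, -b], [0.0, 0.0, -b, a]]
# Atilde: the same matrix with the bridge entries at (2,3)/(3,2) removed.
At = [[a, -b, 0.0, 0.0], [-b, a, 0.0, 0.0], [0.0, 0.0, a, -b], [0.0, 0.0, -b, a]]
X = [[0.0, 0.0], [-b, 0.0], [0.0, -b], [0.0, 0.0]]
Y = [[0.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 0.0]]

H = inverse(At)
HX = matmul(H, X)
YH = matmul(Y, H)
YHX = matmul(Y, HX)
core = inverse([[(1.0 if i == j else 0.0) + YHX[i][j] for j in range(2)]
                for i in range(2)])
corr = matmul(matmul(HX, core), YH)
A_inv = [[H[i][j] - corr[i][j] for j in range(4)] for i in range(4)]
```

A_inv matches inverse(A) entrywise, and only the corner entries of the subproblem inverses enter the 2×2 core, which is exactly the observation the divide and conquer algorithm builds on.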
An overview of the divide and conquer algorithm is provided here.
The procedure begins with separating the block tridiagonal matrix A into D subproblems, each joined by a bridge matrix. The procedure presented previously motivates the formulation of the global solution by combining the subproblems in a simple radix 2 fashion. However, this method offers no improvement in terms of memory consumption and is computationally intensive. Alternatively, matrix maps are created to capture the effect of each combining step without performing all of the associated computation. Adjustments to the matrix maps at each combining stage are not constant and must be modified to parallel the procedure above. These maps can then be used in the final stage to transform the subproblem solutions into the global solution.
Each combining step in the divide and conquer algorithm will have an associated entry in the “job” list pointing to the first and last subproblems along with the corresponding bridge point. For example, (1˜2,3˜4) describes the action of combining joined subproblems 1 and 2 (S 1˜2 ) to joined subproblems 3 and 4 (S 3˜4 ) by use of the bridge matrix between problems 2 and 3. The corresponding entry in the job list would be of the form: [start, stop, bridge]=[1, 4, 2].
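A possible construction of this job list is sketched below (the data layout and function are our own illustration, not the patent's code):

```python
# Sketch: build the radix-2 job list of [start, stop, bridge] entries for D
# subproblems; [1, 4, 2] means "combine joined subproblems 1~2 with 3~4
# across the bridge between subproblems 2 and 3".

def job_list(D):
    jobs = []
    groups = [(i, i) for i in range(1, D + 1)]    # (start, stop) of joined runs
    while len(groups) > 1:
        merged = []
        for k in range(0, len(groups) - 1, 2):
            (s1, e1), (s2, e2) = groups[k], groups[k + 1]
            jobs.append([s1, e2, e1])             # bridge follows subproblem e1
            merged.append((s1, e2))
        if len(groups) % 2:                       # odd run carried up a level
            merged.append(groups[-1])
        groups = merged
    return jobs
```

For D = 4 this yields [[1, 2, 1], [3, 4, 3], [1, 4, 2]]: the two pairwise combinations are independent and can run in parallel, after which the final entry joins them.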
This formulation lends itself directly to parallel implementation, since computations associated with nonoverlapping combinations can be farmed out across multiple systems.
To model the effect of any combining stage in the job list, it is necessary to know the corner block elements from the inverse of each combined subproblem. This process can be easily illustrated by a simple example. Suppose once again that the combination stage is (1~2, 3~4); let Q_1 = S_{1~2} and Q_2 = S_{3~4}. The corner block elements of Q_1^{-1} and Q_2^{-1} can be found using the parameterization given in (32).
Q^{-1} = \begin{pmatrix} Q_1^{-1} & 0 \\ 0 & Q_2^{-1} \end{pmatrix}, \qquad
Q_1^{-1} = \begin{pmatrix} [\tilde{U}_1 \tilde{V}_1^T] & \cdots & [\tilde{U}_1 \tilde{V}_n^T] \\ \vdots & \ddots & \vdots \\ & \cdots & [\tilde{U}_n \tilde{V}_n^T] \end{pmatrix}, \qquad
Q_2^{-1} = \begin{pmatrix} [\hat{U}_1 \hat{V}_1^T] & \cdots & [\hat{U}_1 \hat{V}_m^T] \\ \vdots & \ddots & \vdots \\ & \cdots & [\hat{U}_m \hat{V}_m^T] \end{pmatrix},
where n=size (S 1 )+size (S 2 ), and m=size (S 3 )+size (S 4 ).
It would be impractical to recalculate the inverse of each joined problem for each combining stage. Alternatively, matrix maps are used to efficiently produce the required block entries.
Matrix maps are created to produce the cumulative effect of each combining step associated with a particular subproblem. There are a total of eight N_x × N_x matrix maps {M_i} for each subproblem. The process of updating matrix maps can be broken down into two categories: adjustments to Upper subproblems and adjustments to Lower subproblems, the distinction being their location with respect to the bridge point. This procedure will be illustrated using the above example. The "adjustment" matrix, ADJ, for the combining step is defined as follows:
Z_1 = -B_{n+1}^T \hat{U}_1 \hat{V}_1^T,
Z_2 = -B_{n+1} \tilde{U}_n \tilde{V}_n^T,
P = (I - Z_2 Z_1)^{-1},
ADJ = \begin{pmatrix} I + Z_1 P Z_2 & -Z_1 P \\ -P Z_2 & P \end{pmatrix} = \begin{pmatrix} ADJ_{11} & ADJ_{12} \\ ADJ_{21} & ADJ_{22} \end{pmatrix}.   (34)
The matrix maps associated with the Upper subproblems (S 1 and S 2 ) are updated in the following manner.
M_1 + (\tilde{U}_1 \tilde{V}_n^T\, ADJ_{12}\, B_{n+1}) M_3 \to M_1
M_2 + (\tilde{U}_1 \tilde{V}_n^T\, ADJ_{12}\, B_{n+1}) M_4 \to M_2
(\hat{U}_1 \hat{V}_m^T\, ADJ_{22}\, B_{n+1}) M_3 \to M_3
(\hat{U}_1 \hat{V}_m^T\, ADJ_{22}\, B_{n+1}) M_4 \to M_4
M_5 - M_3^T (ADJ_{12}\, B_{n+1}) M_3 \to M_5
M_6 - M_3^T (ADJ_{12}\, B_{n+1}) M_4 \to M_6
M_7 - M_4^T (ADJ_{12}\, B_{n+1}) M_3 \to M_7
M_8 - M_4^T (ADJ_{12}\, B_{n+1}) M_4 \to M_8   (35)
Those associated with the Lower subproblems (S 3 and S 4 ) are updated in the following manner:
(\tilde{U}_1 \tilde{V}_n^T\, ADJ_{11}\, B_{n+1}^T) M_1 \to M_1
(\tilde{U}_1 \tilde{V}_n^T\, ADJ_{11}\, B_{n+1}^T) M_2 \to M_2
M_3 + (\hat{V}_m \hat{U}_1^T\, ADJ_{21}\, B_{n+1}^T) M_1 \to M_3
M_4 + (\hat{V}_m \hat{U}_1^T\, ADJ_{21}\, B_{n+1}^T) M_2 \to M_4
M_5 - M_1^T (ADJ_{21}\, B_{n+1}^T) M_1 \to M_5
M_6 - M_1^T (ADJ_{21}\, B_{n+1}^T) M_2 \to M_6
M_7 - M_2^T (ADJ_{21}\, B_{n+1}^T) M_1 \to M_7
M_8 - M_2^T (ADJ_{21}\, B_{n+1}^T) M_2 \to M_8   (36)
The above procedure for modifying the matrix maps is repeated for each entry in the job list. In the final stage the matrix maps are used to generate the diagonal blocks of A −1 . It is important to note that this scheme fits nicely into a parallel framework due to the fact that systems handling Upper or Lower subproblems would only have to trade the limited amount of information in (34) to modify the matrix maps they are governing.
Pseudo-Code
1. For each subproblem in {1, 2, . . . , D}:
   • Determine corner blocks from inverse of subproblem
   • Associate bridge matrix (excluding problem D)
   • Initialize matrix maps
2. Generate list of subproblem combinations (radix 2)
3. Adjust mappings for each combining step
4. Compute diagonal elements for each division
5. Apply matrix maps to transform subproblem solutions to the global solution
The time complexity of the divide and conquer algorithm is O(N_x^α N_y + N_x^3 D log_2 D), where α is defined to be the order associated with an N_x × N_x matrix multiplication (typically α ≈ 2.7). The memory consumption is O(N_x^2 N_y / D + N_x^2 D).
The divide and conquer algorithm, along with the algorithm in Svizhenko et al., has been implemented in Matlab on a single 32-bit x86 Linux workstation. All results reported are for test cases on an MIT well-tempered 25 nm device-like structure.
Table 7 shows the run-time comparison of the two algorithms across the cases N_x = 100, N_y = 3,000-80,000. Notice that the algorithm given in Svizhenko et al. was unable to perform past the case N_y = 11,000 due to memory restrictions. However, the divide and conquer algorithm was able to handle these cases without encountering memory overflow.
TABLE 7
Run time (min) comparison.

Size = N_x·N_y (×10^6)            0.4    0.5    0.6    0.7    0.8     0.9     1.0     1.1     2.0     4.0     8.0
Divide and Conquer Algorithm      5.38   6.66   8.10   9.41   11.13   12.25   13.75   14.99   27.59   58.86   107.0
Algorithm in Svizhenko et al.     1.22   1.51   1.82   2.12   2.43    2.72    3.01    3.33    —       —       —
FIG. 15 shows the ratio of memory consumption of the algorithm in Svizhenko et al. as compared to the divide and conquer algorithm for varying number of blocks per division (subproblem size). FIG. 16 shows the same ratio for varying number of divisions.
While the invention has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only the preferred embodiment has been shown and described and that all changes and modifications that come within the spirit of the invention are desired to be protected.