How John Got 15x Improvement Without Really Trying
- by rchrd
The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.
How I Got 15x Improvement Without Really Trying
John Feo, Sun Microsystems
Taking ten "personal" program codes used in scientific
and engineering research, the author was able to get from 2 to 15 times performance
improvement easily by applying some simple general optimization techniques.
Introduction
Scientific research
based on computer simulation depends on the simulation for
advancement. The research can advance only as fast as the
computational codes can execute. The codes' efficiency determines
both the rate and quality of results. In the same amount of time, a
faster program can generate more results and can carry out a more
detailed simulation of physical phenomena than a slower program.
Highly optimized programs help science advance quickly and ensure
that monies supporting scientific research are used as effectively as
possible.
Scientific computer
codes divide into three broad categories: ISV, community, and
personal. ISV codes are large, mature production codes developed and
sold commercially. The codes improve slowly over time both in
methods and capabilities, and they are well tuned for most vendor
platforms. Since the codes are mature and complex, there are few
opportunities to improve their performance solely through code
optimization. Improvements of 10% to 15% are typical. Examples of
ISV codes are DYNA3D, Gaussian, and Nastran.
Community codes are
non-commercial production codes used by a particular research field.
Generally, they are developed and distributed by a single academic or
research institution with assistance from the community. Most users
just run the codes, but some develop new methods and extensions that
feed back into the general release. The codes are available on most
vendor platforms. Since these codes are younger than ISV codes,
there are more opportunities to optimize the source code.
Improvements of 50% are not unusual. Examples of community codes are
AMBER, CHARM, BLAST, and FASTA.
Personal codes are those
written by single users or small research groups for their own use.
These codes are not distributed, but may be passed from
professor-to-student or student-to-student over several years. They
form the primordial ocean of applications from which community and
ISV codes emerge. Government research grants pay for the development
of most personal codes. This paper reports on the nature and
performance of this class of codes.
Over the last year, I
have looked at over two dozen personal codes from more than a dozen
research institutions. The codes cover a variety of scientific fields,
including astronomy, atmospheric sciences, bioinformatics, biology,
chemistry, geology, and physics. The sources range from a few
hundred lines to more than ten thousand lines, and are written in
Fortran, Fortran 90, C, and C++. For the most part, the codes are
modular, documented, and written in a clear, straightforward
manner. They do not use complex language features, advanced data
structures, programming tricks, or libraries. I had little trouble
understanding what the codes did or how data structures were used.
Most came with a makefile.
Surprisingly, only one
of the applications is parallel. All developers have access to
parallel machines, so availability is not an issue. Several tried to
parallelize their applications, but stopped after encountering
difficulties. Lack of education and a perception that parallelism is
difficult prevented most from trying. I parallelized several of the
codes using OpenMP, and did not judge any of the codes as difficult
to parallelize.
Even more surprising
than the lack of parallelism is the inefficiency of the codes. I was
able to get large improvements in performance in a matter of a few
days applying simple optimization techniques. Table 1 lists ten
representative codes [names and affiliations are omitted to preserve
anonymity]. Improvements on one processor range from 2x to 15.5x
with a simple average of 4.75x. I did not use sophisticated
performance tools or drill deep into the program's execution
character as one would do when tuning ISV or community codes. Using
only a profiler and source line timers, I identified inefficient
sections of code and improved their performance by inspection. The
changes were at a high level. I am sure there is another factor of 2
or 3 in each code, and more if the codes are parallelized. The
study’s results show that personal scientific codes are running
many times slower than they should and that the problem is pervasive.
Computational
scientists are not sloppy programmers; however, few are trained in
the art of computer programming or code optimization. I found that
most have a working knowledge of some programming language and
standard software engineering practices; but they do not know, or
think about, how to make their programs run faster. They simply do
not know the standard techniques used to make codes run faster. In
fact, they do not even perceive that such techniques exist. The case
studies described in this paper show that applying simple, well known
techniques can significantly increase the performance of personal
codes. It is important that the scientific community and the
Government agencies that support scientific research find ways to
better educate academic scientific programmers. The inefficiency of
their codes is so bad that it is retarding both the quality and
progress of scientific research.
 # | cache performance | redundant operations | loop structures | performance improvement
 1 |         x         |          x           |                 |          15.5
 2 |         x         |                      |                 |           2.8
 3 |         x         |          x           |                 |           2.5
 4 |                   |          x           |                 |           2.1
 5 |         x         |          x           |                 |           2.0
 6 |                   |          x           |                 |           5.0
 7 |         x         |                      |                 |           5.8
 8 |                   |          x           |                 |           6.3
 9 |                   |                      |                 |           2.2
10 |         x         |                      |        x        |           3.3

Table 1 — Area of improvement and performance gains of 10 codes
The remainder of the
paper is organized as follows: sections 2, 3, and 4 discuss the three
most common sources of inefficiencies in the codes studied. These
are cache performance, redundant operations, and loop structures.
Each section includes several examples. The last section summarizes
the work and suggests a possible solution to the issues raised.
Optimizing cache performance
Commodity microprocessor
systems use caches to increase memory bandwidth and reduce memory
latencies. Typical latencies from processor to L1, L2, local, and
remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover,
bandwidth falls off dramatically as memory distances increase.
Programs that do not use cache effectively run many times slower than
programs that do.
When optimizing for
cache, the biggest performance gains are achieved by accessing data
in cache order and reusing data to amortize the overhead of cache
misses. Secondary considerations are prefetching, associativity, and
replacement; however, the understanding and analysis required to
optimize for the latter are probably beyond the capabilities of the
non-expert. Much can be gained simply by accessing data in the
correct order and maximizing data reuse. 6 out of the 10 codes
studied here benefited from such high level optimizations.
Array Accesses
The most important cache
optimization is the most basic: accessing Fortran array elements in
column order and C array elements in row order. Four of the ten
codes—1, 2, 4, and 10—got it wrong. Compilers will
restructure nested loops to optimize cache performance, but may not
do so if the loop structure is too complex, or the loop body includes
conditionals, complex addressing, or function calls. In code 1, the
compiler failed to invert a key loop because of complex addressing
do I = 0, 1010, delta_x
  IM = I - delta_x
  IP = I + delta_x
  do J = 5, 995, delta_x
    JM = J - delta_x
    JP = J + delta_x
    T1 = CA1(IP, J) + CA1(I, JP)
    T2 = CA1(IM, J) + CA1(I, JM)
    S1 = T1 + T2 - 4 * CA1(I, J)
    CA(I, J) = CA1(I, J) + D * S1
  end do
end do
In code 2, the culprit is conditionals
do I = 1, N
  do J = 1, N
    If (IFLAG(I,J) .EQ. 0) then
      T1 = Value(I, J-1)
      T2 = Value(I-1, J)
      T3 = Value(I, J)
      T4 = Value(I+1, J)
      T5 = Value(I, J+1)
      Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
      Delta = ABS(T3 - Value(I,J))
      If (Delta .GT. MaxDelta) MaxDelta = Delta
    endif
  enddo
enddo
I fixed both programs by
inverting the loops by hand.
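The fix itself is mechanical. In C terms, the same hand interchange looks like the sketch below (a made-up example, not one of the ten codes): the first version walks a row-major C array down its columns, the second simply swaps the two loops so the inner loop runs at unit stride.

/* Hypothetical C sketch of interchanging loops by hand for cache order. */
#include <stddef.h>

#define N 1024

/* Cache-unfriendly: the inner loop walks down a column (stride of N doubles). */
void smooth_column_order(double a[N][N], const double b[N][N])
{
    for (size_t j = 1; j < N - 1; j++)
        for (size_t i = 1; i < N - 1; i++)
            a[i][j] = 0.25 * (b[i-1][j] + b[i+1][j] + b[i][j-1] + b[i][j+1]);
}

/* Hand-inverted: the inner loop walks along a row (unit stride). */
void smooth_row_order(double a[N][N], const double b[N][N])
{
    for (size_t i = 1; i < N - 1; i++)
        for (size_t j = 1; j < N - 1; j++)
            a[i][j] = 0.25 * (b[i-1][j] + b[i+1][j] + b[i][j-1] + b[i][j+1]);
}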
Code 10 has
three-dimensional arrays and triply nested loops. The structure of
the most computationally intensive loops is too complex to invert
automatically or by hand. The only practical solution is to
transpose the arrays so that the dimension accessed by the innermost
loop is in cache order. The arrays can be transposed at
construction or prior to entering a computationally intensive section
of code. The former requires all array references to be modified,
while the latter is cost effective only if the cost of the transpose
is amortized over many accesses. I used the second approach to
optimize code 10.
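A rough C sketch of that second approach (hypothetical names, not code 10 itself): the transpose is paid once, before the compute-intensive phase, and amortized over the many sweeps that follow.

#include <stdlib.h>

/* Copy src into transposed order so the hot loops below run at unit stride. */
static void transpose(size_t n, const double *src, double *dst)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            dst[j * n + i] = src[i * n + j];
}

void column_sums(size_t n, const double *a, double *out, int nsweeps)
{
    double *at = malloc(n * n * sizeof *at);
    if (at == NULL)
        return;
    transpose(n, a, at);                     /* paid once...                    */

    for (int s = 0; s < nsweeps; s++)        /* ...amortized over many sweeps   */
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t i = 0; i < n; i++)
                sum += at[j * n + i];        /* unit stride; a[i*n + j] was not */
            out[j] += sum;
        }

    free(at);
}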
Code 5 has
four-dimensional arrays and loops are nested four deep. For all of
the reasons cited above, the compiler is not able to restructure three
key loops. Assume C arrays and let the four dimensions of the arrays
be i, j, k, and l. In the original code,
the index structure of the three loops is
L1: for i          L2: for i          L3: for i
      for l              for l              for j
        for k              for j              for k
          for j              for k              for l
So only L3 accesses
array elements in cache order. L1 is a very complex loop—much
too complex to invert. I brought the loop into cache alignment by
transposing the second and fourth dimensions of the arrays. Since
the code uses a macro to compute all array indexes, I effected the
transpose at construction and changed the macro appropriately. The
dimensions of the new arrays are now: i, l, k,
and j. L3 is a simple loop and easily inverted. L2 has a
loop-carried scalar dependence in k. By promoting the scalar
name that carries the dependence to an array, I was able to invert
the third and fourth subloops, aligning the loop with cache.
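The scalar promotion is easier to see in a stripped-down C sketch (a hypothetical loop, not code 5 itself). The arrays are assumed already transposed so that j is the unit-stride dimension; the scalar accumulator is what keeps k innermost, and promoting it to an array over j lets the two loops swap.

#include <string.h>

#define NI 8
#define NL 8
#define NK 64
#define NJ 64

/* Before: the scalar sum carries a dependence in k, so k stays innermost
 * and the array is walked with stride NJ. */
void reduce_before(double a[NI][NL][NK][NJ], double out[NI][NL][NJ])
{
    for (int i = 0; i < NI; i++)
        for (int l = 0; l < NL; l++)
            for (int j = 0; j < NJ; j++) {
                double sum = 0.0;
                for (int k = 0; k < NK; k++)
                    sum += a[i][l][k][j];      /* stride NJ: poor cache use */
                out[i][l][j] = sum;
            }
}

/* After: promoting the scalar to sum[NJ] breaks the dependence and lets
 * j move innermost, giving unit-stride access. */
void reduce_after(double a[NI][NL][NK][NJ], double out[NI][NL][NJ])
{
    for (int i = 0; i < NI; i++)
        for (int l = 0; l < NL; l++) {
            double sum[NJ];
            memset(sum, 0, sizeof sum);
            for (int k = 0; k < NK; k++)
                for (int j = 0; j < NJ; j++)
                    sum[j] += a[i][l][k][j];   /* unit stride in j */
            memcpy(out[i][l], sum, sizeof sum);
        }
}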
Code 5 is by far the
most difficult of the four codes to optimize for array accesses; but
the knowledge required to fix the problems is no more than that
required for the other codes. I would judge this code at the limits
of, but not beyond, the capabilities of appropriately trained
computational scientists.
Array Strides
When a cache miss
occurs, a line (64 bytes) rather than just one word is loaded into
the cache. If data is accessed with stride 1, then the cost of the miss
is amortized over 8 words. Any stride other than one reduces the
cost savings. Two of the ten codes studied suffered from non-unit
strides. The codes represent two important classes of "strided"
codes.
Code 1 employs a
multi-grid algorithm to reduce time to convergence. The grids are
every tenth, fifth, second, and unit element. Since time to
convergence is inversely proportional to the distance between
elements, coarse grids converge quickly providing good starting
values for finer grids. The better starting values further reduce
the time to convergence. The downside is that grids of every nth
element, n > 1, introduce non-unit strides into the
computation. In the original code, much of the savings of the
multi-grid algorithm were lost due to this problem. I eliminated the
problem by compressing (copying) coarse grids into contiguous memory,
and rewriting the computation as a function of the compressed grid.
On convergence, I copied the final values of the compressed grid back
to the original grid. The savings gained from unit stride access of
the compressed grid more than paid for the cost of copying. Using
compressed grids, the loop from code 1 included in the previous
section becomes
do j = 1, GZ
  do i = 1, GZ
    T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
    T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
    S1 = T1 + T4 - 4 * CA1(i+0, j+0)
    CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
  enddo
enddo
where CA and CA1 are
compressed arrays of size GZ.
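In outline, the compress/compute/expand pattern looks like this one-dimensional C sketch (hypothetical names; the real grids are two-dimensional): gather every stride-th element into a contiguous buffer, iterate at unit stride, and scatter the results back when done.

#include <stdlib.h>

void coarse_sweeps(double *grid, size_t len, size_t stride, int nsweeps)
{
    size_t gz = len / stride;                  /* points on the coarse grid */
    double *c = malloc(gz * sizeof *c);
    if (c == NULL || gz < 3) { free(c); return; }

    for (size_t i = 0; i < gz; i++)            /* compress: copy in         */
        c[i] = grid[i * stride];

    for (int s = 0; s < nsweeps; s++)          /* unit-stride relaxation    */
        for (size_t i = 1; i + 1 < gz; i++)
            c[i] = 0.5 * (c[i - 1] + c[i + 1]);

    for (size_t i = 0; i < gz; i++)            /* expand: copy back         */
        grid[i * stride] = c[i];

    free(c);
}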
Code 7 traverses a list
of objects selecting objects for later processing. The labels of the
selected objects are stored in an array. The selection step has unit
stride, but the processing steps have irregular stride. A fix is to
save the parameters of the selected objects in temporary arrays as
they are selected, and pass the temporary arrays to the processing
functions. The fix is practical if the same parameters are used in
selection as in processing, or if processing comprises a series of
distinct steps which use overlapping subsets of the parameters. Both
conditions are true for code 7, so I achieved significant improvement
by copying parameters to temporary arrays during selection.
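A rough C sketch of that fix (the object layout is invented for illustration): the parameters needed later are packed into contiguous temporaries while the selection loop already has each object in cache, so the processing loop never has to chase irregular strides.

#include <stddef.h>

struct object { double mass, radius, charge; /* ...many more fields... */ };

/* Selection: walk the objects at unit stride and pack the parameters the
 * processing step will need into contiguous temporary arrays. */
size_t select_and_pack(const struct object *obj, size_t n, double threshold,
                       double *mass, double *radius)
{
    size_t m = 0;
    for (size_t i = 0; i < n; i++)
        if (obj[i].mass > threshold) {
            mass[m] = obj[i].mass;
            radius[m] = obj[i].radius;
            m++;
        }
    return m;
}

/* Processing: the temporaries are dense, so this loop is unit stride too. */
double process(const double *mass, const double *radius, size_t m)
{
    double sum = 0.0;
    for (size_t i = 0; i < m; i++)
        sum += mass[i] * radius[i] * radius[i];
    return sum;
}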
Data reuse
In the previous
sections, we optimized for spatial locality. It is also important to
optimize for temporal locality. Once read, a datum should be used as
much as possible before it is forced from cache. Loop fusion and
loop unrolling are two techniques that increase temporal locality.
Unfortunately, both techniques increase register pressure—as
loop bodies become larger, the number of registers required to hold
temporary values grows. Once register spilling occurs, any gains
evaporate quickly. For multiprocessors with small register sets or
small caches, the sweet spot can be very small. In the ten codes
presented here, I found no opportunities for loop fusion and only two
opportunities for loop unrolling (codes 1 and 3).
In code 1, unrolling the outer and inner loops by one iteration each
increases the number of result values computed by the loop body from 1 to 4,
do j = 1, GZ-2, 2
  do i = 1, GZ-2, 2
    T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
    T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
    T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
    T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
    T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
    T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
    T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
    S1 = T1 + T4 - 4 * CA1(i+0, j+0)
    S2 = T2 + T5 - 4 * CA1(i+1, j+0)
    S3 = T3 + T6 - 4 * CA1(i+0, j+1)
    S4 = T4 + T7 - 4 * CA1(i+1, j+1)
    CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
    CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
    CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
    CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
  enddo
enddo
The loop body executes
12 reads, whereas the rolled loop shown in the previous section
executes 20 reads to compute the same four values.
In code 3, two loops are
unrolled 8 times and one loop is unrolled 4 times. Here is the
before
for (k = 0; k < NK[u]; k++) {
  sum = 0.0;
  for (y = 0; y < NY; y++) {
    sum += W[y][u][k] * delta[y];
  }
  backprop[i++] = sum;
}
and after code
for (k = 0; k < KK - 8; k += 8) {
  sum0 = 0.0;
  sum1 = 0.0;
  sum2 = 0.0;
  sum3 = 0.0;
  sum4 = 0.0;
  sum5 = 0.0;
  sum6 = 0.0;
  sum7 = 0.0;
  for (y = 0; y < NY; y++) {
    sum0 += W[y][0][k+0] * delta[y];
    sum1 += W[y][0][k+1] * delta[y];
    sum2 += W[y][0][k+2] * delta[y];
    sum3 += W[y][0][k+3] * delta[y];
    sum4 += W[y][0][k+4] * delta[y];
    sum5 += W[y][0][k+5] * delta[y];
    sum6 += W[y][0][k+6] * delta[y];
    sum7 += W[y][0][k+7] * delta[y];
  }
  backprop[k+0] = sum0;
  backprop[k+1] = sum1;
  backprop[k+2] = sum2;
  backprop[k+3] = sum3;
  backprop[k+4] = sum4;
  backprop[k+5] = sum5;
  backprop[k+6] = sum6;
  backprop[k+7] = sum7;
}
for one of the loops
unrolled 8 times.
Optimizing for temporal
locality is the most difficult optimization considered in this paper.
The concepts are not difficult, but the sweet spot is small.
Identifying where the program can benefit from loop unrolling or loop
fusion is not trivial. Moreover, it takes some effort to get it
right. Still, educating scientific programmers about temporal
locality and teaching them how to optimize for it will pay dividends.
Reducing instruction count
Execution time is a
function of instruction count. Reduce the count and you usually
reduce the time. The best solution is to use a more efficient
algorithm; that is, an algorithm whose order of complexity is
smaller, that converges quicker, or is more accurate. Optimizing
source code without changing the algorithm yields smaller, but still
significant, gains. This paper considers only the latter because the
intent is to study how much better codes can run if written by
programmers schooled in basic code optimization techniques.
The ten codes studied
benefited from three types of "instruction reducing"
optimizations. The two most prevalent were hoisting invariant memory
and data operations out of inner loops. The third was eliminating
unnecessary data copying. The nature of these inefficiencies is
language dependent.
Memory operations
The semantics of C make
it difficult for the compiler to determine all the invariant memory
operations in a loop. The problem is particularly acute for loops in
functions since the compiler may not know the values of the
function's parameters at every call site when compiling the function.
Most compilers support pragmas to help resolve ambiguities; however,
these pragmas are not comprehensive and there is no standard syntax.
To guarantee that invariant memory operations are not executed
repetitively, the user has little choice but to hoist the operations
by hand. The problem is not as severe in Fortran programs because in
the absence of equivalence statements, it is a violation of the
language's semantics for two names to share memory.
Codes 3 and 5 are C
programs. In both cases, the compiler did not hoist all invariant
memory operations from inner loops. Consider the following loop from
code 3
for (y = 0; y < NY; y++) {
  i = 0;
  for (u = 0; u < NU; u++) {
    for (k = 0; k < NK[u]; k++) {
      dW[y][u][k] += delta[y] * I1[i++];
    }
  }
}
Since dW[y][u]
can point to the same memory space as delta for one or more
values of y and u, assignment to dW[y][u][k]
may change the value of delta[y]. In reality, dW
and delta do not overlap in memory, so I rewrote the loop as
for (y = 0; y < NY; y++) {
  i = 0;
  Dy = delta[y];
  for (u = 0; u < NU; u++) {
    for (k = 0; k < NK[u]; k++) {
      dW[y][u][k] += Dy * I1[i++];
    }
  }
}
Failure to hoist
invariant memory operations may be due to complex address
calculations. If the compiler cannot determine that the address
calculation is invariant, then it can hoist neither the calculation
nor the associated memory operations. As noted above, code 5 uses a
macro to address four-dimensional arrays
#define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] \
                                  + (i)*(a)->strides[3] + (j)*(a)->strides[2] \
                                  + (k)*(a)->strides[1])
The macro is too complex
for the compiler to understand, and so it does not identify any
subexpressions as loop invariant. The simplest way to eliminate the
address calculation from the innermost loop (over i) is to
define
a0 = MAT4D(a,q,0,j,k)
before the loop and then
replace all instances of
*MAT4D(a,q,i,j,k)
in the loop with
a0[i]
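In context the hoist looks roughly like the sketch below. The struct and the surrounding loop are hypothetical stand-ins (the original code is not shown here), and the rewrite assumes the i dimension is contiguous, i.e. strides[3] is the unit stride, so that a0[i] reaches the same element as *MAT4D(a,q,i,j,k).

#include <stddef.h>

struct ndarray {
    double *data;
    size_t strides[4];
};

#define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] \
                                  + (i)*(a)->strides[3] + (j)*(a)->strides[2] \
                                  + (k)*(a)->strides[1])

/* Before: the full address computation is repeated on every iteration. */
double row_sum_before(struct ndarray *a, size_t q, size_t j, size_t k, size_t ni)
{
    double sum = 0.0;
    for (size_t i = 0; i < ni; i++)
        sum += *MAT4D(a, q, i, j, k);
    return sum;
}

/* After: the invariant part of the address is computed once, outside the loop. */
double row_sum_after(struct ndarray *a, size_t q, size_t j, size_t k, size_t ni)
{
    double *a0 = MAT4D(a, q, 0, j, k);
    double sum = 0.0;
    for (size_t i = 0; i < ni; i++)
        sum += a0[i];
    return sum;
}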
A similar problem
appears in code 6, a Fortran program. The key loop in this program
is
do n1 = 1, nh
  nx1 = (n1 - 1) / nz + 1
  nz1 = n1 - nz * (nx1 - 1)
  do n2 = 1, nh
    nx2 = (n2 - 1) / nz + 1
    nz2 = n2 - nz * (nx2 - 1)
    ndx = nx2 - nx1
    ndy = nz2 - nz1
    gxx = grn(1,ndx,ndy)
    gyy = grn(2,ndx,ndy)
    gxy = grn(3,ndx,ndy)
    balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
    balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
  end do
end do
The programmer has
written this loop well—there are no loop invariant operations
with respect to n1 and n2. However, the loop resides
within an iterative loop over time and the index calculations are
independent with respect to time. Trading space for time, I
precomputed the index values prior to entering the time loop and
stored the values in two arrays. I then replaced the index
calculations with reads of the arrays.
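A C-style sketch of the same trade (hypothetical names; code 6 itself is Fortran): the integer index pairs depend only on the loop counters, so they are computed once before the time loop and merely read back inside it.

#include <stdlib.h>

void run(int nh, int nz, int nsteps)
{
    int *nx = malloc(nh * sizeof *nx);
    int *nzp = malloc(nh * sizeof *nzp);
    if (nx == NULL || nzp == NULL) { free(nx); free(nzp); return; }

    /* Precompute: the same formulas as in the loop body, evaluated once. */
    for (int n = 1; n <= nh; n++) {
        nx[n - 1] = (n - 1) / nz + 1;
        nzp[n - 1] = n - nz * (nx[n - 1] - 1);
    }

    for (int t = 0; t < nsteps; t++) {                 /* time loop */
        for (int n1 = 1; n1 <= nh; n1++)
            for (int n2 = 1; n2 <= nh; n2++) {
                int ndx = nx[n2 - 1] - nx[n1 - 1];     /* table reads replace */
                int ndy = nzp[n2 - 1] - nzp[n1 - 1];   /* the divisions       */
                (void)ndx; (void)ndy;  /* ... balance/force updates as before */
            }
    }

    free(nx);
    free(nzp);
}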
Data operations
Ways to reduce data
operations can appear in many forms. Implementing a more efficient
algorithm produces the biggest gains. The closest I came to an
algorithm change was in code 4. This code computes the inner product
of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most
values of i and j. Since the program computes most of
the NM possible inner products, it is more efficient to
compute all the inner products in one triply-nested loop rather than
one at a time when needed. The savings accrue from reading A(i)
once for all B(j) vectors and from loop unrolling.
for (i = 0; i < N; i += 8) {
  for (j = 0; j < M; j++) {
    sum0 = 0.0;
    sum1 = 0.0;
    sum2 = 0.0;
    sum3 = 0.0;
    sum4 = 0.0;
    sum5 = 0.0;
    sum6 = 0.0;
    sum7 = 0.0;
    for (k = 0; k < K; k++) {
      sum0 += A[i+0][k] * B[j][k];
      sum1 += A[i+1][k] * B[j][k];
      sum2 += A[i+2][k] * B[j][k];
      sum3 += A[i+3][k] * B[j][k];
      sum4 += A[i+4][k] * B[j][k];
      sum5 += A[i+5][k] * B[j][k];
      sum6 += A[i+6][k] * B[j][k];
      sum7 += A[i+7][k] * B[j][k];
    }
    C[i+0][j] = sum0;
    C[i+1][j] = sum1;
    C[i+2][j] = sum2;
    C[i+3][j] = sum3;
    C[i+4][j] = sum4;
    C[i+5][j] = sum5;
    C[i+6][j] = sum6;
    C[i+7][j] = sum7;
  }
}
This change requires
knowledge of a typical run; i.e., that most inner products are
computed. The reasons for the change, however, derive from basic
optimization concepts. It is the type of change easily made at
development time by a knowledgeable programmer.
In code 5, we have the
data version of the index optimization in code 6. Here a very
expensive computation is a function of the loop indices and so cannot
be hoisted out of the loop; however, the computation is invariant
with respect to an outer iterative loop over time. We can compute
its value for each iteration of the computation loop prior to
entering the time loop and save the values in an array. The increase
in memory required to store the values is small in comparison to the
large savings in time.
The main loop in Code 8
is doubly nested. The inner loop includes a series of guarded
computations; some are a function of the inner loop index but not the
outer loop index, while others are a function of the outer loop index
but not the inner loop index
for (j = 0; j < N; j++) {
  for (i = 0; i < M; i++) {
    r = i * hrmax;
    R = A[j];
    temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
    high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
    low = high * PRM[6] * PRM[6] /
          (1.0 + pow(PRM[4] * PRM[6], 2.0));
    kap = (R > PRM[6]) ?
          high * R * R / (1.0 + pow(PRM[4] * r, 2.0)) :
          low * pow(R / PRM[6], PRM[5]);
    < rest of loop omitted >
  }
}
Note that the value of
temp is invariant to j. Thus, we can hoist the
computation for temp out of the loop and save its values in an
array.
for (i = 0; i < M; i++) {
  r = i * hrmax;
  TEMP[i] = pow(r, PRM[3]);
}
[N.B. – the case
for PRM[3] = 0 is omitted and will be reintroduced later.] We
now hoist out of the inner loop the computations invariant to i.
Since the conditional guarding the value of kap is invariant
to i, it behooves us to hoist the computation out of the inner
loop, thereby executing the guard once rather than M times.
The final version of the code is
for (j = 0; j < N; j++) {
  R = rig[j] / 1000.;
  tmp1 = kcoeff * par[2] * beta[j] * par[4];
  tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
  tmp3 = 1.0 + (par[4] * par[4] * R * R);
  tmp4 = par[6] * par[6] / tmp2;
  tmp5 = R * R / tmp3;
  tmp6 = pow(R / par[6], par[5]);
  if ((par[3] == 0.0) && (R > par[6])) {
    for (i = 1; i <= imax1; i++)
      KAP[i] = tmp1 * tmp5;
  } else if ((par[3] == 0.0) && (R <= par[6])) {
    for (i = 1; i <= imax1; i++)
      KAP[i] = tmp1 * tmp4 * tmp6;
  } else if ((par[3] != 0.0) && (R > par[6])) {
    for (i = 1; i <= imax1; i++)
      KAP[i] = tmp1 * TEMP[i] * tmp5;
  } else if ((par[3] != 0.0) && (R <= par[6])) {
    for (i = 1; i <= imax1; i++)
      KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
  }
  for (i = 0; i < M; i++) {
    kap = KAP[i];
    r = i * hrmax;
    < rest of loop omitted >
  }
}
Maybe not the
prettiest piece of code, but certainly much more efficient than the
original loop.
Copy operations
Several programs
unnecessarily copy data from one data structure to another. This
problem occurs in both Fortran and C programs, although it manifests
itself differently in the two languages.
Code 1 declares two
arrays—one for old values and one for new values. At the end
of each iteration, the array of new values is copied to the array of
old values to reset the data structures for the next iteration. This
problem occurs in Fortran programs not included in this study and in
both Fortran 77 and Fortran 90 code.
Introducing pointers to
the arrays and swapping pointer values is an obvious way to eliminate
the copying; but pointers are not a feature that many Fortran
programmers know well or are comfortable using. An easy solution not
involving pointers is to extend the dimension of the value array by 1
and use the last dimension to differentiate between arrays at
different times. For example, if the data space is N x N,
declare the array (N, N, 2). Then store the problem’s
initial values in (_, _, 2) and define the scalar names new =
2 and old = 1. At the start of each iteration, swap old
and new to reset the arrays.
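Written out in C-style notation for concreteness (the advice above is aimed at Fortran, and the array shape is invented), the trick amounts to swapping two small integers instead of copying N x N values.

#define N 512

/* The problem's initial values are assumed to be stored in value[1]. */
void iterate(double value[2][N][N], int nsteps)
{
    int old = 0;
    int cur = 1;

    for (int t = 0; t < nsteps; t++) {
        int tmp = old; old = cur; cur = tmp;   /* swap roles; nothing is copied */

        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++)
                value[cur][i][j] = 0.25 * (value[old][i-1][j] + value[old][i+1][j]
                                         + value[old][i][j-1] + value[old][i][j+1]);
    }
}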
The old–new copy
problem did not appear in any C program. In programs that had new
and old values, the code swapped pointers to reset data structures.
Where unnecessary copying did occur is in structure assignment and
parameter passing. Structures in C are handled much like scalars.
Assignment causes the data space of the right-hand name to be copied
to the data space of the left-hand name. Similarly, when a structure
is passed to a function, the data space of the actual parameter is
copied to the data space of the formal parameter. If the structure
is large and the assignment or function call is in an inner loop,
then copying costs can grow quite large. While none of the ten
programs considered here manifested this problem, it did occur in
programs not included in the study. A simple fix is always to refer
to structures via pointers.
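A small sketch of that habit (the struct is invented for illustration): passing a large structure by value copies the entire data space at every call, while passing a pointer copies only an address.

struct state {
    double field[4096];                 /* a large payload */
    int    count;
};

/* By value: the whole struct is copied into the formal parameter. */
double total_by_value(struct state s)
{
    double sum = 0.0;
    for (int i = 0; i < s.count; i++)
        sum += s.field[i];
    return sum;
}

/* By pointer: only an address is passed; nothing is copied. */
double total_by_pointer(const struct state *s)
{
    double sum = 0.0;
    for (int i = 0; i < s->count; i++)
        sum += s->field[i];
    return sum;
}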
Optimizing loop structures
Since scientific
programs spend almost all their time in loops, efficient loops are
the key to good performance. Conditionals, function calls, little
instruction level parallelism, and large numbers of temporary values
make it difficult for the compiler to generate tightly packed, highly
efficient code. Conditionals and function calls introduce jumps that
disrupt code flow. Users should eliminate or isolate conditionals to
their own loops as much as possible. Often logical expressions can
be substituted for if-then-else statements. For example, code 2
includes the following snippet
MaxDelta = 0.0
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    if (Delta > MaxDelta) MaxDelta = Delta
  enddo
enddo
if (MaxDelta .gt. 0.001) goto 200
Since the only use of
MaxDelta is to control the jump to 200 and all that matters is
whether or not it is greater than 0.001, I made MaxDelta a
boolean and rewrote the snippet as
MaxDelta = .false.
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
  enddo
enddo
if (MaxDelta) goto 200
thereby eliminating the
conditional expression from the inner loop.
A microprocessor can
execute many instructions per instruction cycle. Typically, it can
execute one or more memory, floating point, integer, and jump
operations. To be executed simultaneously, the operations must be
independent. Thick loops tend to have more instruction level
parallelism than thin loops. Moreover, they reduce memory traffic
by maximizing data reuse. Loop unrolling and loop fusion are two
techniques to increase the size of loop bodies. Several of the codes
studied benefited from loop unrolling, but none benefited from loop
fusion. This observation is not too surprising since it is the
general tendency of programmers to write thick loops.
As loops become thicker,
the number of temporary values grows, increasing register pressure.
If registers spill, then memory traffic increases and code flow is
disrupted. A thick loop with many temporary values may execute
slower than an equivalent series of thin loops. The biggest gain
will be achieved if the thick loop can be split into a series of
independent loops eliminating the need to write and read temporary
arrays. I found such an occasion in code 10 where I split the loop
do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do
into two disjoint loops
do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
  end do
end do
do i = 1, n
  do j = 1, m
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do
Conclusions
Over the course of the last year, I have had the opportunity to work with
over two dozen academic scientific programmers at leading research universities.
Their research interests span a broad range of scientific fields. Except for two
programs that relied almost exclusively on library routines (matrix multiply and
fast Fourier transform), I was able to improve significantly the single processor
performance of all codes. Improvements range from 2x to 15.5x with a simple average
of 4.75x. Changes to the source code were at a very high level. I did not use
sophisticated techniques or programming tools to discover inefficiencies or effect
the changes. Only one code was parallel despite the availability of parallel systems
to all developers.
Clearly, we have a
problem—personal scientific research codes are highly
inefficient and not running in parallel. The developers are unaware of
simple optimization techniques to make programs run faster. They
lack education in the art of code optimization and parallel
programming. I do not believe we can fix the problem by publishing
additional books or training manuals. To date, the developers in
question have not studied the books or manuals available, and are
unlikely to do so in the future.
Short courses are a
possible solution, but I believe they are too concentrated to be of much
use. The general concepts can be taught in a three or four day
course, but that is not enough time for students to practice what
they learn and acquire the experience to apply and extend the
concepts to their codes. Practice is the key to becoming proficient
at optimization.
I recommend that
graduate students be required to take a semester length course in
optimization and parallel programming. We would never give someone
access to state-of-the-art scientific equipment costing hundreds of
thousands of dollars without first requiring them to demonstrate that
they know how to use the equipment. Yet the criterion for time on
state-of-the-art supercomputers is at most an interesting project.
Requestors are never asked to demonstrate that they know how to use
the system, or can use the system effectively. A semester course
would teach them the required skills. Government agencies that fund
academic scientific research pay for most of the computer systems
supporting scientific research as well as the development of most
personal scientific codes. These agencies should require graduate
schools to offer a course in optimization and parallel programming as
a requirement for funding.
About the Author
John Feo received his Ph.D. in Computer Science from
The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked
at Lawrence Livermore National Laboratory where he was the Group Leader of
the Computer Research Group and principal investigator of the Sisal Language
Project. In 1997, Dr. Feo joined Tera Computer Company where he was project
manager for the MTA, and oversaw the programming and evaluation of the MTA at
the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems
as an HPC application specialist. He works with university research groups to
optimize and parallelize scientific codes. Dr. Feo has published over two dozen
research articles in the areas of parallel programming, parallel programming
languages, and application performance.