The N-Body problem addresses the gravitational evolution of
astrophysical systems. It models the dynamical changes of a
continuous mass distribution by approximating it with a number of
particles interacting solely under the influence of gravity. The most
computationally expensive part of this calculation is the evaluation of
the gravitational interactions between the particles that represent the
mass distribution; these interactions must be computed for 500 to
10,000 time-steps in typical astrophysical simulations. The choice of
*N* is critical to the quality of astrophysical simulation results
(see Figure 1). It is important
to use values of *N* that do not compromise the validity of models
for the set of physical phenomena under investigation. To this end,
*N* should be greater than for computations that aim to determine
cosmological parameters (like the density of the Universe) through
simulations of clusters of galaxies [24]. Other
useful N-Body simulations conducted by our group use *N* equal to
250,000 for Local Group runs, 1.3 million (10,000 time-steps) for a
Virgo cluster of galaxies (Figure 1), and 3 million (700
time-steps) for a 100Mpc CDM Cosmological Volume.
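The pairwise interactions that dominate the cost above can be sketched as a direct summation over all particle pairs, which scales as O(N²) per time-step. The sketch below is illustrative only (it is not PKDGRAV's method); the function name, units (G = 1), and the Plummer softening length `eps` are assumptions for the example.

```python
import numpy as np

def direct_accelerations(pos, mass, G=1.0, eps=1e-3):
    """O(N^2) direct-summation gravitational accelerations.

    pos  : (N, 3) array of particle positions
    mass : (N,) array of particle masses
    eps  : Plummer softening length, which avoids singular forces
           when two particles approach each other closely
    """
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        dx = pos - pos[i]                 # vectors from particle i to all others
        r2 = np.einsum('ij,ij->i', dx, dx) + eps**2
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                   # exclude self-interaction
        acc[i] = G * np.einsum('j,ji->i', mass * inv_r3, dx)
    return acc
```

For the 1.3 million-particle Virgo run mentioned above, this naive scheme would require roughly 1.7 × 10¹² pair interactions per time-step, which is precisely what motivates the faster methods discussed next.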

**Figure 1:** A comparison of a CRAY simulation with 67 thousand particles
to a KSR-1 simulation with 1.3 million particles, shown at a redshift
of 2. The CRAY run was performed with a vectorized tree-code, whereas
the KSR run used the PKDGRAV code.

Improvements in the performance and quality of N-Body simulations have been sought in four general areas: 1) faster calculation of the gravitational accelerations; 2) multi-stepping, which reduces the number of time-steps taken by particles in regions of the simulation where longer dynamical times are expected [24][25]; 3) volume renormalization, where regions of interest are identified and populated with a greater density of particles than the surrounding volume [21][20]; and 4) the use of parallel and vector supercomputing techniques.
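The multi-stepping idea in area 2) can be sketched as a power-of-two ("rung") time-step hierarchy, where each particle's step is tied to its local dynamical time. The criterion below, dt ∝ sqrt(eps/|a|), is one commonly used form; the function name, the accuracy parameter `eta`, and its default value are illustrative assumptions, not the specific scheme of [24][25].

```python
import numpy as np

def block_timesteps(acc, eps, dt_max, eta=0.2):
    """Assign power-of-two subdivided time-steps ("rungs") per particle.

    acc    : (N, 3) array of current particle accelerations
    eps    : gravitational softening length
    dt_max : the longest (base) time-step of the simulation
    eta    : dimensionless accuracy parameter

    Each particle gets an individual step dt_i = eta * sqrt(eps / |a_i|),
    then is placed on the largest rung dt_max / 2**r not exceeding dt_i,
    so particles in dense, short-dynamical-time regions step more often.
    """
    amag = np.linalg.norm(acc, axis=1)
    dt_i = eta * np.sqrt(eps / np.maximum(amag, 1e-30))
    rung = np.ceil(np.log2(dt_max / np.minimum(dt_i, dt_max))).astype(int)
    rung = np.maximum(rung, 0)
    return rung, dt_max / 2.0 ** rung
```

Synchronizing rungs at powers of two keeps all particles aligned at the end of every base step, which is what makes the hierarchy cheap to manage.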

The need for rapid calculation of the gravitational accelerations has
led to two basic approaches. The first uses grid techniques, relying
mainly on the speed of FFT algorithms for the calculation of the
gravitational field. This class includes the algorithms of
[17] and [9]. The other
approach uses multipole expansions within a hierarchical description
of the mass distribution (a tree). This second class includes well
known algorithms such as the Barnes-Hut tree code [2][1],
the Fast Multipole Method [14][13] and other variants
including the *Parallel k-D Tree Gravity code* (PKDGRAV).
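The essential idea behind the multipole approach can be illustrated at its lowest order: a distant group of particles is replaced by a single point of their total mass at their center of mass, and the error of this monopole approximation falls off as (group size / distance)². The sketch below is a toy illustration with G = 1, not the expansion actually carried out by these codes; all function names are assumptions.

```python
import numpy as np

def direct_acc(pos_clump, mass_clump, x):
    """Exact acceleration at point x from every particle in the clump."""
    d = pos_clump - x
    r3 = np.linalg.norm(d, axis=1) ** 3
    return (mass_clump[:, None] * d / r3[:, None]).sum(axis=0)

def monopole_acc(pos_clump, mass_clump, x):
    """Lowest-order multipole: total mass placed at the center of mass."""
    M = mass_clump.sum()
    com = (mass_clump[:, None] * pos_clump).sum(axis=0) / M
    d = com - x
    return M * d / np.linalg.norm(d) ** 3
```

A tree code applies this replacement recursively: each node of the hierarchy stores its multipole moments, and a cell is "opened" into its children only when it is too close (relative to its size) for the truncated expansion to meet the error tolerance.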

All of the approaches mentioned above introduce some level of approximation into the calculated gravitational accelerations. Comparing the relative efficiency of different approaches is therefore difficult, because one must account for factors such as the type of physical problem under study and the distributions and correlations of the acceleration errors. For this reason, such a comparison is not attempted in this article. Instead, we focus on performance results for PKDGRAV at error tolerances that we have found to be adequate for typical astrophysical problems.