The two major physics discoveries of the first part of the twentieth century, quantum mechanics and Einstein's theory of special relativity, present new challenges when treated together. The energy "uncertainty" introduced by quantum theory combines with the mass-energy equivalence of special relativity to allow quantum fluctuations to create particle/anti-particle pairs when the theories are merged. As a result, there is no self-consistent theory that generalizes the simple, one-particle Schrödinger equation into a relativistic quantum wave equation.

The most successful approach to this problem, developed in the early 1930s, begins not with a single relativistic particle but with a relativistic classical field theory, such as Maxwell's theory of electromagnetism. This classical field theory is then "quantized" in the usual way, and the resulting quantum field theory realizes a consistent combination of quantum mechanics and relativity. However, this theory is inherently a many-body theory, with the quanta of the normal modes of the classical field having all the properties of physical particles.

The resulting many-particle theory can be handled relatively easily if the particles are heavy on the energy scale of interest or if the underlying field theory is essentially linear. Such is the case for atomic physics, where the electron-volt energy scale for atomic binding is about a million times smaller than the energy required to create an electron-positron pair and where the Maxwell theory of the photon field is essentially linear.

However, the situation is completely reversed for the theory of the quarks and gluons that compose the strongly interacting particles in the atomic nucleus. While the natural energy scale of these particles, the proton, the ρ meson, etc., is on the order of hundreds of millions of electron volts, the quark masses are about one hundred times smaller. Likewise, the gluons are quanta of a Yang-Mills field which obeys highly non-linear field equations. As a result, strong interaction physics has no known analytical approach, and numerical methods offer the only possibility, at least at present, for making predictions from first principles and developing a fundamental understanding of the theory.

This theory of the strongly interacting particles, quantum chromodynamics or QCD, is especially interesting because the non-linearities in the theory have dramatic physical effects. One coherent, non-linear effect of the gluons is to "confine" both the quarks and gluons so that none of these particles can be found directly as excitations of the vacuum. Likewise, a continuous "chiral symmetry", normally exhibited by a theory of light quarks, is broken by the condensation of chirally oriented quark/anti-quark pairs in the vacuum. The resulting physics of QCD is thus entirely different from what one would expect from the underlying theory, with the interaction effects having a dominant influence.

The most successful numerical approach to quantum field theory begins with
a formulation of quantum mechanics developed by Feynman in which a quantum amplitude is
described as a weighted integral over all possible paths (not necessarily obeying the
classical equations) which start at the system's initial state and end at the final state.
For single-particle quantum mechanics the quantum amplitude ⟨q_f(t_f)| q_i(t_i)⟩ for a transition from position q_i at time t_i to position q_f at time t_f is written as:

⟨q_f(t_f)| q_i(t_i)⟩ = ∫ 𝒟q(t) e^{iA[q]}

where A[q(t)] is the classical action for the path q(t) given by

A = ∫_{t_i}^{t_f} { ½ m (dq/dt)² − V(q(t)) } dt

This is a sophisticated integration over function space, closely related to Wiener integration, and is typically an awkward formalism for analytic calculation. However, it is nicely suited to numerical work since it replaces the usual operator/Hilbert-space formalism of quantum mechanics with an explicit integral.
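As an illustration of how such a path integral becomes numerically tractable, the sketch below discretizes the imaginary-time (Euclidean) version of the action for a harmonic oscillator, V(q) = ½mω²q², and samples paths with a Metropolis update. This is a minimal sketch under illustrative assumptions, not the authors' code: the number of time slices, step sizes, and parameters are arbitrary choices, and it is the Wick rotation of the weight e^{iA} to e^{−A_E} that makes Monte Carlo sampling possible.

```python
import math
import random

def euclidean_action(path, m=1.0, w=1.0, dt=0.1):
    """Discretized Euclidean action for a harmonic oscillator.

    The continuous path q(t) is replaced by a list of values on a
    periodic grid of time slices with spacing dt; the integral becomes
    a finite sum over slices.
    """
    A = 0.0
    N = len(path)
    for i in range(N):
        dq = path[(i + 1) % N] - path[i]          # periodic in time
        A += 0.5 * m * (dq / dt) ** 2 * dt        # kinetic term
        A += 0.5 * m * w ** 2 * path[i] ** 2 * dt  # potential term
    return A

def metropolis_sweep(path, step=0.5, **kw):
    """One Metropolis sweep: propose a shift at each slice and accept
    with probability min(1, exp(-(A_new - A_old)))."""
    for i in range(len(path)):
        old = path[i]
        a_old = euclidean_action(path, **kw)      # O(N) per update; fine for a sketch
        path[i] += random.uniform(-step, step)
        accept = math.exp(min(0.0, a_old - euclidean_action(path, **kw)))
        if random.random() >= accept:
            path[i] = old                         # reject: restore old value
    return path
```

Averaging an observable such as q² over many sweeps of such paths then approximates the corresponding quantum expectation value.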

The path integral appropriate for quantum field theory is similar to the equation above except that the integration must be performed over all possible time evolutions of field configurations rather than particle trajectories. In our physical problem, a field configuration specifies both the quark and gluon fields as particular functions of space. A particular time evolution then specifies these fields as functions of space and time. This problem is easily put in a numerically tractable form by replacing the space-time continuum with a grid or lattice of points, conventionally a uniform, four-dimensional mesh.

The field theory analogue of the single-particle action given in the equation above is a similar polynomial in the field variables and their derivatives, integrated over space-time. Thus, the corresponding discrete field theory action will be a four-dimensional sum of a local density which depends on the lattice field variables at a specific lattice site and its nearest neighbors.
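To make this concrete, here is a minimal sketch of such a discrete action, using a free scalar field rather than QCD (an assumption made purely for brevity): the action is a sum over all lattice sites of a local density built from nearest-neighbor differences plus a mass term. The lattice size L and mass m are illustrative choices.

```python
import itertools
import random

L = 4  # sites per dimension of the 4-d lattice (illustrative)

def lattice_action(phi, m=0.5):
    """Discrete action for a free scalar field on a periodic 4-d lattice.

    phi is a dict mapping each site (a 4-tuple) to a field value.
    The density at each site involves only that site and its
    nearest neighbors, as described in the text.
    """
    A = 0.0
    for site in itertools.product(range(L), repeat=4):
        val = phi[site]
        for mu in range(4):                       # forward difference in each direction
            nb = list(site)
            nb[mu] = (nb[mu] + 1) % L             # periodic boundary conditions
            A += 0.5 * (phi[tuple(nb)] - val) ** 2
        A += 0.5 * m ** 2 * val ** 2              # local mass term
    return A

# a random field configuration, one value per lattice site
phi = {s: random.gauss(0.0, 1.0)
       for s in itertools.product(range(L), repeat=4)}
```

The same site-plus-nearest-neighbor locality is what lets each processor of a parallel machine work almost independently on its own subvolume.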

The actual integration appropriate for the lattice QCD evaluation of an observable O is typically performed as a Monte Carlo average,

⟨0| O |0⟩ = (1/N) Σ_{n=1}^{N} O({U}_n),

over an ensemble of configurations of the gluon fields {U}_n, 1 ≤ n ≤ N. Each configuration assigns a specific 3×3 complex matrix U to each link connecting neighboring sites in the lattice. The ensemble used above is generated by a Metropolis or molecular dynamics algorithm so as to be distributed according to the positive-definite statistical weight:

e^{(β/6) Σ_P tr U_P} det(D + m)

The sum is over all elementary squares, or plaquettes, P, that can be constructed out of four lattice links, and U_P is the ordered product of the U matrices associated with those links. The quark fields correspond to anti-commuting classical variables and cannot be treated numerically as an integral; instead they are represented by the determinant above. Here D is a nearest-neighbor difference operator and m the quark mass matrix. Typically, the force generated by the determinant is computed using a noisy estimator, which can be done using, for example, a conjugate gradient method to compute the inverse of the sparse matrix D + m. In addition to the quark mass matrix m, the coupling strength, related to the parameter β, is the only other free parameter in the calculation.
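The conjugate gradient iteration mentioned above can be sketched generically. The code below is a textbook CG solve applied to a toy symmetric positive-definite "nearest-neighbor plus mass" matrix standing in for D + m (an assumption for illustration only); in a real lattice code the matrix is never stored, and the matrix-vector product applies D + m to a field living on the whole lattice.

```python
def cg(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A given only the
    matrix-vector product `matvec`, by conjugate gradients."""
    x = [0.0] * len(b)
    r = list(b)                    # residual r = b - A x (x starts at 0)
    p = list(r)                    # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:           # squared residual small enough
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def toy_matvec(v, m=0.5):
    """Toy stand-in for D + m: a periodic 'nearest-neighbor difference
    plus mass' operator in one dimension (positive definite for m > 0)."""
    n = len(v)
    return [(2.0 + m) * v[i] - v[(i - 1) % n] - v[(i + 1) % n]
            for i in range(n)]
```

Because only `matvec` is needed, the same iteration parallelizes naturally: each processor applies the nearest-neighbor operator to its own subvolume and exchanges boundary data with its neighbors.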

This is an ideal formulation for massively parallel computing. A typical large-scale lattice calculation might work with a 32^4 hypercubic lattice. If each processor in a parallel machine is assigned a 4^4 subvolume, 4K processors would be required. The most computationally demanding part of a conjugate-gradient iteration requires about 500 floating point operations per lattice site, or 128K flops per processor. A 3-component complex vector must be transferred both in and out of the processor for each link that joins the 4^4 subcube to its neighbors, or 4^3 × 8 × 6 × 2 ≈ 6K words total. This suggests a reasonably favorable 20:1 computation-to-communication load for each processor.
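The arithmetic behind this estimate can be checked directly; the short script below simply reproduces the counting in the text (the 500 flops per site is the text's figure, not a derived number).

```python
# Back-of-the-envelope check of the compute/communication estimate:
# a 4^4 subvolume per processor, ~500 flops per site per CG iteration,
# and a 3-component complex vector crossing each boundary link.

sites = 4 ** 4                 # lattice sites per processor subvolume
flops = 500 * sites            # flops per CG iteration per processor

# boundary traffic: 4^3 sites per face, 2 faces per dimension in
# 4 dimensions (8 faces), 6 real words per 3-component complex
# vector, transferred both in and out (x2)
words = 4 ** 3 * 8 * 6 * 2

ratio = flops / words
print(sites, flops, words, round(ratio))   # 256 128000 6144 21
```

The resulting ratio of roughly 20 flops per word communicated is what makes the problem map well onto a machine with modest inter-processor bandwidth.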

The most obvious application of this numerical approach to quantum field theory is the first-principles study of the physics of QCD. There are a variety of low-energy properties of the strongly interacting particles, for example mass ratios and specific weak decay amplitudes, that have been very well measured for decades and should now be computed using this fundamental approach. Such calculations provide tests of the underlying theory and confidence in our control of numerical errors. The present state of the art in these calculations may be discerned from the proceedings of the annual lattice meeting [1]. Although much progress has been made over the past decade, present calculations contain systematic errors at perhaps the 5-10% level from finite lattice spacing effects, and possibly uncontrolled errors coming from working with quark masses which are too heavy and from the frequent neglect of the computationally demanding det(D + m) factor in the above statistical weight. The new machine at Tsukuba, CP-PACS, and our new QCDSP machine represent a 10-100-fold improvement over present resources and should make significant progress in reducing these errors.

Perhaps even more important than providing concrete numerical evidence for the validity of QCD, these numerical methods offer the possibility of predicting a variety of strong interaction effects that are either important in their own right or required to extract fundamental physical parameters from experiment. Two classic examples are the study of the QCD phase transition and the calculation of the quark masses. Major experimental efforts at the Brookhaven and CERN accelerator laboratories use heavy-ion collisions to create a very short-lived high-temperature region in which the quark/anti-quark condensate mentioned above melts, producing a new, chirally symmetric state of matter. Lattice QCD calculations are predicting with increasing confidence the temperature of the transition to this new quark-gluon plasma, its equation of state and the latent heat of the transition. Because the quarks are confined, their masses can be computed only indirectly from the masses of the bound states in which the quarks appear. These underlying quark masses are of great fundamental importance, being among the few intrinsic properties known of these structure-less particles and possibly holding clues to their origin in some more fundamental theory. Although a variety of analytical methods have been developed to compute these masses, all are inherently uncertain, with errors perhaps as large as 100%. Lattice calculations have already improved on these results, reducing errors to the ≈25% level.

A third use of this numerical approach to relativistic quantum field theory is more speculative in nature. Given the dominant role played by the non-linear interactions in QCD, it is natural to wonder if other strongly-coupled field theories might also exhibit unusual properties, revealing a dynamics very different from what might be naively guessed based on a simple linearization of the theory. This suggests a type of experimental computational physics in which one simulates a variety of potentially interesting theories in the hope of discovering new, possibly useful behavior. By varying some of the elements of QCD, for example the dimension of the Yang-Mills group or the number of species of light quarks, we may gain a deeper understanding of the physics of QCD. We may also discover new behavior that could suggest the form for new theories of matter on the next higher scale of energy. At present there is no successful theory explaining the observed families of quarks and leptons, their masses or the pattern of weak, electromagnetic and strong interactions that they experience. New, candidate theories would certainly be of interest.

1. Lattice 96, Proceedings of the International Symposium on Lattice Field Theory, edited by C. Bernard, M. Golterman, M. Ogilvie and J. Potvin.

This description was adapted from a talk given by one of us (N. Christ) at the International Symposium on Parallel Computing in Engineering and Science, Tokyo, January 27-28, 1997.