Global Positioning Systems may help geologists determine which faults they should be most concerned with before an earthquake occurs.

On Oct. 17, 1989, a magnitude 7.1 earthquake struck in the vicinity of Loma Prieta, in the southern Santa Cruz Mountains. The quake caused 62 deaths, 3,757 injuries, and over $6 billion in property damage from Santa Cruz to San Francisco and Oakland, sixty miles to the northeast. Less than two years before, a U.S. Geological Survey (USGS) study had indicated that this segment of the San Andreas Fault was one of six with a 30 percent probability of experiencing a magnitude 6.5 or larger earthquake within 30 years.

In light of the data provided by that event, a revised USGS report concluded that a 33 percent probability now exists for one or more large earthquakes in the San Francisco Bay Area region between 1990 and 2000. The level of reliability for this assessment is one step below "most reliable." The report states that since the analysis is based only on the region's three major faults -- San Andreas, Hayward and Rodgers Creek -- "the calculated probabilities are necessarily a minimum estimate of the hazard." It also points out that "the most densely populated parts of the area lie atop of, or adjacent to, fault segments having the greatest potential for large earthquakes."

PROBABILITY ASSESSMENT

Scientists agree that earthquakes cannot be predicted, and perhaps never will be, given that every fault behaves differently. But with sufficient and timely data, and records of past earthquakes, the probability of a major event occurring on a given fault segment within a given time can be estimated with varying levels of reliability. Since little data existed on the fault segment near Loma Prieta, the probability estimate for an event there carried a minimum level of reliability.
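
To make the idea of such an estimate concrete, consider the simplest possible model: treat earthquakes on a segment as a random (Poisson) process governed only by the segment's mean recurrence interval. The Python sketch below illustrates that simplified view; the function name and figures are hypothetical, and the actual USGS assessments rest on more elaborate, time-dependent models.

    import math

    def chance_of_quake(mean_recurrence_years, window_years):
        # Probability of at least one event in the window, assuming
        # earthquakes arrive as a Poisson process with the given mean
        # recurrence interval -- a deliberate simplification of the
        # time-dependent models the USGS actually uses.
        rate = 1.0 / mean_recurrence_years        # expected events per year
        return 1.0 - math.exp(-rate * window_years)

    # A segment with a 100-year mean recurrence interval, evaluated
    # over the 30-year horizon used in the reports described above.
    print(f"{chance_of_quake(100, 30):.0%}")      # prints "26%"

Even in this crude model, a shorter recurrence interval or a longer window raises the probability, which is why the time elapsed since the last event figures so heavily in the assessments.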

Probability assessments are largely based on records of historic events -- surface deformation, fault-slip rates, and the length of time since the last event. If the previous earthquake predates the historical record, geologists must dig a trench across the fault, locate displacements in the strata, and analyze them to determine when the event occurred and how much deformation it produced.

Fault-slip rate refers to the continuous subterranean movement, or slipping, of active faults that takes place between earthquakes. Since this is on the order of 1 to 3 centimeters a year, the surface deformation it produces occurs too slowly to excite measurable seismic waves. Fault-slip rates, therefore, must be derived by geodetic measurement of the small but detectable surface deformations caused by that movement.
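
As a hypothetical illustration of how a slip rate might be derived, the sketch below fits a straight line to a series of invented benchmark surveys; the slope of that line is the estimated rate. The data are made up for illustration, and real estimates draw on networks of geodetic stations rather than a single marker.

    # Invented survey data: fault-parallel offset of one benchmark,
    # measured at two-year intervals over a decade.
    years = [0, 2, 4, 6, 8, 10]
    offsets_cm = [0.0, 2.1, 3.9, 6.2, 7.8, 10.1]

    # The least-squares slope of offset against time is the estimated
    # slip rate in centimeters per year.
    n = len(years)
    mean_t = sum(years) / n
    mean_x = sum(offsets_cm) / n
    rate = (sum((t - mean_t) * (x - mean_x) for t, x in zip(years, offsets_cm))
            / sum((t - mean_t) ** 2 for t in years))

    print(f"Estimated slip rate: {rate:.2f} cm/year")   # about 1.00 cm/year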

By contrast, the fault slip that occurs during a major event is sudden and considerably greater. In the Loma Prieta earthquake, the Pacific plate pushed 6 feet to the northwest and thrust 4 feet up, over the North American plate, all in a matter of seconds. USGS geophysicist Nancy King describes the process: "At depth on the fault, rocks are warm and ductile enough that friction is not an issue, they are slipping all the time. Strain builds up at shallow depths [of 10 to 15 kilometers] because rocks there are cold and brittle; they resist and lock up while the lower part continues moving. When the shallow part finally lets go, it catches up with the lower part, but it moves so quickly that it excites seismic waves. That is what we feel as an earthquake."
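
The stick-slip cycle King describes can be caricatured in a few lines of code: the deep fault creeps steadily, the shallow patch stays locked until the accumulated slip deficit exceeds its strength, and then it catches up all at once. The parameters below are illustrative, not calibrated to any real fault.

    creep_rate = 1.0        # steady slip at depth, cm per year
    threshold = 100.0       # slip deficit the locked patch can sustain, cm
    deep_slip = 0.0
    shallow_slip = 0.0

    for year in range(1, 301):
        deep_slip += creep_rate
        deficit = deep_slip - shallow_slip
        if deficit >= threshold:         # the locked patch "lets go"
            shallow_slip = deep_slip     # sudden catch-up: an earthquake
            print(f"Year {year}: earthquake with {deficit:.0f} cm of slip")

The recurrence interval that falls out of this caricature is simply the threshold divided by the creep rate -- the same back-of-the-envelope arithmetic King describes next.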

DETERMINING FAULT SLIP

King emphasizes that fault-slip rate is a key factor in determining earthquake probability. "If we know that a fault segment is slipping at a rate of 1 centimeter per year, and the slip in the last big earthquake was 1 meter (100 centimeters), then a back-of-the-envelope guess at the recurrence time is 100 years. In real life, as we have found in the past 20 years, it is not quite this easy to predict earthquakes. In fact, we can't do it. Instead, we use fault-slip rate, along with