The Best of Creative Computing Volume 1 (published 1976)


Computer Simulation of the Atmosphere (predicting climatic trends)


Since the nineteen-hundreds, weather scientists have known that all weather is
part of a complex global fabric, and that conditions in one region are affected
by those in neighboring regions. However, with inadequate knowledge of
atmospheric physics and poor data-gathering facilities, global forecasting
remained a dream until after World War II. The war sparked vigorous weather
research, and meteorologists for the first time began building numerical models
that bore some resemblance to reality. More important, the first computers,
originally used for ballistics ranging, became available for peacetime use. In
1946, famed computer pioneer John Von Neumann saw the value of high-speed
computing for meteorology and began to assemble a group of brilliant young
scientists at Princeton University. Using a machine known as the MANIAC (for
Mathematical Analyzer, Numerical Integrator and Computer), Von Neumann's group
in 1950 made a first, and wildly successful, computer run of their model. But
later tests revealed inadequacies; according to one account, the computer once
forecast a blizzard for Georgia in July.
Since then, computers and models alike have grown steadily more sophisticated.
Still, computer simulation remains an expensive and arcane specialty
flourishing at only a handful of laboratories, including U.C.L.A., the Rand
Corporation, the National Center for Atmospheric Research in Boulder, Colo.,
England's Meteorological Office and Princeton, where the descendants of the
original group have continued Von Neumann's work. Now funded by the National
Oceanic and Atmospheric Administration, the Princeton group is using the
world's largest and fastest computer, an Advanced Scientific Computer made by
Texas Instruments.
For purposes of numerical simulation, the earth's entire atmosphere is divided
into boxes extending several hundred kilometers on a side and a kilometer or so
in depth. A typical model may deal with 60,000 of these boxes. The computer is
fed information about the boxes and about the basic laws of physics. It is then
asked to compute, on the basis of these laws, what will happen to the molecules
in each of the boxes as temperature, humidity and wind speed change in
neighboring boxes. In other words, it is asked to predict the weather all over
the world, and to repeat this prediction every five minutes or so for as long
as the model holds together.
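The box-by-box bookkeeping described above can be sketched in miniature. This
is not the Princeton model, just a toy one-dimensional update in which each
"box" holds a temperature that relaxes toward its neighbors' values once per
time step; the box count, coupling constant and step count are arbitrary.

```python
def step(temps, alpha=0.1):
    """Advance a ring of 'boxes' one time step: each box's temperature
    is nudged toward the average of its two neighbors (simple diffusion)."""
    n = len(temps)
    return [
        temps[i] + alpha * (temps[(i - 1) % n] + temps[(i + 1) % n] - 2 * temps[i])
        for i in range(n)
    ]

# Start with one warm box among eight; repeated steps spread the warmth
# outward, box by box, just as each real grid cell responds to its neighbors.
temps = [0.0] * 8
temps[0] = 10.0
for _ in range(5):
    temps = step(temps)
```

A real model carries many more quantities per box (humidity, wind, pressure)
and uses the full equations of atmospheric physics, but the structure, namely
updating every box from its neighbors at each short time step, is the same.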
The accuracy and range of the prediction obviously depend upon the reliability
of the data and the model, and perhaps upon some intrinsic limits not yet
understood. "We're now issuing five-day forecasts," says Donald Gilman, head
of the long-range forecast division of the National Weather Service. "The
consensus is that these models may let us see 10 to 14 days ahead for our
daily predictions, although estimates range from one to four weeks. We are
appreciably more accurate than we were 20 years ago, but it may be difficult
to go on from here. That's one of the things the Global Atmospheric Research
Program is designed to tell us: how much further we can expect to get. These
models are very sensitive to little disturbances. If you give the model any
sort of random kick, such as an error in wind speed on day one, the results
you get three months later are very, very different from what you get without
the kick. It will be very difficult to distinguish small but real atmospheric
disturbances from random background 'noise.'"

To predict climatic trends years or decades in advance, it is clearly
impractical to recompute the world's weather every five minutes. Even with large
"boxes," it takes tens of hours to run a model for a prediction of a week or
two. With finer, more accurate grids, say 65 kilometers on a side, computation
time becomes prohibitive.
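The cost penalty of finer grids can be made concrete with back-of-the-envelope
arithmetic (my own figures, not the article's, assuming the time step must
shrink in proportion to the box size, as numerical stability usually demands):
refining the grid by a factor k multiplies the work by roughly k cubed.

```python
# Refining from ~300 km boxes to the 65 km boxes mentioned in the text.
coarse_km, fine_km = 300.0, 65.0
ratio = coarse_km / fine_km            # linear refinement factor, about 4.6

boxes_factor = ratio ** 2              # boxes per layer grow with area
time_factor = ratio                    # shorter boxes usually force shorter steps
work_factor = boxes_factor * time_factor   # total work grows ~ ratio**3
```

By this estimate the 65 km grid costs nearly a hundred times as much computing
as the coarse one, which is why a run that already takes tens of hours becomes
prohibitive at finer resolution.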

Copyright 1974 by The New York Times Company. Reprinted with permission.

[image] Less than 1% of the Earth’s surface water is drinkable. While 70% of
the earth’s surface is covered with water, only 1% is “fresh” and a
substantial amount of that is polluted.

NASA has a satellite (ERTS) 570 miles in space…to monitor Earth's water
resources. The Earth Resources Technology Satellite orbits over the same spots
on Earth every 18 days. It can thus detect deterioration of water resources.
