Humanity

The ice was always melting back. Rivers were getting wider and flowing over new lands. Warm times were lasting longer. Forests were growing larger and richer, as were the big grasslands and their wild grains. Some 12,000 years ago, hunters were in the embrace of a stable, congenial climate stretching back farther than memory, perhaps farther than the stories told by the elders. For hundreds of years, the ice had been shrinking and a large swath of landscape had been bathing in steadily increasing warmth. In the fertile lands of West Asia, clans of an ancient people that would become known as the Natufians had long been accustomed to the good hunting of game and the easy gathering of fruits and wild grains. This was the way of the world for people 12,000 years ago. Nothing in the sky or the sea or on the land and nothing in memory warned them of what was to come.

The change in climate was a surprise of life-bending power, as if the whole world turned against the clans. Cold, dry winds began sweeping the countryside, withering every living thing in their paths. First the fruits were lost and then the grasslands, and eventually the forests were driven back. The rivers shrank and some became choked with advancing ice. Without warning or recognition, in just a few years, a colder, harder, shorter, and more difficult way of life set in. From beginning to end, as far as the Natufians were concerned, the time of wind and cold lasted forever. Like the old time of warmth and plenty, the hard new climate stretched out beyond memory, holding humanity in its grip for something like 470,000 days.

Some of the Natufian clans in western Asia responded to the change by migrating, as they always had done in the face of dwindling supplies of wild game and grains. Other clans adopted a new way of life. They settled in, cleared the land, and began to cultivate fields of einkorn wheat, barley, and rye. The new way of life sustained the Natufians and spread among all the people occupying the fertile lands of the eastern Mediterranean.

Then, apparently just as suddenly, about 10,800 years ago, this cold, stingy climate finally gave way to a benevolent new pattern that enriched the countryside. In the small spare hamlets in the hills above the eastern Mediterranean, the sedentary clans would have noticed the increasing number of mild days and welcomed the more frequent rains. The woodlands thickened, the lakes and rivers enlarged, and soon the clans—farmers now—began sowing seeds of grain in the new wetlands. The land fattened, and before long the clans grew larger and so too did their villages. In the span of just a few years, as food became more plentiful, lives grew longer and more congenial, and the villages soon became more populous and, before long, more elaborate.

What in the world happened? Why did the climate over much of Earth so suddenly reverse itself? What is it about the way of the world that makes such a thing possible? Why did this crippling cold, windy, dry regime hold sway over the sparsely populated lands of the Northern Hemisphere for 13 centuries?

Answers to these questions would be a long time coming. When the first evidence emerged from bogs in Scandinavia in the 1930s, no one had any idea what it meant, and in fact, researchers would puzzle over this abrupt climate change for the better part of the twentieth century. When earth scientists finally pried the answers from the ground, the students of human history, archeologists and anthropologists, began to recognize the change as a defining event in the civilizing of humanity—the sudden loss of wild food, provoking the transition from hunting and gathering to domestication and agriculture. By the end of the century, as the climate record came into sharper focus, a series of lower-magnitude climate shifts would emerge as transforming events in the rise and fall of civilizations.

Before scientists would be in a position to realize what had happened, however, they would have to invent new techniques for investigating the past and new ways of thinking about the character of climate. Fundamental mysteries of the comings and goings of ice ages were going to have to be solved, and the science itself would have to be reformed. Meanwhile, this catastrophic change of climate was given a perfectly obscure name and squirreled away in the literature of a science that was so new it didn't yet have a name.

The alpine plant Dryas octopetala thrives in conditions of climate and soil that most vegetation finds totally inhospitable. A delicate little white flower, Dryas is a mountain avens, of the family Rosaceae, a rose by another name. In the dry cold of Arctic and alpine mountainsides, it creeps along the rocks and clings to barren tundra clays that have been ground fine by glaciers. Treeless slopes that look over landscapes dominated by ice occasionally bloom with the white flowers of Dryas octopetala.

Early in the twentieth century, European botanists ingeniously fashioned their knowledge of plants—such as the cold tolerance of Dryas octopetala—into a powerful tool for the study of ancient climates. The work of pioneering botanists such as Knud Jessen at the University of Copenhagen was based on two critical findings: that a plant's individual pollen grains are shaped uniquely to its species, and that these pollen grains, although often appearing to be delicately structured, are virtually indestructible. Tiny pollen grains with their identifying features still intact could be found in soil deposits in which the leafy structure of their plants had long since decomposed. Jessen was among the first investigators to employ fossilized pollen grains as markers to trace the ancient advance and retreat of glaciers over northern Europe.

In the 1930s, when Jessen and others dug into the lakebeds and bogs of Scandinavia, an interesting pattern emerged. Soil differences were sharply defined, giving the sediments a layer-cake appearance. Dark, peaty bands were filled with the fossilized remnants of many plants that flourished in times of warm temperatures and abundant precipitation. Strips of light-colored, silty inorganic clays containing little more than the pollens of tundra-loving Dryas octopetala were signs of cold, dry times and the presence of glaciers. To distinguish between the bands of ice age clay, these pioneering researchers referred to the sequential appearances of the alpine flower. So the thickest, deepest, oldest band became the Oldest Dryas, and so on to the shallowest band of clay, laid down most recently, the climate shock that 12,000 years ago helped shape the course of human settlement and civilization—the Younger Dryas.

The explanations for this event and scores like it in recent geological history are remarkable, and to a surprising degree, they are key elements in modern descriptions of climate and its potential for change. At the turn of the twenty-first century, the effort to understand Earth's climate is one of the largest enterprises in science. It is the focus of hundreds of talented theorists, computer modelers, and dogged field investigators around the world. Great mysteries remain. They are different from those investigated by the Scandinavians as they dug into the bogs and lakebeds to sort out the pollens in the layered sediments, although in some respects the gaps in our knowledge are as wide now as they were then. As the eminent American climatologist Raymond T. Pierrehumbert recently observed, "Our understanding of climate is poor enough that if we did not have the paleoclimate data saying that the Younger Dryas did happen, we would never have anticipated anything like that."

Like research into many processes that left their evidence in the sediments of the earth, the study of ancient climates—paleoclimatology—grew up as a branch of the crusty science of geology. This practical circumstance critically shaped most thinking about climate throughout the twentieth century. Disparate processes such as the variations of climate, the evolution of species, the building of mountains, and the fluvial formation of valleys—all of these were understood to be different from one another, of course. But evidence for them came from the same material, was excavated by the same hands, and was interpreted at the time according to the same principles.

Central to the earth sciences is the concept of geological time, a scale that measures events in units of millions of years, and a certain assumption about the rate of geological change. Just about everyone who studies an earth science comes away with a lasting impression of this frame of reference. If Earth's 4.5 billion years is represented as a single day, humanity's time on the geological clock would be a fraction of the last minute. However valuable geological time is for studying Earth as a planet, the concept poses a particularly high barrier to the idea of abrupt climate change.

Long after they had come to terms with the mounting evidence that ice ages had come and gone several times in the past, researchers still resisted the idea that any of these changes had taken place rapidly. Even at their fastest rate, climate changes were calculated in thousands of years, units of time that measured the advance and retreat of glaciers, just about the last features of the climate system to respond to change. When the first evidence of the Younger Dryas emerged from the bogs of Scandinavia, mainstream geologists were thinking of glacier movements as local episodes of no particular global importance. They believed that the planet as a whole enjoyed a stable, "equable" climate.

The conventional view was expressed in 1906 by Scottish Professor John Walter Gregory as if it were the law of the land: "The first striking fact in the geological history of climate is that the present climate of the world has been maintained since the date of the earliest, unaltered, sedimentary deposits." Gregory's description of a "fairly constant" warm, moist atmosphere was entirely consistent with the prevailing geological thinking of the day—that Earth was a planet undergoing a very long, gradual process of cooling from a molten state. Geological processes such as mountain formation were the result of a slowly shriveling Earth, its crust wrinkling as it cooled, like the skin of a ripening fruit. The idea of ice ages waxing and waning, of ice sheets and glaciers moving back and forth over large areas of Earth, did not easily conform to such a notion.

Aside from volcanoes and earthquakes, themselves manifestations of timeless processes, nothing happens rapidly in that kind of geological world. And as long as the study of ancient climates was a child of geology, it was going to have to conform. For years to come, transitions between warmings and coolings were going to be accomplished at a respectable, uniformly gradual geological pace, over tens of thousands of years.

This question of the rate of change was no trivial matter to geologists. Nothing less than the integrity of their science was at stake. Then as now, life outside of the laboratory was full of people who were willing to believe a much wider range of wonderful stories about how the world works than mere science had to offer. Foremost among such believers were biblical fundamentalists with their supernatural catastrophes such as the purging flood wrought by an unhappy almighty God. Only after long, rancorous debate had nineteenth-century geologists abandoned the Great Flood as an explanation for what they saw and accepted an ice age past instead. There was no going back.

There were other stories, such as the fantastical pseudoscientific writings of Immanuel Velikovsky that were popular in the 1950s, competing with science. Velikovsky's cataclysmic vision of Earth's past shaped by colliding planets and asteroids is still kept alive by a cult of faithful followers.

Against this popular tide, geologists had staked a central principle: The present is the key to the past. No geological processes in the past happened any faster than any that happen at present. This concept of uniformity meant that the natural physical laws are constant over time.

Outside of geology, one brave soul, a renowned weather scientist who was accustomed to studying events that transpired at a very different pace, was willing to speculate publicly that really rapid catastrophic climate change was not beyond the realm of possibility. In 1922, the British meteorologist C. E. P. Brooks suggested the past had seen "a series of startling changes of climate which almost merit the term 'Revolutions' of the old catastrophic geologists." How the very thought must have appalled geologists of the day! A few years later, Brooks went even further, suggesting that the atmosphere was subject to certain feedback processes that could provoke major change as rapidly as in "a single season."

Although these speculations by a weather scientist were not taken seriously by geologists of the day, the ground under the old line of thinking was continuing to shift as debate dragged on about ice ages, about their number and their geographical extent, and especially about the mechanisms that might cause them. Out of this dialogue emerged an idea that would prove to be critical to the discovery of abrupt change: The climate system has more than one mode of operation—a cold phase and a warm phase, at least—and somehow it switches between them.

Among the first to express this new vision of an unstable climate was the respected U.S. Weather Bureau physicist William J. Humphreys. In 1932, in an Atlantic Monthly article entitled "This Cold, Cold World," Humphreys suggested that "it is a scientific certainty that we are not wholly safe from such a world catastrophe" as another ice age. "Perhaps nothing of this kind will happen for a long, long while, but sooner or later it surely will if the future can be inferred from the past," he wrote. "When this will be we have no sure means of knowing, but we do know that the climatic gait of this our world is insecure and unsteady, teetering, indeed, on an ice age, however near or distant the inevitable fall."

At mid-century, speculations such as these were so far outside of conventional thinking on these subjects that they did not even arouse debate. During this era, the science of climatology grew up as an applied discipline that itself was heavily invested in the idea that climate was essentially unchanging. Climatologists were in the business of compiling the statistics of weather—tabulating the data recorded by instruments that had been in place less than a century—and calculating its dominant patterns according to region and season. This information, this picture of "normal," was especially important to engineers who were designing dams, hydroelectric power plants, and other large public works projects. However unsteady the climatic gait in the eyes of weather scientists, however inevitable the fall to a new ice age, climatologists and geologists continued to hold the climate system of the past to the slow, gradual pace of change they saw in the climate system of the present.

The Younger Dryas and the reality of abrupt climate change did not emerge easily from the early data. Wrinkles and spikes in the record were open to a wide range of interpretations—seldom were they taken to be meaningful climate transitions. Dramatic changes in sediment layers under lakes, for example, most often were ascribed to local events such as floods or fires or landslides. The science was young, and the data were full of unexplained irregularities. In their searches for the big swaying rhythms of ice ages, geologists saw evidence of dramatic, rapid change as "local noise" that had to be filtered out of their reports.

New ways of investigating the past were going to be invented, and new technologies—inspired by the exploration for oil—were going to be brought to the problem. The early fossil pollen research was founded on a train of thought that would spawn a whole range of illuminating studies: One piece of evidence can "stand in," or serve as a proxy, for another. That is, although researchers can't directly measure the changes in temperature and other features of past climates, they can detect them indirectly in other biological and geological evidence.

Proxy evidence is a primary method of most modern investigations of ancient climates, ingeniously employing all kinds of data found on land and ice and in the sediments under the lakes and oceans of the world. In addition to fossil pollens, researchers would discover reliable climate proxy information in the chemistry or unique structure of the tiny shells of marine plankton, the fossilized dung heaps of packrats, the carcasses of beetles, the annual bands of growth in ocean corals, the growth rings of trees, the chemical makeup of the green eggshells of emu, and most important to the discovery of abrupt climate change, of course, the composition of one of the purest substances on the planet—polar ice.

Still, in the first half of the twentieth century, the pollen records that seemed so obvious in Scandinavia were not as obvious in North America, and at the time no one had any way to reliably correlate the record of the past found in one region with what looked like climate changes in another. Among the first to make the attempt was the Swedish geologist Gerhard De Geer. At the turn of the century, De Geer began studying the coupled layers—or varves—of clay exposed by the excavations for building foundations during the expansion of Stockholm. He recognized a recurring pattern of differing grain coarseness in the varves and correctly surmised that they had been caused by seasonal changes, the coarser grains being deposited during annual spring surges of melt water from nearby glaciers. Using methods of stratigraphic analysis that would become common to later climate investigations, De Geer had discovered a calendar. These regular seasonal laminations allowed him to count the years back in time. He developed meticulous chronologies of the past 12,000 years in Sweden. In 1920, De Geer traveled to North America, where he studied sediment laminations in New England and thought he recognized features from his Swedish varves. He failed to persuade his colleagues, however, and his pioneering methods of analysis fell out of favor with a whole generation of geologists.

At the same time, in the American Southwest, examining layers of another kind, another innovative researcher was inventing the science of dendrochronology (from the Greek dendron, tree, and chronos, time), the dating and study of annual growth rings of trees. It was Arizona astronomer Andrew Ellicott Douglass who first recognized that the differences in the widths of the growth rings of trees were related to food supply, to the availability of water, and to temperature and so were a record of climate change. Tree-ring researchers typically extract a core from a tree trunk, count and measure its annual growth rings, and determine the age of the wood by carbon-14 or other dating methods. A wide ring indicates plentiful water and good growing conditions, and a narrow ring often indicates drought. Douglass realized that the best climate records came from trees growing in "water-stressed" areas that relied exclusively on rainfall or fog. Using ponderosa pine and then giant sequoia in northern California, he was able to develop overlapping sequences going back more than 3,000 years.
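
The matching of overlapping ring-width sequences that Douglass pioneered can be pictured as a simple pattern-alignment problem. The sketch below is illustrative only, not drawn from the text: it slides an undated ("floating") ring-width series along a dated master series and reports the offset where the two patterns agree best. The function name and the use of correlation as the matching criterion are assumptions made for the sake of the example.

```python
import numpy as np

def best_overlap(master, floating, min_overlap=20):
    """Slide `floating` along `master` and return the offset (in years,
    relative to the start of `master`) with the highest correlation of
    ring widths over the overlapping span, plus that correlation."""
    master = np.asarray(master, dtype=float)
    floating = np.asarray(floating, dtype=float)
    best_offset, best_r = None, -np.inf
    # Try every placement that leaves at least `min_overlap` years in common.
    for offset in range(-len(floating) + min_overlap, len(master) - min_overlap + 1):
        m_start, m_end = max(0, offset), min(len(master), offset + len(floating))
        if m_end - m_start < min_overlap:
            continue
        m_seg = master[m_start:m_end]
        f_seg = floating[m_start - offset:m_end - offset]
        r = np.corrcoef(m_seg, f_seg)[0, 1]
        if r > best_r:
            best_offset, best_r = offset, r
    return best_offset, best_r
```

In practice the matched sequences are chained together, older samples overlapping younger ones, which is how chronologies spanning thousands of years are assembled from many individual trees.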

Applying his new science in the Four Corners region of the southwestern United States, Douglass matched the growth patterns in local trees to those found in the timbers used as beams of the elegant stone and adobe dwellings of the ancient Anasazi Indians, thereby determining the date of those ruins. After 500 years, the Anasazi Pueblo civilization suddenly collapsed in the thirteenth century during 26 years of drought.

In the 1930s, while the Scandinavians were doing their early pollen investigations, the German scientist Wolfgang Schott was launching a line of research using another instance of unique miniature architecture as a proxy of ancient temperatures. Examining meter-long ocean sediment cores taken by a German expedition in the Atlantic Ocean, Schott found that different layers were composed of the calcium carbonate skeletons of different populations of tiny drifting aquatic plants or animals, or plankton—some that thrive in warm waters, others that are more tolerant of cold.

The study of seafloor sediments would prove to be one of the richest lines of climate research, revealing the Younger Dryas, along with other rapid climate shifts, as an abrupt change of widespread consequence. Later scientists would look beyond the different population assemblages of marine organisms on the seafloor and measure oxygen isotope ratios and other chemical and physical properties in the sediments to tease out more details of ancient climate history.

First, however, certain technical problems had to be overcome. The first mechanical process of extracting a core from the seafloor consisted of dropping through the depths a heavily weighted pipe that pierced the soft sediment like a cookie cutter. This method deformed the material, as friction between the corer and the ooze obscured the layers of the upper part of the core and limited the sample length to just a few meters. The problem was solved by a series of improvements, beginning in the late 1940s with the development of a coring device equipped with an internal piston that was raised as the corer was driven into the sediment, its suction effect counterbalancing the core-wall friction and allowing longer cores to be obtained. More fundamental problems were not so amenable to technological solutions, however. Over much of the ocean, the seafloor is inhabited by worms and other creatures that burrow through the sedimentary slime, blurring the lines between the layers that signify climate shifts. And in many areas of the open ocean, the rate of buildup of sedimentary deposits was simply too slow or uneven to leave a clear record of changes in ocean conditions. As these sediment investigations expanded, however, researchers found key places in the oceans where up to 200 meters of continuous, undisrupted, annually layered cores reveal thousands of years of climate change.

After World War II, the dawning Atomic Age began to shed new light on the Younger Dryas, on ice ages, and on the question of the pace of climate change. The ability to see more deeply into the structure of matter would have applications far beyond the military and the new weapons of mass destruction it made possible. Nuclear chemistry gave earth scientists two powerful new instruments: a clock and a thermometer.

In 1947, at the University of Chicago, chemist Willard F. Libby devised a powerful new technique known as radiocarbon dating. Libby would win the Nobel Prize in Chemistry in 1960 for developing this geological clock. The technique relies on the ability of nuclear instruments to measure the abundance of different isotopes, the slightly different forms of an element's atoms. All carbon atoms contain six protons; most also contain six neutrons, giving the element an "atomic weight" of 12, which chemists and earth scientists speak of as "carbon-12" and write as 12C. Some carbon atoms, however, contain seven neutrons (13C) and others contain eight neutrons (14C). Carbon isotopes with six or seven neutrons are stable, but 14C, with eight, is unstable, or radioactive. This so-called radiocarbon is constantly emitting radiation as it transforms itself to a stable element, nitrogen-14. For any given quantity of atoms, this transformation is half complete every 5,730 years, a period referred to as the "half-life" of 14C. Radiocarbon is constantly formed by cosmic rays bombarding the upper atmosphere, so all living organisms are always taking in a fresh supply of 14C, along with the more abundant 12C isotope, with the air they breathe and the food they eat. The relative proportion of 14C to 12C in living tissue thus remains roughly constant over time. When an organism dies, however, its radiocarbon is no longer replenished but continues to decay. By measuring the ratio of 14C to 12C isotopes in a sample of dead organic material, therefore, scientists are able to determine how long ago it died.
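
Put as a worked formula (a detail the chapter leaves implicit), the decay law turns a measured isotope ratio into an age. With the 5,730-year half-life cited above, the age t of a sample is, approximately,

\[
t \;=\; \frac{5730\ \text{years}}{\ln 2}\,
\ln\!\left(\frac{(^{14}\mathrm{C}/^{12}\mathrm{C})_{\text{living}}}{(^{14}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}\right).
\]

A sample retaining half the living ratio works out to about 5,730 years old; one retaining a quarter, to about 11,460 years.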

Libby tested his new method on samples of wood from trees in the ancient Two Creeks Forest Bed of Wisconsin and found, unexpectedly, that the last big ice age surge in North America appeared to coincide with the Younger Dryas in Europe. Yale University geologist Richard Foster Flint used the new radiocarbon dating methods in 1955 to calculate that ice age glaciers had retreated from the Great Lakes region at the same rate as glaciers move in modern times. His conclusion reassured his colleagues: Past as present, the pace of events "rests on a sound uniformitarian basis." But uniformity, and the gradual pace of change that it implied, was beginning to lose its grip on climate science, and the new radiocarbon clock would hasten the day.

At the same time, a new level of detail of climate history would be revealed by the work of the geochemist Harold C. Urey, another Nobel laureate at the University of Chicago and another veteran of atomic bomb development. In 1947, in a famous paper in the Journal of the Chemical Society in London, "The Thermodynamic Properties of Isotopic Substances," Urey proposed that the ratio of stable isotopes of oxygen—the common 16O and the rare 18O—could be used as a "paleothermometer." Many scientists date the beginning of paleoclimatology to this paper. In particular, Urey told a meeting of the American Association for the Advancement of Science in 1948 that the ratio of the two oxygen isotopes in a common organic chemical was critically sensitive to temperature. "The temperature coefficient for the abundance of the oxygen isotope in calcium carbonate makes possible a new thermometer of great durability," he said.

At Urey's urging, the Italian geologist Cesare Emiliani measured the ratio of the two stable isotopes of oxygen—18O and 16O—in the skeletal calcium carbonate shells of foraminifera, tiny marine animals found in the sediment cores pulled up from the ocean depths. According to Urey's theory, the shells of the planktonic creatures took up more or less of the rare 18O isotope according to the temperature of the seawater during their lives. Testing cores everywhere he found them, Emiliani produced a temperature record going back nearly 300,000 years. Researchers argued at length over the meaning of Emiliani's data. Urey was correct in theory, but the isotope differences caused by ocean temperatures were confounded by the physics of another process. Evaporation of seawater preferentially takes up 16O, leaving behind an ocean enriched with 18O during periods when ice sheets are building up over land. Eventually, researchers would conclude that Emiliani had captured the history not only of ocean temperatures but of the volume of ice on land as well—the waxing and waning of ice sheets.
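
For readers who want the notation, ratios of this kind are conventionally reported as a "delta" value relative to an agreed standard, a convention the chapter does not spell out:

\[
\delta^{18}\mathrm{O} \;=\;
\left(\frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\text{sample}}}{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\text{standard}}} - 1\right)\times 1000,
\]

expressed in parts per thousand (per mil). In foraminifera shells, higher values generally point to colder water, to more of the light 16O locked up in ice sheets on land, or to both, which is the very ambiguity that complicated the reading of Emiliani's record.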

A revolution was under way. Researchers would refine the methods of nuclear chemistry that tease time and temperature from ancient relics and extend their application. In particular, Danish geophysicist Willi Dansgaard would devise a method of using ratios of the oxygen isotopes in polar ice to calculate the temperature of the air when the ice fell as snow.

At the same time, a new generation of earth scientists seemed to be more open to new ideas about the pace of geological events. Out of the haze of prewar conjecture emerged a picture of increasingly fine detail, full of surprises: more and longer-lasting advances of ice sheets than anyone thought; sharper changes in temperature and precipitation than anyone supposed. The rate of change would be subject to continuing debate, but as researchers periodically reviewed the accumulating evidence, on each occasion the time horizon for climate transitions seemed to be shorter. Along the way came an increasingly clear vision of the Younger Dryas.

A young geochemist, Wallace S. Broecker, and other researchers at Lamont Geological Observatory in New York set to work applying the radiocarbon clock to a variety of ocean and lake sediments and other climate evidence, and their 1960 report introduced a new sense of the catastrophic potential of climate. "Evidence from a number of geographically isolated systems suggests that the warming of world-wide climate which occurred at the close of the Wisconsin glacial times was extremely abrupt," Broecker wrote. The Lamont team examined new evidence from sediments in the Atlantic and the Gulf of Mexico, from rain-fed lakes in the American West, from the Great Lakes Basin, and from pollen assemblages in Europe, and in each case "the radiocarbon age determinations suggest that the changes occurred in less than 1,000 years close to 11,000 years ago."

Were the data pointing to the end of the Younger Dryas? Using 1950s methods, Broecker couldn't be sure. The margins of error in radiocarbon dating still were being measured in centuries. As in the pollen data from Europe, Broecker reported finding evidence of the Younger Dryas in the western lakebeds and in the Great Lakes—but not in the deep-sea cores or the Gulf of Mexico. "Certainly if the abruptness of such a major change in world climate can be firmly established it will be of the utmost importance that acceptable theories are able to account for it," he wrote. The comment was prescient. Broecker would devote much of his long, illustrious career to just such an undertaking, becoming the leading theorist on the subject.

As it happened, in the 1950s, a little-known, highly experimental project under way in Greenland would lead to the dawn of a whole new day in the study of ancient climates. Did succeeding years of polar weather leave a reliable record of climate in polar ice? Was it possible to drill into the ice, pull up a core, and read it as one reads the layers of ooze on the floors of oceans and lakes? These were the questions that a small group of researchers confronted in Greenland. The answers would lead to an astonishing new vision. A great corrective lens would be applied to the study of ancient climates, and it would be made of polar ice.
