The Neolithic Revolution
The change from hunting and gathering to agriculture is often called the Neolithic revolution, a name coined decades ago by archaeologist V. Gordon Childe (1951) to acknowledge the fundamental changes brought about by the beginnings of food production. While hunter-gatherers collected whatever foods nature made available, farmers employed nature to produce only those crops and animals that humans selected for their own exclusive purposes. Beyond domestication and farming, Neolithic activities had other far-reaching consequences, including new settlement patterns, new technologies, and significant biocultural effects. The emergence of food production eventually transformed most human societies either directly or indirectly and, in the process, brought about dramatic changes in the natural realm as well. The world has been a very different place ever since humans began developing agriculture.
Childe argued that maintaining fields and herds demanded a long-term commitment from early farmers. Obliged to stay in one area to oversee their crops, Neolithic people became more or less settled, or sedentary. As storable harvests gradually supported larger and more permanent communities, towns and cities developed in a few areas. Within these larger settlements, fewer people were directly involved in food production, and new craft specializations emerged, such as cloth weaving, pottery production, and metallurgy.

While archaeologists still accept his general characterization of the Neolithic revolution, we’ve learned a lot in the decades since Childe’s original study. For example, we now recognize that sedentism actually preceded farming in certain locations where permanent settlements were sustained solely by gathering and hunting or fishing; as we noted earlier, the Chumash of southern California and the Natufian hunter-gatherers of the Near East, among others, established sizable villages. It’s now also widely accepted that sedentism could often stimulate food production, rather than the other way around, and that even Upper Paleolithic hunter-gatherers were perfectly capable of understanding how to manipulate the life histories of plants and animals to their advantage. Archaeologists also now know that Neolithic lifeways evolved independently in several places around the world. They are less agreed on how and why these lifeways spread from one region to another (e.g., compare Barker, 2006, and Bellwood, 2005). Some researchers see most regional Neolithic developments as the culmination of local cultural sequences. Others argue for migrations and the diffusion of agriculture from “heartlands” in which such cultural changes first took place.
Regardless of how it spread, the Neolithic is viewed as revolutionary in its cumulative impact on human lives, not in the amount of time it took for these cultural changes to happen. Measured in human terms, the transition from hunter-gatherer to farmer in any region involved complex, often interrelated biocultural changes that likely played out across as many as 150 generations (Fuller, 2010, p. 11).
In the following discussion, we’ll examine the beginnings of domestication and farming by considering evidence drawn from around the world. Although the process everywhere shared many similarities and produced equally dramatic consequences, archaeologists working in the Americas rarely apply the term Neolithic to studies of New World farmers. Instead, they use regional terminology, such as Formative or Preclassic in Mesoamerica and Mississippian in eastern North America.
Explaining the Origins of Domestication and Agriculture
Archaeologists have always felt compelled to identify “firsts.” When did the earliest humans arrive in Australia? Where are the oldest sites in South America? When did the bow and arrow arrive in the Midwest? Concerns about these firsts also dominate archaeological research on domestication and agriculture and will undoubtedly continue to do so. But we’ll never know when or where the first person intentionally planted seeds in the hope of making a crop, and we’ll never track down the first person to hitch an animal to a plow or milk a goat. What researchers really hope to achieve by their emphasis on firsts is to understand what made these changes happen.
From research on the origins of agriculture, we know that ancient hunter-gatherers who lived before the earliest identified archaeological evidence of food production were both intelligent and observant enough to figure out what happens to seeds after you put them in the ground. In fact, we have every reason to believe that Mesolithic/Epipaleolithic and Archaic hunter-gatherers had a wealth of practical everyday knowledge and understanding about the natural world around them. So, when we search for the earliest evidence of agriculture in a region, the most important goal is not being able to say, “Ah, here’s where we draw the line on our chronology chart of agricultural beginnings.” A much more fundamental motivation is simply to understand why these people became farmers. Why then? Why there? What conditions brought about these changes? Why these crops?

From the perspective of the modern world, we may find it hard to accept that earlier peoples didn’t generally aspire to be farmers and that many no doubt avoided the opportunity for as long as possible. But the archaeological record and history alike are filled with examples demonstrating that what we now view as the self-evident benefits of food production have seldom been seen in the same light by hunter-gatherers. In recent centuries, many of the remaining hunter-gatherers on every continent resisted, sometimes successfully, the efforts of societies based on food production to convert them to peaceful, taxpaying farmers, voters, and consumers of mass-produced goods. Such attitudes call into question the inevitability of agriculture in biocultural evolution. Did it become the predominant economic basis of human communities because it offered so many obvious advantages? Because it offered the fewest disadvantages? Or because there just weren’t a lot of alternatives?

Ironically, although farming didn’t get started in a big way until later, it was the hunter-gatherers of the Mesolithic/Epipaleolithic and Archaic periods who actually initiated the critical processes and even developed many of the innovations we usually credit to the Neolithic. Essentially, the lifestyles of some early Holocene hunter-gatherers anticipated many of the developments we associate with agriculture. Neolithic farmers were mostly the recipients of domesticated species and agricultural practices from their predecessors.
Defining Agriculture and Domestication
To avoid confusion later on, it’s useful to consider the difference between domestication and agriculture. These terms are often found together in discussions of the beginnings of food production, but they mean different things (Rindos, 1984).
Domestication is an evolutionary process. When we say that a certain plant or animal is domesticated, we mean that there’s interdependency between this organism and humans, such that part of its life history depends on human intervention. To achieve and maintain this relationship requires the genetic transformation of a wild species by selective breeding or other ways of interfering, intentionally or not, with a species’ natural life processes.
Agriculture differs from domestication because it’s a cultural activity, not an evolutionary process. It involves the propagation and exploitation of domesticated plants and animals by humans. Agriculture in its broad sense includes all the activities associated with both farming and animal herding. Although domestication and agriculture are typically examined together in archaeological discussions of Neolithic lifeways, domestication isn’t inevitably associated with an economic emphasis on food production. For example, cotton was an early domesticated plant, but it was grown for its fibers, not as human food. True agriculture would be unthinkable without domesticated plants and animals. Domestication makes agriculture possible, especially when we consider how humans have manipulated the life history strategies of other organisms to maximize particular qualities, such as yield per unit area, growth rate to maturity, ease of processing, seed color, average seed size, and flavor. The cultural activity we call agriculture ensures that the plants and animals with these desirable qualities are predictably available as human food and raw materials. One useful way to view this fundamental change in the relationship between humans and other animals and plants is as symbiosis, a mutually beneficial association between members of different species (Rindos, 1984).
Let’s now examine some competing explanations for the beginnings of agriculture. Although most of these approaches address the problem of explaining the development of farming in the Near East, it’s important to note that their proponents tend to view them as generally applicable to agricultural origins everywhere. The Near East dominates the discussion mostly because it has received the lion’s share of research on this problem over the past century, not because it was some sort of primal hearth for the development of agriculture. Also, as you review these competing explanations, bear in mind that the central questions remain open areas of research. Right now, no single approach is both sufficient and necessary to explain all known cases. Loosely following Verhoeven (2004), our overview examines two groups of approaches: those that primarily invoke natural, or environmental, factors to explain the development of agriculture, and those largely based on cultural (including cognitive) factors.
a) Environmental Approaches
Most approaches to explaining the origins of domesticated plants and animals and the beginnings of agriculture identify one or more natural mechanisms, such as climate change or human population growth, that may have promoted the biocultural changes documented in the archaeological record. The reasoning behind such hypotheses is that, when faced with increasing resource needs, a community typically has several options that minimize disruption to everyday life: reducing the population, extending the territory, or making more intensive use of the environment. Farming, of course, represents a more intensive use of the environment. Through their efforts, farmers attempt to increase the land’s carrying capacity by harnessing more of its energy for the production of crops or animals that will feed people.
In their most extreme form, environmental approaches call to mind environmental determinism, the notion that certain cultural outcomes can be predicted from—or are determined by—a combination of purely environmental causes. For example, V. Gordon Childe himself conjectured that climate changes at the end of the Pleistocene increased Europe’s rainfall while making southwestern Asia and North Africa much more arid (Childe, 1929, 1934). Humans, animals, and vegetation in the drought areas concentrated into shrinking zones around a few permanent water sources. At these oases, Childe hypothesized, the interaction between humans and certain plants and animals resulted in domestication of species such as wheat, barley, sheep, and goats, which people then began to use to their advantage. The eventual result was the spread of sedentary village communities across the Near East.
Hypotheses based on any form of determinism tend to be relatively straightforward, which is both their strength and their weakness. Because they hold so many factors constant, it’s easy to see how such approaches should work and why certain important outcomes should arise. The main drawback of such ideas is that their focus is typically too general to explain a given case because they omit the key contextual factors that are unique to a real event. In environmental approaches, such factors are often history and culture. What people are already familiar with and what they and their ancestors did in the past often, if not always, play a big role in their decisions. So, for example, a desert region might simultaneously sustain opportunistic hunter-gatherers, nomadic pastoralists, farmers using special deep planting procedures, and even lawn-mowing suburbanites willing to pay for piped-in water, not because these groups are unaware of the possibilities posed by alternative ways of living, but because they are living their traditional ways of life and they prefer them.

To return to what has come to be called Childe’s oasis theory, its simplicity quickly enabled archaeologist Robert Braidwood to demonstrate that the predicted outcomes didn’t exist in the archaeological record. Pollen and sediment profiles now confirm that at least some of the climatic changes hypothesized by Childe did occur in parts of the Near East prior to Neolithic times, and so they may have had a role in fostering new relationships between humans and other species in this marginal environment (Henry, 1989; Wright, 1993). Even so, both the causes and the apparent effects were complex. In places like the Near East, climate change that resulted in diminished or redistributed resources didn’t directly push people to become farmers (Munro, 2004), though it may have made farming one of the more reasonable options. Furthermore, the arid conditions familiar to us today in some parts of the Near East may be as much a result as a cause of Neolithic activities in the region (see, e.g., Nentwig, 2007). That is, the ecologically disruptive activities of farmers and herd animals during the Neolithic period may have contributed to the destructive process of desertification.
Their plowed fields exposed soil to wind erosion and evaporation, while the irrigation demands of their crops lowered the water table and increased salinization of the soil. And overgrazing herbivores rapidly reduced the vegetation that holds moisture and binds soil, thus destroying the fragile margin between grassland and desert.

One large group of environmental hypotheses that, unlike Childe’s, continues to be examined by researchers looks to increased competition for resources. Whether the competition resulted from natural increases in population density or from climatic changes, such as rising sea levels, increased rainfall, or lower average seasonal temperatures, such pressures are seen as major factors that encouraged the domestication of plants and animals and, ultimately, the beginnings of agriculture (e.g., Boserup, 1965; Binford, 1968; Flannery, 1973; Cohen, 1977). These explanations share the view that agriculture developed in societies where competition for the resources necessary to sustain life favored increasing the diversity of staple foods in the diet. For one reason or another, population control or territorial expansion may not have been feasible or desirable choices in these societies. For example, competition may have arisen from decreased human mortality rates rather than increased fertility, or possibly even been driven by the increasing proportion of people living to an old age. The point is that people faced a “prehistoric food crisis” (Cohen, 1977) unlike most modern cases because it was a chronic problem that worsened over decades and showed no sign of ever getting any better. Concentrated in a restricted territory or faced with the dwindling reliability of once-favored resources, such hunter-gatherers might have taken up horticulture or herding to enhance the productivity or distribution of one or more particularly useful species. It was this economic commitment that eventually led to the emergence of true farmers.
Binford’s (1968) “packing model” develops one such hypothesis involving demographic stresses. As modern climatic conditions became established in the early Holocene, people resided in every prime habitat in the temperate regions of Eurasia. Foraging areas became confined as territories filled, leading to increased competition for resources and a more varied diet. Forced to make more intensive use of smaller segments of habitat, hunter-gatherers applied their Mesolithic technology to a broader range of plant and animal species. A few of these resources proved more reliable, easier to catch or process, tastier, or even faster to reproduce than others, so they soon received greater attention. Archaeological evidence of such changes can be found on many Mesolithic/Epipaleolithic sites in the form of sickles, baskets and other containers, grinding slabs, and other processing tools.
As local populations continued to grow and groups tried to expand their territories, the only remaining choice would be to move into the marginal habitats that lay at the edges of the optimal, resource-rich zones (Binford, 1968). Because population stress would quickly reach critical levels in these marginal environments, where resources were already sparse, it was here that domesticated plants were first developed. To feed themselves, the expanding populations might have tried to extend the native ranges of some of the resources they knew from their homeland by sowing seeds of the wild plants. Over time, this activity resulted in the domestication of those species and fundamental changes in the relationship between them and the people who by then depended on them.
Aspects of Binford’s approach appealed to many archaeologists, who agreed that the wild ancestors of some of the world’s most important domesticates originally held a low status in the diet. Many were plants with small, hard seeds that were once seldom used except as secondary or emergency foods. Ethnographic research had also shown that, given their choice, hunter-gatherers everywhere prefer to eat fruit and meat (Yudkin, 1969). Still, grains and roots became increasingly important in the Mesolithic/Epipaleolithic diet—supplemented by available animal or fish protein—and not only when the more desirable foods were in short supply. An interpretation based on increased competition for resources offered a testable explanation for why this happened.
Although he took issue with certain aspects of Binford’s approach, Flannery (1973) agreed with the basic thesis because it explained why the earliest archaeological evidence of plant domestication should be found in what would have been marginal environments. Flannery described the increasing breadth of the Epipaleolithic diet as a “broad spectrum revolution” in which hunter-gatherers turned to many kinds of food resources to make up for local shortfalls. Especially in marginal environments, this activity promoted the development of domesticates and, ultimately, the origins of true agriculture.
In short, environmental approaches identify forces external to humans as the active ingredients in the development of agriculture. In these hypotheses, human agency is primarily reactive. Something in the natural environment changes (for example, precipitation patterns, average annual temperature), and it makes life increasingly hard for hunter-gatherers. They react culturally to changed circumstances in various ways, some of which include incorporating a wider range of less preferred foods in their staple diet and colonizing marginal environments. At some point, they take up the alternative of applying cultural means to increase the production of one or more food species. So basically, these approaches envision the development of agriculture more as something that humans backed into for lack of better alternatives than as something they enthusiastically embraced.
b) Cultural Approaches
Not everyone agrees that the roots of domestication and agriculture are to be explained by the operation of external environmental factors, all of which, by design, place human culture and agency in a passive role. Some archaeologists contend that social and ideological factors, such as competitive feasting to enhance one’s status, tribute payments, or offerings to the deities (Price and Bar-Yosef, 2010), may have pushed societies to come up with more food than could be readily obtained on a regular basis from natural sources. The reasoning behind these hypotheses is that human agency and culture alone may be sufficient and necessary to explain many of the fundamental changes documented in the archaeological record. As you may suspect from your reading of the previous section, these approaches are also not immune to extreme positions. Just as we can identify some environmental approaches as teetering on the brink of determinism, we can find some cultural and cognitive approaches that seek to emphasize the role of human culture to the near exclusion of noncultural factors. In these approaches, such natural phenomena as climatic changes are either irrelevant to the explanation of cultural outcomes or were consciously exploited by people to further cultural objectives, so they weren’t merely phenomena to which people reacted.

Robert Braidwood’s “nuclear zone” or “hilly flanks” hypothesis (Braidwood and Howe, 1960) is a good mid-twentieth-century example to start with in looking at cultural approaches. Braidwood built on V. Gordon Childe’s earlier work and pointed out that subsequent research didn’t find evidence of the environmental changes on which Childe based his oasis theory. What’s more, the wild ancestors of common domesticated plants and animals in the Near East were in the foothills of the mountains, not around the oases, where they should have been if Childe’s oasis theory were correct. Without a clear environmental trigger for the origins of domestication and agriculture, Braidwood and Howe (1960) reasoned that domestication and, ultimately, agriculture came about as early Holocene hunter-gatherers gradually became familiar with local plant and animal resources and grew increasingly receptive to the idea of domestication. In other words, domestication and agriculture happened when “culture was ready.” But Braidwood never adequately addressed the compelling questions that such an argument stimulates: Why was culture “ready”? Why then and not, say, 30,000 ya? Or 100,000 ya? Or never?

In his examination of the beginnings of agriculture in Europe, Ian Hodder (1990) took a more evenhanded approach than Braidwood. Building on the symbolic meaning assigned to houses and household activities, Hodder developed an argument in which the process of domestication and the activities of agriculture were properly viewed as the human “transformation of nature into culture, with an expansion of cultural control and a domination of nature” (Verhoeven, 2004, p. 210). He identified both social and natural factors as possible pressures in bringing about the transition from foraging to agriculture at the end of the Pleistocene. The strength of Hodder’s approach rests in his assignment of considerable weight both to human agency and culture and to the widely accepted effects of environmental factors. Trevor Watkins (2010) makes a broadly similar claim.
Watkins argues that the key factor in the “readiness” of human culture was the emergence of “larger and more cohesive social groups” during the Epipaleolithic, which stimulated human cognitive development and, ultimately, the development of agriculture.
So, to sum things up, cultural approaches to explain the origins of domestication and agriculture assume an active role for human agency and tend to discount, if not deny completely, the importance of natural, or environmental, factors. In these approaches, cultural changes, such as a transformation of the human relationship with the divine or of the conceptualization of self, can be enough in some cases to account for the changes we see in the archaeological record. The main drawback with these hypotheses is that sometimes it’s not immediately clear why such transformations would occur.
From Collecting to Cultivating
If today we had to choose an explanation for the origins of domestication and agriculture (and we should at least suggest a preference, since this is an introductory college textbook), we would adopt one of the moderate environmental approaches as the most robust because it explains the most real cases. Most such approaches also consider cultural factors, but they assign the greatest weight to the forces of nature. For us, that’s their greatest appeal. They don’t require researchers to assume that just because we’re biocultural animals, humans are somehow exempt from natural factors that affect all living things. Disasters such as the devastating tsunami of December 2004 that killed more than 180,000 people in a dozen countries are painful reminders that, for all our human posturing to the contrary, nature often has the final word. Ultimately, we have no reason to believe that the origins of domestication and agriculture can be explained only by natural forces or only by cultural factors. These are complex problems for which there may be multiple valid explanations. It could easily be the case that approaches such as those recently proposed by Barker (2006) and Verhoeven (2004), which seek explanations in the interaction of both natural and cultural forces, will prove to be the most productive route to follow.

So, working within our admitted preference for environmental approaches, let’s now consider why and how hunter-gatherers became farmers in a real example drawn from the Near East. As Epipaleolithic gatherers in the Levant region harvested natural stands of wild cereal grasses such as wheat or barley, their movements would cause many of the ripened seed heads to shatter spontaneously, with considerable loss of grain. Each time someone used a gazelle-horn sickle to cut through a stalk, some of the seeds would fall to the ground. This normal process of seed dispersal is a function of the rachis, a short connector linking each seed to the primary stalk (Fig. 14-3). While the embryonic seed develops, the rachis serves as an umbilical that conveys the nutrients to be stored and later used by the germinating seed. Once the seed reaches its full development on the stalk, the rachis normally becomes dry and brittle, enabling the seed to break away easily.

Even without human interference, wild cereal grasses tended to be particularly susceptible to natural genetic modification (much more so than, say, nut-bearing trees), since the plants grew together in dense patches, were highly polytypic, and were quick to reproduce. In fact, a stand of wild grasses was like an enormous genetic laboratory. The normal range of genetic variability among the grasses included some plants with slightly larger seeds and others with tougher or more flexible rachis segments, meaning that their seed heads would be slightly less prone to shattering. As people worked through the stands, seeds from these genetic variants would end up in the gathering baskets slightly more often. Later, as the gatherers carried their baskets to camp, stored or processed the grain, or moved from place to place, a disproportionate number of the seeds they dropped, defecated, or perhaps even scattered purposely in likely growing areas would carry the flexible-rachis allele. (The same thing happened with the larger seeds preferred by the collectors.) As these genetic variants became isolated from the general wild population, each subsequent harvest advanced the “selection” process in favor of the same desirable traits.
Human manipulation became an evolutionary force in modifying the species, a process Darwin labeled “unconscious selection.” People didn’t have to be aware of genetic principles to act as effective agents of evolution. And where desirable traits could be readily discerned—larger grain size, plumper seed heads, earlier maturity, and so forth—human choice would even more predictably and consistently favor the preferred characteristics. The result within just a few growing seasons might be a significant shift in allele frequencies—that is, evolution—driven in this case by long-term pressure from gatherers, who consistently selected for those traits that improved the plant’s productivity and quality (Rindos, 1984).

Of course, the rate of divergent evolution away from the wild ancestral forms of a plant (or animal) species accelerates as people continue to exercise control by selecting for genetically based characteristics they find desirable. With the cereal grasses, such as barley and wheat, the human-influenced varieties typically came to average more grains per seed head than their wild relatives had. The rachis became less brittle in domesticated forms, making it easier for people to harvest the grain with less loss because the seed head no longer shattered to disperse its own seed. At the same time, individual seed coats or husks (glumes) became less tough, making them easier for humans to process or digest. Many of these changes obviously would have been harmful to the plant under natural conditions. Frequently, a consequence of domestication is that the plant species becomes dependent on humans to disperse its seeds. After all, symbiosis is a mutual, two-way relationship.

Whenever favorable plant traits developed, hunter-gatherers could be expected to respond to these improvements by quickly adjusting their collecting behavior to take the greatest advantage, in turn stimulating further genetic changes in the subject plants and eventually producing a cultigen, or domesticate, under human control. As continuous selection and isolation from other plants of the same species favored desirable genetic variants, the steps to full domestication would have been small ones. Likewise, the distances that separated hunter-gatherers from early farmers were also small ones. It’s usually impossible to determine archaeologically when harvesting activities may have expanded to include the deliberate scattering of selected wild seeds in new environments or the elimination of competing plants by “weeding” or even burning over a forest clearing. As they intensified their focus on wheat and barley in the Near East—or on species such as maize or runner beans in Mexico—hunter-gatherers finally abandoned the rhythm of their traditional food-collecting schedules and further committed themselves to increasing the productivity of these plants through cultivation.
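To make the logic of unconscious selection concrete, here is a minimal numerical sketch. It is hypothetical: the single tough-rachis (non-shattering) trait, the starting frequency, and the harvesting bias are all assumptions chosen for illustration, not measurements from any real cereal population.

```python
# Hypothetical sketch of "unconscious selection" (all numbers are assumptions,
# not measurements): seed heads that resist shattering are slightly more
# likely to end up in the gatherers' baskets intact, and the next stand grows
# disproportionately from those collected seeds.

def next_generation(p, harvest_bias=1.05):
    """One harvest-and-resow cycle.

    p            -- current frequency of the tough-rachis (non-shattering) trait
    harvest_bias -- assumed relative chance that a tough-rachis seed head is
                    collected intact, compared with a shattering one
    """
    collected_tough = p * harvest_bias
    collected_brittle = (1 - p) * 1.0
    return collected_tough / (collected_tough + collected_brittle)

p = 0.05  # assume the tough-rachis trait starts out rare in the wild stand
for generation in range(151):
    if generation % 25 == 0:
        print(f"generation {generation:3d}: tough-rachis frequency = {p:.2f}")
    p = next_generation(p)
```

Under these assumed values, even a modest 5 percent collecting bias moves the trait from rare to nearly universal within roughly 150 generations, broadly the generational timescale mentioned earlier for the transition; the point is the direction and compounding of the change, not the particular rate.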
Extra Resources:
WHY Agriculture?
Why humans ever started down the agricultural path has been a big question that archaeologists and others have tried to answer. Most societies have myths about crops as divine gifts: some god or gods gave us corn, or grapes, or yams, or whatnot. The Bible provides a myth that stands as an exception: Genesis describes agriculture as our curse for being cast out of Eden. Life was once good when we foraged for food, but now we are condemned to a hard life of toil, working the ground to earn our daily bread, till we die and return to the ground.
The Victorian view of agriculture was that it was a one-off discovery or lucky accident, a eureka moment. This stemmed from the notion that domestication was such a complex and difficult process that it could only have occurred once for each species. You still hear echoes of this view today, but it is simply not true. This is not to say that the process was not complicated, as the study of banana domestication reveals, but domestication happened in many different places around the world. Moreover, plants such as cotton were domesticated at least twice, on different continents and from different ancestor species. Squash was independently domesticated in all three primary centers of the Americas: eastern North America, Mesoamerica, and South America. Chilies were domesticated in Mesoamerica and South America. Wheat was independently domesticated in several different areas of the Fertile Crescent. Millet was independently domesticated in Africa, India, and China.
The record of domestication makes it clear that people all over the world were talented naturalists. They did not realize that they were transforming wild species into domesticates, and in most cases they were not aware that changes beneficial to them were occurring. But people everywhere were smart, observant, and curious about nature, and they were certainly interested in providing a reliable and productive food supply for their families.
Push & Pull Explanations
Explanations for the origins of agriculture can be divided into those representing pushes and those representing pulls. Pushes are what force us to act; pulls are incentives to act. These are the sticks and carrots of our behavior. Push and pull accounts extend to other complex questions examined by anthropologists and archaeologists, such as the origins of state-level political organization.
In pull accounts, agriculture is seen as superior to hunting and gathering, so humans would seize the opportunity to farm whenever it presented itself. Initially, the superiority of agriculture was a self-evident presumption, and many early explanations were of this kind. In push accounts, agriculture was not seen as superior to hunting and gathering, so humans would not take to farming unless forced; there had to be an external cause. Commonly identified external causes included stress from overcrowding (overpopulation) or famine brought on by droughts. Many explanations of the 1960s, 1970s, and 1980s took this form.
Pull accounts are now back in vogue, but of a different sort. The pull accounts of today have to do with using food surpluses to host feasts, to brew beer, and to supply other social rituals. It is easy to envision the incentive behind working hard to get more supplies for brewing fermented beverages; such beverages, in many varieties, are almost universally used in social rituals.
Niche Construction
These days, archaeologists studying the origins of agriculture often refer to the modern biological concept of niche construction. Niche construction occurs when an organism alters its environment, often (but not always) in a manner that increases its chances of survival. In the standard biological model, the environment causes changes in species through natural selection—favored traits are spread. In this standard model, species adapt to live within environmental constraints over which they have no control, such as heat or moisture; it’s basically a “one-way street.”
But many species also cause changes in their environment. This is especially true of species known as “ecosystem engineers,” such as beavers that build dams, ants, termites, and many other animals that build nests, or plants and microbes that release toxic chemicals to inhibit the growth of competitors. This creates a feedback between natural selection and environmental modification; it’s a “two-way street.”
Organisms can affect their environment to such an extent that they cause a shift in which traits are naturally favored. If environmental alterations persist for several generations, the result is an ecological inheritance: offspring inherit not only genes and behaviors from their parents but also a modified environment. This can produce what may be termed directional trends in evolution.
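A toy numerical sketch can make this feedback loop easier to see. It is purely hypothetical: the “engineering” trait, the environmental index, and every parameter value below are illustrative assumptions chosen only to show how ecological inheritance can produce a directional trend.

```python
# Hypothetical toy model of niche construction (all values are illustrative
# assumptions): carriers of an "engineering" trait modify a shared
# environmental index, the modified environment raises that trait's relative
# fitness, and the altered environment carries over to the next generation
# (ecological inheritance), creating a two-way feedback.

def step(p, env, build_rate=0.1, decay=0.05, benefit=1.0):
    """Advance one generation; return the updated trait frequency and environment."""
    # Ecosystem engineering: carriers add to the environmental modification,
    # which otherwise decays back toward the unmodified state (0.0).
    env = env + build_rate * p - decay * env
    # Selection: the engineering trait's fitness rises with the modified
    # environment, while non-carriers keep a baseline fitness of 1.0.
    w_carrier = 1.0 + benefit * env
    p = p * w_carrier / (p * w_carrier + (1 - p) * 1.0)
    return p, env

p, env = 0.05, 0.0  # trait starts rare; environment starts unmodified
for generation in range(201):
    if generation % 40 == 0:
        print(f"generation {generation:3d}: trait = {p:.3f}, environment index = {env:.3f}")
    p, env = step(p, env)
```

Under these assumed values, the trait and the environmental modification reinforce each other and both rise steadily across generations; if the environmental feedback is removed (build_rate set to zero), the trait frequency simply stays where it started.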
Many species can be characterized this way, but humans are a preeminent example: we are the poster child for niche construction. Part of this process for humans is the burning, or fire-stick management, mentioned previously. But societies around the world have employed a host of other cultural practices that purposefully altered their environments to improve plant and animal distributions, densities, or other traits. Humans pass on these altered environments to their offspring, along with knowledge of the cultural practices needed to continue the process.
Fire and other human disturbances bring about an early successional plant community, and many of the plants that humans depended on, and ultimately domesticated, are the early pioneers of disturbed habitats. They are weeds in an ecological sense, meaning aggressive colonizers of disturbed soil (note that the weed you can get in Colorado also qualifies as an ecological weed). Weeds have evolved over millions of years to take advantage of natural disturbances such as wildfires, floods, volcanic eruptions, landslides, and animals. Humans excel at land disturbance, creating habitats in which weeds proliferate. We have been doing this in a significant way since at least 12,000 years ago. Anywhere humans went in the world, except the extreme north, there were weedy species to colonize the disturbed habitats that we created. Another name for these weeds is camp followers, because they appear wherever humans camp.
Weeds are usually annual plants, living for just a year, and most produce prodigious numbers of seeds. This directly benefits humans, since the abundantly produced seeds of many of these species are good eating. Some weeds also produce greens that are good to eat. Bananas are also like this, pioneering disturbed soil. Humans did not necessarily have to go searching for food, since it was right there in the disturbed setting around camp, in the area they had burned three months earlier, or at the places they had camped the previous year. If human foragers scattered seed in naturally disturbed areas, such as those exposed by the receding floodwaters of a river or those burned bare before the rains began, they could generate a substantial return with minimal investment.
A Coevolutionary Process
The sustained interaction with weedy species started humans down the path toward agriculture. By manipulating the conditions of growth of various plants with the goal of increasing their relative abundance or predictability, humans initiated a coevolutionary process. The coevolving mutualism between humans and their managed resources caused changes in those resources that further promoted the interdependency. A variety of cereals and pulses (legumes) were the direct result of this coevolutionary process, as were other crops such as bananas and squash. In a process like this there is no single cause, and what might appear to be a cause, such as population pressure, might actually be a consequence. It is rather clear that the greatly accelerated growth in human populations occurred after the establishment of an agricultural lifestyle.