The Origin of Language

Although the capacity to remember and combine linguistic symbols may be latent in the apes (Miles 1983), human evolution was needed for this seed to flower into language.

A mutated gene known as FOXP2 helps explain why humans speak and chimps don’t (Paulson 2005). The key role of FOXP2 in speech came to light in a study of a British family, identified only as KE, half of whose members had an inherited, severe deficit in speech (Trivedi 2001). The disorder is caused by a variant form of FOXP2: those who carry this nonspeech version of the gene cannot make the fine tongue and lip movements necessary for clear speech, and their speech is unintelligible even to other members of the KE family (Trivedi 2001). Chimpanzees have the same genetic sequence as the KE family members with the speech deficit. Comparison of the chimp and human genomes suggests that the speech-friendly form of FOXP2 took hold in humans around 150,000 years ago. This mutation conferred selective advantages (linguistic and cultural abilities) that allowed those who had it to spread at the expense of those who did not (Paulson 2005).

Language offered a tremendous adaptive advantage to Homo sapiens. Language permits the information stored by a human society to exceed by far that of any nonhuman group. Language is a uniquely effective vehicle for learning. Because we can speak of things we have never experienced, we can anticipate responses before we encounter the stimuli. Adaptation can occur more rapidly in Homo than in the other primates because our adaptive means are more flexible.

Language origin hypotheses

In 1861, historical linguist Max Müller published a list of speculative theories concerning the origins of spoken language:

Bow-wow. The bow-wow or cuckoo theory, which Müller attributed to the German philosopher Johann Gottfried Herder, saw early words as imitations of the cries of beasts and birds.
Pooh-pooh. The pooh-pooh theory saw the first words as emotional interjections and exclamations triggered by pain, pleasure, surprise, etc.
Ding-dong. Müller suggested what he called the ding-dong theory, which states that all things have a vibrating natural resonance, echoed somehow by man in his earliest words.
Yo-he-ho. The yo-he-ho theory claims language emerged from collective rhythmic labor, the attempt to synchronize muscular effort resulting in sounds such as heave alternating with sounds such as ho.
Ta-ta. This did not feature in Max Müller’s list, having been proposed in 1930 by Sir Richard Paget. According to the ta-ta theory, humans made the earliest words by tongue movements that mimicked manual gestures, rendering them audible.

Most scholars today consider all such theories not so much wrong—they occasionally offer peripheral insights—as naïve and irrelevant. The problem with these theories is that they are so narrowly mechanistic. They assume that once human ancestors had discovered the appropriate ingenious mechanism for linking sounds with meanings, language automatically evolved and changed.

The ‘mother tongues’ hypothesis
The “mother tongues” hypothesis was proposed in 2004 as a possible solution to this problem. W. Tecumseh Fitch suggested that the Darwinian principle of ‘kin selection’—the convergence of genetic interests between relatives—might be part of the answer. Fitch suggests that languages were originally ‘mother tongues’. If language evolved initially for communication between mothers and their own biological offspring, extending later to include adult relatives as well, the interests of speakers and listeners would have tended to coincide. Fitch argues that shared genetic interests would have led to sufficient trust and cooperation for intrinsically unreliable signals—words—to become accepted as trustworthy and so begin evolving for the first time.

Critics of this theory point out that kin selection is not unique to humans. So even if one accepts Fitch’s initial premises, the extension of the posited ‘mother tongue’ networks from close relatives to more distant relatives remains unexplained. Fitch argues, however, that the extended period of physical immaturity of human infants and the postnatal growth of the human brain give the human-infant relationship a different and more extended period of intergenerational dependency than that found in any other species.

The ‘obligatory reciprocal altruism’ hypothesis
Ib Ulbæk invokes another standard Darwinian principle—‘reciprocal altruism’—to explain the unusually high levels of intentional honesty necessary for language to evolve. ‘Reciprocal altruism’ can be expressed as the principle that if you scratch my back, I’ll scratch yours. In linguistic terms, it would mean that if you speak truthfully to me, I’ll speak truthfully to you. Ordinary Darwinian reciprocal altruism, Ulbæk points out, is a relationship established between frequently interacting individuals. For language to prevail across an entire community, however, the necessary reciprocity would have needed to be enforced universally instead of being left to individual choice. Ulbæk concludes that for language to evolve, society as a whole must have been subject to moral regulation.

Critics point out that this theory fails to explain when, how, why or by whom ‘obligatory reciprocal altruism’ could possibly have been enforced. Various proposals have been offered to remedy this defect. A further criticism is that language does not work on the basis of reciprocal altruism anyway. Humans in conversational groups do not withhold information from all except listeners likely to offer valuable information in return. On the contrary, they seem to want to advertise to the world their access to socially relevant information, broadcasting that information without expectation of reciprocity to anyone who will listen.

The gossip and grooming hypothesis
Gossip, according to Robin Dunbar in his book Grooming, Gossip and the Evolution of Language, does for group-living humans what manual grooming does for other primates. Dunbar argues that as humans began living in increasingly larger social groups, the task of manually grooming all one’s friends and acquaintances became so time-consuming as to be unaffordable. In response to this problem, humans developed ‘a cheap and ultra-efficient form of grooming’—vocal grooming. To keep allies happy, one now needs only to ‘groom’ them with low-cost vocal sounds, servicing multiple allies simultaneously while keeping both hands free for other tasks. Vocal grooming then evolved gradually into vocal language—initially in the form of ‘gossip’. Dunbar’s hypothesis seems to be supported by the fact that the structure of language shows adaptations to the function of narration in general.

Critics of this theory point out that the very efficiency of ‘vocal grooming’—the fact that words are so cheap—would have undermined its capacity to signal commitment of the kind conveyed by time-consuming and costly manual grooming. A further criticism is that the theory does nothing to explain the crucial transition from vocal grooming—the production of pleasing but meaningless sounds—to the cognitive complexities of syntactical speech.

Ritual/speech coevolution
The ritual/speech coevolution theory was originally proposed by social anthropologist Roy Rappaport and later taken up by other social anthropologists.

These scholars argue that there can be no such thing as a ‘theory of the origins of language’. This is because language is not a separate adaptation but an internal aspect of something much wider—namely, human symbolic culture as a whole. Attempts to explain language independently of this wider context have spectacularly failed, say these scientists, because they are addressing a problem with no solution. Language would not work outside a specific array of social mechanisms and institutions. For example, it would not work for a nonhuman ape communicating with others in the wild. Not even the cleverest nonhuman ape could make language work under such conditions.

Lie and alternative, inherent in language … pose problems to any society whose structure is founded on language, which is to say all human societies. I have therefore argued that if there are to be words at all it is necessary to establish The Word, and that The Word is established by the invariance of liturgy. —Roy Rappaport

Advocates of this school of thought point out that words are cheap. As digital hallucinations, they are intrinsically unreliable. Should an especially clever nonhuman ape, or even a group of articulate nonhuman apes, try to use words in the wild, they would carry no conviction. The primate vocalizations that do carry conviction—those they actually use—are unlike words, in that they are emotionally expressive, intrinsically meaningful and reliable because they are relatively costly and hard to fake.

Language consists of digital contrasts whose cost is essentially zero. As pure social conventions, signals of this kind cannot evolve in a Darwinian social world—they are a theoretical impossibility. Being intrinsically unreliable, language works only if one can build up a reputation for trustworthiness within a certain kind of society—namely, one where symbolic cultural facts (sometimes called ‘institutional facts’) can be established and maintained through collective social endorsement. In any hunter-gatherer society, the basic mechanism for establishing trust in symbolic cultural facts is collective ritual. Therefore, the task facing researchers into the origins of language is more multidisciplinary than is usually supposed. It involves addressing the evolutionary emergence of human symbolic culture as a whole, with language an important but subsidiary component.

Critics of the theory include Noam Chomsky, who terms it the ‘non-existence’ hypothesis—a denial of the very existence of language as an object of study for natural science. Chomsky’s own theory is that language emerged in an instant and in perfect form, prompting his critics in turn to retort that only something that does not exist—a theoretical construct or convenient scientific fiction—could possibly emerge in such a miraculous way. The controversy remains unresolved.

Tool culture resilience and grammar in early Homo
While it is possible to imitate the making of tools like those made by early Homo when the process is demonstrated, research on primate tool cultures shows that non-verbal cultures are vulnerable to environmental change. In particular, if the environment in which a skill can be used disappears for longer than an individual ape’s or early human’s lifespan, the skill will be lost if the culture is imitative and non-verbal. Chimpanzees, macaques and capuchin monkeys are all known to lose tool techniques under such circumstances. Researchers on primate culture vulnerability therefore argue that since early Homo species as far back as Homo habilis retained their tool cultures despite many climate change cycles on timescales of centuries to millennia each, these species had language abilities sufficiently developed to verbally describe complete procedures, and therefore grammar and not only a two-word “proto-language”.

The theory that early Homo species had sufficiently developed brains for grammar is also supported by researchers who study brain development in children, noting that grammar develops while connectivity across the brain is still significantly below adult levels. These researchers argue that these lowered system requirements for grammatical language make it plausible that the genus Homo had grammar at brain connectivity levels significantly lower than those of Homo sapiens, and that more recent steps in the evolution of the human brain were not about language.

Humanistic theory
The humanistic tradition considers language as a human invention. The 17th-century philosopher Antoine Arnauld gave a detailed description of his idea of the origin of language in Port-Royal Grammar. According to Arnauld, people are social and rational by nature, and this impelled them to create language as a means to communicate their ideas to others. Language construction would have occurred through a slow and gradual process. In later theory, especially in functional linguistics, the primacy of communication is emphasised over psychological needs.

The exact way language evolved, however, is not considered vital to the study of languages. Structural linguist Ferdinand de Saussure abandoned evolutionary linguistics after coming to the firm conclusion that it would not be able to provide any further revolutionary insight after the completion of the major works in historical linguistics by the end of the 19th century. Saussure was particularly sceptical of the attempts of August Schleicher and other Darwinian linguists to access prehistoric languages through series of reconstructions of proto-languages.

Evolutionary research had many other critics, too. The Paris linguistic society famously banned the topic of language evolution in 1866 because it was considered to lack scientific proof. Around the same time, Max Müller ridiculed popular accounts purporting to explain the origin of language. In his classification, the ‘bow-wow theory’ is the type of explanation that considers languages as having evolved as an imitation of natural sounds. The ‘pooh-pooh theory’ holds that speech originated from spontaneous human cries and exclamations; the ‘yo-he-ho theory’ suggests that language developed from grunts and gasps evoked by physical exertion; while the ‘sing-song theory’ claims that speech arose from primitive ritual chants.

Saussure’s solution to the problem of language evolution involves dividing theoretical linguistics into two. Evolutionary and historical linguistics are renamed diachronic linguistics: the study of language change, which has only limited explanatory power because of the scarcity of reliable research material that can ever be made available. Synchronic linguistics, in contrast, aims to widen scientists’ understanding of language through the study of a given contemporary or historical language stage as a system in its own right.

Although Saussure devoted much attention to diachronic linguistics, later structuralists, who equated structuralism with synchronic analysis, were sometimes criticised for ahistoricism. According to structural anthropologist Claude Lévi-Strauss, language and meaning—in opposition to “knowledge, which develops slowly and progressively”—must have appeared in an instant.

Structuralism, as first introduced to sociology by Émile Durkheim, is nonetheless a type of humanistic evolutionary theory which explains diversification as necessitated by growing complexity. There was a shift of focus to functional explanation after Saussure’s death. Functional structuralists including the Prague Circle linguists and André Martinet explained the growth and maintenance of structures as being necessitated by their functions.[70] For example, novel technologies make it necessary for people to invent new words, but these may lose their function and be forgotten as the technologies are eventually replaced by more modern ones.

Chomsky’s single step theory
According to Noam Chomsky’s single mutation theory, the emergence of language resembled the formation of a crystal: digital infinity was the seed crystal in a super-saturated primate brain, on the verge of blossoming into the human mind by physical law, once evolution added a single small but crucial keystone. Thus, in this theory, language appeared rather suddenly within the history of human evolution. Chomsky, writing with computational linguist and computer scientist Robert C. Berwick, suggests that this scenario is completely compatible with modern biology. They note “none of the recent accounts of human language evolution seem to have completely grasped the shift from conventional Darwinism to its fully stochastic modern version—specifically, that there are stochastic effects not only due to sampling like directionless drift, but also due to directed stochastic variation in fitness, migration, and heritability—indeed, all the “forces” that affect individual or gene frequencies … All this can affect evolutionary outcomes—outcomes that as far as we can make out are not brought out in recent books on the evolution of language, yet would arise immediately in the case of any new genetic or individual innovation, precisely the kind of scenario likely to be in play when talking about language’s emergence.”

Citing evolutionary geneticist Svante Pääbo they concur that a substantial difference must have occurred to differentiate Homo sapiens from Neanderthals to “prompt the relentless spread of our species who had never crossed open water up and out of Africa and then on across the entire planet in just a few tens of thousands of years. … What we do not see is any kind of ‘gradualism’ in new tool technologies or innovations like fire, shelters, or figurative art.” Berwick and Chomsky therefore suggest language emerged approximately between 200,000 years ago and 60,000 years ago (between the appearance of the first anatomically modern humans in southern Africa, and the last exodus from Africa, respectively). “That leaves us with about 130,000 years, or approximately 5,000–6,000 generations of time for evolutionary change. This is not ‘overnight in one generation’ as some have (incorrectly) inferred—but neither is it on the scale of geological eons. It’s time enough—within the ballpark for what Nilsson and Pelger (1994) estimated as the time required for the full evolution of a vertebrate eye from a single cell, even without the invocation of any ‘evo-devo’ effects.”

The single mutation theory of language evolution has been directly questioned on different grounds. A formal analysis of the probability of such a mutation taking place and going to fixation in the species has concluded that such a scenario is unlikely, with multiple mutations with more moderate fitness effects being more probable. Another criticism has questioned the logic of the argument for single mutation, and puts forward that from the formal simplicity of Merge, the capacity Berwick and Chomsky deem the core property of human language that emerged suddenly, one cannot derive the (number of) evolutionary steps that led to it.

The Romulus and Remus hypothesis

The Romulus and Remus hypothesis, proposed by neuroscientist Andrey Vyshedskiy, seeks to address the question of why the modern speech apparatus originated over 500,000 years before the earliest signs of modern human imagination. The hypothesis proposes that there were two phases that led to modern recursive language. The phenomenon of recursion occurs across multiple linguistic domains, arguably most prominently in syntax and morphology. By nesting a structure such as a sentence or a word within itself, recursion enables the generation of a potentially (countably) infinite number of new variations of that structure. For example, the base sentence [Peter likes apples.] can be nested in irrealis clauses to produce [Mary said [Peter likes apples.]], [Paul believed [Mary said [Peter likes apples.]]] and so forth.
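The nesting operation behind this example can be sketched as a toy recursive procedure. This is purely an illustration of how a finite set of embedding frames generates unboundedly deep structures; the function and frame names here are illustrative assumptions, not terminology from the literature:

```python
def embed(clause, frames):
    """Recursively nest a base clause inside reporting frames.

    Each frame (e.g. 'Mary said {}') wraps the clause built so far,
    so n frames yield n levels of embedding; with an unbounded supply
    of frames, the set of producible sentences is countably infinite.
    """
    if not frames:                        # base case: nothing left to wrap
        return clause
    inner = embed(clause, frames[1:])     # build the inner clause first
    return frames[0].format(inner)        # then wrap it in the outermost frame

base = "Peter likes apples"
frames = ["Paul believed {}", "Mary said {}"]
print(embed(base, frames))
# → Paul believed Mary said Peter likes apples
```

With two frames this reproduces the bracketed example above, [Paul believed [Mary said [Peter likes apples.]]]; adding further frames deepens the embedding without any change to the procedure itself, which is the essential property of recursion at issue.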

The first phase includes the slow development of non-recursive language with a large vocabulary along with the modern speech apparatus, which includes changes to the hyoid bone, increased voluntary control of the muscles of the diaphragm, the evolution of the FOXP2 gene, as well as other changes by 600,000 years ago.[82] Then, the second phase was a rapid Chomskian Single Step, consisting of three distinct events that happened in quick succession around 70,000 years ago and allowed for the shift from non-recursive to recursive language in early hominins.

A genetic mutation that slowed down the Prefrontal Synthesis (PFS) critical period of at least two children that lived together;
This allowed these children to create recursive elements of language such as spatial prepositions;
Then this merged with their parents’ non-recursive language to create recursive language.
It is not enough for children to have a modern Prefrontal Cortex (PFC) to allow for the development of PFS; the children must also be mentally stimulated and have recursive elements already in their language to acquire PFS. Since their parents would not have invented these elements yet, the children would have had to do it themselves, which is a common occurrence among young children that live together, in a process called cryptophasia. This means that delayed PFC development would have allowed for more time to acquire PFS, and develop recursive elements.

Delayed PFC development also comes with negative consequences, such as a longer period of reliance on one’s parents to survive, and lower survival rates. For modern language to have occurred, PFC delay had to have an immense survival benefit in later life, such as PFS ability. This suggests that the mutation that caused PFC delay and the development of recursive language and PFS occurred simultaneously, which lines up with evidence of a genetic bottleneck around 70,000 years ago.[85] This could have been the result of a few individuals who developed PFS and recursive language which gave them significant competitive advantage over all other humans at the time.[83]

Gestural theory
The gestural theory states that human language developed from gestures that were used for simple communication.

Two types of evidence support this theory.

Gestural language and vocal language depend on similar neural systems. The regions on the cortex that are responsible for mouth and hand movements border each other.
Nonhuman primates can use gestures or symbols for at least primitive communication, and some of their gestures resemble those of humans, such as the “begging posture”, with the hands stretched out, which humans share with chimpanzees.
Research has found strong support for the idea that verbal language and sign language depend on similar neural structures. Patients who used sign language and who suffered from a left-hemisphere lesion showed the same disorders with their sign language as speaking patients did with their oral language.[87] Other researchers found that the same left-hemisphere brain regions were active during sign language as during the use of vocal or written language.

Primate gesture is at least partially genetic: different nonhuman apes will perform gestures characteristic of their species, even if they have never seen another ape perform that gesture. For example, gorillas beat their breasts. This shows that gestures are an intrinsic and important part of primate communication, which supports the idea that language evolved from gesture.

Further evidence suggests that gesture and language are linked. In humans, manually gesturing has an effect on concurrent vocalizations, thus creating certain natural vocal associations of manual efforts. Chimpanzees move their mouths when performing fine motor tasks. These mechanisms may have played an evolutionary role in enabling the development of intentional vocal communication as a supplement to gestural communication. Voice modulation could have been prompted by preexisting manual actions.

From infancy, gestures both supplement and predict speech. This addresses the idea that gestures quickly change in humans from a sole means of communication (at a very young age) to a supplemental and predictive behavior that persists despite the ability to communicate verbally. This too parallels the idea that gestures developed first and language subsequently built upon them.

Two possible scenarios have been proposed for the development of language, one of which supports the gestural theory:

Language developed from the calls of human ancestors.
Language was derived from gesture.
The first perspective, that language evolved from the calls of human ancestors, seems logical because both humans and animals make sounds or cries. One evolutionary reason to refute this is that, anatomically, the centre that controls calls in monkeys and other animals is located in a completely different part of the brain than in humans. In monkeys, this centre is located in the depths of the brain, in areas related to emotions. In the human system, it is located in an area unrelated to emotion. Humans can communicate simply to communicate—without emotions. So, anatomically, this scenario does not work. This suggests instead that language was derived from gesture (humans communicated by gesture first and sound was attached later).

The important question for gestural theories is why there was a shift to vocalization. Various explanations have been proposed:

Human ancestors started to use more and more tools, meaning that their hands were occupied and could no longer be used for gesturing.
Manual gesturing requires that speakers and listeners be visible to one another. In many situations, they might need to communicate, even without visual contact—for example after nightfall or when foliage obstructs visibility.
A composite hypothesis holds that early language took the form of part gestural and part vocal mimesis (imitative ‘song-and-dance’), combining modalities because all signals (like those of nonhuman apes and monkeys) still needed to be costly in order to be intrinsically convincing. In that event, each multi-media display would have needed not just to disambiguate an intended meaning but also to inspire confidence in the signal’s reliability. The suggestion is that only once community-wide contractual understandings had come into force could trust in communicative intentions be automatically assumed, at last allowing Homo sapiens to shift to a more efficient default format. Since vocal distinctive features (sound contrasts) are ideal for this purpose, it was only at this point—when intrinsically persuasive body-language was no longer required to convey each message—that the decisive shift from manual gesture to the current primary reliance on spoken language occurred.
A comparable hypothesis states that in ‘articulate’ language, gesture and vocalisation are intrinsically linked, as language evolved from equally intrinsically linked dance and song.

Humans still use manual and facial gestures when they speak, especially when people meet who have no language in common. There are also a great number of sign languages still in existence, commonly associated with deaf communities. These sign languages are equal in complexity, sophistication, and expressive power to any oral language. The cognitive functions are similar and the parts of the brain used are similar. The main difference is that the “phonemes” are produced on the outside of the body, articulated with hands, body, and facial expression, rather than inside the body, articulated with tongue, teeth, lips, and breathing. (Compare the motor theory of speech perception.)

Critics of gestural theory note that it is difficult to name serious reasons why initial pitch-based vocal communication (which is present in primates) would be abandoned in favor of much less effective non-vocal, gestural communication. However, Michael Corballis has pointed out that primate vocal communication (such as alarm calls) cannot be controlled consciously, unlike hand movement, and thus is not credible as a precursor to human language; primate vocalization is rather homologous to and continued in involuntary reflexes (connected with basic human emotions) such as screams or laughter (the fact that these can be faked does not disprove that genuine involuntary responses to fear or surprise exist). Also, gesture is not generally less effective, and depending on the situation can even be advantageous, for example in a loud environment or where it is important to be silent, such as during a hunt. Other challenges to the “gesture-first” theory have been presented by researchers in psycholinguistics, including David McNeill.

Tool-use associated sound in the evolution of language
Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. The Tool-use sound hypothesis suggests that the production and perception of sound also contributed substantially, particularly incidental sound of locomotion (ISOL) and tool-use sound (TUS).[98] Human bipedalism resulted in rhythmic and more predictable ISOL. That may have stimulated the evolution of musical abilities, auditory working memory, and abilities to produce complex vocalizations, and to mimic natural sounds.[99] Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor-processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use.[98] A new way to communicate about tools, especially when out of sight, would have had selective advantage. A gradual change in acoustic properties, meaning, or both could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved.

Mirror neurons and language origins
In humans, functional MRI studies have reported finding areas homologous to the monkey mirror neuron system in the inferior frontal cortex, close to Broca’s area, one of the language regions of the brain. This has led to suggestions that human language evolved from a gesture performance/understanding system implemented in mirror neurons. Mirror neurons have been said to have the potential to provide a mechanism for action-understanding, imitation-learning, and the simulation of other people’s behavior.[100] This hypothesis is supported by some cytoarchitectonic homologies between monkey premotor area F5 and human Broca’s area.[101]

Rates of vocabulary expansion link to the ability of children to vocally mirror non-words and so to acquire the new word pronunciations. Such speech repetition occurs automatically, quickly[102] and separately in the brain from speech perception.[103][104] Moreover, such vocal imitation can occur without comprehension, as in speech shadowing[105] and echolalia.[101][106] Further evidence for this link comes from a recent study in which the brain activity of two participants was measured using fMRI while they were gesturing words to each other in a game of charades—a modality that some have suggested might represent the evolutionary precursor of human language. Analysis of the data using Granger causality revealed that the mirror-neuron system of the observer indeed reflects the pattern of activity in the motor system of the sender, supporting the idea that the motor concept associated with the words is indeed transmitted from one brain to another using the mirror system.[107]

Not all linguists agree with the above arguments, however. In particular, supporters of Noam Chomsky argue against the possibility that the mirror neuron system can play any role in the hierarchical recursive structures essential to syntax.[108]

Putting-down-the-baby theory
According to Dean Falk’s “putting-down-the-baby” theory, vocal interactions between early hominid mothers and infants began a sequence of events that led, eventually, to human ancestors’ earliest words.[109] The basic idea is that evolving human mothers, unlike their counterparts in other primates, could not move around and forage with their infants clinging onto their backs. Loss of fur in the human case left infants with no means of clinging on. Frequently, therefore, mothers had to put their babies down. As a result, these babies needed to be reassured that they were not being abandoned. Mothers responded by developing ‘motherese’—an infant-directed communicative system embracing facial expressions, body language, touching, patting, caressing, laughter, tickling and emotionally expressive contact calls. The argument is that language somehow developed out of all this.[109]

In The Mental and Social Life of Babies, psychologist Kenneth Kaye noted that no usable adult language could have evolved without interactive communication between very young children and adults. “No symbolic system could have survived from one generation to the next if it could not have been easily acquired by young children under their normal conditions of social life.”[110]

From-where-to-what theory

An illustration of the ‘from where to what’ model of language evolution.
The from where to what model is a language evolution model derived primarily from the organization of language processing in the brain and two of its structures: the auditory dorsal stream and the auditory ventral stream.[111][112] It hypothesises seven stages of language evolution (see illustration). Speech originated for the purpose of exchanging contact calls between mothers and their offspring so they could find one another if they became separated (illustration part 1). The contact calls could be modified with intonations to express either a higher or lower level of distress (illustration part 2). The use of two types of contact calls enabled the first question–answer conversation: the child would emit a low-level distress call to express a desire to interact with an object, and the mother would respond with either another low-level distress call (expressing approval of the interaction) or a high-level distress call (expressing disapproval) (illustration part 3). Over time, improved use of intonations and vocal control led to the invention of unique calls (phonemes) associated with distinct objects (illustration part 4). At first, children learned the calls (phonemes) from their parents by imitating their lip movements (illustration part 5). Eventually, infants were able to encode all the calls (phonemes) into long-term memory; consequently, mimicry via lip-reading was limited to infancy, and older children learned new calls through mimicry without lip-reading (illustration part 6). Once individuals became capable of producing a sequence of calls, multi-syllabic words became possible, which increased the size of their vocabulary (illustration part 7). The use of words composed of sequences of syllables provided the infrastructure for communicating with sequences of words (i.e., sentences).

The theory’s name is derived from the two auditory streams, which are found in the brains of both humans and other primates. The auditory ventral stream is responsible for sound recognition, and so it is referred to as the auditory what stream. In primates, the auditory dorsal stream is responsible for sound localization, and is accordingly referred to as the auditory where stream. Only in humans (in the left hemisphere) is it also responsible for other processes associated with language use and acquisition, such as speech repetition and production, integration of phonemes with their lip movements, perception and production of intonations, phonological long-term memory (long-term storage of the sounds of words), and phonological working memory (temporary storage of the sounds of words). Some evidence also indicates a role in recognising others by their voices. The emergence of each of these functions in the auditory dorsal stream represents an intermediate stage in the evolution of language.

A contact-call origin for human language is consistent with animal studies: as in human language, contact-call discrimination in monkeys is lateralised to the left hemisphere. Mice with knock-outs of language-related genes (such as FOXP2 and SRPX2) also produce pups that no longer emit contact calls when separated from their mothers.[128][129] The model is further supported by its ability to explain uniquely human phenomena, such as the use of intonation when converting words into commands and questions, the tendency of infants to mimic vocalisations during the first year of life (and its later disappearance), and the protruding, visible human lips, which are not found in other apes. This theory could be considered an elaboration of the putting-down-the-baby theory of language evolution.

Grammaticalisation theory
‘Grammaticalisation’ is a continuous historical process in which free-standing words develop into grammatical appendages, while these in turn become ever more specialised and grammatical. An initially ‘incorrect’ usage, in becoming accepted, leads to unforeseen consequences, triggering knock-on effects and extended sequences of change. Paradoxically, grammar evolves because, in the final analysis, humans care less about grammatical niceties than about making themselves understood. If this is how grammar evolves today, according to this school of thought, similar principles at work can be legitimately inferred among distant human ancestors, when grammar itself was first being established.

In order to reconstruct the evolutionary transition from early language to languages with complex grammars, it is necessary to know which hypothetical sequences are plausible and which are not. To convey abstract ideas, speakers' first recourse is to fall back on immediately recognizable concrete imagery, very often deploying metaphors rooted in shared bodily experience. A familiar example is the use of concrete terms such as ‘belly’ or ‘back’ to convey abstract meanings such as ‘inside’ or ‘behind’. Equally metaphorical is the strategy of representing temporal patterns on the model of spatial ones. For example, English speakers might say ‘It is going to rain’, modelled on ‘I am going to London.’ This can be abbreviated colloquially to ‘It’s gonna rain.’ Even when in a hurry, English speakers do not say ‘I’m gonna London’—the contraction is restricted to the job of specifying tense. From such examples it can be seen why grammaticalisation is consistently unidirectional—from concrete to abstract meaning, not the other way around.

Grammaticalisation theorists picture early language as simple, perhaps consisting only of nouns.[133] (p. 111) Even under that extreme theoretical assumption, however, it is difficult to imagine what would realistically have prevented people from using, say, ‘spear’ as if it were a verb (‘Spear that pig!’). People might have used their nouns as verbs or their verbs as nouns as occasion demanded. In short, while a noun-only language might seem theoretically possible, grammaticalisation theory indicates that it cannot have remained fixed in that state for any length of time.

Creativity drives grammatical change. This presupposes a certain attitude on the part of listeners. Instead of punishing deviations from accepted usage, listeners must prioritise imaginative mind-reading. Imaginative creativity—emitting a leopard alarm when no leopard was present, for example—is not the kind of behaviour which, say, vervet monkeys would appreciate or reward. Creativity and reliability are incompatible demands; for ‘Machiavellian’ primates as for animals generally, the overriding pressure is to demonstrate reliability. If humans escape these constraints, it is because in their case, listeners are primarily interested in mental states.

To focus on mental states is to accept fictions—inhabitants of the imagination—as potentially informative and interesting. An example is metaphor: a metaphor is, literally, a false statement.[138] In Romeo and Juliet, Romeo declares “Juliet is the sun!”. Juliet is a woman, not a ball of plasma in the sky, but human listeners are not (or not usually) pedants insistent on point-by-point factual accuracy. They want to know what the speaker has in mind. Grammaticalisation is essentially based on metaphor. To outlaw its use would be to stop grammar from evolving and, by the same token, to exclude all possibility of expressing abstract thought.[134][139]

A criticism of all this is that while grammaticalisation theory might explain language change today, it does not satisfactorily address the really difficult challenge—explaining the initial transition from primate-style communication to language as it exists today. Rather, the theory assumes that language already exists. As Bernd Heine and Tania Kuteva acknowledge: “Grammaticalisation requires a linguistic system that is used regularly and frequently within a community of speakers and is passed on from one group of speakers to another”.[133] Outside modern humans, such conditions do not prevail.

Evolution-progression model
Human language is used for self-expression; however, expression displays different stages. The consciousness of self and feelings represents the stage immediately prior to the external, phonetic expression of feelings in the form of sound, i.e., language. Intelligent animals such as dolphins, Eurasian magpies, and chimpanzees live in communities, wherein they assign themselves roles for group survival and show emotions such as sympathy.[140] When such animals view their reflection (mirror test), they recognise themselves and exhibit self-consciousness.[141] Notably, humans evolved in a quite different environment from that of these animals. Human survival became easier with the development of tools, shelter, and fire, facilitating further advancement of social interaction, self-expression, and tool-making for hunting and gathering.[142] Increasing brain size permitted advanced provisioning and tools, and the technological advances of the Palaeolithic era, building on the earlier evolutionary innovations of bipedalism and hand versatility, allowed the development of human language.

Self-domesticated ape theory
According to a study investigating the song differences between white-rumped munias and their domesticated counterpart (the Bengalese finch), wild munias use a highly stereotyped song sequence, whereas the domesticated birds sing a highly unconstrained song. In wild finches, song syntax is subject to female preference—sexual selection—and remains relatively fixed. In the Bengalese finch, however, natural selection is replaced by breeding, in this case for colourful plumage; thus, decoupled from selective pressures, stereotyped song syntax is allowed to drift. It is replaced, supposedly within 1,000 generations, by a variable and learned sequence. Wild finches, moreover, are thought incapable of learning song sequences from other finches.[143] In the field of bird vocalisation, brains capable of producing only an innate song have very simple neural pathways: the primary forebrain motor centre, called the robust nucleus of the arcopallium, connects to midbrain vocal outputs, which in turn project to brainstem motor nuclei. By contrast, in brains capable of learning songs, the arcopallium receives input from numerous additional forebrain regions, including those involved in learning and social experience. Control over song generation has become less constrained, more distributed, and more flexible.[143]

One way to think about human evolution is that humans are self-domesticated apes. Just as domestication relaxed selection for stereotypic songs in the finches—mate choice was supplanted by the aesthetic sensibilities of bird breeders and their customers—so might human cultural domestication have relaxed selection on many primate behavioural traits, allowing old pathways to degenerate and reconfigure. Given the highly indeterminate way that mammalian brains develop—they essentially construct themselves “bottom up”, with one set of neuronal interactions preparing for the next round of interactions—degraded pathways would tend to seek out and find new opportunities for synaptic connections. Such inherited de-differentiation of brain pathways might have contributed to the functional complexity that characterises human language. And, as the finches show, such de-differentiation can occur in very rapid time frames.[144]

Speech and language for communication
See also: Animal communication, Animal language, and Origin of speech
A distinction can be drawn between speech and language. Language is not necessarily spoken: it might alternatively be written or signed. Speech is among a number of different methods of encoding and transmitting linguistic information, albeit arguably the most natural one.[145]

Some scholars view language as an initially cognitive development, its ‘externalisation’ to serve communicative purposes occurring later in human evolution. According to one such school of thought, the key feature distinguishing human language is recursion[146] (in this context, the iterative embedding of phrases within phrases). Other scholars—notably Daniel Everett—deny that recursion is universal, citing certain languages (e.g. Pirahã) which allegedly lack this feature.[147]

The ability to ask questions is considered by some to distinguish language from non-human systems of communication.[148] Some captive primates (notably bonobos and chimpanzees), having learned to use rudimentary signing to communicate with their human trainers, proved able to respond correctly to complex questions and requests, yet failed to ask even the simplest questions themselves. Conversely, human children are able to ask their first questions (using only question intonation) during the babbling period of their development, long before they start using syntactic structures. Although babies from different cultures acquire their native languages from their social environment, all languages of the world without exception—tonal, non-tonal, intonational and accented—use a similar rising “question intonation” for yes–no questions.[149][150] This is strong evidence for the universality of question intonation. According to some authors, sentence intonation/pitch is pivotal in spoken grammar and is the basic information children use to learn the grammar of whatever language they acquire.[15]

Cognitive development and language
One of the intriguing abilities that language users have is that of high-level reference (or deixis), the ability to refer to things or states of being that are not in the immediate realm of the speaker. This ability is often related to theory of mind, or an awareness of the other as a being like the self with individual wants and intentions. According to Hauser, Chomsky and Fitch (2002), there are six main aspects of this high-level reference system:

Theory of mind
Capacity to acquire non-linguistic conceptual representations, such as the object/kind distinction
Referential vocal signals
Imitation as a rational, intentional system
Voluntary control over signal production as evidence of intentional communication
Number representation
Theory of mind
Simon Baron-Cohen (1999) argues that theory of mind must have preceded language use, based on evidence of the following characteristics as much as 40,000 years ago: intentional communication, repairing failed communication, teaching, intentional persuasion, intentional deception, building shared plans and goals, intentional sharing of focus or topic, and pretending. Moreover, Baron-Cohen argues that many primates show some, but not all, of these abilities. Call and Tomasello’s research on chimpanzees supports this: individual chimps seem to understand that other chimps have awareness, knowledge, and intention, but do not seem to understand false beliefs. Many primates show some tendencies toward a theory of mind, but not the full theory of mind that humans have.

Ultimately, there is some consensus within the field that a theory of mind is necessary for language use. Thus, the development of a full theory of mind in humans was a necessary precursor to full language use.

Number representation
In one study, rats and pigeons were required to press a button a certain number of times to get food. The animals discriminated accurately between numbers less than four, but the error rate increased as the numbers grew.[146] Matsuzawa (1985) attempted to teach chimpanzees Arabic numerals. The difference between primates and humans here is very large: it took the chimps thousands of trials to learn 1–9, with each number requiring a similar amount of training time. Children, by contrast, after learning the meaning of 1, 2 and 3 (and sometimes 4), easily comprehend the value of greater integers by using a successor function (2 is 1 greater than 1, 3 is 1 greater than 2, 4 is 1 greater than 3; once 4 is reached, most children suddenly understand that the value of any integer n is 1 greater than the previous integer). Put simply, other primates learn the meaning of numbers one by one, as they do other referential symbols, while children first learn an arbitrary list of symbols (1, 2, 3, 4…) and only later learn their precise meanings.[151] These results can be seen as evidence for the application of the “open-ended generative property” of language to human numeral cognition.[146]
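The successor-function idea above can be sketched in a short illustrative snippet (the names and the count list are invented for the example, not drawn from the cited studies): a child first memorises an arbitrary symbol list, directly learns the meanings of only the first few numerals, and then derives every later value as one more than its predecessor.

```python
# Illustrative sketch of the "successor function" idea: once each numeral is
# understood to denote one more than the previous numeral in the memorised
# count list, the value of any symbol can be derived rather than learned
# item by item, as the chimpanzees in Matsuzawa's study had to do.

COUNT_LIST = ["1", "2", "3", "4", "5", "6", "7", "8", "9"]  # arbitrary memorised sequence

def value_of(numeral, known={"1": 1, "2": 2, "3": 3}):
    """Return the value of a numeral using only a few directly learned
    meanings plus the successor rule: value(n) = value(previous) + 1."""
    if numeral in known:
        return known[numeral]
    previous = COUNT_LIST[COUNT_LIST.index(numeral) - 1]
    return value_of(previous) + 1  # apply the successor rule recursively

print(value_of("7"))  # 7, derived from "3" by applying the successor rule four times
```

The point of the sketch is that only three meanings are stored explicitly; everything beyond them follows from the single recursive rule, mirroring the "open-ended generative property" mentioned above.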

Linguistic structures
Lexical-phonological principle

Hockett (1966) details a list of features regarded as essential to describing human language.[152] In the domain of the lexical-phonological principle, two features of this list are most important:

Productivity: users can create and understand completely novel messages.
New messages are freely coined by blending, analogizing from, or transforming old ones.
Either new or old elements are freely assigned new semantic loads by circumstances and context. This says that in every language, new idioms constantly come into existence.
Duality (of Patterning): a large number of meaningful elements are made up of a conveniently small number of independently meaningless yet message-differentiating elements.
The sound system of a language is composed of a finite set of simple phonological items. Under the specific phonotactic rules of a given language, these items can be recombined and concatenated, giving rise to morphology and the open-ended lexicon. A key feature of language is that a simple, finite set of phonological items gives rise to an infinite lexical system, wherein rules determine the form of each item and meaning is inextricably linked with form. Phonological syntax, then, is a simple combination of pre-existing phonological units. Related to this is another essential feature of human language: lexical syntax, wherein pre-existing units are combined, giving rise to semantically novel or distinct lexical items.
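The combinatorial point above can be illustrated with a toy sketch (the phoneme inventory and the consonant–vowel syllable template are invented for the example, not drawn from any particular language): a handful of individually meaningless sounds, combined under a simple phonotactic rule, yields a far larger space of possible word forms.

```python
from itertools import product

# Toy illustration of duality of patterning: a small set of meaningless
# phonological items, recombined under one phonotactic rule, generates
# a much larger inventory of potential lexical forms.

consonants = ["p", "t", "k", "m", "n"]  # invented inventory
vowels = ["a", "i", "u"]                # invented inventory

# Phonotactic rule for this toy language: every syllable is consonant + vowel.
syllables = ["".join(s) for s in product(consonants, vowels)]          # 5 * 3 = 15
bisyllabic_words = ["".join(w) for w in product(syllables, repeat=2)]  # 15 ** 2 = 225

print(len(syllables), len(bisyllabic_words))  # 15 225
```

Each extra syllable position multiplies the space again, which is why a finite phoneme set can support an open-ended lexicon.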

Certain elements of the lexical-phonological principle are known to exist outside of humans. While all (or nearly all) have been documented in some form in the natural world, very few coexist within the same species. Birdsong, the songs of singing nonhuman apes, and the songs of whales all display phonological syntax, combining units of sound into larger structures apparently devoid of enhanced or novel meaning. Certain other primate species do have simple phonological systems with units referring to entities in the world; however, in contrast to human systems, the units in these primates’ systems normally occur in isolation, betraying a lack of lexical syntax. There is evidence to suggest that Campbell’s monkeys also display lexical syntax, combining two calls (a predator alarm call with a “boom”, the combination of which denotes a lessened threat of danger), though it remains unclear whether this is a lexical or a morphological phenomenon.

Pidgins and creoles

Pidgins are significantly simplified languages with only rudimentary grammar and a restricted vocabulary. In their early stage, pidgins mainly consist of nouns, verbs, and adjectives with few or no articles, prepositions, conjunctions or auxiliary verbs. Often the grammar has no fixed word order and the words have no inflection.[153]

If contact is maintained between the groups speaking the pidgin for long periods of time, the pidgins may become more complex over many generations. If the children of one generation adopt the pidgin as their native language it develops into a creole language, which becomes fixed and acquires a more complex grammar, with fixed phonology, syntax, morphology, and syntactic embedding. The syntax and morphology of such languages may often have local innovations not obviously derived from any of the parent languages.

Studies of creole languages around the world suggest that they display remarkable similarities in grammar and develop uniformly from pidgins in a single generation. These similarities are apparent even when the creoles have no common language origins and developed in isolation from one another. Syntactic similarities include subject–verb–object (SVO) word order: even creoles derived from languages with a different word order often develop SVO order. Creoles also tend to have similar usage patterns for definite and indefinite articles, and similar movement rules for phrase structures, even when the parent languages do not.

Note:

The KE family is the medical designation for a British family, about half of whom exhibit a severe speech disorder called developmental verbal dyspraxia. It was the first family with a speech disorder to be investigated using genetic analysis, through which the impairment was traced to a genetic mutation and from which the gene FOXP2, often dubbed the “language gene”, was discovered. Their condition is also the first human speech and language disorder known to exhibit strict Mendelian inheritance.

Reference: https://www.wired.com/2001/10/first-language-gene-found/