This is Part 7 of a subscriber-only preview of my upcoming book Outsourcing Consciousness: How Social Networks are Making Us Lose Our Minds. We will release the first six chapters through the end of the year. Read Part 1, Part 2, Part 3, Part 4, Part 5, and Part 6.
For as long as I can remember, I have been attracted to hobbies which set actions to beat and meter. I started playing rock and jazz drums when I was fairly young, which means I can move each of my four limbs with independence and with some command over tempo, complex and compound meter, styles, and multiple swing patterns. I was a multi-year drum major in high school and a reasonably accomplished conductor. I have composed electronic music, written pop songs, performed in bands, and played in orchestras, but never at anything approaching a professional level. Let us say that I am an advanced hobbyist – one who feels most at home at the intersection of improvisational feel styles and technical progressive styles. I should have rhythm. I do have rhythm, dammit.
And I can’t dance to save my life.
It is not for lack of trying. I have tried to learn. I have taken lessons. I have spent evenings with my wife, sober as a stone and drunk as a sailor – just in case involuntary inhibitions were the issue – prancing about the kitchen trying to two-step, waltz, tango, line dance, and swing. Nothing. It isn’t that I can’t do the steps. But there is a fluidity to most dance – nearly continuous movements of varying degree, governed by the confines of tempo and meter but made awkward by a rigid, beat-by-beat adherence to the structure of a song. I am hindered as a dancer, not helped, by having developed and by continuing to reinforce neural pathways for rhythmic processing and motor control which are dedicated to the actions associated with playing instruments and conducting, in which one or more limbs moves to strike precisely on a beat.
Neural connections, except in the most modularized regions of the brain, are always in competition with one another[i]. Our brains aggressively prune such connections based on the amount of information which passes between them[ii]. Connections which are used less often, as a rule, are trimmed. Those which are used more often are retained and often exert greater influence, potentially changing the character of the interaction between functional areas of the brain. The networks of synaptic connections between the motor, premotor, and sensorimotor control regions of my brain, the muscle fibers in my limbs, and the various structures which plan, recall, and prepare the engagement of those limbs are reinforced regularly by my choices and habits. It just so happens that for me that includes hundreds of hours of practicing Bonham triplets until my right foot finally gets it right[iii].
These challenges could be overcome. The long, slender axons which reach out from neurons in my brain to form synaptic connections with the acetylcholine receptors in the motor end plates of muscle fibers in my leg are part of a network which will restructure itself if called upon often enough. Even at my age, when my brain is less plastic and somewhat slower to redesign itself, if I really committed to learning to dance, I could manage it. Indeed, in some ways my history of drumming might eventually prove helpful. Existing and related motor skills can positively influence plasticity[iv], and motor skills are among those which retain more plasticity into middle age. Oh, it might take longer, but I could get there[v]. Or not. It might just be that I am destined to have two left feet forever.
But there are skills which Homo sapiens may acquire readily at certain points in their lives which are not so forgiving at other stages. Language is perhaps the chief among them. We know, for example, that achieving native-like proficiency in a language is dependent upon a period of language acquisition which begins to fade at 5 to 7 years of age[vi]. This is commonly referred to as a ‘critical period.’ Neurobiologists use this term to mean a window during which the human brain is more actively engaged in forming new neural pathways and acquiring new functions under the right conditions. Exposure to language after this point may facilitate comparable learning of vocabulary, but can make acquiring grammar, syntax, phonology, and other structural elements of language far more challenging[vii]. The problem is that the maturation of the brain causes certain neural pathways related to language acquisition, processing, production, and comprehension to become more fixed, less plastic, and more resistant to change[viii].
Perhaps more intriguing than the fading window of opportunity to acquire language is the capacity of young humans to acquire language in the first place. Structured language is an astonishingly complex thing, a network of billions – I don’t know, trillions – of implicit relationships. A simple word like ‘dog’ might have hundreds of thousands or millions of such relationships, whether to other words, concepts, memories, practiced behaviors, syntactical structures, or grammar. I don’t just mean idiomatic expressions in English which use ‘dog’ to describe an animal, an ugly person, an athlete with grit, a very hot summer day, an overworked person, a person who is not hustling, a person’s feet, or the act of following someone. I also mean the way that hearing the word ‘dog’ will speed up recognition of sometimes related concepts or grammatical usage, like ticks, barks, fights, tricks, shaking, cats, pets, the color brown, slobber, the moon, squirrels, or that one scene in Never Been Kissed where Leelee Sobieski gets a can of Alpo poured on her[ix].
I also mean how the potential connections of the word ‘dog’ to other words and phrases might affect how we think about its placement in a sentence – and what that placement could teach us about syntax and grammar with little other direct instruction. I mean multi-modal connections, too, like the fact that hearing, writing, saying, or even thinking the word might activate the sound of a bark, the smell of a wet dog, the sensory memory of a bite, or the warmth of your dog sitting at your feet when you were young[x]. And I mean how the word’s placement would let us know about the structure of phrases and sentences, or about the idea of a thing, a noun, an animal, a four-legged animal, or other categories.
Building a working knowledge of language through a bottom-up, brute force understanding of rules about word meaning, word order, nested and recursive clauses, grammatical structures, syntax, and variations in the meaning of words and phrases is so complex that we knew by the late 1960s that it was a task beyond the wildest dreams of simple computational linguistics[xi]. Updates in computing power didn’t change this conclusion[xii]. Steven Pinker launched a career on the famous observation of its theoretical impossibility[xiii]. Noam Chomsky, too[xiv].
Both Pinker and Chomsky complemented their views on the computational impossibility of ‘learning’ language by joining other scientists who had proposed a Language Acquisition Device (LAD). The idea is that if neurobiology cannot explain how humans brute force their way into learning language, then language must be an output of a pre-existing feature of the human brain – a modular functional region, network, or organ which exists for the single purpose of language acquisition. The promoters of LAD would generally say that it emerged in Homo sapiens at some point between 1.8 million and 100,000 years ago, and probably much closer to the latter. If such a thing existed, I suppose, then there ought to be other evidence of commonalities in language. Chomsky coined the term ‘universal grammar’ for these commonalities.
Evidence supporting Chomsky and Pinker’s claims exists in abundance. The volume and quality of exposure to language necessary for human children to learn language is seemingly low – an idea Chomsky called the ‘poverty of the stimulus,’ first advanced in his famous rebuttal of B.F. Skinner. Human languages also seem to share certain fundamental structural characteristics that might be consistent with a universal grammar. Among these shared characteristics, Joseph Greenberg spotted shared word order typology and other morphological features[xv], which Matthew Dryer would later expand[xvi]. Semantic structure[xvii], semantic universals[xviii], relative clause formation and syntactic hierarchies[xix], and the emergence of Creole languages all point to shared roots – possibly shared neurobiological roots – among all human languages[xx].
Perhaps the most powerful evidence of a Language Acquisition Device and a resultant universal grammar is the sudden proliferation of symbolic artifacts and complex behaviors observed in the archaeological record between 70,000 and 100,000 years ago, often referred to as the “Upper Paleolithic Revolution”[xxi]. This period witnessed a remarkable surge in technological innovation, including the advent of blade tools and sophisticated hunting weapons[xxii], as well as the emergence of symbolic expression through personal ornaments, ochre use, and eventually cave art[xxiii]. The apparent abruptness of these changes and the subsequent rapid dispersal of modern humans out of Africa[xxiv] (relative to the preponderance of evolutionary gradualism since Caliban, anyway), are precisely what you might expect if you supposed the emergence of a single adaptation of large-scale influence rather than the steady continuation of a 1.7-million-year evolutionary process. In other words, all of this sounds like a job for the rapid emergence of something like the hypothesized Language Acquisition Device. Only now, this specialized device must bear the burden not only of explaining why we can learn a theoretically unlearnable thing like language, but also how humans became behaviorally modern, spread across the planet, plus life, the universe, and everything, I suppose.
In a sense, all of this is very obviously true. Humans unquestionably possess a peculiar gift for the acquisition and use of language. Human languages do bear remarkable similarities. The brain of Homo sapiens, however formidable, ought not to be capable of acquiring language through brute-force traditional learning with such imperfect stimulus. And there can be no denying that humanity positively exploded out of Africa in a relatively short period, bringing with them evidence of vast acceleration in social sophistication, creativity, ingenuity, and communication. It should not be surprising that we have considered the existence of a special adaptation in our brains engineered by nature with the singular purpose of making it easy for us to acquire language and which, by doing so, necessarily creates universality in certain structural features of those languages which emerge. It is an especially convenient feature that such an organ would probably be the result of more discontinuous and sharp evolutionary pressures rather than those which would produce the slow co-evolution of general cognitive, social-cognitive and language-specific capabilities. After all, that kills a second bird with the same stone by also giving us a gift-wrapped explanation for the sudden onset of the Upper Paleolithic Revolution.
Because the arguments for a Language Acquisition Device are entirely circumstantial, however – we have not yet positively identified such a structure in the brain – scholars have offered a range of alternative hypotheses for our peculiar linguistic ability over the years. The late Elizabeth Bates and Brian MacWhinney proposed some of the first and most powerful retorts, mostly from a psycholinguistic perspective. One of their earliest observations criticized assumptions about the fixed sequence by which language would be acquired in the presence of a specialized and modular device for it. Some LAD proponents had argued, for example, that syntactic knowledge (how we structure words and sentences) must be acquired before and separately from semantic knowledge (what words and sentences mean). For this reason, many also argued that the period in which children could develop this syntactic knowledge would be very brief and early indeed. Others contended that there would be universal constraints on word order and on how grammatical categories (e.g. nouns, verbs, etc.) would be learned. Bates and MacWhinney presented evidence contradicting nearly all of this. When we observe humanity’s different languages and environments, we see massive variation in the order and speed at which children acquire different features of language[xxv].
The ‘competition model’ Bates and MacWhinney suggested in response to those proposing an innate language faculty presented a means through which humans might acquire language without an innate device. More importantly, it did so in a manner that allowed for the observed variations in acquisition speed and order among different languages and environments[xxvi]. In the competition model, humans acquire language in response to multi-modal stimuli Bates and MacWhinney called ‘cues,’ which were not nearly as mired in ‘poverty’ as others had suggested. These cues were things like word order, how word meanings implied grammatical and semantic information, intonation and rhythm, subject-verb agreement, structural markers, context, and many other properties[xxvii]. One of the critical insights of their work was that learning grammar is not nearly as distinct from learning words and their meanings as the LAD lads would have it[xxviii].
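If it helps to see the intuition mechanically, the heart of cue competition can be captured in a few lines of code. Everything below is my own toy illustration, not Bates and MacWhinney’s actual model: the cues and their reliability numbers are invented, and the update rule is a generic error-driven one. The point is simply that a learner tracking nothing more than how often each cue proves right will come to weight cues by their validity, with no innate grammar module required.

```python
import random

random.seed(0)

# Hypothetical cues a child might use to answer 'who is the agent of this
# sentence?' Reliabilities are invented for illustration; in English,
# preverbal position is a highly valid cue, while in other languages
# animacy or agreement may carry more of the load.
CUE_RELIABILITY = {"word_order": 0.95, "agreement": 0.80, "animacy": 0.70}

weights = {cue: 0.5 for cue in CUE_RELIABILITY}  # the learner starts neutral
LEARNING_RATE = 0.01

for _ in range(2000):
    for cue, reliability in CUE_RELIABILITY.items():
        # The cue points to the true agent with probability equal to its
        # reliability; the learner nudges its trust up or down accordingly.
        cue_was_right = random.random() < reliability
        target = 1.0 if cue_was_right else 0.0
        weights[cue] += LEARNING_RATE * (target - weights[cue])

# The weights converge toward each cue's true validity.
for cue, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{cue}: {weight:.2f}")
```

Run long enough, the most valid cue comes to dominate the learner’s decisions, which is the competition model’s explanation for why children acquiring different languages lean on different cues, in different orders, at different speeds.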
Before her untimely death, Bates also spent a great deal of time exploring how the brains of young children with damage in known language functional areas reorganized themselves to acquire features of language quite successfully[xxix]. For various prenatal[xxx] and perinatal (right around birth) injuries, and for both narrow injuries and broader damage to one side of the brain, she and her fellow researchers showed that neuroplasticity could overcome them. The brain simply repurposed entirely different areas for the roles the damaged language centers would have served. Language acquisition in these cases often took place somewhat more slowly, to be fair, but that is more evidence against a formal Language Acquisition Device, not less. If the human brain can rewire itself to learn language, learn it differently than it would have otherwise, and still end up at the same place, the notion of a modular and intrinsic system for language acquisition begins to look like a just-so story.
It is a bit of an unfair thing to say since we could use the term to apply to any claim that is ultimately circumstantial – including mine! Yet I think it remains a reasonable description for a special class of hypothesis which seems entirely designed to solve multiple intractable problems at once. Remember spontaneous generation, that dominant theory of the origin of life which clouded even the mind of Shakespeare? It solved all sorts of problems at once, too. It explained why rats were in your grain silo by explaining that the biogenesis of rats is the product of odor and active principles in wheat. You can recreate this experiment and make your own rats at home by sticking a dirty shirt in a jar with some wheat[xxxi]. Spontaneous generation also explained why some geese like to hang around rivers. You see, a barnacle grows on certain trees and spontaneously emerges as a goose[xxxii]. It explained how flies are given life through the transformation of rotten meat and how crocodiles come into being through the special properties of the mud of the Nile River. Defeating theories which conveniently explain a lot is hard. It took the likes of Darwin and Pasteur to tell us a far simpler story about microorganisms and the origins of life, which they did within a few years of one another in the middle of the 19th century.
Bates and MacWhinney told a simpler story about the facts of our world that no longer obligated us to suppose the existence of the Language Acquisition Device and its cousin universal grammar. Terrence Deacon picked up that story where they left it. His first volley brought our old friend Kanzi the Bonobo back into the conversation. As it happens, one of the most accomplished non-human communicators acquired his talent entirely by accident. In his infancy, young Kanzi was reared and held by an adoptive mother that was being trained by human scientists. It was through nothing more than indirect and unintended exposure to human language during an apparent critical period for the bonobo brain – a poverty of the stimulus, if you will – that Kanzi demonstrated an aptitude for features of human language that no amount of later direct instruction had ever achieved in his brethren. He understood simple syntax, could rapidly assimilate new symbols into his language, and could understand novel sentences through his basic integration of the words, syntax, grammar, and semantics of human language. Simply by being there at the right time. Although in fairness to the researchers, it was apparently a vastly shorter window than the critical period for human children.
As we discovered with Kanzi’s attempts to fashion stone tools, there were stark limits to what he could do. Complex word order, sentence length, and recursion – the way language embeds phrases and clauses within others – were mostly beyond him. His ability to abstract was similarly limited. By resisting the assertion of a modular, specialized Language Acquisition Device, no one is arguing that the human brain does not have an extraordinary and unique aptitude for language. Our abilities are without equal. Unprecedented. And Kanzi’s abilities are a mere shadow of it. But that’s the thing. They are a shadow of it – not something wholly different in kind, just wildly different in magnitude. Kanzi’s language was a severely limited model of human language, acquired in infancy with little in the way of instruction or stimulus, which nevertheless forever facilitated the future acquisition of additional words and symbols. If the uniqueness of human language were the result of a specialized device for its acquisition and not certain features of general and social cognition interacting with the evolution of language itself, Kanzi would remain difficult to explain. It is in the exploration of this second dimension, however – how language itself evolves – that Deacon exposed the most common LAD theory as a largely unnecessary Rube Goldberg device – an excessively complicated machine overdesigned for a task with a much simpler solution. Like evolution in the face of thousands of years of spontaneous generation, it was a solution that was staring us in the face all along.
There are two possibilities concerning the impossibility of human language: that we evolved uniquely to respond to the astonishing cognitive requirements of language, or that language evolved uniquely to respond to the needs of us. Three possibilities, I suppose, if you consider ‘all of the above’ a valid response. If we suppose that the onus was wholly on our own biological evolution, we are chained to the tyranny of the biological generation. Excluding epigenetic sources of variation in the expression of human genetics, mutation may only take place when a new generation of humans is born. That’s every 20 or 25 years or so, and selection of traits happens over many such generations. While change can certainly take place rapidly, biological evolution is generally something best talked about on a geologic time scale. Forests spring up, volcanic islands form, deserts spread, and rivers shift course over the horizons across which most large-scale biological evolution we call ‘rapid’ takes place.
Language, by comparison, mutates constantly. It also mutates generally in the direction of simpler acquisition. Irregular verbs regularize[xxxiii]. That’s why you say thrived instead of throve. Phonology simplifies[xxxiv]. That’s why you pronounce wine and whine the same way. Grammar simplifies[xxxv]. That’s why me, him, her and whom are basically all that remains of our vestigial case system. Vocabulary borrows efficiently from other languages to fill semantic gaps[xxxvi]. That’s why you know what sushi, schadenfreude, karma, taboos, and entrepreneurs are. Words become more flexible in their uses[xxxvii]. Word orders become more fixed and predictable[xxxviii]. Morphology becomes more cognitively efficient[xxxix]. And much of this happens in diffuse ways – sometimes word by word – allowing the mutations which make language more easily acquirable to be more easily acquired themselves[xl]. Many of these are analogous to the cues that Bates and MacWhinney’s competition model identified as critical to understanding the structure of language. In comparison to the geologic scale over which human biological evolution takes place, such mutations become fundamental features of language over a horizon which seems practically instantaneous.
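The drift toward regularization can be made concrete with a deliberately crude model. The verbs, frequencies, and survival rule below are all invented rather than corpus-derived; the sketch shows only how a per-generation learning bottleneck lets frequent irregulars like ‘be’ persist while rare ones like ‘cleave’ regularize.

```python
# Hypothetical irregular verbs with invented relative usage frequencies.
verbs = {"be": 40000, "go": 9000, "throw": 120, "thrive": 8, "cleave": 2}

GENERATIONS = 50

def survival_probability(freq: float, generations: int) -> float:
    # Toy rule: each generation, children re-learn an irregular form with
    # probability freq / (freq + 1). Rare exceptions are heard too seldom
    # to be re-acquired, so over time they drift toward the regular form.
    per_generation = freq / (freq + 1)
    return per_generation ** generations

for verb, freq in verbs.items():
    p = survival_probability(freq, GENERATIONS)
    status = "stays irregular" if p > 0.5 else "regularizes"
    print(f"{verb:>6}: P(still irregular) = {p:.3f} -> {status}")
```

Under these invented numbers, ‘be’ and ‘go’ survive essentially untouched while ‘thrive’ and ‘cleave’ regularize, which is one way of seeing why you say thrived instead of throve.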
But why? What evolutionary pressures would cause language to drift in the direction of simple acquisition by human children? Or, since our focus is specifically on the related innovation of storytelling and storyseeking which I have argued predate language somewhat, why would both of these symbolic systems evolve to become more acquirable? I think there are three clear answers with analogs across biological evolution: evolutionary utility, social urgency, and biological necessity.
The Evolutionary Utility of Rapid Acquisition of Language
The utility argument for evolutionary pressure on language to become easier to acquire is quite simple: language is really, really useful. The capacity to think about and communicate complex events in the abstract is all but a superpower for a species with the cognitive and physical capacity to make use of the information. ‘Seeing connections and meanings that might not be immediately obvious,’ as famed sociobiologist E.O. Wilson put it[xli], is who we are. It is a defining trait of the human being[xlii]. If the evolutionary value of this adaptation seems, well, obvious, you wouldn’t be the only one to say so. Harvard’s Martin Nowak once submitted an entire paper on this very point to Science that could be effectively subtitled ‘Duh[xliii].’
If we insist on more explicit explanations than ‘Duh’, however, we will not find them in short supply. Above all, verbal language and the technology of writing we have imposed upon it permit vastly more flexible, efficient, and deeper transmission of information than other methods of animal communication[xliv]. We can discuss food sources, dangers, geography, and neighboring peoples. We can come to agreements, conduct trade, organize, plan, entertain, and create. We can also learn and teach new skills much more easily, even if mirror neurons and gestures were sometimes good enough for our ancestors[xlv]. There are social benefits to language beyond mere information transmission. Language enhances social coordination[xlvi]. It helps maintain social cohesion and facilitate cultural transmission. It helps with social bonding[xlvii].
Would all these advantages necessarily lead to the selection of language capabilities by evolution? The answer may not be as obvious as it seems. The principal beneficiary of most of these traits is not necessarily an individual but a group. The aforementioned E.O. Wilson – along with Elliott Sober, David Sloan Wilson, and others – is comfortable with the idea of group selection as an evolutionary mechanism. That is, they consider it plausible and sometimes likely that traits might be selected simply because they benefit groups of humans, even if they do not directly increase the reproductive success of the individual in question. Most of the field of evolutionary biology is more skeptical. Sure, group benefits may accrue back to the individual at the margin, the thinking goes, but in the end, evolution will be shaped by individual and sexual selection. For these scholars, if you can’t explain simply how it helped the person with a genetic mutation reproduce or survive until reproduction, you haven’t explained anything at all. Thankfully, charm, game, good chat, rizz and every other generationally specific term for this ubiquitous human property have been helping hominins get laid since their hyoid bones dropped[xlviii] a million-and-a-half years ago.
As a testament to the co-evolutionary nature of language with the human brain, many scholars have also explored the idea that language evolved in the first place for the purposes of thought rather than communication. It is an intriguing idea, and to some extent abstract thought was certainly co-dependent on language to develop[xlix]. As I will discuss later in this chapter, there is significant evidence that not acquiring language leaves other symbolic and abstract capabilities severely deficient, too. Beyond the specifics of abstract thought, there is also reason to believe that a capacity for language is helpful to problem solving above and beyond what high levels of mere cognition would produce[l]. These are not universally held views, and recent scholarship has begun to veer in the direction of the many social and communicative benefits of language[li].
Either way, the evolutionary utility of the acquisition of language is probably best thought of in the context of Nowak’s no-shit-sherlock submission to Science. It is self-evident. What is less evident is why that utility, however massive, would produce any particular pressure to acquire it earlier in life and as easily as possible, even if that came at some cost to other features. To understand that more specific pressure, we must explore the social urgency and biological necessity of rapid language acquisition.
The Social Urgency of Rapid Acquisition of Language
Four chapters later, if you are still wondering why I dragged you through the tortuous world of semiotics, icons, indexes, symbols, and interpretants, this is the first part of your answer. Think back on the lovely figure of Venus found in Morocco I discussed in Chapter 2. You do not need the rich and sophisticated vocabulary or understanding of language you possess to understand the Venus figure in the sense that it signifies women or a woman. Most great apes understand relationships between things which resemble each other. For example, they demonstrate the ability to recognize themselves in the mirror[lii]. Many of them can even recognize 2D representations of familiar objects and individuals[liii]. Once again, these are examples of iconic relationships, which just means something which signifies something it resembles. Iconic relationships would not cease to exist simply because a symbolic system like language or a storytelling tradition which called upon them from time to time went extinct.
In another sense, the Venus figure signifies fertility. For any human who had seen or otherwise come to understand the temporal, spatial, and part-whole relationships among sexuality, sexual organs, and childbirth, this sign relation would be easily understood. And yes, many of our ape cousins are more than capable of understanding these so-called indexical relationships, too. For example, researchers have observed clear cause-and-effect understanding among apes in tool use[liv]. Other forms of inferential reasoning, which often call upon an understanding of indexical relationships, also lie within their cognitive grasp[lv]. Like iconic relationships, an understanding of indexical relationships would also survive the loss of any symbolic system.
Said another way, iconic and indexical relationships are concrete. They do not lean upon the inhibitory potential of an expanded role of the prefrontal cortex and analogous functional areas in thinking, planning, and communicating. When Homo habilis chose to strike a hammerstone against a core to produce a flake in the style of the Oldowan pebble tool industry, his mind related the planned motion of the strike to the creation of the flake in an indexical way. At the same time, the use of pebbles which already resembled the end object in mind constituted an iconic relation. They looked like the tool image the maker carried in his head. Kanzi the bonobo had no difficulty selecting a useful stone. After being shown the necessary action, he had no difficulty understanding what needed to be done to make it a sharp tool. His struggles, as far as we can tell, related to the actual motor skills required to knap with any level of proficiency. This is consistent with the motor and premotor engagement we observed through neuroimaging of the simulation of Oldowan tool manufacture.
But if iconic and indexical signs are different, symbols are a different kind of different. Imagine that we said that a scribbled shape means something called a letter. Now imagine that if we form one, two, or more of these letters together it makes a grapheme, and that if we modulated the pitch of our voice, pursed our lips, and placed our tongue just so, the sound which emerged would be a phoneme. From these written graphemes and uttered phonemes, we might construct sensible and self-contained chunks of something more than mere sounds that we called morphemes. Many of those morphemes would be coterminous with something we might call a word, which when placed within structures we called phrases, clauses, and sentences through a set of rules we called syntax, and within a broader organizational schema we called grammar, would produce a semantic sense in the mind of some other being. An interpretant, if you will, guided by a network of things signifying other things whose meanings did not exist in themselves and which could not be inferred from staring at them long enough in isolation. Language is a symbolic system. And without human minds and cultures to reinforce it – or external memory like writing or computers to retain it – the system itself and the meaning of every symbol within it dies, utterly and forever. Irretrievable. Gone.
The social urgency of the rapid acquisition of language – or more broadly, the rapid acquisition of any symbolic system – is that it must be reproduced with every generation or else it dies forever along with each of its billions of implicit relationships among constituent symbols. The truth of the Acheulean industry is that there were almost certainly thousands of fits and starts of similar attempts to imagine a tool which had never been, to suppress the deterministic and concrete threads of thought which incessantly suggested those familiar iconic and indexical impulses. Pick the rock that looks like what you want the tool to be. Strike it to make it sharp from this shape. Be done with it. Those thousands of emerging hominin minds said no. Instead, they fashioned something from a symbol they held in their minds, an imagined thing which they would work toward through a process of operational thinking and motivated action. Maybe they succeeded. But then they died, and the symbol died with them. The miracle of the Acheulean industry is that it achieved escape velocity from the mind of a single member of Homo erectus and did not die, but instead infected the minds of so many others. And so that symbol survived – not unchanged, because evolution leaves nothing exactly as it was – but alive. Its lineage did not end with the death of the last mind which held the system containing it.
So it is with language and story. With each generation, all those phenomenal benefits of being able to communicate through language and story, to describe things other than the here-and-now, to pass down hard-fought-for knowledge, to understand and be understood by others, and to convey the beauty of our world were subject to extinction – even if, until the relatively recent past, these words and stories existed within the simpler symbolic systems we refer to as protolanguage. If a thousand axeheads died in the minds of men without ever finding purchase in another, then a thousand protolanguages, simple but full of billions of words and symbols, died in much the same way. The ones that would survive were not the richest in vocabulary, nor the most florid, nor the most evocative or sophisticated. The survivors would have been those whose system could be acquired quickly, whose constituent symbols were more easily spread, whose meanings were more easily interrogated, and which lent themselves to being more readily incorporated into the consciousness of others. The survivors would have been those which adapted to reflect human neurobiology and social structures.
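This selection dynamic can be sketched numerically. The four protolanguages below, along with their ‘expressiveness’ and ‘learnability’ scores, are entirely invented; the sketch only demonstrates that when survival across generations depends on how reliably a system is transmitted, the most learnable system crowds out the most expressive one.

```python
# Invented protolanguages: name -> (expressiveness, learnability), where
# learnability is the chance a child fully acquires the system from
# limited exposure before the generation turns over.
languages = {
    "florid": (0.95, 0.30),
    "rich":   (0.80, 0.50),
    "plain":  (0.50, 0.75),
    "spare":  (0.30, 0.90),
}

# Equal shares of the population start with each system.
shares = {name: 0.25 for name in languages}

for generation in range(20):
    # A community keeps its system with probability equal to its
    # learnability; failed transmissions are re-filled in proportion
    # to the systems surviving elsewhere.
    failed = sum(s * (1 - languages[n][1]) for n, s in shares.items())
    shares = {n: s * (languages[n][1] + failed) for n, s in shares.items()}

for name, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {share:.3f}")
```

After twenty generations the ‘spare’ system dominates despite being the least expressive – in this toy world, exactly the fate I am suggesting befell the most florid protolanguages.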
The sooner in the life of a hominin these adaptations allowed the symbolic system to take root in its mind, the better the chances of its survival for at least one more generation. But as urgent as the social pressure to make language easy to acquire must have been, even it may have played a supporting role to the biological necessity. You see, the LAD lads are not wrong about language being nearly impossible to acquire – as far as we know, there is only one way to do it, which makes this probably the strongest evolutionary pressure on language to adapt to us.
The Biological Necessity of Rapid Acquisition of Language
When I call both language and storytelling symbolic systems, I mean something specific. A symbolic system is not merely a collection of referential tokens, not just a sophisticated set of signs pointing to things in the world. Instead, it is a complex, self-organizing network of relationships between symbols. In this network, the meaning of each symbol depends not on its direct connection to some external reality, but on its relationships to other symbols within the system and the rules by which they are organized. When I say that language is a symbolic system, I am explaining why you cannot learn it by simply memorizing a dictionary. I am explaining why it is practically impossible to build a model for it by coding as many definitions, grammatical rules, and syntactic structures as you can think of. The meaning of each word in a language is not contained within itself or in some singular deterministic definition but emerges from its place in the vast web of other words and grammatical structures.
This systemic nature of symbols is what separates human language from even the most sophisticated forms of animal communication. It is what allows us to think abstractly, to imagine counterfactuals, to create art and culture. And it is what makes the acquisition of symbolic thinking such a monumental leap in cognitive evolution – a leap that reshaped our brains, our societies, and our very nature as a species. Its rules emerge from the structure of the system. Its meanings are derived from the relationships between its symbols and the structures they occupy. These meanings can rarely be interrogated, unpacked, and understood in isolation. And they can all but never be understood – at least not in full – if the symbolic system is not perceived during a critical phase in childhood.
The intersection of the human brain and language is obviously complicated, and there is much we do not know. However, I do think we know three things about why and how symbolic systems might be acquirable by young children – or precocious bonobos – but resist the best-laid plans of adults and machines alike. First, we know a great deal about the general plasticity of the human brain at this age and about the specific plasticity of functional areas involved in language comprehension and production. We also know a great deal about how the human brain becomes more fixed and less capable of certain types of rewiring and repurposing during maturation. Finally, and perhaps most importantly, we know a lot about how neural activity differs between early language acquisition and maturation. As with everything else we are discussing, bear in mind that these issues interact and overlap.
There are two big classes of neuroplasticity. The first – synaptic plasticity – simply refers to the tendency of certain neurons or regions of the brain to form new neural pathways and connections. When the brain is more plastic on this dimension, it is more capable of and more likely to form such new connections. The second – what we might call cortical map plasticity – refers to the brain’s capacity to repurpose entire regions, even hemispheres, for new functions. The example I gave earlier of Elizabeth Bates’s work on the capacity of the brain to find new areas to accommodate language functions after prenatal or perinatal damage to traditional language regions falls into this category. Naturally, cortical map plasticity requires wiring new synapses as well, so these are hardly mutually exclusive categories.
The biological benefit to the rapid acquisition of the symbolic system of language is that there’s a relatively brief period when our brain can manage it. Recall the so-called critical period in childhood in which we can really come to understand the structure of a language. There is a subsequent sensitive period running through adolescence in which plasticity is higher than usual, too. Within language acquisition, there also appear to be much more definite and often shorter critical periods. For example, the regions of the brain responsible for integrating auditory processing of phonetic sounds appear to be most plastic between 6 and 12 months of age[lvi]. For reasons I will explore later in this chapter, there are few areas of human knowledge and skill acquisition which have more clearly defined critical and sensitive periods than language acquisition.
Language acquisition that takes place during these critical periods of high plasticity demonstrates other special traits relative to other acquired knowledge and skills, too. Despite its complexity, language is relatively easier than most skills for children to acquire, and less dependent on intentional instruction[lvii]. Once acquired, a language is also significantly tougher to forget than most any other skill, crystallizing into an almost modular feature of the mature human brain[lviii]. If our goal is to answer simply what incentive language might have to evolve to become more easily acquirable as soon as possible, however, the fact that childhood is the only time in which a primate brain has ever shown it can acquire language seems to stand on its own.
The corollary to the remarkable plasticity of the young brain is what takes place when neural pathways and the overall structure of the brain have become more mature and fixed. We know, for example, that barring reorganization in response to earlier damage, the most active language-related functional areas of the brain become increasingly specialized and lateralized to the left hemisphere of the brain. This is most acute in Broca’s and Wernicke’s areas[lix]. That is not to say that there is not ongoing plasticity through adolescence and beyond[lx], but in many ways the most important neural pathways associated with language have been well established by the time we are teenagers. This reduction in plasticity alone is probably enough to explain the difficulty in acquiring something as complex as language later without early life exposure, but there is something more interesting going on beneath the surface, too. There are extreme cases of language deprivation which can tell us a great deal more about the relationships between language, cognition, and brain development.
Probably the most famous historical example of the proverbial ‘raised by wolves’ story concerns a girl by the name of Genie. She was isolated from exposure to human language by abusive guardians until the age of 13. Her capacity to acquire language was critically challenged as a result, to be sure. But Susan Curtiss, then a PhD student at UCLA, identified certain other deficiencies stemming from this deprivation[lxi]. Most notably, Genie was also severely limited in her ability to engage in symbolic or abstract thought at all, not just in service of communication. Studies of deaf children born to hearing parents who don’t use sign language have demonstrated similar effects. Many such children find it hard to develop the concept of what might be going on in another human’s mind – an idea referred to as theory of mind – and struggle with other forms of abstract thought[lxii]. Further studies have reinforced the importance of early exposure to fully developed language systems with more realized grammatical structures and symbolic networks of meaning to the development of symbolic thought[lxiii]. Language impairments have been shown to lead to delays and deficiencies in symbolic play[lxiv] and the acquisition of more advanced symbolic numeracy[lxv]. And it is no wonder. Brain imaging studies consistently reinforce that both language and non-language symbolic cognition engage some of the same regions of the brain[lxvi].
With that in mind, let us step back and consider the structure of the brain for a moment. In the million and a half years or so it took to evolve from Homo erectus into the behaviorally modern Homo sapiens, an already highly encephalized brain grew and became more complex. The frontal lobe, including the prefrontal cortex, grew much more rapidly and changed its shape more than the rest of the brain[lxvii]. On size alone, the prefrontal cortex of Homo sapiens is about 12.5% larger than would be expected based on the size of the human brain in comparison to other great apes[lxviii]. But the neurons in the prefrontal cortex also exhibit some of the densest dendritic branching, among the highest dendritic spine densities, and some of the most disparate and extensive connections to other brain regions[lxix]. Axonal branching patterns – the neural connections sent from neurons in the prefrontal cortex to other areas – demonstrate similarly exaggerated density. What’s more, there is some evidence that these traits were among those which grew the most during human evolution[lxx].
Don’t be intimidated by the technical terms. Everything I just wrote is a fancy way of saying that when other brain regions try to connect to neurons in the PFC or vice versa, they are significantly more likely to find a connection, especially in childhood. During development, neurons in your brain send out feelers to find other neurons in the same, neighboring, or distant parts of the brain to form synaptic connections. Axonal and dendritic ‘branching’ just means that each attempt to form such connections can ‘try’ more possibilities. There is practically no major structure of the brain whose neurons are more defined by their tendency to explore multiple possible connections with other areas of the brain than those of the prefrontal cortex. Likewise, there are few structures of the brain whose neurons engage in this process of seeking out multi-dimensional connections for as long as those of the prefrontal cortex[lxxi]. Yet the principal story arc of the maturation of the brain is this: that at some point your neurons send and receive fewer deeply branching possibilities[lxxii], that they begin to prune lesser-used connections[lxxiii], and that the remaining connections will exert influence on other regions of the brain based on the quantity of signal passing between them[lxxiv].
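That maturation arc – exploratory over-production of candidate connections, use-dependent strengthening, and pruning of the rarely used remainder – can be sketched as a toy simulation. To be clear, this is purely illustrative and not a biological model: the neuron counts, the skewed usage distribution, and the pruning threshold are all invented stand-ins.

```python
import random

def develop_network(n_neurons=50, candidates_per_neuron=20,
                    usage_rounds=5000, prune_threshold=5, seed=0):
    """Toy sketch: over-produce candidate connections, strengthen a
    subset through use, then prune the rarely used remainder."""
    rng = random.Random(seed)

    # 1. Exploratory branching: each neuron proposes many candidate synapses.
    usage = {}  # (pre, post) -> signal count
    for pre in range(n_neurons):
        for _ in range(candidates_per_neuron):
            post = rng.randrange(n_neurons)
            if post != pre:
                usage[(pre, post)] = 0

    # 2. Use: signal traffic is uneven, so some connections carry far
    #    more activity than others (cubing skews the distribution).
    keys = list(usage)
    weights = [rng.random() ** 3 for _ in keys]
    for k in rng.choices(keys, weights=weights, k=usage_rounds):
        usage[k] += 1

    # 3. Pruning: connections below a usage threshold are trimmed away.
    survivors = {k: n for k, n in usage.items() if n >= prune_threshold}
    return len(usage), len(survivors)

proposed, kept = develop_network()
print(proposed, kept)  # far fewer connections survive than were proposed
```

Whatever parameters you choose, the shape of the outcome is the same: the surviving network is a small, heavily used subset of the possibilities that were explored.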
This obviously has a lot of implications for our story. It should be clear, for example, that the acquisition of something as complex as language – which relies on auditory, visual, cognitive, working memory, and executive and inhibitory functions from a range of functional regions of the brain – would be generally hindered by the brain’s maturation. For that reason, we might conclude our exploration of the reasons for language to evolve to become more acquirable here and call it a day. Yet there is something specific about the nature of this maturation that is central to the story of human consciousness and its susceptibility to story.
We know that the prefrontal cortex plays a primary role in inhibiting the instant acceptance of iconic and indexical relationships in favor of the consideration of other possibilities, including more abstract or symbolic relationships[lxxv] like those around which language is built. We also know that the prefrontal cortex is perhaps the brain structure whose neurons are most physically predisposed to the possibilities of connections it may form with other areas of the brain. We furthermore know that the prefrontal cortex is among the brain structures which is most different in size from those of other great apes, and which grew the most in relative terms on the path from Caliban to Homo sapiens, all of which would give it greater relative influence over those other regions. We know that a lack of exposure to fully structured language in childhood harms not only language acquisition but all forms of symbolic and abstract thought.
It seems clear enough from these facts that human neurobiology and modern human language would be necessarily symbiotic – or as Deacon would have it, that they would exhibit the properties of a parasite and a host. The brain needs exposure to language to step into its full mantle of storyteller and storyseeker, and language needs a brain capable of processing a system of deep symbolic relationships and internal structures. To acquire language absent an innate grammar, the brain needs language to deliver, amid a poverty of stimulus, as many internal cues to its structure through as many modalities as possible, and language needs a brain dominated by structures which ruthlessly and obsessively expand the possible cues it may perceive and the modalities through which it may perceive them.
The confluence of the extended critical period of childhood language acquisition plasticity in the human brain, the predominant influence of the prefrontal cortex relative to other animals, and the prefrontal cortex’s tendency to spam multi-dimensional connections with far-flung brain structures should lead to both a capacity and preference for inferential acquisition of symbolic systems. That is, when it comes to language, we understand the trees by perceiving the nature of the forest. The brain of a human child is not throwing vast computational power at learning the various rules of grammar or at marshalling an innate capacity for it to steadily acquire vocabulary to slot within it. For a brief period in our life, it is connecting a possibility engine that feverishly considers practically every possible relationship a thing might have to every biological antenna we own. There may well be a poverty of the stimulus, but the brain of the human child has cranked the sensitivity of every growing auditory, visual, cognitive, abstract, and mirror neuron network in its brain to 11 to detect every possible detail and connection, however faint.
Dad used that word in a different place in his sentence. Mom smiled when she said that thing. This word made me remember a smell. There was an ‘s’ sound on the end of that word this time. What does that mean? That was a long sentence that I thought was about the dog doing something to the cat at first, but now he’s looking at the cat darting around the house. Brother always seems to respond when Mom stops talking with a rising pitch on the last word. I wonder if there are words I should expect at certain places in those sentences that would tell me that it was going to be that kind of interaction. Sister turned away after telling me something. I don’t think she expects a response. Are there other things in what she said, or how, or where that might guide me to predict that? Mom looked confused when I said ‘last night.’ I thought that meant the concept of anything before now? I’m starting to doubt that[lxxvi].
While this internal monologue is fanciful and silly, of course, the role of cognitive flexibility in childhood language acquisition as compared to mature language production or comprehension is indeed visible in a variety of ways. For example, neuroimaging studies demonstrate that language processing and acquisition in infants activates a much wider range of brain regions than language processing or acquisition in adults and adolescents[lxxvii]. Similar studies have also demonstrated the disproportionate role of executive function, including the contributions of the prefrontal cortex, during childhood language acquisition, whereafter such function gives way to more integrated and efficient language networks[lxxviii]. Researchers have also identified the detection of statistical regularities in speech sounds by infants, indicative of precisely the kind of multi-modal inferential process to which I was referring[lxxix].
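Those statistical regularities in speech sounds are easy to make concrete. In the classic experimental design, ‘words’ in a continuous nonsense-syllable stream can be found only because syllable transitions inside a word are far more predictable than transitions across word boundaries. A minimal sketch – the syllables and three-syllable ‘words’ below are invented for illustration:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """P(next syllable | current syllable) for each adjacent pair --
    the statistic infants appear to track when segmenting speech."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]]
            for pair, n in pair_counts.items()}

# A continuous stream built from three made-up 'words', with no pauses
# or other cues marking where one word ends and the next begins.
words = [["bi", "da", "ku"], ["pa", "do", "ti"], ["go", "la", "bu"]]
rng = random.Random(1)
stream = [syl for _ in range(100) for syl in rng.choice(words)]

tps = transitional_probabilities(stream)
print(tps[("bi", "da")])      # 1.0: within-word transitions are certain
print(tps.get(("ku", "pa")))  # well below 1.0: a word boundary
```

The dips in transitional probability fall exactly at the word boundaries – a purely statistical cue to structure that requires no rules at all, only exposure.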
In recent years, advances in certain fields of software development have only reinforced the efficiency of forms of statistical inference in perceiving the nature of a system of relationships such as those that exist in language. The utility of so-called neural networks to simulate how children might acquire language has been explored for decades with varying degrees of success[lxxx]. Since 2018, however, the success of transformer-based generative large language models (e.g. ChatGPT) has been remarkable. If you have used these models, then you will have seen how convincing their command of human language appears to be. Even more convincing is the discovery that their process for acquiring language bears so many similarities to how human children acquire language.
An LLM’s acquisition of a language is an emergent property not of any set of rules it has been given or lessons it has learned, but of the simultaneous perception of patterns, contexts, and relationships present in various types of multi-modal stimulus to which it has been exposed (i.e. its training data). In much the same way, I argue that a child’s grasp of language emerges from how the brain of a child hands over the reins of language acquisition to a possibility-exploring functional area, leverages that functional area’s possibility-expanding dendritic and axonal branching habit, and subjects all of that through use to multi-modal exposure to social-cognitive cues and other stimuli. In a very real sense, my argument is simply an extension of the one Terrence Deacon made long before I did. There is a universal grammar, but it does not live in the genetic hardwiring of syntactic rules to your frontal lobe. It lives as an emergent property of the possibility-seeking, possibility-led, plastic possibility engine we call the brain of a child. But it lives in more than just the brain. It also lives in that brain’s exposure to a language system which has evolved specifically to be acquired by just such a brain, and which exists within a social structure which has similarly adapted to provide that brain with access to that language in as many different ways and with as many different cues as possible.
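The contrast between coded rules and regularities absorbed from exposure can be shown with something far cruder than a transformer: a toy bigram model over an invented corpus. Nothing below tells the model that articles precede nouns or that verbs sit between them; it simply counts what follows what, and attested word orders emerge:

```python
import random
from collections import defaultdict, Counter

def train_bigram(corpus_sentences):
    """A toy stand-in for statistical language acquisition: no grammar
    rules are coded; word-order regularities emerge purely from counting
    what follows what in the exposure data."""
    follows = defaultdict(Counter)
    for sentence in corpus_sentences:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            follows[a][b] += 1
    return follows

def generate(follows, rng):
    """Walk the learned transitions from sentence start to sentence end."""
    word, out = "<s>", []
    while True:
        choices = follows[word]
        word = rng.choices(list(choices), weights=list(choices.values()))[0]
        if word == "</s>":
            return " ".join(out)
        out.append(word)

corpus = [
    "the dog chases the cat",
    "the cat watches the dog",
    "the dog watches the bird",
]
model = train_bigram(corpus)
print(generate(model, random.Random(0)))  # every bigram it emits was attested
```

The point is not that a bigram table understands anything – it plainly does not – but that even the simplest exposure-driven counting reproduces structural regularities no one ever stated as rules.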
There is, of course, a vast evolutionary story to be told from here, but most of its details do not concern us just yet, if they do at all. Perhaps as a result of all these co-evolutionary adaptations which took hold once Homo erectus emerged with a symbolic mind, structured language finally emerged 70-100 thousand years ago. Or perhaps millennia before! Perhaps protolanguage is a phenomenon which immediately preceded structured language, or perhaps we can look to the development of vocal apparatus in Homo erectus and suppose that it emerged much earlier. Perhaps writing emerged in Mesopotamia less than 6,000 years ago, or perhaps there is an older system that we have since lost. In the context of the interaction of story, social networks, and human consciousness, these details are interesting, but none matter very much.
In Homo erectus we observed the emergence of a species of archaic man some 1.8 million years ago that was capable of symbolic thought. Our first storyteller. The evolution from Homo erectus to behaviorally modern Homo sapiens over the next million-and-a-half years or so – our Exile from Eden – was a co-evolutionary process which transformed us from a species capable of symbolic thought to one whose very nature was defined by it. This co-evolutionary process took place among human neurobiology, human language, and human social structures. The complex symbolic system we call language could not exist without the symbolic mind to create it or the social structures to support it. The symbolic capacity of the human brain, in turn, evolved to become dependent on the stimulus of language to realize its full potential. Throughout this process, language and story in their earliest forms would have offered many benefits to societies in which they emerged. This, along with other co-evolutionary factors, would have exerted significant pressure on those language and story systems to restructure themselves to become acquirable earlier and more easily, lest they and their benefits disappear forever.
At the intersection of consciousness, social networks, and story, however, we must observe that everything that helps us to acquire these symbolic systems also influences how we think. That is especially true when we must consider questions in the abstract, when we encounter new symbols, and when we hear new stories. We are the symbolic species. We cannot help but acquire and integrate new symbols we encounter into our understanding of the world – both in the case of the symbolic system we call language and the symbolic system that is our storytelling tradition. We cannot help how the new stories we encounter fit into the system, or how their meaning will invariably influence and transform the meaning of countless others. Both language and story have adapted over generations to make it easier to become fluent in their use. But the same evolutionary pressures which sought to avoid the extinction of a symbolic system also create pressures to make it easier for individual symbols to spread to new minds.
The result is that we want to incorporate new symbols into our thinking as children when our brains are largely plastic[lxxxi]. We want to do so as adults when our brains are largely fixed[lxxxii]. Humans look for any excuse to transform concrete problems into symbolic ones[lxxxiii]. We do this because of a mind whose nature has been forever tilted by the preeminent connectivity of brain structures committed to keeping possibilities open. We do it because of the influence of symbolic thought and representation on human cultures[lxxxiv]. We do it because humans and human culture have been engineered by evolution to easily acquire and spread new symbols and stories[lxxxv], often without our conscious knowledge or intent[lxxxvi].
Our brain is a possibility engine, evolved to seek out and find symbols, including the complex forms we call stories. But why? Is our mantle as Hamlet the storyseeker just a byproduct of the many reasons acquiring language is obviously beneficial? Or is there something special about those complex symbols we call stories that has made them a powerful human adaptation – and in the networked age, a vulnerability – beyond the evolutionary utility of non-story language?
[i] Lichtman, J. W., & Colman, H. (2000). Synapse elimination and indelible memory. Neuron, 25(2), 269-278.
[ii] Katz, L. C., & Shatz, C. J. (1996). Synaptic activity and the construction of cortical circuits. Science, 274(5290), 1133-1138.
[iii] This is the unnerving bass drum pattern in the classic Led Zeppelin song ‘Good Times Bad Times’, and it is devilishly and famously tricky to master.
[iv] Bezzola, L., Mérillat, S., Gaser, C., & Jäncke, L. (2012). Training-induced neural plasticity in golf novices. Journal of Neuroscience, 32(35), 12754-12758.
[v] Seidler, R. D., Bernard, J. A., Burutolu, T. B., Fling, B. W., Gordon, M. T., Gwin, J. T., … & Lipps, D. B. (2010). Motor control and aging: links to age-related brain structural, functional, and biochemical effects. Neuroscience & Biobehavioral Reviews, 34(5), 721-733.
[vi] Newport, E. L. (1990). Maturational constraints on language learning. Cognitive Science, 14(1), 11-28.
[vii] Hakuta, K., Bialystok, E., & Wiley, E. (2003). Critical evidence: A test of the critical-period hypothesis for second-language acquisition. Psychological Science, 14(1), 31-38.
[viii] Kuhl, P. K. (2010). Brain mechanisms in early language acquisition. Neuron, 67(5), 713-727.
[ix] McNamara, T. P. (2005). Semantic priming: Perspectives from memory and word recognition. Psychology Press.
[x] Simmons, W. K., Martin, A., & Barsalou, L. W. (2005). Pictures of appetizing foods activate gustatory cortices for taste and reward. Cerebral Cortex, 15(10), 1602-1608.
[xi] Gold, E. M. (1967). Language identification in the limit. Information and Control, 10(5), 447-474.
[xii] Lake, B. M., & Baroni, M. (2018). Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. International Conference on Machine Learning (pp. 2873-2882).
[xiii] Pinker, S. (1979). Formal models of language learning. Cognition, 7(3), 217-283.
[xiv] Chomsky, N. (1980). Rules and representations. Behavioral and Brain Sciences, 3(1), 1-15.
[xv] Greenberg, J. H. (1963). Some universals of grammar with particular reference to the order of meaningful elements. In J. H. Greenberg (Ed.), Universals of Language (pp. 73-113). MIT Press.
[xvi] Dryer, M. S. (1992). The Greenbergian Word Order Correlations. Language, 68(1), 81-138.
[xvii] Jackendoff, R. (1972). Semantic Interpretation in Generative Grammar. MIT Press.
[xviii] Wierzbicka, A. (1972). Semantic Primitives. Athenäum Verlag.
[xix] Keenan, E. L., & Comrie, B. (1977). Noun Phrase Accessibility and Universal Grammar. Linguistic Inquiry, 8(1), 63-99.
[xx] Bickerton, D. (1981). Roots of Language. Karoma Publishers.
[xxi] Klein, R. G. (2008). Out of Africa and the evolution of human behavior. Evolutionary Anthropology: Issues, News, and Reviews, 17(6), 267-281.
[xxii] Shea, J. J. (2011). Homo sapiens is as Homo sapiens was: Behavioral variability versus “behavioral modernity” in Paleolithic archaeology. Current Anthropology, 52(1), 1-35.
[xxiii] Henshilwood, C. S., d’Errico, F., van Niekerk, K. L., Dayet, L., Queffelec, A., & Pollarolo, L. (2018). An abstract drawing from the 73,000-year-old levels at Blombos Cave, South Africa. Nature, 562(7725), 115-118.
[xxiv] Timmermann, A., & Friedrich, T. (2016). Late Pleistocene climate drivers of early human migration. Nature, 538(7623), 92-95.
[xxv] Bates, E., Dale, P. S., & Thal, D. (1995). Individual differences and their implications for theories of language development. In P. Fletcher & B. MacWhinney (Eds.), The handbook of child language (pp. 96-151). Blackwell.
[xxvi] Bates, E., & MacWhinney, B. (1989). Functionalism and the competition model. In B. MacWhinney & E. Bates (Eds.), The crosslinguistic study of sentence processing (pp. 3-73). Cambridge University Press.
[xxvii] MacWhinney, B., Bates, E., & Kliegl, R. (1984). Cue validity and sentence interpretation in English, German, and Italian. Journal of Verbal Learning and Verbal Behavior, 23(2), 127-150.
[xxviii] Marchman, V. A., & Bates, E. (1994). Continuity in lexical and morphological development: A test of the critical mass hypothesis. Journal of Child Language, 21(2), 339-366.
[xxix] Bates, E., & Roe, K. (2001). Language development in children with unilateral brain injury. In C. A. Nelson & M. Luciana (Eds.), Handbook of developmental cognitive neuroscience (pp. 281-307). MIT Press.
[xxx] Bates, E., Thal, D., Trauner, D., Fenson, J., Aram, D., Eisele, J., & Nass, R. (1997). From first words to grammar in children with focal brain injury. Developmental Neuropsychology, 13(3), 275-343.
[xxxi] Pasteur, L. (1993). On spontaneous generation (A. Levine, Trans.). Revue des cours scientifiques, 1(1863-1864), 257-264. (Original work published 1864)
[xxxii] Gerard, J. (1597). The herball or generall historie of plantes. John Norton.
[xxxiii] Lieberman, E., Michel, J. B., Jackson, J., Tang, T., & Nowak, M. A. (2007). Quantifying the evolutionary dynamics of language. Nature, 449(7163), 713-716.
[xxxiv] Hay, J., & Bauer, L. (2007). Phoneme inventory size and population size. Language, 83(2), 388-400.
[xxxv] McWhorter, J. H. (2011). Linguistic simplicity and complexity: Why do languages undress? De Gruyter Mouton.
[xxxvi] Haspelmath, M., & Tadmor, U. (Eds.). (2009). Loanwords in the world’s languages: A comparative handbook. De Gruyter Mouton.
[xxxvii] Traugott, E. C., & Dasher, R. B. (2001). Regularity in semantic change. Cambridge University Press.
[xxxviii] Givón, T. (1979). On understanding grammar. Academic Press.
[xxxix] Blevins, J. P., & Blevins, J. (Eds.). (2009). Analogy in grammar: Form and acquisition. Oxford University Press.
[xl] Wang, W. S. Y. (1969). Competing changes as a cause of residue. Language, 45(1), 9-25.
[xli] Wilson, E. O. (1999). Consilience: The unity of knowledge.
[xlii] Boyd, B. (2018). The evolution of stories: From mimesis to language, from fact to fiction. WIREs Cognitive Science, 9(1), e1444.
[xliii] Nowak, M. A., Komarova, N. L., & Niyogi, P. (2001). Evolution of universal grammar. Science, 291(5501), 114-118.
[xliv] Pinker, S., & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences, 13(4), 707-727.
[xlv] Csibra, G., & Gergely, G. (2011). Natural pedagogy as evolutionary adaptation. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1567), 1149-1157.
[xlvi] Dunbar, R. I. M. (1993). Coevolution of neocortical size, group size and language in humans. Behavioral and Brain Sciences, 16(4), 681-694.
[xlvii] Dunbar, R. I. M. (2004). Gossip in evolutionary perspective. Review of General Psychology, 8(2), 100-110.
[xlviii] Miller, G. F. (2000). The mating mind: How sexual choice shaped the evolution of human nature. Heinemann.
[xlix] Bickerton, D. (1990). Language and species. University of Chicago Press.
[l] Carruthers, P. (2002). The cognitive functions of language. Behavioral and Brain Sciences, 25(6), 657-674.
[li] Reboul, A. C. (2017). Cognition and communication in the evolution of language. Oxford University Press.
[lii] Gallup, G. G. (1970). Chimpanzees: Self-recognition. Science, 167(3914), 86-87.
[liii] Vonk, J., & MacDonald, S. E. (2004). Levels of abstraction in orangutan (Pongo abelii) categorization. Journal of Comparative Psychology, 118(1), 3-13.
[liv] Visalberghi, E., & Tomasello, M. (1998). Primate causal understanding in the physical and psychological domains. Behavioural Processes, 42(2-3), 189-203.
[lv] Call, J. (2004). Inferences about the location of food in the great apes (Pan paniscus, Pan troglodytes, Gorilla gorilla, and Pongo pygmaeus). Journal of Comparative Psychology, 118(2), 232-241.
[lvi] Kuhl, P. K., Stevens, E., Hayashi, A., Deguchi, T., Kiritani, S., & Iverson, P. (2006). Infants show a facilitation effect for native language phonetic perception between 6 and 12 months. Developmental Science, 9(2), F13-F21.
[lvii] Paradis, M. (2004). A neurolinguistic theory of bilingualism. John Benjamins Publishing.
[lviii] Pallier, C., Dehaene, S., Poline, J. B., LeBihan, D., Argenti, A. M., Dupoux, E., & Mehler, J. (2003). Brain imaging of language plasticity in adopted adults: Can a second language replace the first? Cerebral Cortex, 13(2), 155-161.
[lix] Szaflarski, J. P., Schmithorst, V. J., Altaye, M., Byars, A. W., Ret, J., Plante, E., & Holland, S. K. (2006). A longitudinal functional magnetic resonance imaging study of language development in children 5 to 11 years old. Annals of Neurology, 59(5), 796-807.
[lx] Sowell, E. R., Thompson, P. M., Leonard, C. M., Welcome, S. E., Kan, E., & Toga, A. W. (2004). Longitudinal mapping of cortical thickness and brain growth in normal children. Journal of Neuroscience, 24(38), 8223-8231.
[lxi] Curtiss, S. (1977). Genie: A psycholinguistic study of a modern-day “wild child”. Academic Press.
[lxii] Schick, B., De Villiers, P., De Villiers, J., & Hoffmeister, R. (2007). Language and theory of mind: A study of deaf children. Child Development, 78(2), 376-396.
[lxiii] Pyers, J. E., & Senghas, A. (2009). Language promotes false-belief understanding: Evidence from learners of a new sign language. Psychological Science, 20(7), 805-812.
[lxiv] Lewis, V., Boucher, J., Lupton, L., & Watson, S. (2000). Relationships between symbolic play, functional play, verbal and non-verbal ability in young children. International Journal of Language & Communication Disorders, 35(1), 117-127.
[lxv] Carey, S. (2004). Bootstrapping & the origin of concepts. Daedalus, 133(1), 59-68.
[lxvi] Dehaene, S., & Cohen, L. (2007). Cultural recycling of cortical maps. Neuron, 56(2), 384-398.
[lxvii] Bruner, E., Manzi, G., & Arsuaga, J. L. (2003). Encephalization and allometric trajectories in the genus Homo: Evidence from the Neandertal and modern lineages. Proceedings of the National Academy of Sciences, 100(26), 15335-15340.
[lxviii] Semendeferi, K., Lu, A., Schenker, N., & Damasio, H. (2002). Humans and great apes share a large frontal cortex. Nature Neuroscience, 5(3), 272-276.
[lxix] Elston, G. N., Benavides-Piccione, R., & DeFelipe, J. (2001). The pyramidal cell in cognition: A comparative study in human and monkey. Journal of Neuroscience, 21(7), RC163.
[lxx] Jacobs, B., Schall, M., Prather, M., Kapler, E., Driscoll, L., Baca, S., … & Treml, M. (2001). Regional dendritic and spine variation in human cerebral cortex: A quantitative Golgi study. Cerebral Cortex, 11(6), 558-571.
[lxxi] Hoftman, G. D., & Lewis, D. A. (2011). Postnatal developmental trajectories of neural circuits in the primate prefrontal cortex: Identifying sensitive periods for vulnerability to schizophrenia. Schizophrenia Bulletin, 37(3), 493-503.
[lxxii] Huttenlocher, P. R., & Dabholkar, A. S. (1997). Regional differences in synaptogenesis in human cerebral cortex. Journal of Comparative Neurology, 387(2), 167-178.
[lxxiii] Chechik, G., Meilijson, I., & Ruppin, E. (1998). Synaptic pruning in development: A computational account. Neural Computation, 10(7), 1759-1777.
[lxxiv] Tononi, G., & Cirelli, C. (2006). Sleep function and synaptic homeostasis. Sleep Medicine Reviews, 10(1), 49-62.
[lxxv] Badre, D. (2008). Cognitive control, hierarchy, and the rostro-caudal organization of the frontal lobes. Trends in Cognitive Sciences, 12(5), 193-200.
[lxxvi] This is, incidentally, a real thing that happened in my household. Whether it was a linguistic learning process or a bit of struggle with concepts of time, my three-year-old used ‘last night’ as the descriptor for anything that happened at an earlier time. One day, every such instance changed to ‘this one time.’
[lxxvii] Dehaene-Lambertz, G., Hertz-Pannier, L., & Dubois, J. (2006). Nature and nurture in language acquisition: Anatomical and functional brain-imaging studies in infants. Trends in Neurosciences, 29(7), 367-373.
[lxxviii] Weiss-Croft, L. J., & Baldeweg, T. (2015). Maturation of language networks in children: A systematic review of 22 years of functional MRI. NeuroImage, 123, 269-281.
[lxxix] Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274(5294), 1926-1928.
[lxxx] Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48(1), 71-99.
[lxxxi] DeLoache, J. S. (2004). Becoming symbol-minded. Trends in Cognitive Sciences, 8(2), 66-70.
[lxxxii] Sperber, D. (1975). Rethinking symbolism. Cambridge University Press.
[lxxxiii] Amalric, M., & Dehaene, S. (2016). Origins of the brain networks for advanced mathematics in expert mathematicians. Proceedings of the National Academy of Sciences, 113(18), 4909-4917.
[lxxxiv] Geertz, C. (1973). The interpretation of cultures. Basic Books.
[lxxxv] Appel, M., & Richter, T. (2007). Persuasive effects of fictional narratives increase over time. Media Psychology, 10(1), 113-134.
[lxxxvi] Reber, A. S. (1989). Implicit learning and tacit knowledge. Journal of Experimental Psychology: General, 118(3), 219-235.