Language Acquisition

Summary and Keywords

Language is a structured form of communication that is unique to humans. Within the first few years of life, typically developing children can understand and produce full sentences in their native language or languages. For centuries, philosophers, psychologists, and linguists have debated how we acquire language with such ease and speed. Central to this debate has been whether the learning process is driven by innate capacities or information in the environment. In the field of psychology, researchers have moved beyond this dichotomy to examine how perceptual and cognitive biases may guide input-driven learning and how these biases may change with experience. There is evidence that this integration permeates the learning and development of all aspects of language—from sounds (phonology), to the meanings of words (lexical-semantics), to the forms of words and the structure of sentences (morphosyntax). For example, in the area of phonology, newborns’ bias to attend to speech over other signals facilitates early learning of the prosodic and phonemic properties of their native language(s). In the area of lexical-semantics, infants’ bias to attend to novelty aids in mapping new words to their referents. In morphosyntax, infants’ sensitivity to vowels, repetition, and phrase edges guides statistical learning. In each of these areas, too, new biases come into play throughout development, as infants gain more knowledge about their native language(s).

Keywords: language, child development, learning, phonology, lexical-semantics, morphology, syntax

Introduction

By the age of three, typically developing children have learned the sounds, words, and grammar of their language well enough to understand and produce multiword sentences. Unlike other complex systems such as math or music, humans learn language without explicit instruction. This amazing feat has fascinated philosophers, linguists, and psychologists for centuries.

The contemporary study of language within psychology can be traced to the late 1950s, when behaviorism was the most prominent theoretical perspective. Skinner published a behaviorist theory of language acquisition, suggesting that reinforcement and punishment shape “verbal behavior,” thus allowing young children to learn language (Skinner, 1957). Linguist Noam Chomsky wrote a scathing review of this work (Chomsky, 1959), pointing out aspects of language that could not be explained by a simple behaviorist account. He posed questions such as: How are children capable of generating sentences that they have never heard before? How can adults so easily tell whether a sentence is grammatical? To address these points, Chomsky proposed a theory called universal grammar, according to which there is an innate structure in the brain that allows humans to acquire, comprehend, and produce the complex rules of language. Input, he said, played only a minor role. Although Skinner’s book and Chomsky’s review primarily addressed syntax (or grammar) learning, their dialogue propelled the entire field of language acquisition towards investigating the power of input versus the power of innate capacities. The tension between these two pathways led to an explosion of discoveries about how children learn language.

Since the 1950s, researchers have developed a more integrative approach to language acquisition. Advances in behavioral, neuroimaging, and electrophysiological methods for studying development have led to discoveries of young infants’ powerful learning abilities (e.g., statistical learning; Saffran, 2003) and perceptual biases (e.g., a preference for listening to speech over other signals; Vouloumanos & Werker, 2007). Consequently, in this article, we suggest that instead of being primarily driven by the input or by innate capacities, infants and young children acquire language by using biases to guide an impressive ability to learn from input. While some biases are present at birth, others emerge or change throughout development as children learn more about their native language or languages and the world around them.

The following text explores this integrative perspective by surveying the mechanisms behind the processing and learning of spoken language in the first few years of life. While language knowledge continues to evolve throughout the lifespan, part of what makes language development an enigma is how early it occurs. Therefore, the current review focuses on infancy and early childhood to highlight the learning mechanisms that underlie this early development. We discuss language learning in three domains: phonology, lexical-semantics, and morphosyntax (sounds and prosody, words, and grammar, respectively). Although, as a rough approximation, infants learn sounds before words and words before grammar, evidence is accruing that these processes influence one another over the course of development, and we highlight this interweaving throughout the article.

Phonology: Speech Perception and Phonemic Development

From the moment they are born, infants show a number of neurobiological and perceptual biases that set them on the road to language acquisition, but that also show an effect of early experience. These biases, combined with an emerging set of input-driven, language-specific constraints, provide the foundation for infants’ developing ability to perceive, discriminate, and categorize the sounds of their native language(s).

Speech Perception and Discrimination

Preparation for acquiring any of the world’s languages is evident in newborns’ preference for listening to speech over carefully matched nonspeech (Vouloumanos & Werker, 2007), their ability to discriminate similar speech sounds (Eimas, Siqueland, Jusczyk, & Vigorito, 1971; see Saffran, Werker, & Werner, 2006, for a review), their sensitivity to acoustic-phonetic cues that distinguish sound sequences within words from those spanning word boundaries (Christophe et al., 1994), and their sensitivity to structural regularities among adjacent syllables (Gervain, Macagno, Cogoi, Peña, & Mehler, 2008). Moreover, the language areas of the brain are activated when young infants are presented with forward but not backward speech (Peña et al., 2003; Dehaene-Lambertz, Dehaene, & Hertz-Pannier, 2002), indicating neural specialization for human speech even at birth. Indeed, it is not just the acoustic characteristics of speech but also communicative intent (e.g., two people communicating, but using nonspeech sounds) that activates specialized areas in the brain (Shultz, Vouloumanos, Bennett, & Pelphrey, 2014).

While the neonate brain appears to be biased to process speech differently than other signals, the effect of listening experience is evident at birth as well. Neonates show a preference for the language (or languages) heard in utero over an unfamiliar language (Moon, Cooper, & Fifer, 1993), as well as for their mother’s voice (DeCasper & Fifer, 1980), and even for vowel sounds played to them in utero (Moon, Lagercrantz, & Kuhl, 2013). Moreover, neural activation to forward versus backward speech differs in response to the native language as compared to an unfamiliar one (Minagawa-Kawai et al., 2011; May, Byers-Heinlein, Gervain, & Werker, 2011). Together, neurobiological and perceptual biases position infants to attend to and learn about the properties of any of the world’s languages, with a slight boost already in play for processing the language experienced in utero.

The languages of the world can be roughly classified into three rhythmical groups: those organized by stress recurrence, like English; those organized by syllable recurrence, like Spanish; and those organized by mora recurrence, like Japanese (Abercrombie, 1967). From the moment they are born, infants are sensitive to these differences and discriminate languages on the basis of rhythmical class (Nazzi, Bertoncini, & Mehler, 1998). This capability is of particular interest because the rhythmical characteristics of languages are highly correlated with their underlying word order, and hence sensitivity to rhythm has been hypothesized to help bootstrap the acquisition of grammar (Mehler, Sebastián-Gallés, & Nespor, 2004; see also the section “Morphosyntax: Learning Grammar”). Infants even show rhythmical class discrimination between two familiar languages, as in the case of newborns exposed to two languages throughout gestation (Byers-Heinlein, Burns, & Werker, 2010). Over the first four to five months of life, language discrimination becomes refined such that by five months, infants growing up monolingual can discriminate their native language from a different language belonging to the same rhythmical class (e.g., by this age, Dutch infants can discriminate Dutch from English; Nazzi, Jusczyk, & Johnson, 2000). Interestingly, bilingual infants succeed at within-rhythmical-class language discrimination at a younger age (Bosch & Sebastián-Gallés, 1997). Overall, infants’ early speech processing and discrimination begin with biases present at birth but are shaped by their early linguistic environment.

Phonemic Development

Perhaps the best-studied example of the interplay between perceptual biases and experience is the development of phonemic discrimination. From as early as they can be tested, infants discriminate the minimal differences between speech sounds that are used to contrast meaning in the adult languages of the world (these are called phonemes). Studies using behavioral as well as neuroimaging and electrophysiological measures show that newborn and even premature infants discriminate both minimal consonant differences (as in /ba/ versus /da/) and minimal vowel differences (as in /i/ versus /a/; Mahmoudzadeh et al., 2013). Moreover, the discrimination of these distinctions engages language areas of the brain similar to those used for consonant and vowel discrimination in adults, although lateralization to the left hemisphere is less evident than it is later in development, suggesting a role for experience in such specialization (Dehaene-Lambertz, 2000).

The effect of experience on phonemic discrimination can be further seen in the developmental changes across the first year of life. From birth to four to six months, infants discriminate not only the phonetic contrasts used in their native language environment, but also many nonnative phonetic differences, including ones that they have never heard before (e.g., Streeter, 1976; Werker, Gilbert, Humphrey, & Tees, 1981). Across the next six months, their sensitivities become attuned to the native language, such that infants show a decline in discrimination of many nonnative speech sound differences (e.g., Werker & Tees, 1984) and an improvement in discrimination of native speech sound distinctions (Kuhl et al., 2006). This pattern of decline for nonnative sounds and improvement for native sounds is referred to as perceptual narrowing. Although perceptual narrowing is the most typical pattern of phonetic development, there are other patterns as well. For example, in some cases, such as the word-initial distinction between /n/ and /ng/ used in Filipino (Narayan, Werker, & Beddor, 2010), experience seems to be required to induce even initial discrimination. Of interest, bilingual infants maintain sensitivity to the phoneme distinctions of both of their native languages (see Sebastián-Gallés, 2010, for a review). Thus, while early biases lay the bedrock, it is experience that leads to the development of expertise in native-language phoneme perception.

While it is clear that the attunement to native phonemic discrimination is driven by the input, there is a growing body of research aimed at explaining the processes by which infants move from language-general to language-specific phonemic perception. Bottom-up approaches using artificial language studies have shown that infants aged six to eight months can track distributional frequency information and use it both to bifurcate and to collapse potential phonetic distinctions (Maye, Werker, & Gerken, 2002; Maye, Weiss, & Aslin, 2008). Top-down approaches have shown that infants can use the lexical context in which speech sounds occur (Swingley, 2009) and/or the potentially meaningful distinction between two words (Yeung & Werker, 2009) to establish native categories. Social interaction also plays a role, as evidenced by work showing that infant phonetic categories are most easily attuned in face-to-face contingent interaction with an adult (Kuhl, Tsao, & Liu, 2003).
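To make the distributional (bottom-up) route concrete, the sketch below shows one way a learner could recover the number of categories from exposure frequencies along an acoustic continuum, in the spirit of Maye, Werker, and Gerken (2002). It is a minimal illustration, not a model from the cited studies; the eight-step continuum and the exposure counts are invented for the example.

```python
# A minimal sketch (not the cited studies' model) of distributional
# phonetic learning: exposure frequencies along an 8-step acoustic
# continuum are inspected for one peak (one category) versus two peaks
# (two categories). The continuum steps and counts are invented.

def count_modes(freqs):
    """Count local peaks in a lightly smoothed frequency profile."""
    n = len(freqs)
    smoothed = [(freqs[max(i - 1, 0)] + freqs[i] + freqs[min(i + 1, n - 1)]) / 3
                for i in range(n)]
    peaks = 0
    for i, value in enumerate(smoothed):
        left = smoothed[i - 1] if i > 0 else float("-inf")
        right = smoothed[i + 1] if i < n - 1 else float("-inf")
        if value > left and value >= right:
            peaks += 1
    return peaks

# Hypothetical exposure counts over an 8-step /da/-/ta/ continuum.
bimodal = [5, 20, 35, 10, 10, 35, 20, 5]    # two clusters of exemplars
unimodal = [5, 10, 20, 35, 35, 20, 10, 5]   # one central cluster

print(count_modes(bimodal))   # 2 -> evidence for two phonetic categories
print(count_modes(unimodal))  # 1 -> evidence for a single category
```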

As native-language phonemic categories come to dominate perceptual discrimination, they also play a role in driving word learning (Swingley, 2009; Curtin & Werker, 2007). While 14-month-olds can confuse minimally different words in some word-learning tasks (Stager & Werker, 1997), they succeed in using native phonetic distinctions to guide the learning of new words when processing demands are reduced or the task is more obviously one of learning labels (Yoshida, Fennell, Swingley, & Werker, 2009; Fennell & Waxman, 2009). By the time they are 18–20 months of age, it is how the native language uses the phonemes, not just their discriminability, that guides word learning (Dietrich, Swingley, & Werker, 2007). Thus, any condition or experience that interferes with phonetic discrimination in early infancy, or with the development of native phonemic categories, could in turn affect later word learning. Indeed, there is a correlation between perceptual narrowing in infancy and later vocabulary size in childhood (Tsao, Liu, & Kuhl, 2004).

Multimodal Influences on Phonemic Development

Speech is not only an acoustic signal; it is multimodal. The boost that we all get from watching someone speak in a noisy situation has been repeatedly validated experimentally (first shown by Sumby & Pollack, 1954). Just as watching a talking face facilitates adult perception of speech under noisy conditions, infants are better able to discriminate speech when it is accompanied (Teinonen, Aslin, Alku, & Csibra, 2008) or preceded (ter Schure, Junge, & Boersma, 2016) by visual displays of talking faces. Indeed, infants can discriminate languages on the basis of watching silent talking faces alone (Weikum et al., 2007).

It has been known for some time that young infants are sensitive to the correspondence between heard and seen speech. When presented with two side-by-side images of the same face (one visually articulating, for example, the vowel /i/ and the other the vowel /a/), infants will look preferentially to the face that matches the vowel sound that is played (Kuhl & Meltzoff, 1982; Patterson & Werker, 2003; see also Bristow et al., 2009, for similar evidence from event-related potential, or ERP, studies). Young infants are able to detect the auditory-visual (AV) match not only for the speech segments of the native language, but also for nonnative phones that they have never experienced (Pons, Lewkowicz, Soto-Faraco, & Sebastián-Gallés, 2009), suggesting that specific experience with the speech sounds in question is not required to establish the AV mapping between heard and seen speech. Nonetheless, experience has an effect on AV speech perception: after 10+ months of age, infants can no longer detect the match between heard and seen speech from a nonnative language (Pons et al., 2009).

Multimodal influences on speech perception extend beyond the information seen in talking faces. There is evidence for a role of infants’ own oral-motor movements in speech perception as well. This was first shown empirically in a study of AV perception of /u/ versus /i/, wherein having a pacifier or a teething toy placed in the mouth (yielding a /u/- or /i/-like mouth shape, respectively) changed AV matching of these same two sounds (Yeung & Werker, 2013). Researchers have also examined auditory-motor (AM) speech perception, showing an influence even without visual mediation. In one study, neuroimaging using magnetoencephalography (MEG) revealed that motor circuits in the infant brain are activated when listening to speech at 6 months of age, but less so by 10 months, after perceptual attunement (Kuhl et al., 2014). Furthermore, having a teething toy in the mouth that interferes with the ability to make a tongue movement reduces 6-month-old infants’ discrimination of a nonnative front (dental /da/) versus back (retroflex /Da/) distinction (Bruderer, Danielson, Kandhadai, & Werker, 2015). This work raises the hypothesis that impairments in oral-motor abilities, such as those associated with cleft palate, could put an infant at risk for difficulties in phonetic discrimination and in the later-emerging language capabilities that build on it, such as word learning.

Lexical-Semantics: Word Learning

As infants learn the sounds of their language(s), they are also learning how to connect sounds to meaning. Typically developing children produce their first word at around 12 months of age (Benedict, 1979). However, using innovative methods, language researchers have discovered that infants represent and comprehend many of the complexities of words well before then. For example, eye-tracking experiments that record where infants look when they hear common nouns have provided evidence that infants understand a few words by as early as six months (Bergelson & Swingley, 2012). Over the past several decades, there have also been crucial discoveries about how infants are able to build up a lexicon so quickly, from zero to thousands of words by the age of four. One of the main questions driving this research asks which aspects of word learning are input-driven, which aspects are guided by biases or constraints, and how these constraints change across development.

Determining What a Word Is

Because we do not pause between words when we speak, one of the first challenges that infants face is parsing the continuous speech stream into individual lexical units. Researchers have discovered that seven- to eight-month-olds learn word boundaries by tracking the transitional probabilities (TPs) between syllables (i.e., the likelihood that one syllable follows another; Saffran, Aslin, & Newport, 1996). This type of learning, called statistical learning, is useful for word segmentation because the TP between syllables within a word is higher than the TP between syllables that cross word boundaries. Indeed, not only do young infants discriminate high-TP syllable sequences from low-TP sequences, they are also more likely to (a) use high-TP syllable sequences to categorize objects at eight months (Erickson, Thiessen, & Graf Estes, 2014) and (b) map high-TP syllable sequences to novel objects at 17 months (Graf Estes et al., 2007). In other words, statistical learning helps infants pick out candidate word labels from the speech stream. There is electrophysiological evidence that even newborns track TPs in speech (Teinonen, Fellman, Näätänen, Alku, & Huotilainen, 2009), suggesting that this learning mechanism helps segmentation get off the ground. By 9–10 months, however, infants rely more on language-specific cues, such as which phonemes signal word boundaries in their language and whether their language tends to stress the first or the second syllable of bisyllabic words (e.g., Mattys, Jusczyk, Luce, & Morgan, 1999). This trajectory shows how phonology and lexical-semantics overlap, and it also suggests that word segmentation begins as an unconstrained process that becomes tuned to the specific constraints of the surrounding language over the first year of life.
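As a concrete illustration of the computation, the sketch below derives TPs from a toy syllable stream built from three invented “words” (bidaku, padoti, golabu) concatenated without pauses, in the spirit of Saffran, Aslin, and Newport (1996); the stream and syllables are assumptions made for the example, not stimuli from the study.

```python
# A minimal sketch of the transitional-probability computation described
# above: TP(x -> y) = count(xy) / count(x). Within-word TPs come out
# higher than TPs that span word boundaries.
from collections import Counter

stream = "bi da ku pa do ti go la bu bi da ku go la bu pa do ti".split()

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def tp(x, y):
    """Transitional probability that syllable y follows syllable x."""
    return pair_counts[(x, y)] / first_counts[x] if first_counts[x] else 0.0

print(tp("bi", "da"))  # within-word TP: 1.0
print(tp("ku", "pa"))  # TP across a word boundary: 0.5
```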

While infants are learning where words begin and end, they are also learning about the internal phonetic structure of words—specifically, which types and combinations of sounds comprise plausible word labels in their language. Infants begin life accepting many sounds as labels, including nonlinguistic sounds such as beeps (Woodward & Hoyne, 1999), but they begin to narrow in on language-specific labels by their first birthday (e.g., MacKenzie, Graham, & Curtin, 2011; for a review, see Saffran & Graf Estes, 2006). The sounds that infants accept as word labels are thus constrained by around the same time that they narrow in on the phonemes of their language, but these constraints, too, are driven by native language input.

Mapping Words to Meaning

Across the first two years of life, infants learn which sounds make up words in their language. How do they then map these words onto meanings? This process is more difficult than simply associating a sound with a referent, as demonstrated by the indeterminacy of reference, or the gavagai problem (Quine, 1960; see also Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005). If a person points to a scene and says “gavagai,” is she referring to a particular object? An action? A property? Quine (1960) pointed out that there is an indeterminate number of possible referents when you hear a new word for the first time. And yet, infants are surprisingly good at learning word meanings. Carey and Bartlett (1978) were the first to show that two-year-olds can map a new word onto a new meaning after only one brief exposure.

Since this seminal study, researchers have found that infants can map a novel word to a new object after only one learning session by as young as 13 months (Woodward, Markman, & Fitzsimmons, 1994). Researchers agree that word mapping must be primarily driven by input—each language has its own, essentially arbitrary, set of label-referent mappings that must be learned. The indeterminacy of reference problem, though, has led to proposals that constraints must be built in to get word learning off the ground. Markman (1990) suggested that children come to word learning with a set of linguistic constraints to guide them. For example, when children as young as 18 months hear a novel label, they assume that it refers to a novel object, rather than one whose name they already know (the mutual exclusivity constraint; Halberda, 2003). However, bilinguals and trilinguals do not show a robust use of mutual exclusivity at this age (Byers-Heinlein & Werker, 2009), and instead are more willing to accept a second label for the same object than are monolingual infants (Kandhadai, Hall, & Werker, 2017). These results show that this word-learning constraint is not input-independent, but instead reflects the properties of the word-to-world mappings the child has experienced in her language-learning environment.

In fact, recent evidence suggests that despite the indeterminacy of reference, infants may not need any language-specific constraints to begin learning words. For example, eye-tracking studies have shown that mutual exclusivity may be driven by a domain-general bias to attend to novel objects over familiar ones (Mather & Plunkett, 2009). In addition, some theories posit that infants are born with an innate sensitivity to social cues, such as eye-gaze, that guide early learning (Csibra & Gergely, 2009), although the origin of these social biases is under debate (Yurovsky & Frank, 2015). Another proposed replacement for linguistic constraints is cross-situational word learning, a mechanism by which young infants track the cooccurrence of labels and referents across multiple learning moments. Smith and Yu (2008) showed that 12-month-old infants are able to use cross-situational statistics to learn new words, and they suggest that this general associative mechanism may be critical early in life, before infants learn more language-specific strategies to map words to objects.
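The following sketch illustrates the logic of cross-situational statistics, loosely modeled on the design of Smith and Yu (2008): each individual scene is ambiguous, but aggregating label-object co-occurrences across scenes singles out the correct mappings. The labels, objects, and scenes are invented for the illustration.

```python
# A minimal sketch of cross-situational word learning: every "scene"
# pairs two spoken labels with two visible objects, so each scene alone
# is ambiguous, but co-occurrence counts across scenes resolve the
# mappings. All labels and objects here are invented.
from collections import defaultdict
from itertools import product

scenes = [
    (["bosa", "gasser"], ["ball", "dog"]),
    (["bosa", "manu"], ["ball", "cup"]),
    (["gasser", "manu"], ["dog", "cup"]),
]

cooc = defaultdict(int)
for labels, objects in scenes:
    for label, obj in product(labels, objects):
        cooc[(label, obj)] += 1

for label in ("bosa", "gasser", "manu"):
    best = max(("ball", "dog", "cup"), key=lambda obj: cooc[(label, obj)])
    print(label, "->", best)  # each label co-occurs most with one object
```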

Indeed, during the second and third years of life, toddlers begin to use many language-specific mechanisms to map labels to referents. For example, two- to three-year-olds use learned linguistic cues, such as grammatical morphemes (e.g., -ing, or quantifiers and determiners such as some or the), to determine whether a novel word refers to a concrete object, a substance, or an individual, or functions as an adjective or a verb (Hall, Lee, & Bélanger, 2001; Hall, Waxman, & Hurwitz, 1993). By two years of age, toddlers also use the syntactic frame around a novel word (such as “The dog is gorping the bunny”) to infer the meaning of novel verbs (syntactic bootstrapping; Naigles, 1990; see Fisher, Gertner, Scott, & Yuan, 2010, for a review). Toddlers’ use of linguistic cues is not only a powerful word-learning mechanism; it also demonstrates the emergence of morphosyntactic knowledge (see the subsection “Early Signs of Grammatical Knowledge”), highlighting the interplay between lexical and syntactic acquisition. Finally, two-year-olds use sociopragmatic cues, such as the events or discourse surrounding a novel word (Tomasello & Akhtar, 1995), to map novel words to meaning. This wide array of language-specific cues that toddlers use to map words to meaning demonstrates how word learning evolves as infants learn about the properties of their native language(s).

Mapping a label to a specific object, action, or property is one important step in learning a word, but children also have to accurately generalize words to other exemplars. The word ball does not refer to only the first ball an infant sees, but to a set of objects that make up the ball category. Indeed, word learning and conceptual development are intimately connected. For example, word labels and other communicative signals cue three- to six-month-old infants to form a category over visually and taxonomically similar referents (Ferguson & Waxman, 2016; Ferry, Hespos, & Waxman, 2010). One debate driving this area of research is whether early lexical categories are based on perceptual or conceptual similarity. For instance, during the second year of life, infants extend novel nouns along taxonomic characteristics, such as animacy (Booth, Waxman, & Huang, 2005), suggesting that they map novel nouns to conceptual categories even at this early age. However, Smith and colleagues found that toddlers extend novel nouns to objects of the same shape (Landau, Smith, & Jones, 1988) and suggest a perceptual account of early lexical categories and generalization (Colunga & Smith, 2005).

This debate points to a broader interest in understanding whether word learning is best described as a top-down or a bottom-up process. Do infants and toddlers test cognitive models or hypotheses based on evidence in the input (top down), or do they learn words by tracking perceptual and statistical information (bottom up)? For example, some researchers argue that young children form a hypothesis when they encounter a novel word in an ambiguous situation (Trueswell, Medina, Hafri, & Gleitman, 2013; Xu & Tenenbaum, 2007), while others argue that they use associative learning, spatial grounding, and attention to novelty to build lexical-semantic representations across time (McMurray, Horst, & Samuelson, 2012; Smith & Yu, 2008). These contrasting theories emerged in the early 21st century, and further investigations into both will enhance our understanding of the mechanisms behind early word learning. Notably, the investigation into the roles of bottom-up versus top-down processes is echoed in the phonemic discrimination literature (see the subsection “Phonemic Development”), demonstrating similarities in learning processes across language domains.

The Structure of Early Word Knowledge

The past several decades of research on word learning have focused on the constraints and characteristics of the input that allow infants and toddlers to learn individual words. However, researchers have recently begun to explore how children structure their vocabulary knowledge. By two years, or even as early as 18 months of age, toddlers link words with related meanings, such that when they hear one word, such as dog, they activate related words, such as cat or leash (Rämä, Sirri, & Serres, 2013). These links help toddlers process words faster and more accurately (Borovsky, Ellis, Evans, & Elman, 2015). There is evidence that two-year-olds are able to encode lexical-semantic links during their first exposure to new words (e.g., Wojcik & Saffran, 2013), and researchers are continuing to explore the biases that guide the emergence of vocabulary structure.

Infants and young children are word-learning experts who segment, map, generalize, and structure new words without any explicit instruction. And yet, differences in vocabulary size emerge within the second year of life, and these differences are predictive of later academic success (Hart & Risley, 1995). What accounts for individual differences in vocabulary size? Weisleder and Fernald (2013) found that the number of words that 19-month-olds hear predicts both their language-processing speed and vocabulary growth 5 months later. Input quality, such as the number of turn-taking conversational events, has also been found to predict vocabulary size (Hirsh-Pasek et al., 2015).

While basic research in word learning is leading to interventions for children with language delays, researchers are still investigating the fundamental questions that have driven the field for decades. What biases or cognitive constraints do infants bring to the task of learning words? How do these change with experience? These questions also frame research in another domain of language learning: morphosyntax acquisition.

Morphosyntax: Learning Grammar

The term morphosyntax refers to the structural organization of a language. It consists of the set of rules governing the internal structure of the words (the morphology), and the rules determining how words are combined into bigger units such as phrases and sentences (the syntax). These two systems—morphology and syntax—are interdependent and part of the grammar of a language.

Early Signs of Grammatical Knowledge

At around 20–24 months of age, typically developing children start producing multiword utterances, initially entering a “two-word” stage (Guasti, 2002). Children’s earliest utterances have traditionally been described as “telegraphic” because they tend to lack grammatical elements such as determiners (e.g., the, a), prepositions (e.g., in, of), and auxiliary verbs (e.g., is, has). These omitted elements are the functors of a language; they can be words that stand alone (e.g., the, in) or affixes attached to other words (e.g., -ing: walk-ing). Functors typically have no lexical meaning but signal grammatical relations, building the scaffolding of phrases and sentences, and they are extremely frequent elements. Children’s telegraphic speech typically consists of content words such as nouns (e.g., turtle, tea), verbs (e.g., dance, walk), and adjectives (e.g., slow, warm). Contrary to functors, content words have lexical meaning and are much less frequent. By the second half of their third year, children start producing complex sentence types that include both content words and functors (e.g., relative clauses: the turtle that walked very slowly …), and they fluently use a variety of complex sentence forms by their fourth year.

The telegraphic nature of children’s first multiword utterances initially led to the proposal that, at the earliest stages of development, language knowledge lacks functors and is limited to content words. Some views attributed this proposed developmental gap to the fact that functors are more abstract and semantically complex than content words (Brown, 1973), whereas others claimed that functors were acquired in a later, biologically determined, developmental stage than content words (Radford, 1990). However, studies assessing the perceptual abilities of infants have revealed that they build representations of their language’s functors well before they produce their first multiword utterances. Indeed, even newborn babies have been shown to discriminate functors from content words (Shi, Werker, & Morgan, 1999), owing to the differing acoustic properties of these two types of words: functors are typically phonologically minimal elements (that is, they tend to be shorter than content words and to have reduced vowels and simpler syllabic structures). Thus, by 6 to 8 months of age, infants can segment functors from the speech stream (Höhle & Weissenborn, 2003; Shi, Marquis, & Gauthier, 2006)—initially only the most frequent ones (Shi, Cutler, Werker, & Cruickshank, 2006)—and, by 8 to 11 months of age, they can already use them to segment adjacent content words (Marquis & Shi, 2012; Shi & Lepage, 2008). Further, 14- to 16-month-olds are aware of the cooccurrence of specific categories of functors and content words: determiners, for instance, are typically adjacent to nouns, and infants of this age categorize novel words presented with familiar determiners as nouns. They also group similar functors into classes and distinguish them from other types of functors (e.g., determiners such as the and your from pronouns such as I and you; Shi & Melançon, 2010).
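A minimal sketch of this co-occurrence-based categorization appears below: a learner equipped with a handful of known determiners can flag a novel word such as “dax” as a likely noun from its immediate left context alone. The tiny corpus, the determiner set, and the novel word are all hypothetical, chosen only to illustrate the mechanism.

```python
# A minimal sketch of determiner-based categorization: words that
# directly follow a known determiner are tagged as likely nouns.
# The corpus and the novel word "dax" are invented for illustration.
KNOWN_DETERMINERS = {"the", "a", "your"}

def likely_nouns(utterances):
    """Collect words that directly follow a known determiner."""
    nouns = set()
    for utterance in utterances:
        words = utterance.lower().split()
        for left, right in zip(words, words[1:]):
            if left in KNOWN_DETERMINERS:
                nouns.add(right)
    return nouns

corpus = ["look at the dax", "your dax is here", "see a turtle"]
print(likely_nouns(corpus))  # {'dax', 'turtle'}: tagged as likely nouns
```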

There is evidence suggesting that the acquisition of other aspects of grammar, such as word order, might also start earlier than previously thought. One-word-stage infants (16- to 18-month-olds) can correctly interpret simple sentences that vary in their word order (e.g., Big Bird is tickling Cookie Monster as opposed to Cookie Monster is tickling Big Bird; Hirsh-Pasek & Golinkoff, 1996) and, indeed, the infants’ first-produced multiword utterances tend to follow the word order of the target language (Brown, 1973).

Generativist and Usage-Based Approaches to Grammar

The debate about how children acquire grammar has been dominated by two fundamentally opposed approaches that exemplify the innateness-versus-input debate: the generativist approach posits that biologically predetermined processes drive acquisition, whereas usage-based approaches claim that acquisition results from input-driven processes. The generativist approach claims that children are born equipped with a universal grammar (UG)—that is, a biologically programmed set of abstract principles (Chomsky, 1980). According to this approach, the limited input to which learners are exposed during acquisition does not provide unambiguous evidence of certain abstract principles of grammar. Further, learners rarely receive feedback on the structures that are not possible in the target language. Thus, the child acquires the language’s grammar by linking her innate knowledge (the UG) to the specific properties of the language(s) to which she is exposed. The overgeneralization of rules by young children (e.g., using gived instead of gave), which involves errors not present in the adult input, is proposed as evidence of the existence of abstract syntactic representations.

Alternatively, usage-based approaches (Tomasello, 2000) claim that learners initially have no abstract linguistic knowledge. Instead, children learn simple and concrete items and gradually create more complex and abstract constructions and categories. Thus, the infant’s grammar grows in a piecemeal fashion, changing greatly during development and ultimately converging on the grammar spoken in the child’s language community. Younger children’s failure to produce familiar constructions (e.g., I Verb-ed it) with novel verbs (e.g., gorp; target: I gorped it) that were presented in a different construction (e.g., See? Ernie’s gorping Cookie Monster!) is proposed by this account as evidence for item-specific rather than abstract knowledge of linguistic structures (Tomasello, 2000).

In parallel to this debate, an increasing amount of research has focused on how the acquisition of morphosyntax is bootstrapped—that is, what information is leveraged to break into learning.

Semantic and Prosodic Bootstrapping

Several hypotheses have been put forward that aim to explain how the acquisition of morphosyntax is set in motion. All these hypotheses have limitations and hence have been proposed by some accounts as potentially acting in combination rather than being mutually exclusive. The semantic bootstrapping hypothesis (Pinker, 1984) claims that the child innately expects a correspondence between semantics (meaning) and syntax that allows her to build syntactic categories and identify features—specific characteristics—and rules of grammar. This hypothesis, generativist in nature, assumes that the child can readily identify basic semantic notions such as “concrete objects” and connect these to their corresponding syntactic categories (e.g., nouns) due to an innate set of linking or mapping rules.

The prosodic bootstrapping hypothesis (Gleitman & Wanner, 1982; Morgan & Demuth, 1996) claims that the input contains acoustic information—part of the prosody or intonation of the language—that correlates with properties of the grammatical structure, such as the boundaries of syntactic constituents. If sensitive to this acoustic information, infants could divide speech into smaller units such as phrases (i.e., combinations of words: the turtle, in Paris …). Chunking speech into phrases might in turn allow infants to detect syntactic regularities and build rudimentary representations of certain syntactic features, such as word order (Morgan & Demuth, 1996; Nespor, Guasti, & Christophe, 1996). This approach assumes that infants are sensitive to the correlations between prosodic and syntactic structure and use these to parse speech. Indeed, a wealth of evidence shows that infants are highly sensitive to prosodic information from the earliest stages of acquisition (Christophe, Dupoux, Bertoncini, & Mehler, 1994; Christophe, Mehler, & Sebastián-Gallés, 2001). Pauses, changes in pitch, and the lengthening of certain segments typically mark the boundaries of prosodic units such as phrases. Crucially, six- to nine-month-olds can use these prosodic cues to segment speech into phrases and to discriminate well-formed phrases from otherwise identical sequences whose acoustic/prosodic properties differ because they consist of the end of one phrase and the beginning of another (Jusczyk et al., 1992; Soderstrom, Seidl, Kemler Nelson, & Jusczyk, 2003).

Prosodic information is not only useful for segmenting words and phrases; it can also be leveraged for the complex task of discovering word order. Specifically, the location and realization of prosodic prominence within phrases has been proposed as a potential aid for infants in discovering the basic word order of verbs and objects (Christophe, Guasti, Nespor, Dupoux, & van Ooyen, 1997; Nespor et al., 2008). In languages where the V(erb) typically precedes the O(bject) (VO languages: English, Spanish), the prosodically prominent element in the phrase (the stressed syllable of the content word) is longer than the nonprominent element (the functor); for example, in the phrase in Paris, pa is longer than in. In languages where the O(bject) typically precedes the V(erb) (OV languages: Basque, Japanese), the prosodically prominent element within phrases instead has higher pitch and/or intensity; for example, in the Japanese phrase Paris ni (“Paris to”), pa has higher pitch and/or intensity than ni. By seven to eight months of age, monolingual and bilingual infants can use these prosodic contrasts to segment unknown artificial languages (Bernard & Gervain, 2012; Gervain & Werker, 2013). The greatest limitation of the prosodic bootstrapping hypothesis is the fact that some prosodic boundaries do not align with the boundaries of syntactic constituents. Therefore, this mechanism is proposed to work in parallel with other bootstrapping mechanisms, such as distributional learning, discussed next.
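Before turning to distributional learning, the toy function below illustrates the prominence cue just described: within a functor-plus-content-word phrase, prominence realized mainly as extra duration patterns with VO languages, whereas prominence realized mainly as higher pitch or intensity patterns with OV languages. The measurements are invented for the example, and real input is of course far noisier than this sketch assumes.

```python
# A toy illustration of the prosody-based word-order cue described
# above. All acoustic values below are invented.

def guess_word_order(prom_duration, other_duration, prom_pitch, other_pitch):
    """Classify a phrase as VO-like or OV-like from whether its
    prominent element stands out more in duration or in pitch."""
    duration_ratio = prom_duration / other_duration
    pitch_ratio = prom_pitch / other_pitch
    if duration_ratio > pitch_ratio:
        return "VO-like (prominence = longer duration, as in English)"
    return "OV-like (prominence = higher pitch/intensity, as in Japanese)"

# "in PAris": the stressed syllable is much longer; pitch is comparable.
print(guess_word_order(prom_duration=180, other_duration=90,
                       prom_pitch=210, other_pitch=200))
# "Paris ni"-type phrase: pitch is much higher; durations are comparable.
print(guess_word_order(prom_duration=100, other_duration=95,
                       prom_pitch=300, other_pitch=200))
```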

Distributional Learning

Distributional or statistical learning is a learning mechanism used by humans to segment the input and extract regularities. It is a domain-general mechanism used to parse speech as well as nonlinguistic stimuli such as tone streams; it is found across modalities (e.g., with visual stimuli; Fiser & Aslin, 2002) and is shared with other mammals such as rats. A considerable amount of evidence has shown that infants can compute and use the distribution of elements in the input to segment speech (see the subsection “Determining What a Word Is”). Prelexical infants can track the transitional probabilities of syllables to segment words (Saffran et al., 1996). Crucially, eight-month-olds can also track the differing frequencies of occurrence of functors and content words in their linguistic input. The relative order of functors and content words correlates with the basic word order of verbs and objects: in VO languages such as English, functors—frequent elements—typically occur phrase-initially (e.g., in Paris), whereas in OV languages such as Japanese, functors typically occur phrase-finally (e.g., Paris ni, “Paris to”). Thus, computing the frequency of occurrence and relative order of these elements, in addition to using prosodic information (see the subsection “Semantic and Prosodic Bootstrapping”), might help infants to segment phrases and discover basic word order. Indeed, 8-month-old infants segment unknown artificial languages that contain frequent and infrequent elements according to the relative order of functors and content words characteristic of their native language, and 17-month-olds treat infrequent novel words as content words (Gervain, Nespor, Mazuka, Horie, & Mehler, 2008; Hochmann, Endress, & Mehler, 2010). The role of statistical learning in discovering word order reveals one overlap between the mechanisms for learning words and those for learning syntax.
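The sketch below illustrates this frequency-and-order cue under simplified assumptions: in a toy corpus of two-word phrases, the highly frequent (functor-like) items cluster phrase-initially in a VO-like corpus and phrase-finally in its OV-like mirror image. The phrases are invented for the example.

```python
# A minimal sketch of the frequency-based word-order cue described
# above: functor-like items are far more frequent than content words,
# and their typical position within the phrase tracks basic word order.
from collections import Counter

vo_phrases = [["in", "paris"], ["in", "rome"], ["the", "turtle"],
              ["the", "dog"], ["in", "london"], ["the", "cat"]]
ov_phrases = [[w2, w1] for w1, w2 in vo_phrases]  # mirror-image order

def frequent_item_position(phrases):
    """Report whether the more frequent word in each two-word phrase
    tends to occur phrase-initially or phrase-finally."""
    counts = Counter(word for phrase in phrases for word in phrase)
    initial = sum(counts[first] > counts[last] for first, last in phrases)
    return "initial" if initial > len(phrases) / 2 else "final"

print(frequent_item_position(vo_phrases))  # 'initial' -> VO-like order
print(frequent_item_position(ov_phrases))  # 'final'   -> OV-like order
```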

Distributional learning is essential for the acquisition of another crucial aspect of morphosyntax: nonadjacent dependencies. Nonadjacent dependencies are sequentially distant elements that regularly cooccur, such as the relation between an auxiliary verb (e.g., is, as in: is jumping) and a functor attached to the main verb (e.g., -ing: She is jumping). The ability to track such dependencies seems to emerge at around 15–18 months of age; at this age, infants can track nonadjacent dependencies in strings of novel wordlike items (e.g., aXc: pel kicey jic), where the presence of the first element (a: pel) predicts the occurrence of the third (c: jic), regardless of the middle element (X: kicey; Gómez & Maye, 2005). By around 18 months of age, infants have gained knowledge of some of the target language’s nonadjacent dependencies and prefer passages that contain grammatical rather than ungrammatical combinations of functors (everybody is baking bread versus *everybody can baking bread; Santelmann & Jusczyk, 1998). Further, by 19 months of age, infants start tracking relationships that straddle phrase boundaries, such as the singular/plural distinction in Subject-Verb agreement (A team bakes versus *A team bakeØ; Soderstrom, Wexler, & Jusczyk, 2002). The development of infants’ ability to track nonadjacent dependencies across the second year of life suggests that distributional learning might bootstrap the learning of complex grammatical structures.
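As an illustration, the sketch below tracks a nonadjacent dependency of the aXc type used by Gómez and Maye (2005): the probability of the third item given the first, skipping whatever occupies the middle slot. The strings are invented stand-ins for their stimuli.

```python
# A minimal sketch of nonadjacent-dependency tracking over aXc strings:
# the first item predicts the third, whatever the middle item is.
from collections import Counter

strings = [["pel", "kicey", "jic"], ["pel", "wadim", "jic"],
           ["pel", "loga", "jic"], ["vot", "kicey", "rud"],
           ["vot", "wadim", "rud"], ["vot", "loga", "rud"]]

skip_pairs = Counter((s[0], s[2]) for s in strings)
first_counts = Counter(s[0] for s in strings)

def nonadjacent_tp(a, c):
    """P(third item = c | first item = a), ignoring the middle item."""
    return skip_pairs[(a, c)] / first_counts[a]

print(nonadjacent_tp("pel", "jic"))  # 1.0: pel reliably predicts jic
print(nonadjacent_tp("pel", "rud"))  # 0.0: pel never co-occurs with rud
```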

Perceptual Primitives

Finally, in addition to these bootstrapping mechanisms, the acquisition of grammar appears to be guided by what have been termed perceptual primitives; that is, perceptually salient configurations that are detected automatically by the perceptual system. Humans are attuned to detecting repetition and edges in the language input. Even newborns detect repetition in speech (Gervain, Macagno, et al., 2008), and adults can learn repetition-based structures (e.g., ABA, ABB) but not other structures of similar complexity (e.g., piano tone triplets such as low-high-middle; Endress, Dehaene-Lambertz, & Mehler, 2007). Edges, in turn, have been proposed to be highly salient elements that help listeners encode information. Adults learn nonadjacent dependencies and generalize repetition-based structures when these occur at the beginning or end of a sequence, but fail if the same constructions are sequence-internal (Endress & Mehler, 2009; Endress, Scholl, & Mehler, 2005). Functors typically occur at the edges of phrases, and affixes at the edges of words (Endress & Mehler, 2010; Gervain, Nespor, et al., 2008), and mothers tend to place new information at the final edge of utterances. Besides these proposed perceptual primitives, an early bias has been found in the functional roles of vowels and consonants: vowels are used by adults and infants to learn simple rules, whereas consonants are used to extract words from the input and to distinguish between lexical items (Bonatti, Peña, Nespor, & Mehler, 2005; Pons & Toro, 2010; Toro, Shukla, Nespor, & Endress, 2008).
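As a minimal illustration of repetition-based structure, the function below classifies a syllable triplet as ABB, ABA, or neither; the syllables are invented examples of the kinds of structures newborns and adults have been shown to detect (Gervain, Macagno, et al., 2008; Endress, Dehaene-Lambertz, & Mehler, 2007).

```python
# A minimal sketch of repetition-based structure detection over
# three-item sequences. The syllables are invented.

def repetition_pattern(triplet):
    """Label a three-item sequence by its repetition structure."""
    a, b, c = triplet
    if a == c and a != b:
        return "ABA"
    if b == c and a != b:
        return "ABB"
    return "other"

print(repetition_pattern(["mu", "ba", "ba"]))  # ABB
print(repetition_pattern(["mu", "ba", "mu"]))  # ABA
print(repetition_pattern(["mu", "ba", "ge"]))  # other
```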

In sum, a combination of general learning mechanisms, such as distributional learning of the properties of functors and content words, sensitivity to the prosodic structure of speech at the phrase level, and the gradual acquisition of lexical items, might help prelexical infants bootstrap their knowledge of syntactic categories and certain syntactic features. For instance, the cumulative information about word order provided by correlated prosodic and statistical cues might lead infants to discover this major syntactic feature before their first birthday. Importantly, though ample research has shown the role of visual information in the perception of auditory speech in infancy (see the subsection “Multimodal Influences on Phonemic Development”), its potential role in the acquisition of grammar remains largely unexplored.

The acquisition of grammar appears thus to be driven by the interplay of different bootstrapping mechanisms, perceptual primitives, and early biases, along with cumulative knowledge. Crucially, the relative weight of these different factors changes throughout the different phases of development, and the potential role of other mechanisms or sources of information, such as visual information, remains to be determined.

Conclusion

When Skinner and Chomsky laid out their theories of language acquisition in the late 1950s, they pitted the role of the environment against the role of innate knowledge—Skinner arguing that language is learned from input, and Chomsky countering that there is an innate language module in the brain. This debate sparked decades of research on the mechanisms that allow humans, and not any other species, to learn a structured communication system without any explicit instruction. The current research suggests an integrated approach by which biases guide what infants learn from the surrounding environment. For example, in the area of phonology, newborns’ bias to attend to speech over other signals may facilitate early learning of the prosodic and phonemic properties of their native language(s). In the area of lexical-semantics, infants’ bias to attend to novelty may aid in mapping new words to their referents. In morphosyntax, infants’ sensitivity to vowels, repetition, and phrase edges may guide statistical learning. In each of these areas, too, there is evidence of new biases coming into play throughout development, as infants gain more knowledge about their native language(s). The current article also highlights how phonology, lexical-semantics, and morphosyntax interact with one another throughout learning. For example, learning the sounds of the native language(s) may guide the learning of word labels (and vice versa), and segmenting word labels is crucial to learning word order.

The field of language acquisition reached an exciting juncture in the early 21st century. The majority of theories began to point to a dynamic integration of the input and perceptual/cognitive biases across development (e.g., Christiansen & Chater, 2016). With this integrative view, new questions emerged:

  • How does hearing more than one language affect emerging biases? In the phonological domain, bilingual infants show differences in the development of speech discrimination (Bosch & Sebastián-Gallés, 1997). In lexical-semantics, bilinguals are more willing to accept a second label for an object category (Kandhadai, Hall, & Werker, 2017). Further work is needed to explore the ways in which a multilingual environment may affect other language-learning mechanisms.

  • How do visual, motor, and other nonauditory modalities of input influence language learning? There are recent discoveries of the effects of visual and sensory-motor proprioceptive information on infant speech perception (e.g., Bruderer et al., 2015). It is likely that in other areas of language acquisition, the traditional focus on auditory input has led to an impoverished understanding of the information available in the environment and how this information guides the development of perceptual biases.

  • Are the same mechanisms recruited for sign-language and spoken-language acquisition? There is some evidence that language learning is similar across the two modalities. For example, infant babbling in sign shows the same complexity and developmental course as babbling in speech (Petitto & Marentette, 1991). Further research is needed not only to increase our understanding of language learning in the signing population, but also to reveal how the affordances of visual versus auditory language affect the development of learning constraints.

  • How do domain-general cognitive processes, such as attention and memory, influence language learning, and how do changes in these processes affect the development of language learning biases? The incorporation of memory development into theories of word learning has led to new discoveries (see Wojcik, 2013, for a review), but more work is needed.

  • How do caregivers adjust their interactions with infants over development to bootstrap language learning? Parents respond contingently when infants babble, and this contingent responding helps guide phonological learning (Goldstein & Schwade, 2008). It is possible that parents adjust their interactions in other ways to influence language learning, and understanding this adjustment could change how we think about the content of input. Relatedly, investigating what infants see and attend to in a noisy environment will also advance our understanding of input (Smith, Yu, Yoshida, & Fausey, 2015).

Each of these future directions will provide a more nuanced understanding of how infants and young children use rich input to learn language, as well as how this learning is affected by the changing biases across development.

Acknowledgments

We thank Padmapriya Kandhadai, Laurel Fais, and Viridiana Benitez for their feedback on previous versions of this paper. This work was supported by a Marie Curie International Outgoing Fellowship within the EU Seventh Framework Programme for Research and Technological Development (2007–2013), under REA grant agreement no. 624972, awarded to Irene de la Cruz-Pavía, and by an NSERC Discovery Grant (81103), an SSHRC Operating Grant (435-2014-0917), and an NIH Operating Grant (R21HD079260) to J. Werker.

Further Reading

Fisher, C., Gertner, Y., Scott, R. M., & Yuan, S. (2010). Syntactic bootstrapping. Wiley Interdisciplinary Reviews: Cognitive Science, 1(2), 143–149. A review of how early syntax knowledge bootstraps early word learning.

Gervain, J., & Werker, J. F. (2008). How infant speech perception contributes to language acquisition. Language and Linguistics Compass, 2, 1149–1170. A more in-depth review of many of the same topics covered in this article.

Meier, R. P. (2016). Sign language acquisition. In S. Thomason (Ed.), Oxford handbooks online. Oxford University Press. An overview of sign language acquisition that complements the focus on spoken language presented in this article.

Romberg, A. R., & Saffran, J. R. (2010). Statistical learning and language acquisition. Wiley Interdisciplinary Reviews: Cognitive Science, 1(6), 906–914. A comprehensive overview of the role of statistical learning across speech perception, word learning, and syntax acquisition.

Sandoval, M., & Gómez, R. L. (2013). The development of nonadjacent dependency learning in natural and artificial languages. Wiley Interdisciplinary Reviews: Cognitive Science, 4(5), 511–522. A review of the role of nonadjacent dependency learning in syntax acquisition.

Sebastián-Gallés, N. (2010). Bilingual language acquisition: Where does the difference lie? Human Development, 53, 245–255. An overview of bilingual language acquisition, focusing on many of the first steps in infancy also covered in this article.

Shi, R. (2014). Functional morphemes and early language acquisition. Child Development Perspectives, 8(1), 6–11. A review of the acquisition of functional elements of syntax.

Smith, L. B., Suanda, S. H., & Yu, C. (2014). The unrealized promise of infant statistical word–referent learning. Trends in Cognitive Sciences, 18(5), 251–258. An overview of the role of associative mechanisms in early word learning.

Waxman, S. R., & Gelman, S. A. (2009). Early word-learning entails reference, not merely associations. Trends in Cognitive Sciences, 13(6), 258–263. An overview of early word-learning mechanisms, focusing on the mapping of words to meaning.

Werker, J. F., Yeung, H. H., & Yoshida, K. (2012). How do infants become experts at native speech perception? Current Directions in Psychological Science, 21(4), 221–226. A review of different proposed paths to perceptual narrowing.

References

Abercrombie, D. (1967). Elements of general phonetics. Edinburgh, U.K.: Edinburgh University Press.Find this resource:

Benedict, H. (1979). Early lexical development: Comprehension and production. Journal of Child Language, 6(2), 183–200.Find this resource:

Bergelson, E., & Swingley, D. (2012). At 6–9 months, human infants know the meanings of many common nouns. Proceedings of the National Academy of Sciences, 109(9), 3253–3258.Find this resource:

Bernard, C., & Gervain, J. (2012). Prosodic cues to word order: What level of representation? Frontiers in Psychology, 3(451), 1–6.Find this resource:

Bonatti, L. L., Peña, M., Nespor, N., & Mehler, J. (2005). Linguistic constraints on statistical computations: The role of consonants and vowels in continuous speech processing. Psychological Science, 16(6), 451–459.Find this resource:

Booth, A. E., Waxman, S. R., & Huang, Y. T. (2005). Conceptual information permeates word learning in infancy. Developmental Psychology, 41(3), 491.Find this resource:

Borovsky, A., Ellis, E. M., Evans, J. L., & Elman, J. L. (2015). Lexical leverage: Category knowledge boosts real‐time novel word recognition in 2‐year‐olds. Developmental Science, 19(6), 918–932.Find this resource:

Bosch, L., & Sebastián-Gallés, N. (1997). Native-language recognition abilities in 4-month-old infants from monolingual and bilingual environments. Cognition, 65(1), 33–69.Find this resource:

Bristow, D., Dehaene-Lambertz, G., Mattout, J., Soares, S., Gliga, T., Baillet, S., & Mangin, J. F. (2009). Hearing faces: How the infant brain matches the face it sees with the speech it hears, Journal of Cognitive Neuroscience, 21, 905–921.Find this resource:

Brown, R. (1973). A first language: The early stages. Cambridge, MA: Harvard University Press.Find this resource:

Bruderer, A. G., Danielson, D. K., Kandhadai, P., & Werker, J. F. (2015). Sensorimotor influences on speech perception in infancy. Proceedings of the National Academy of Sciences, 112(44), 13531–13536.Find this resource:

Byers-Heinlein, K., Burns, T. C., & Werker, J. F. (2010). The roots of bilingualism in newborns. Psychological Science, 21(3), 343–348.Find this resource:

Byers-Heinlein, K., & Werker, J. F. (2009). Monolingual, bilingual, trilingual: Infants’ language experience influences the development of a word‐learning heuristic. Developmental Science, 12(5), 815–823.Find this resource:

Carey, S., & Bartlett, E. (1978). Acquiring a single new word. Papers and Reports on Child Language Development, 15, 17–29.Find this resource:

Chomsky, N. (1959). A review of B. F. Skinner’s Verbal Behavior. Language, 35(1), 26–58.Find this resource:

Chomsky, N. (1980). Rules and representations. Behavioral and Brain Sciences, 3(1), 1–15.Find this resource:

Christiansen, M. H., & Chater, N. (2016). The now-or-never bottleneck: A fundamental constraint on language. Behavioral and Brain Sciences, 39, e62.Find this resource:

Christophe, A., Dupoux, E., Bertoncini, J., & Mehler, J. (1994). Do infants perceive word boundaries? An empirical study of the bootstrapping of lexical acquisition. Journal of the Acoustical Society of America, 95(3), 1570–1580.Find this resource:

Christophe, A., Guasti, M. T., Nespor, M., Dupoux, E., & van Ooyen, B. (1997). Reflections on phonological bootstrapping: Its role for lexical and syntactic acquisition. Language and Cognitive Processes, 12(5/6), 585–612.

Christophe, A., Mehler, J., & Sebastián-Gallés, N. (2001). Perception of prosodic boundary correlates by newborn infants. Infancy, 2(3), 385–394.

Colunga, E., & Smith, L. B. (2005). From the lexicon to expectations about kinds: A role for associative learning. Psychological Review, 112(2), 347.

Csibra, G., & Gergely, G. (2009). Natural pedagogy. Trends in Cognitive Sciences, 13(4), 148–153.

Curtin, S., & Werker, J. F. (2007). The perceptual foundation of phonological development. In G. Gaskell (Ed.), Oxford handbook of psycholinguistics (pp. 579–599). Oxford: Oxford University Press.

DeCasper, A. J., & Fifer, W. P. (1980). Of human bonding: Newborns prefer their mothers’ voices. Science, 208(4448), 1174–1176.

Dehaene-Lambertz, G. (2000). Cerebral specialization for speech and non-speech stimuli in infants. Journal of Cognitive Neuroscience, 12(3), 449–460.

Dehaene-Lambertz, G., Dehaene, S., & Hertz-Pannier, L. (2002). Functional neuroimaging of speech perception in infants. Science, 298(5600), 2013–2015.

Dietrich, C., Swingley, D., & Werker, J. F. (2007). Native language governs interpretation of salient speech sound differences at 18 months. Proceedings of the National Academy of Sciences, 104(41), 16027–16031.

Eimas, P. D., Siqueland, E. R., Jusczyk, P. W., & Vigorito, J. (1971). Speech perception in infants. Science, 171(3968), 303–306.

Endress, A. D., Dehaene-Lambertz, G., & Mehler, J. (2007). Perceptual constraints and the learnability of simple grammars. Cognition, 105(3), 577–614.

Endress, A. D., & Mehler, J. (2009). Primitive computations in speech processing. Quarterly Journal of Experimental Psychology, 62(11), 2187–2209.

Endress, A. D., & Mehler, J. (2010). Perceptual constraints in phonotactic learning. Journal of Experimental Psychology: Human Perception and Performance, 36(1), 235–250.

Endress, A. D., Scholl, B. J., & Mehler, J. (2005). The role of salience in the extraction of algebraic rules. Journal of Experimental Psychology: General, 134(3), 406–419.

Erickson, L. C., Thiessen, E. D., & Estes, K. G. (2014). Statistically coherent labels facilitate categorization in 8-month-olds. Journal of Memory and Language, 72, 49–58.

Fennell, C. T., & Waxman, S. R. (2009). What paradox? Referential cues allow for infant use of phonetic detail in word learning. Child Development, 81(5), 1376–1383.

Ferguson, B., & Waxman, S. R. (2016). What the [beep]? Six-month-olds link novel communicative signals to meaning. Cognition, 146, 185–189.

Ferry, A. L., Hespos, S. J., & Waxman, S. R. (2010). Categorization in 3‐ and 4‐month‐old infants: An advantage of words over tones. Child Development, 81(2), 472–479.

Fiser, J., & Aslin, R. N. (2002). Statistical learning of new visual feature combinations by infants. Proceedings of the National Academy of Sciences, 99(24), 15822–15826.

Gervain, J., Macagno, F., Cogoi, S., Peña, M., & Mehler, J. (2008). The neonate brain detects speech structure. Proceedings of the National Academy of Sciences, 105(37), 14222–14227.

Gervain, J., Nespor, M., Mazuka, R., Horie, R., & Mehler, J. (2008). Bootstrapping word order in prelexical infants: A Japanese-Italian cross-linguistic study. Cognitive Psychology, 57(1), 56–74.

Gervain, J., & Werker, J. F. (2013). Prosody cues word order in 7-month-old bilingual infants. Nature Communications, 4, 1490.

Gleitman, L. R., Cassidy, K., Nappa, R., Papafragou, A., & Trueswell, J. C. (2005). Hard words. Language Learning and Development, 1(1), 23–64.

Gleitman, L. R., & Wanner, E. (1982). The state of the state of the art. In E. Wanner & L. Gleitman (Eds.), Language acquisition: The state of the art (pp. 3–48). Cambridge, U.K.: Cambridge University Press.

Goldstein, M. H., & Schwade, J. A. (2008). Social feedback to infants’ babbling facilitates rapid phonological learning. Psychological Science, 19(5), 515–523.

Gómez, R. L., & Maye, J. (2005). The developmental trajectory of nonadjacent dependency learning. Infancy, 7(2), 183–206.

Graf Estes, K., Evans, J. L., Alibali, M. W., & Saffran, J. R. (2007). Can infants map meaning to newly segmented words? Statistical segmentation and word learning. Psychological Science, 18(3), 254–260.

Guasti, M. T. (2002). Language acquisition: The growth of grammar. Cambridge, MA: MIT Press.

Halberda, J. (2003). The development of a word-learning strategy. Cognition, 87(1), B23–B34.

Hall, D. G., Lee, S. C., & Bélanger, J. (2001). Young children’s use of syntactic cues to learn proper names and count nouns. Developmental Psychology, 37(3), 298.

Hall, D. G., Waxman, S. R., & Hurwitz, W. M. (1993). How two‐ and four‐year‐old children interpret adjectives and count nouns. Child Development, 64(6), 1651–1664.

Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore: Paul H. Brookes Publishing.

Hirsh-Pasek, K., Adamson, L. B., Bakeman, R., Owen, M. T., Golinkoff, R. M., Pace, A., et al. (2015). The contribution of early communication quality to low-income children’s language success. Psychological Science, 26(7), 1071–1083.

Hirsh-Pasek, K., & Golinkoff, R. M. (1996). The preferential looking paradigm reveals emerging language comprehension. In D. McDaniel, C. McKee, & H. Cairns (Eds.), Methods for assessing children’s syntax (pp. 105–124). Cambridge, MA: MIT Press.

Hochmann, J.-R., Endress, A. D., & Mehler, J. (2010). Word frequency as a cue to identify function words in infancy. Cognition, 115, 444–457.

Höhle, B., & Weissenborn, J. (2003). German-learning infants’ ability to detect unstressed closed-class elements in continuous speech. Developmental Science, 6, 122–127.

Jusczyk, P. W., Hirsh-Pasek, K., Kemler Nelson, D. G., Kennedy, L. J., Woodward, A., & Piwoz, J. (1992). Perception of acoustic correlates of major phrasal units by young infants. Cognitive Psychology, 24, 252–293.

Kandhadai, P., Hall, D. G., & Werker, J. F. (2017). Second label learning in bilingual and monolingual infants. Developmental Science, 20(1), e12429.

Kuhl, P. K., & Meltzoff, A. (1982). The bimodal perception of speech in infancy. Science, 218(4577), 1138–1141.

Kuhl, P. K., Ramirez, R., Bosseler, A., Lin, J.-F., & Imada, T. (2014). Infants’ brain responses to speech suggest analysis by synthesis. Proceedings of the National Academy of Sciences, 111, 11238–11245.

Kuhl, P. K., Stevens, E., Hayashi, A., Deguchi, T., Kiritani, S., & Iverson, P. (2006). Infants show a facilitation effect for native language phonetic perception between 6 and 12 months. Developmental Science, 9, F13–F21.

Kuhl, P. K., Tsao, F., & Liu, H. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences, 100(15), 9096–9101.

Landau, B., Smith, L. B., & Jones, S. S. (1988). The importance of shape in early lexical learning. Cognitive Development, 3(3), 299–321.

MacKenzie, H., Graham, S. A., & Curtin, S. (2011). Twelve‐month‐olds privilege words over other linguistic sounds in an associative learning task. Developmental Science, 14(2), 249–255.

Mahmoudzadeh, M., Dehaene-Lambertz, G., Fournier, M., Kongolo, G., Goudjil, S., Dubois, J., et al. (2013). Syllabic discrimination in premature human infants prior to complete formation of cortical layers. Proceedings of the National Academy of Sciences, 110(12), 4846–4851.

Markman, E. M. (1990). Constraints children place on word meanings. Cognitive Science, 14(1), 57–77.

Marquis, A., & Shi, R. (2012). Initial morphological learning in preverbal infants. Cognition, 122, 61–66.

Mather, E., & Plunkett, K. (2009). Learning words over time: The role of stimulus repetition in mutual exclusivity. Infancy, 14(1), 60–76.

Mattys, S. L., Jusczyk, P. W., Luce, P. A., & Morgan, J. L. (1999). Phonotactic and prosodic effects on word segmentation in infants. Cognitive Psychology, 38(4), 465–494.

May, L., Byers-Heinlein, K., Gervain, J., & Werker, J. F. (2011). Language and the newborn brain: Does prenatal language experience shape the neonate neural response to speech? Frontiers in Psychology, 2, 222.

Maye, J., Weiss, D., & Aslin, R. N. (2008). Statistical phonetic learning in infants: Facilitation and feature generalization. Developmental Science, 11, 122–134.

Maye, J., Werker, J. F., & Gerken, L. A. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3), B101–B111.

McMurray, B., Horst, J. S., & Samuelson, L. K. (2012). Word learning emerges from the interaction of online referent selection and slow associative learning. Psychological Review, 119(4), 831.

Mehler, J., Sebastián-Gallés, N., & Nespor, M. (2004). Biological foundations of language: Language acquisition, cues for parameter setting, and the bilingual infant. In M. Gazzaniga (Ed.), The new cognitive neuroscience (pp. 825–836). Cambridge, MA: MIT Press.

Minagawa-Kawai, Y., van der Lely, H., Ramus, F., Sato, Y., Mazuka, R., & Dupoux, E. (2011). Optical brain imaging reveals general auditory and language-specific processing in early infant development. Cerebral Cortex, 21(2), 254–261.

Moon, C., Cooper, R. P., & Fifer, W. P. (1993). Two-day-olds prefer their native language. Infant Behavior and Development, 16(4), 495–500.

Moon, C., Lagercrantz, H., & Kuhl, P. K. (2013). Language experienced in utero affects vowel perception after birth: A two-country study. Acta Paediatrica, 102, 156–160.

Morgan, J. L., & Demuth, K. (1996). Signal to syntax: An overview. In J. L. Morgan & K. Demuth (Eds.), Signal to syntax: Bootstrapping from speech to grammar in early acquisition (pp. 1–22). Mahwah, NJ: Lawrence Erlbaum Associates.

Naigles, L. (1990). Children use syntax to learn verb meanings. Journal of Child Language, 17(2), 357–374.

Narayan, C., Werker, J. F., & Beddor, P. S. (2010). The interaction between acoustic salience and language experience in developmental speech perception: Evidence from nasal place discrimination. Developmental Science, 13(3), 407–420.

Nazzi, T., Bertoncini, J., & Mehler, J. (1998). Language discrimination by newborns: Toward an understanding of the role of rhythm. Journal of Experimental Psychology: Human Perception and Performance, 24(3), 756–766.

Nazzi, T., Jusczyk, P. W., & Johnson, E. K. (2000). Language discrimination by English-learning 5-month-olds: Effects of rhythm and familiarity. Journal of Memory and Language, 43, 1–19.

Nespor, M., Guasti, M. T., & Christophe, A. (1996). Selecting word order: The rhythmic activation principle. In U. Kleinhenz (Ed.), Interfaces in phonology (pp. 1–26). Berlin: Akademie Verlag.

Nespor, M., Shukla, M., van de Vijver, R., Avesani, C., Schraudolf, N., & Donati, C. (2008). Different phrasal prominence realizations in VO and OV languages. Lingue e Linguaggio, VII(2), 1–29.

Patterson, M. L., & Werker, J. F. (2003). Two-month-old infants match phonetic information in lips and voice. Developmental Science, 6(2), 191–196.

Peña, M., Maki, A., Kovacic, D., Dehaene-Lambertz, G., Koizumi, H., Bouquet, F., et al. (2003). Sounds and silence: An optical topography study of language recognition at birth. Proceedings of the National Academy of Sciences, 100(20), 11702–11705.

Petitto, L. A., & Marentette, P. F. (1991). Babbling in the manual mode: Evidence for the ontogeny of language. Science, 251(5000), 1493–1496.

Pinker, S. (1984). Language learnability and language development. Cambridge, MA: Harvard University Press.

Pons, F., Lewkowicz, D. J., Soto-Faraco, S., & Sebastián-Gallés, N. (2009). Narrowing of intersensory speech perception in infancy. Proceedings of the National Academy of Sciences, 106(26), 10598–10602.

Pons, F., & Toro, J. M. (2010). Structural generalizations over consonants and vowels in 11-month-old infants. Cognition, 116(3), 361–367.

Quine, W. V. O. (1960). Word and object. Cambridge, MA: MIT Press.

Radford, A. (1990). Syntactic theory and the acquisition of English syntax. Cambridge, MA: Blackwell.

Rämä, P., Sirri, L., & Serres, J. (2013). Development of lexical–semantic language system: N400 priming effect for spoken words in 18- and 24-month-old children. Brain and Language, 125(1), 1–10.

Saffran, J. R. (2003). Statistical language learning: Mechanisms and constraints. Current Directions in Psychological Science, 12(4), 110–114.

Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274(5294), 1926–1928.

Saffran, J. R., & Graf Estes, K. (2006). Mapping sound to meaning: Connections between learning about sounds and learning about words. Advances in Child Development and Behavior, 34, 1–39.

Saffran, J. R., Werker, J., & Werner, L. (2006). The infant's auditory world: Hearing, speech, and the beginnings of language. In R. Siegler & D. Kuhn (Eds.), Handbook of child psychology (pp. 58–108). New York: Wiley.

Santelmann, L. M., & Jusczyk, P. W. (1998). Sensitivity to discontinuous dependencies in language learners: Evidence for limitations in processing space. Cognition, 69, 105–134.

Shi, R., Cutler, A., Werker, J. F., & Cruickshank, M. (2006). Frequency and form as determinants of functor sensitivity in English-acquiring infants. Journal of the Acoustical Society of America, 119, EL61–EL67.

Shi, R., & Lepage, M. (2008). The effect of functional morphemes on word segmentation in preverbal infants. Developmental Science, 11, 407–413.

Shi, R., Marquis, A., & Gauthier, B. (2006). Segmentation and representation of function words in preverbal French-learning infants. In D. Bamman, T. Magnitskaia, & C. Zaller (Eds.), BUCLD 30: Proceedings of the 30th annual Boston University conference on language development (Vol. 2, pp. 549–560). Somerville, MA: Cascadilla Press.

Shi, R., & Melançon, A. (2010). Syntactic categorization in French-learning infants. Infancy, 15, 517–533.

Shi, R., Werker, J. F., & Morgan, J. L. (1999). Newborn infants’ sensitivity to perceptual cues to lexical and grammatical words. Cognition, 72, B11–B21.

Shultz, S., Vouloumanos, A., Bennett, R. H., & Pelphrey, K. (2014). Neural specialization for speech in the first months of life. Developmental Science, 17, 766–774.

Skinner, B. F. (1957). Verbal behavior. New York: Appleton.

Smith, L. B., & Yu, C. (2008). Infants rapidly learn word-referent mappings via cross-situational statistics. Cognition, 106(3), 1558–1568.

Smith, L. B., Yu, C., Yoshida, H., & Fausey, C. M. (2015). Contributions of head-mounted cameras to studying the visual environments of infants and young children. Journal of Cognition and Development, 16(3), 407–419.

Soderstrom, M., Seidl, A., Kemler Nelson, D. G., & Jusczyk, P. W. (2003). The prosodic bootstrapping of phrases: Evidence from prelingual infants. Journal of Memory and Language, 49, 249–267.

Soderstrom, M., Wexler, K., & Jusczyk, P. W. (2002). English-learning toddlers’ sensitivity to agreement morphology in receptive grammar. In B. Skarabela, S. A. Fish, & A. H. J. Do (Eds.), Proceedings of the 26th annual Boston University conference on language development (pp. 643–652). Somerville, MA: Cascadilla Press.

Stager, C. L., & Werker, J. F. (1997). Infants listen for more phonetic detail in speech perception than in word-learning tasks. Nature, 388(6640), 381–382.

Streeter, L. A. (1976). Language perception of 2-month-old infants shows effects of both innate mechanisms and experience. Nature, 259(5538), 39–41.

Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26(2), 212.

Swingley, D. (2009). Contributions of infant word learning to language development. Philosophical Transactions of the Royal Society B, 364, 3617–3622.

Teinonen, T., Aslin, R. N., Alku, P., & Csibra, G. (2008). Visual speech contributes to phonetic learning in 6-month-old infants. Cognition, 108(3), 850–855.

Teinonen, T., Fellman, V., Näätänen, R., Alku, P., & Huotilainen, M. (2009). Statistical language learning in neonates revealed by event-related brain potentials. BMC Neuroscience, 10(1), 21.

ter Schure, S., Junge, C., & Boersma, P. (2016). Discriminating non-native vowels on the basis of multimodal, auditory or visual information: Effects on infants’ looking patterns and discrimination. Frontiers in Psychology, 7, 525.

Tomasello, M. (2000). Do young children have adult syntactic competence? Cognition, 74, 209–253.

Tomasello, M., & Akhtar, N. (1995). Two-year-olds use pragmatic cues to differentiate reference to objects and actions. Cognitive Development, 10(2), 201–224.

Toro, J. M., Shukla, M., Nespor, M., & Endress, A. D. (2008). The quest for generalizations over consonants: Asymmetries between consonants and vowels are not the by-product of acoustic differences. Perception and Psychophysics, 70(8), 1515–1525.

Trueswell, J. C., Medina, T. N., Hafri, A., & Gleitman, L. R. (2013). Propose but verify: Fast mapping meets cross-situational word learning. Cognitive Psychology, 66(1), 126–156.

Tsao, F. M., Liu, H. M., & Kuhl, P. K. (2004). Speech perception in infancy predicts language development in the second year of life: A longitudinal study. Child Development, 75, 1067–1084.

Vouloumanos, A., & Werker, J. F. (2007). Listening to language at birth: Evidence for a bias for speech in neonates. Developmental Science, 10(2), 159–164.

Weikum, W. M., Vouloumanos, A., Navarra, J., Soto-Faraco, S., Sebastián-Gallés, N., & Werker, J. F. (2007). Visual language discrimination in infancy. Science, 316(5828), 1159.

Weisleder, A., & Fernald, A. (2013). Talking to children matters: Early language experience strengthens processing and builds vocabulary. Psychological Science, 24(11), 2143–2152.

Werker, J. F., Gilbert, J. H. V., Humphrey, G. K., & Tees, R. C. (1981). Developmental aspects of cross-language speech perception. Child Development, 52(1), 349–355.

Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7(1), 49–63.

Wojcik, E. H. (2013). Remembering new words: Integrating early memory development into word learning. Frontiers in Psychology, 4, 151.

Wojcik, E. H., & Saffran, J. R. (2013). The ontogeny of lexical networks: Toddlers encode the relationships among referents when learning novel words. Psychological Science, 24(10), 1898–1905.

Woodward, A. L., & Hoyne, K. (1999). Infants’ learning about words and sounds in relation to objects. Child Development, 70(1), 65–77.

Woodward, A. L., Markman, E. M., & Fitzsimmons, C. M. (1994). Rapid word learning in 13- and 18-month-olds. Developmental Psychology, 30(4), 553.

Xu, F., & Tenenbaum, J. B. (2007). Word learning as Bayesian inference. Psychological Review, 114(2), 245.

Yeung, H. H., & Werker, J. F. (2009). Learning words’ sounds before learning how words sound: 9-month-olds use distinct objects as cues to categorize speech information. Cognition, 113(2), 234–243.

Yeung, H. H., & Werker, J. F. (2013). Lip movements affect infant audiovisual speech perception. Psychological Science, 24(5), 603–612.

Yoshida, K. A., Fennell, C. T., Swingley, D., & Werker, J. F. (2009). Fourteen-month-old infants learn similar-sounding words. Developmental Science, 12(3), 412–418.

Yurovsky, D., & Frank, M. C. (2015). Beyond naïve cue combination: Salience and social cues in early word learning. Developmental Science, 1–17.