


PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, PSYCHOLOGY ((c) Oxford University Press USA, 2016. All Rights Reserved). Personal use only; commercial use is strictly prohibited. For details see the applicable Privacy Policy and Legal Notice.

date: 25 September 2018

Speech Comprehension and Cognition in Adult Aging

Summary and Keywords

The comprehension of spoken language is a complex skill that requires the listener to map the acoustic input onto the meaningful units of speech (phonemes, syllables, and words). At the sentence level, the listener must detect the syntactic structure of the utterance in order to determine the semantic relationships among the spoken words. Each higher level of analysis is thus dependent on successful processing at the prior level, beginning with perception at the phoneme and word levels.

Unlike reading, where one can use eye movements to control the rate of input, speech is a transient signal that moves past the ears at an average rate of 140 to 180 words per minute. Although seemingly automatic in young adults, comprehension of speech can represent a greater challenge for older adults, who often exhibit a combination of reduced working memory resources and slower processing rates across a number of perceptual and cognitive domains. An additional challenge arises from reduced hearing acuity that often occurs in adult aging. A major concern is that, even with only mild hearing loss, the listening effort required for success at the perceptual level may draw resources that would ordinarily be available for encoding what has been heard in memory, or comprehension of syntactically complex speech. On the positive side, older adults have compensatory support from preserved linguistic knowledge, including the procedural rules for its use. Our understanding of speech perception in adult aging thus rests on our understanding of such sensory-cognitive interactions.

Keywords: speech comprehension, adult aging, cognition, syntactic complexity, age-related hearing loss


Speech and Language

All societies on earth have a system of oral communication capable of transmitting to others information about events, states of mind, intentions and wishes, past and future. In cases where oral communication cannot be conducted, sign languages have developed that are equally rich in lexicon and syntax but use manual movements, supplemented by facial and body gestures, to express objects, actions, and their syntactic relations (Meier, 1991; Senghas & Coppola, 2001). The focus of this review is specifically on speech perception and the comprehension of spoken language in healthy aging, and how these may be affected by the sensory and cognitive changes that accompany adult aging.

Barring serious sensory loss or neuropathology, and in spite of its complexity and demands on sensory and cognitive functions, speech comprehension remains remarkably well preserved in adult aging. An important factor in this success is a semantic memory system that retains knowledge of word meanings, the syntactic rules of one’s language, and the procedural guidelines for implementing these rules for comprehension and production. This knowledge domain remains relatively stable even in those cases where cognitive function begins to decline (Wingfield & Stine-Morrow, 2000). Although word retrieval tends to become harder as we age, especially retrieval of proper names (Burke, MacKay, Worthley, & Wade, 1991; Maylor, 1990), vocabulary size (the knowledge of word meanings) tends to increase with age, declining only in the oldest old (Kempler & Zelinski, 1994; Verhaeghen, 2003; see also Nicholas, Barth, Obler, Au, & Albert, 1997).

Characteristics of Spoken Language

Real-world listening is associated with a number of naturally occurring perturbations that can differentially affect speech recognition in older adults. For example, older adults, even those with good hearing acuity as measured by pure-tone thresholds, show poorer speech recognition than younger adults for speech heard in a noisy background (Humes, 1996; Tun & Wingfield, 1999) or for speech uttered with an unfamiliar accent (Adank & Janse, 2010; Van Engen & Peelle, 2014). Although everyday conversational speech may sound clear, under close examination it can be seen that individual words are often under-articulated (Lindblom, Brownlee, Davis, & Moon, 1992; Pollack & Pickett, 1963). As will be shown, this can place the older adult at a particular disadvantage.

One potential source of rescue from difficulty in recognizing spoken words is that listeners can often gain advantage from being able to see the speaker, thus picking up visual articulatory cues (Dodd, 1977; Sumby & Pollack, 1954). There is an ongoing debate in the literature as to whether older adults gain more, or less, advantage than younger adults from audiovisual speech information. This may depend on the relative efficacy of older adults’ auditory or visual processing capability (cf. Stevenson et al., 2015; Tye-Murray et al., 2008; Tye-Murray, Spehar, Myerson, Hale, & Sommers, 2016). Considerable research, however, has demonstrated a powerful positive effect of linguistic context on the ease of word recognition.

Statistical Properties of Spoken Discourse

It has been estimated that the average college-educated adult has a speaking vocabulary of 75,000 to 100,000 words, and a comprehension vocabulary that is very much larger (Oldfield, 1966; see also Brysbaert, Stevens, Mandera, & Keuleers, 2016). One is thus bound to ask how the rapidly changing acoustic signal representing natural speech can be matched against this number of potential lexical entries so rapidly. A part of the answer lies in the structure of natural language, in which the context of a conversation makes some words more predictable than others. It has been shown, for example, that one hears only about 10 to 15 words before a previously uttered word is repeated. In addition, some words tend to recur in discourse much more frequently than others. For example, in writing the most frequently occurring word is “the,” while in speaking it is “I.” In fact, early studies of natural language corpora have shown that the 50 most common words in English make up about 60% of all the words we speak (Miller, 1951).
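To make this kind of coverage statistic concrete, the sketch below counts word frequencies in a toy text and reports what fraction of all tokens the most frequent word types account for. The corpus here is purely illustrative; estimates such as Miller’s (1951) are based on large natural-language corpora.

```python
from collections import Counter

def top_n_coverage(tokens, n):
    """Fraction of all tokens accounted for by the n most frequent word types."""
    counts = Counter(tokens)
    covered = sum(freq for _, freq in counts.most_common(n))
    return covered / len(tokens)

# Toy corpus for illustration only; real estimates use millions of words.
tokens = "the cat sat on the mat and the dog sat on the rug".split()
coverage = top_n_coverage(tokens, 3)  # share covered by the 3 most common types
```

Even in this 13-word toy text, the three most common types ("the," "sat," "on") already cover well over half of the tokens, echoing the skewed frequency distribution of natural language.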

These language statistics illustrate the point that an important feature of natural language is that it is redundant. We use the word “redundancy” in its technical sense, meaning that most of the words we hear will be highly predictable. In the case of fluent discourse, estimates have suggested that English is as much as 30% to 50% redundant (Chapanis, 1954; Shannon, 1951). That is, on average one could miss hearing every third word, or in some contexts every other word, and be able to infer the words that had been missed.

Because of this natural redundancy and their great experience with language, older adults tend to perform quite well in day-to-day speech comprehension (e.g., Ayasse, Lash, & Wingfield, 2017; Benichov, Cox, Tun, & Wingfield, 2012). As we will see, however, there are circumstances in which age-related cognitive changes can limit the effectiveness of speech comprehension.

Cognitive Aging

Balanced against preserved linguistic knowledge and a lifetime of language experience, adult aging is often accompanied by cognitive changes that can have an adverse effect on speech recognition and higher-level speech comprehension. There are three areas of age-related cognitive change that have the most direct bearing on the effectiveness of speech comprehension. The first of these is an age-related reduction in processing speed across a range of perceptual and cognitive domains (Cerella, 1994; Salthouse, 1996). Although there has been debate over whether different processes are slowed at the same or different rates (cf. Cerella, 1990, 1994; Fisk & Fisher, 1994; Hale & Myerson, 1996; Myerson, Adams, Hale, & Jenkins, 2003), slowed processing remains one of the hallmarks of cognitive aging.

The second factor is an age-related decline in working memory capacity (Salthouse, 1994; Zacks, Hasher, & Li, 1999). We use the term “working memory” to refer to the temporary retention of information no longer present in the environment, and to its manipulation within this memory system to guide behavior (Postle, 2006). The capacity of working memory is assumed to be limited, and especially so in older adults (Salthouse, 1994). As will be illustrated in subsequent sections of this review, working memory can both carry and constrain comprehension of meaningful speech.

The third factor affecting speech processing is an age-related change in the allocation of cognitive resources. This may manifest as reduced attentional resources and/or as a greater susceptibility to interference from off-target distraction, with the latter often referred to as an age-related inhibition deficit (Hasher & Zacks, 1988; Lash & Wingfield, 2014; Lustig, Hasher, & Zacks, 2007). Discussions of these factors as components of the broader notion of executive function can be found in, for example, McCabe, Roediger, McDaniel, Balota, and Hambrick (2010) and Wingfield (2016).

This review will discuss how these factors can impact speech recognition and language comprehension in adult aging. The review builds from perception of spoken words, to sentences, to discourse. Two additional sections address listening in complex environments, and the special issue of hearing loss as a common accompaniment of normal aging.

Age Differences in Phoneme and Word Recognition

Spoken language comprises sentences, which can further be divided into individual words, syllables, and phonemes. In this section, we discuss age effects that have been observed in speech perception at the phoneme and lexical (word) levels (see also Johns, Myers, & Skoe, 2018).

Aging and Phoneme Recognition

As a word is spoken, listeners encode the acoustic input and map this input onto phoneme categories as a step to successful word recognition. Different speakers can pronounce phonemes somewhat differently (e.g., Allen, Miller, & DeSteno, 2003; Hillenbrand, Getty, Clark, & Wheeler, 1995; Petersen & Barney, 1952), and listeners often have to contend with an ambiguous sound that does not clearly belong to one phoneme category or another. The nature of such perceptual assignment has been revealed in studies of categorical perception, in which listeners hear sounds with their acoustic properties systematically manipulated and are asked to indicate the phoneme they believe has been presented.

In order to understand categorical perception, consider the phonemes /g/ and /k/. These two phonemes differ only in their voice onset time (VOT; Lisker & Abramson, 1964), such that /g/ is voiced from the onset of its articulation, whereas with /k/ there is a lag between the start of the utterance and the onset of voicing. Because VOT varies along a continuum, manipulating it can create perceptual ambiguity: progressively decreasing the voice onset time yields a stimulus that sounds more like a /g/, and increasing it produces a stimulus that sounds more like a /k/. Listeners will nevertheless perceive this continuous variation in a categorical manner.
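The sharpness of a category boundary can be pictured with a logistic identification function, as sketched below. The boundary location and slope values are hypothetical, chosen only to illustrate the shape of a categorical response curve, not fitted to any data.

```python
import math

def prob_identify_k(vot_ms, boundary_ms=30.0, slope=0.5):
    """Illustrative probability of reporting /k/ as a function of voice
    onset time (ms). A steep slope means near-unanimous responses away
    from the boundary -- the signature of categorical perception.
    boundary_ms and slope are hypothetical, not empirical estimates."""
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary_ms)))
```

With these illustrative parameters, a 10-ms VOT is almost always reported as /g/, a 50-ms VOT as /k/, and only tokens near the 30-ms boundary yield ambiguous responses.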


Figure 1. Percentage of responses by young adults identifying acoustic inputs as a /b/, /d/, or /g/ as the acoustic features progressively change (stimulus value on the abscissa). Although different stimulus values vary in a continuous fashion (x-axis), participants identify the sounds in a categorical manner (y-axis). Note the ambiguous regions at stimulus values 3–4, and 9–10, which represent clear category boundaries in the response function.

Source: From Liberman et al. (1957).

An example of categorical response is shown in Figure 1, taken from a classic study of categorical phoneme perception by Liberman et al. (1957), which plots the percentage of young adults’ responses that identified an acoustic input as a /b/, /d/, or /g/ as the critical acoustic cue was progressively varied (indicated by the stimulus value on the abscissa). It can be seen from the response curves that there is a sharp shift in the phoneme being perceived, reflecting clear categorical boundaries (Treisman, Faulkner, Naish, & Rosner, 1995), despite uniformly varying acoustic features of the stimuli. Although there are two ambiguous regions in the figure (the first at stimulus values 3–4, and the second at stimulus values 9–10), as one departs from these regions, Figure 1 shows that sounds with a range of stimulus features are categorized as one phoneme or another. This is the essence of categorical perception.

A second defining feature of categorical perception appears when listeners are presented with two tokens (acoustic samples) and they are asked to judge whether the stimuli are the same or different. Consistent with the nature of categorical boundaries, discrimination is poor for tokens that lie within a phoneme category and sharply improves when the two tokens flank either side of an individual’s category boundary. Studies have shown that older adults follow a similar principle (Abada, Baum, & Titone, 2008; Baum, 2003; Mattys & Scharenborg, 2014).

Context Effects on Phoneme Identification

The placement of a phoneme category boundary and corresponding perception of an ambiguous token may change depending on the surrounding acoustic or semantic context. For example, listeners are more likely to identify an ambiguous token as belonging to a particular phoneme category if that leads to the perception of a real word (Ganong, 1980). The classic Ganong effect can be illustrated by presenting listeners with, for example, the word “gift,” with the initial phoneme made to be acoustically ambiguous such that it could be heard as a /g/ or a /k/. When presented with a VOT value that produces ambiguity as to the phoneme being heard, listeners are more likely to perceptually categorize the sound as a /g/ if it was presented with /ift/, thus forming the real word “gift,” rather than hearing it as a /k/, which would yield a non-word (“kift”). On the other hand, the exact same acoustic token will be heard as a /k/ if it forms the real word “kiss,” rather than a non-word, “giss.” That is, the phoneme category flexibly shifts in either direction to bias the perception toward a real word.
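The lexical bias in the Ganong effect can be sketched as a simple decision rule in which acoustic evidence is nudged toward whichever phoneme yields a real word. The scoring scheme and the bias weight below are hypothetical, intended only to illustrate how a fixed acoustic input can be categorized differently in different lexical contexts.

```python
def categorize_ambiguous_token(evidence_for_g, g_forms_word, k_forms_word,
                               lexical_bias=0.15):
    """Illustrative Ganong-style decision rule for a /g/-/k/ token.
    evidence_for_g: acoustic support for /g/ on a 0-1 scale.
    g_forms_word / k_forms_word: whether each categorization yields a real word.
    lexical_bias: hypothetical weight; a larger value models stronger
    reliance on lexical context, as reported for older listeners."""
    score = evidence_for_g
    if g_forms_word:
        score += lexical_bias   # bias toward /g/ (e.g., "gift" vs. non-word "kift")
    if k_forms_word:
        score -= lexical_bias   # bias toward /k/ (e.g., "kiss" vs. non-word "giss")
    return "g" if score >= 0.5 else "k"
```

Under this toy rule, an ambiguous token with acoustic evidence of 0.45 is heard as /g/ before “-ift” but as /k/ before “-iss,” while an acoustically clear token is categorized by its acoustics regardless of lexical status.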

Baum (2003) tested the Ganong effect with younger and older adults and found that older adults are more likely than younger listeners to shift the category boundary to form a real word. That is, older listeners will categorize more of the ambiguous tokens as /g/, forming the word “gift,” whereas younger listeners will classify those same sounds as /k/, forming the non-word “kift.” This age difference is evidence that older adults are relatively more influenced by surrounding context than younger adults, such that their phoneme identification is more responsive to the possibility of perceiving a real word. Older adults are also more likely than younger adults to identify an ambiguous phoneme as consistent with a word that fits within a sentence context (Abada et al., 2008).

There are several reasons why older adults may be guided relatively more than younger adults by the context surrounding an ambiguous phoneme. One explanation may be that age-related differences in peripheral and central auditory processing lead older adults to use word knowledge and context to compensate for a reduced ability to adequately perceive the speech input (e.g., Gordon-Salant, Yeni-Komshian, Fitzgibbons, & Barrett, 2006). On the other hand, Mattys and Scharenborg (2014) have noted that a stronger Ganong effect persists among older listeners even when younger and older adults are matched on perceptual discrimination. These authors have suggested that age differences in linguistic expertise and reduced inhibitory control might contribute to the age difference in the strength of the Ganong effect. Although all adults can be considered language experts, older adults’ more engrained reliance on their language experience may make it harder for them, when making a phoneme category judgment, to suppress a strong lexical bias toward a real word, or toward a particular word that fits within a sentence context.

The Word Frequency Effect

A well-established finding is that listeners are faster at recognizing words that occur frequently in the language as compared to words that appear less often. This phenomenon was first demonstrated by Howes and Solomon (1951), who showed that the duration of exposure needed to correctly identify a visually presented word was inversely proportional to its relative frequency of occurrence in written texts. Because of a close (although not perfect) correlation between the frequency with which a word occurs in print and its frequency of occurrence in spoken conversation, Howes (1957) was able to use data from written word counts to show an analogous word frequency effect for ease of recognition of spoken words heard in a background of wide-spectrum noise. A more direct estimate specific to spoken word frequency has subsequently been made available based on a sample of 50 million words from movie subtitles (SUBTLEX; Brysbaert & New, 2009).

Regardless of the frequency norms employed, the word frequency effect operates across a number of domains and tasks, whether testing the speed of identifying a spoken word (Revill & Spieler, 2012), determining how much of a spoken word’s onset is minimally necessary to identify a spoken word (Grosjean, 1980), or in production, the speed with which a visually presented object can be named (Oldfield, 1966). One’s expectation for high-frequency words translates into a processing benefit for older adults, who have been reported to show a larger word frequency effect than young adults. There may be multiple reasons why older adults exhibit larger frequency effects, with likely factors being reduced hearing acuity, language expertise, and an age-related inhibition deficit (cf. Janse & Newman, 2013; Revill & Spieler, 2012).

Interference From Soundalike Words

While older adults’ recognition is facilitated for high-frequency words, Sommers (1996; Sommers & Danielson, 1999) has shown that older adults’ recognition of spoken words can be impeded by a reduced ability to inhibit interference from other words that share similar sounds with the target word. In these studies, the metric used to determine whether words are acoustically similar was a one-phoneme rule: words are considered to be acoustically related if they differ by no more than one phoneme, whether through addition, deletion, or substitution (Luce & Pisoni, 1998). For example, “at” and “mat” are related to “cat” under this guideline and are referred to as phonological neighbors. With this definition, the number of phonological neighbors (a word’s neighborhood density), and the relative word frequency of those neighbors, can be calculated by using an online corpus (e.g., Balota et al., 2007).
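The one-phoneme rule amounts to checking whether two phoneme sequences are within an edit distance of one. A minimal sketch follows, with simplified phoneme transcriptions used purely for illustration.

```python
def are_phonological_neighbors(word_a, word_b):
    """True if two phoneme sequences differ by exactly one phoneme via
    substitution, addition, or deletion (the one-phoneme rule of
    Luce & Pisoni, 1998)."""
    a, b = list(word_a), list(word_b)
    if len(a) == len(b):
        # Same length: neighbors only under a single substitution.
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:
        # Length differs by one: neighbors if deleting one phoneme
        # from the longer sequence yields the shorter one.
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter
                   for i in range(len(longer)))
    return False

# Simplified transcriptions: "cat" -> k-ae-t, "mat" -> m-ae-t, "at" -> ae-t.
```

Running this check over a phonemically transcribed lexicon would give each word’s neighborhood density, the quantity shown to modulate recognition difficulty.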

It has been shown that words with sparse neighborhoods (i.e., fewer acoustic competitors) are easier to identify than words with dense neighborhoods (Chen & Mirman, 2015; Luce & Pisoni, 1998; McClelland & Elman, 1986). This effect of phonological similarity has been shown to have a differentially detrimental effect on word recognition for older than for younger adults, presumably due to the age-related inhibition deficit (Sommers, 1996; Sommers & Danielson, 1999; see also Ben-David et al., 2011). When soundalike competitors of the stimulus word are also of a higher word frequency than the stimulus word, these detrimental effects are especially increased for older listeners (Revill & Spieler, 2012).

Benefit From Linguistic Context

As was discussed earlier, utterances in natural speech are often highly predictable. This predictability underlies the finding that the ease of word recognition for a word seen or heard within a sentence context is inversely proportional to its contextual probability within that sentence (Morton, 1969). For example, for the sentence stem “He mailed the letter without a . . .”, the word “stamp” is more highly probable or expected to complete the sentence compared to the word “signature” (Lahar, Tun, & Wingfield, 2004). Because of this higher probability, “stamp” would be more easily recognized than “signature” in this context, a finding demonstrated across a variety of paradigms and modalities. This finding also holds true for older adults. Indeed, using a variety of experimental paradigms, older adults have been shown to make as good as, or even better, use of a sentence context than younger adults to aid word recognition (Benichov et al., 2012; Cohen & Faulkner, 1983; Perry & Wingfield, 1994; Pichora-Fuller, Schneider, & Daneman, 1995; Wingfield, Aberdeen, & Stine, 1991).

Because the words we hear are generally embedded within a linguistic and/or real-world context, older adults’ word recognition tends to be quite good, even with some degree of hearing loss (Benichov et al., 2012). It is primarily in cases where words are heard with minimal context, or if the context itself is poorly heard, that one sees significant age differences in word recognition. There are, however, two areas of cognitive constraints that limit the older adult’s use of context as an aid to speech recognition. One of these relates to an age-related inhibition deficit, and the other to an age-related reduction in working memory capacity.

Reduced Efficiency in Inhibiting Semantic Competition

Lash and colleagues (2013) conducted a study using word-onset gating (Grosjean, 1980). In this procedure, computer editing was used to initially present participants with just the first 50 milliseconds of a target word. If the participant was unable to identify the target word, they were presented with the first 100 milliseconds of the word, then the first 150 milliseconds, and so on until the word was correctly identified (Lash, Rogers, Zoller, & Wingfield, 2013). This study made use of published norms in which people were presented with sentences with the last word missing (sentence stems) and were asked to give the first word that came to mind as a reasonable final word of the sentence. The percentage of individuals completing a stem with a particular word was taken as a measure of the transition probability of that word in the sentence context.
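The gating procedure can be sketched as a simple loop that presents successively longer word onsets until recognition succeeds. The 50-ms step comes from the procedure described above; the notion of a fixed per-listener recognition threshold is a simplification adopted here for illustration.

```python
def gate_until_recognized(word_duration_ms, recognition_threshold_ms, step_ms=50):
    """Simulated word-onset gating trial: present onsets of step_ms,
    2*step_ms, 3*step_ms, ... until the listener's (hypothetical)
    recognition threshold is reached, capped at the full word duration.
    Returns the gate duration at which the word is identified."""
    gate = step_ms
    while gate < recognition_threshold_ms and gate < word_duration_ms:
        gate += step_ms
    return min(gate, word_duration_ms)
```

A listener needing 120 ms of onset identifies the word at the 150-ms gate; a higher threshold, as reported for older and hearing-impaired listeners, simply means more gates before recognition.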

As might be expected, when the target words were presented without a constraining context (“The word is. . .”) Lash and colleagues found that older adults required more of a word’s onset duration to identify the word than younger adults, and older adults with a mild-to-moderate hearing loss required even more of a word’s onset duration for it to be identified. When words were presented with increasingly constraining linguistic contexts, however, not only were words correctly recognized with shorter acoustic onsets, but the original difference in ease of recognition between the younger and older adults, and the older adults with hearing impairments, decreased. Indeed, when the probability of a word based on the prior linguistic context reached a relatively high level (an average of 0.50), effects of age and hearing acuity on word recognition no longer appeared.

A feature of the norms used by Lash and colleagues was a complete listing of all of the alternative sentence endings that people gave to the sentence stems. For example, the responses participants gave to the sentence stem “In the morning Jake took out the. . .” were “garbage,” “trash,” “dog,” and “car” with probabilities of 0.48, 0.24, 0.19, and 0.03, respectively, and the responses participants gave to the sentence stem “Helen reached up to dust the. . .” included “shelves,” “cabinets,” “mantle,” “lamp,” “counter,” “ceiling,” and “chandelier” among others, with probabilities of 0.37, 0.18, 0.15, 0.06, 0.05, 0.02, and 0.02, respectively. Importantly, these stem completion tasks yielded generally similar response profiles across younger and older adults (Lahar et al., 2004).
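Transition probabilities of this kind are computed as each response word’s share of all completions given for a stem. The sketch below reproduces the “Jake took out the . . .” profile, with a hypothetical “other” category standing in for the remaining low-probability responses, which are not enumerated here.

```python
from collections import Counter

def transition_probabilities(completions):
    """Each response word's transition probability: its share of all
    completions produced for a sentence stem in a norming task."""
    counts = Counter(completions)
    total = len(completions)
    return {word: count / total for word, count in counts.items()}

# 100 hypothetical norming responses matching the reported profile;
# "other" is a placeholder for the remaining completions.
responses = (["garbage"] * 48 + ["trash"] * 24 + ["dog"] * 19
             + ["car"] * 3 + ["other"] * 6)
probs = transition_probabilities(responses)
```

The resulting dictionary maps each completion to its probability (e.g., 0.48 for “garbage”), and the number of distinct keys indexes how many contextually plausible competitors a stem activates.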

There were two findings of note when Lash and her colleagues made use of these data. The first is that word recognition was adversely affected when there were a large number of alternative words suggested by the sentence stems, such as in the example “Helen reached up to dust the. . .” The second is that the degree of interference from such contextually relevant competitors was differentially greater for the older adults than the younger adults. It was also shown that this interference effect was independent of the older adults’ hearing acuity. Lash and colleagues interpreted this as resulting from an inhibition deficit in adult aging, such that the older adults exhibited reduced effectiveness in blocking semantically activated competing responses from interfering with recognition of the sentence-final words. These findings in regard to context and competition can be seen as analogous, at the sentence level, to the age differences found for effects of phonological neighborhood density.

Other Sources of Context

We noted earlier that many words in fluent speech are surprisingly under-articulated, which ordinarily goes unnoticed because of the perceptual support from a linguistic context. Janse and Ernestus (2011) compared the effectiveness of written versus spoken sentence contexts presented prior to presentation of perceptually unclear words that had been extracted from samples of fluent speech. They found that a spoken context was more effective for word recognition than a written context, which demonstrated the importance of acoustic context as well as a purely linguistic-semantic context.

This effect of acoustic context is due in large measure to coarticulation. In contrast to the clear word boundaries in written text, the words of fluent speech often run together, with the articulatory movements at the end of one word already beginning to form the initial sounds of a following word. Considerable research has shown that this can serve as an acoustic cue to the identity of the following word’s initial sounds (Daniloff & Hammarberg, 1973; Mann & Repp, 1980; Yeni-Komshian & Soli, 1981). As Janse and Ernestus emphasize, although a constraining semantic context improves recognition of naturally under-articulated words, the acoustic context in which the word occurs offers additional facilitation, which can be observed for both younger and older adults (Janse & Ernestus, 2011). When we make reference to the value of linguistic context to aid word recognition, we thus must also include the acoustic context that accompanies natural speech.

In addition to coarticulation, other acoustic cues available to listeners come in the form of syntactically tied prosody. “Prosody” is a generic term that includes pitch contour, word stress, and timing patterns, such as pauses and the lengthening of words prior to a clause boundary in a sentence. These are “suprasegmental” features that can aid a listener in sentence comprehension (Shattuck-Hufnagel & Turk, 1996). Such prosodic cues can be effectively used by both older and younger adults, and to the extent that some cues are more helpful than others, listeners maintain this understanding of relative importance even in aging (Hoyte, Brownell, & Wingfield, 2009). It can also be shown that older adults rely on such prosodic features to a relatively greater extent in determining syntactic clause boundaries in sentences than do their younger adult counterparts (Kjelgaard, Titone, & Wingfield, 1999).

Use of Retrospective Context

The excellent use older adults make of linguistic context to aid word recognition, however, comes with a caveat. This relates to those occasions when the identity of a spoken word remains uncertain until the listener hears more of the context that follows the indistinct word (Connine, Blasko, & Hall, 1991; Grosjean, 1985).

The top panel of Figure 2 is taken from a study by Wingfield, Alexander, and Cavigelli (1994) that explored this caveat. It shows the acoustic waveform of an audio recording of a sentence spoken by a talker speaking in a relaxed, conversational manner. It can be seen that the words “him,” “warm,” and “and” run together with no clear temporal breaks between them. The same can be seen earlier in the sentence for the words “put on his,” as is typical of natural running speech.

The lower left panel of Figure 2 shows that when the word “warm” was spliced out of its linguistic surrounding and presented in isolation, just under 40% of a group of younger adults, and just under 30% of a group of older adults, were able to identify the word. This is an example of the under-articulation of words that is commonly encountered in contextually constrained fluent speech (Janse & Ernestus, 2011; Lindblom et al., 1992; Pollack & Pickett, 1963).

To illustrate the facilitating effects of linguistic context, Wingfield and colleagues then presented the same word along with the word that had occurred prior to it in the original recording, and then the two words that had preceded the target word, and so on, up to four prior words. It can be seen in the lower left panel of Figure 2 that the percentage of younger and older adults who were able to correctly identify the indistinct word “warm” increased monotonically with increasing numbers of prior words (Wingfield et al., 1994). It can also be seen that, although the younger adults showed higher recognition scores at all levels of context, the slopes of the functions for the younger and older adults from their relative baselines are similar.

The right panel in Figure 2 shows similar data, but this time when the word “warm” was presented first out of context, and then successively presented with one to four words that had followed it in the original recordings. It can be seen that both age groups gained from context that followed the indistinct word, but also that with a following context the rate of gain for the older adults was shallower than for the younger adults.


Figure 2. Top panel shows the waveform of a naturally uttered sentence containing the word “warm.” Lower panels show the percentage of younger and older adults correctly identifying the target word “warm” when it was heard in isolation and when heard with increasing numbers of words that had preceded it (left panel) or followed it (right panel) in the original utterance.

Source: Adapted from Wingfield et al. (1994).

For the context following an indistinct word to be effective, a trace of that word or its phonological representation must be maintained in memory to allow for a retrospective analysis of the indistinct word. It is in this operation that the older adults were relatively less effective than the younger adults. This account does not in itself specify the nature of this memory trace. There have been suggestions in the literature that reduced working memory resources may affect older adults’ use of context to facilitate speed and accuracy in perceptual identification of words embedded in a linguistic context (Janse & Jesse, 2014). The characteristics of the trace needed for retrospective word recognition described here may implicate a phonological short-term memory buffer as part of a broader working memory system (Jacquemot & Scott, 2006).

Finally, it can be seen in Figure 2 that the initial increment of improvement from hearing a word in isolation to hearing it with the one word immediately preceding or following it is somewhat larger for the younger than for the older adults. This is likely due to the younger adults’ ability to detect and/or make use of the subtle acoustic consequences of coarticulation that can occur as adjoining words glide together as they are produced. Beyond an immediate neighboring word, however, coarticulatory cues would be inoperative, such that any effects on recognition of the indistinct word beyond this point would be from the linguistic context.

False Hearing

One negative consequence of older adults’ sensitivity to linguistic context is that they are more likely to misidentify a spoken word as a similar-sounding word that is more strongly suggested by a preceding semantic context. The term “false hearing” has been used to refer to such misidentifications when they are accompanied by the listener having a high level of confidence that the more contextually congruent word was in fact the word they had heard (Rogers, Jacoby, & Sommers, 2012). This phenomenon has been illustrated by presenting spoken word pairs in which the first word is followed by a word that sounds similar to a word semantically associated with that first word. That is, correct identification of the second word requires the listener to focus on its acoustic properties, and to ignore, or inhibit, the contextually activated similar-sounding word. (Rogers and colleagues increased the perceptual challenge by presenting the second, to-be-identified word in a background of acoustic noise.)

Using this paradigm, Rogers and colleagues (2012) found that older adults were more likely than younger adults to misidentify the second word as a similar-sounding word more consistent with the context induced by the first word of the pair. Moreover, the older adults showed a stronger tendency to do so with a high level of confidence. That is, although both age groups showed this false hearing effect, the effect was differentially greater for the older adults than for the younger adults.

Although false hearing will be more likely when the target word is acoustically degraded, the greater propensity for older adults to show the effect appears even when this is taken into account (Rogers et al., 2012; Rogers & Wingfield, 2015). These results imply that older adults have a less flexible listening strategy than younger adults, a finding consistent with the previously cited postulate of an age-related inhibition deficit. Such a deficit would have made it harder for the older adults to reject a semantically related lure in favor of a semantically incongruent word that was actually presented.

Sentence Comprehension

As has previously been noted, older adults generally perform well in understanding speech in everyday life, where listeners most often encounter plausible, simple sentences with a canonical word order (Goldman-Eisler, 1968). When given such everyday sentences in an experimental setting, older adults are equal or nearly equal to younger adults in accuracy and speed of comprehension. For example, Wingfield, McCoy, Peelle, Tun, and Cox (2006) found that even at relatively fast speech rates, older adults answered comprehension questions for simple sentences just as accurately as younger adults. A different picture emerges, however, for sentences that express their meanings with more complex syntactic structures.

Age Differences and Syntactic Complexity

In an early study, Obler, Fein, Nicholas, and Albert (1991) tested comprehension of sentences varying in syntactic structure. Obler and her colleagues found that older adults made more comprehension errors than younger adults when sentences contained a double negative, such as “The bureaucrat who was not dishonest refused the bribe.” The general effect of structural complexity on comprehension and recall of sentences has been replicated many times with a variety of syntactic structures. The consistent finding is that although even younger adults will tend to show more comprehension errors for sentences with increasing syntactic complexity, this effect is differentially greater for older adults (e.g., DeCaro, Peelle, Grossman, & Wingfield, 2016; Fallon, Peelle, & Wingfield, 2006).

These effects can be attributed at least in part to the cognitive changes that occur with adult aging, one of which is a decrease in working memory capacity (Salthouse, 1996). Working memory has been shown to both support and constrain one’s ability to comprehend complex sentences (Just & Carpenter, 1992), with these demands reflected in increased patterns of neural activation during comprehension, especially for older adults (Cooke et al., 2002; Peelle, Troiani, Wingfield, & Grossman, 2010; Wingfield & Grossman, 2006). Object-relative sentences, such as “Brothers that sisters assist are fortunate,” for example, require the listener to keep the subject of the sentence in mind for a longer period of time than subject-relative sentences (e.g., “Sisters that assist brothers are fortunate”), thus requiring a greater draw on working memory (e.g., Cooke et al., 2002; DeCaro et al., 2016). Sentences with double negatives of the sort used by Obler and colleagues (1991) also require more working memory resources, as listeners must hold the words in mind while performing multiple operations to develop the sentence meaning. Given the age-related decrease in working memory capacity (Salthouse, 1994), and the link between working memory capacity and comprehension of sentences with complex syntactic structures (Just & Carpenter, 1992), it is not surprising that age effects are observed for these complex syntactic structures.

As a test of this likelihood, DeCaro and colleagues (2016) constructed spoken sentences specifically designed to place a greater load on working memory. This was done by increasing the distance between the agent performing an action and the action being performed in simpler (subject-relative) and more complex (object-relative) sentences. For example, a subject-relative sentence with a short gap between the agent and the action might be, “Sisters that assist brothers with short brown hair are fortunate.” A sentence with the same subject-relative structure with a long gap between the agent and the action might be: “Sisters with short brown hair that assist brothers are fortunate.” Examples of sentences with a more complex object-relative structure with, respectively, short and long gaps might be: “Brothers with short brown hair that sisters assist are fortunate,” and “Brothers that sisters with short brown hair assist are fortunate.” The task was simply to listen to the sentence as it was presented and to indicate who was the agent of the action.

Using such materials, DeCaro and colleagues found that older adults made more comprehension errors than younger adults when working memory was taxed by the more complex syntactic structure and by increasing the gap between the agent and action. This same study also demonstrated that scores on a test of working memory capacity, along with differences in hearing acuity, accounted for a significant amount of the variance in comprehension accuracy. This latter point is important as sound levels were set to ensure audibility of the stimuli for all participants.

It could be argued that inhibitory control, which is related to working memory, may also be implicated in the comprehension of sentences with more complex syntactic structures. For example, since a subject-relative structure is a more canonical or common structure in English (Goldman-Eisler, 1968), listeners tend to expect such a structure and must inhibit these expectations when presented with a sentence with a non-canonical, less expected object-relative structure (cf. Gibson, Bergen, & Piantadosi, 2013; Levy, 2008; Novick, Trueswell, & Thompson-Schill, 2005; Padó, Crocker, & Keller, 2009).

Multiple Solutions to the Comprehension Challenge

Much of psycholinguistic theory has assumed that sentence comprehension requires a full syntactic analysis as a prerequisite to comprehension of the sentence meaning. When placed under time pressure or other processing challenges, however, it has been argued that listeners may often take a processing shortcut, basing comprehension on probabilistic inferences and plausibility rather than operations represented in formal syntactic processing models (e.g., Ferreira, 2003; Ferreira, Bailey, & Ferraro, 2002; Frank & Bod, 2011; Gibson et al., 2013; Padó et al., 2009; Rönnberg et al., 2013).

Such a process may be described as shallow processing, a resource-conserving strategy in which the meaning of a sentence is rapidly inferred based on word order (syntactic constraints) and thematic plausibility (semantic input; Ferreira, 2003). Because what we hear is usually plausible, this shallow processing scheme will ordinarily be successful and would be indistinguishable from a model that assumes that a full syntactic analysis and thematic integration of the sentence elements has taken place. A listener’s underlying strategy can be revealed, however, when he or she hears a syntactically challenging sentence that conveys an unexpected or unlikely meaning (Ferreira & Patson, 2007).

Beginning with Obler and colleagues (1991), studies have found evidence that when a word-by-word syntactic analysis of an utterance yields an implausible or unlikely meaning, older adults are more likely than younger adults to produce comprehension errors consistent with a plausible understanding of the sentence. Figure 3 plots data from Obler et al. (1991) showing the percentage of cases in which participants made errors of comprehension when the meaning of a sentence was plausible or implausible. In the latter case the errors came in the form of listeners assuming a plausible meaning for the sentence even though its literal meaning was implausible. One can see in Figure 3 that although implausible sentences yielded more errors of comprehension than plausible sentences, the size of this difference became progressively larger with increasing participant age.


Figure 3. Percentage of comprehension errors as a function of participants’ ages. Errors reflect the participant responding to the plausibility of a sentence over its literal meaning. The data shown are averaged across several syntactic forms.

Source: Figure based on data from Table 6, p. 443 of Obler et al. (1991).

These results could be interpreted as reflecting a decline in syntactic processing ability with age (Obler et al., 1991). On the other hand, they could reflect adoption of a shallow processing strategy when a listener’s resources are taxed, as would occur more often for older than for younger adults. Amichetti, White, and Wingfield (2016) examined this latter possibility by presenting sentences with plausible or implausible meanings to young adults with age-normal hearing, older adults with good hearing acuity for their ages, and age-matched older adults with mild-to-moderate hearing loss.

As in all such studies, care was taken to ensure that the speech was audible for all three participant groups. Audibility is typically tested by participants’ ability to repeat accurately a word or a short sentence at the intensity level to be used in the experiment. A common method is to present speech stimuli at a constant level above each individual’s hearing threshold. For example, stimuli might be presented at 15 decibels (dB) above each person’s threshold; this would be referred to as 15 dB sensation level (SL).
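
As a numerical sketch of this convention (the threshold value and helper names below are our own illustrations, not from the studies cited): presenting at a fixed sensation level means adding the same number of decibels to each listener’s measured threshold, and a change of d dB corresponds to multiplying signal amplitude by 10^(d/20).

```python
def presentation_level_db(threshold_db_hl, sensation_level_db=15.0):
    """Presentation level (dB HL) for a listener, at a fixed sensation
    level (SL) above that listener's own hearing threshold."""
    return threshold_db_hl + sensation_level_db

def amplitude_scale(delta_db):
    """Linear amplitude factor corresponding to a change of delta_db decibels."""
    return 10.0 ** (delta_db / 20.0)

# A hypothetical listener with a 25 dB HL threshold, tested at 15 dB SL:
level = presentation_level_db(25.0)   # -> 40.0 dB HL
# Raising a signal by 15 dB multiplies its amplitude by about 5.6:
scale = amplitude_scale(15.0)
```

The point of the SL convention is that perceptual difficulty relative to each listener’s own threshold is equated, even though the absolute sound levels differ across listeners.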

Four types of sentences were presented in Amichetti and colleagues’ study: sentences with a subject-relative construction with a plausible action (e.g., “The eagle that attacked the rabbit was large”), subject-relative sentences with the agent and recipient switched to yield an implausible action (e.g., “The rabbit that attacked the eagle was large”), plausible sentences with a syntactically more complex object-relative construction (e.g., “The rabbit that the eagle attacked was large”), and object-relative sentences with an implausible action (e.g., “The eagle that the rabbit attacked was large”). The task was to listen to a sentence and to indicate the agent of the action.

Results showed that the likelihood of interpreting a sentence according to its literal meaning was reduced when that meaning expressed an implausible relationship, with this reduction amplified for sentences that expressed their meaning with the more syntactically complex, resource-demanding, object-relative structure. These effects were further magnified for the older adults and for those with poorer hearing acuity.

What we have referred to as shallow processing implies that the listener will sometimes analyze a sentence only to a “good enough” level for understanding. In the examples above, it was seen that older adults will often use plausibility as a shortcut to understanding a sentence, even when the literal meaning dictates otherwise. Another example is so-called “garden path” sentences, such as “The old man the boats.” Reading or hearing this sentence will lead to confusion, since it is more common to parse “old” as an adjective modifying the noun “man,” whereas in this sentence “old” is serving as a noun and “man” is serving as a verb. Successful comprehension of the sentence requires the ability to inhibit the initial presumption, another case in which older adults encounter more difficulty than younger adults (Christianson, Williams, Zacks, & Ferreira, 2006; May, Zacks, Hasher, & Multhaup, 1999). Results such as these suggest that listeners may not always use a single processing strategy for successful sentence comprehension, with the existence of such alternative solutions revealed when literal syntax and plausibility do not coincide.

Discourse Processing

In order to understand spoken discourse, the listener must develop an integrated representation of the ideas expressed in the narrative, a representation that often requires inferences and is elaborated with world knowledge to create a global understanding of the discourse. This representation has been called a situation model of the narrative (van Dijk & Kintsch, 1983), with such models more easily constructed when the listener has some knowledge of the domain represented in the narrative (Kintsch, 1994; Miller, Cohen, & Wingfield, 2006). To the extent that older adults may have more domain knowledge than their younger counterparts, one may see here an aspect of speech comprehension in which the older adult has an advantage.

Although studies of older adults’ ability to understand spoken phonemes, words, and sentences answer questions regarding the mechanisms underlying age-related changes in speech comprehension, the study of discourse and dialog processing can determine the degree to which age-related changes in these lower-level processes affect older adults’ ability to understand everyday speech communications.

Age Differences in Discourse Comprehension

There are two areas in which studies have shown older adults to be at a disadvantage relative to young adults in comprehension and recall of discourse passages. One of these relates to effectiveness in forming inferences, especially when the items needed to form the inference are widely spaced in the narrative (Cohen, 1979; Zacks, Hasher, Doren, Hamm, & Attig, 1987). The second is a less effective allocation of attentional resources across a passage. This can be modeled by examining the allocation of processing time when listeners are given control over the pacing of a narrative input (Titone, Prentice, & Wingfield, 2000). Readers interested in the use of the self-pacing technique, sometimes referred to as an “Auditory Moving Window,” can find good discussions in Ferreira, Henderson, Anes, Weeks, and McFarlane (1996) and Fallon et al. (2006).

In spite of these differences, cross-sectional studies on discourse comprehension have found that older adults can often perform well on comprehension of, and memory for, spoken discourse, likely due to the rich contextual information of longer passages of speech (Avivi-Reich, Daneman, & Schneider, 2014; Schneider, Daneman, Murphy, & See, 2000).

Resource-draining perceptual effort plays an important part in age differences when they do appear in discourse comprehension, as older adults, even those with relatively good hearing for their ages, typically have poorer hearing acuity than their younger adult counterparts. For example, when sound levels or signal-to-noise ratios are individually tailored to each participant so as to reduce the degree of perceptual effort, older adults may perform equally as well as younger adults on comprehension questions following presentation of a spoken discourse (Gordon, Daneman, & Schneider, 2009). Such findings indicate the importance of acoustic factors in speech comprehension, and the importance of taking into account hearing acuity, especially for older adults where reduced hearing acuity is a common accompaniment of the aging process (Murphy, Daneman, & Schneider, 2006; Schneider et al., 2000; Schneider, Daneman, & Pichora-Fuller, 2002).

Additional Correlates of Discourse Comprehension

A 10-year cross-sequential study conducted by Payne and colleagues (2014) explored the effects of aging and other factors on older adults’ memory for spoken discourse. Participants (ranging in age from 65 to 94) were tested on measures of reasoning, speed of processing, and memory (including memory for spoken discourse) at baseline and at one, two, three, five, and 10 years later. Because this broad range of ages was followed over the course of 10 years, the researchers were able to differentiate between effects due to aging, per se, and effects due to differences in age-cohort experiences (for example, 40-year-olds would not have experienced the same historical or cultural events as 80-year-olds).

Payne and colleagues (2014) found that at baseline (i.e., when participants were first tested), better memory for spoken discourse was correlated with more years of formal education, greater global cognitive function, and better episodic memory. Additionally, age at baseline uniquely predicted longitudinal changes in memory for spoken discourse. Based on modeled trajectories, participants who were older at baseline tended to experience a steeper decline in their memory for spoken discourse over the course of 10 years than those who were younger at baseline. This indicates that age-related declines in memory for spoken discourse tend to accelerate with increasing age.

Payne and colleagues had two additional findings of note. The first was that, independent of age, a decline in memory for spoken discourse was correlated with a decline in reasoning ability. The second was that, at least for these participants, vocabulary knowledge did not have a significant effect on narrative recall. This study did not include measures of hearing acuity, leaving open the question of whether the observed decline trajectories might be affected by hearing loss or other changes typically associated with adult aging.

Listening in Complex Environments

One of the hallmarks of age-related hearing loss is a special difficulty in hearing speech in noise (Humes, 1996). Background noise that mixes with (and hence degrades the acoustic quality of) speech input is referred to as energetic masking. Although we live in a noisy world, background noise is often not continuous, but instead fluctuates in intensity (Cooke, 2006). In general, speech recognition is better when there are such “gaps” or “dips” in fluctuating background noise, compared to when the background noise is continuous (Wagener, Brand, & Kollmeier, 2006).

The degree of benefit from quiet periods in background noise depends on the frequency and duration of the quiet periods (Dubno, Horwitz, & Ahlstrom, 2002; Wang & Humes, 2010). Studies have shown that older adults do not benefit as much as young adults from such variation in the background noise, nor do individuals with reduced hearing acuity (Eisenberg, Dirks, & Bell, 1995). Such effects, however, are mitigated when the target speech has relatively high predictability, such that missing bottom-up information caused by the acoustic masking can be supplemented by top-down expectations (Moore, 2003; Verschuure & Brocaar, 1983).

When the background “noise” is another speaker talking in the background, there are also gaps in the form of pauses that can occur at syntactic clause and sentence boundaries, and also prior to words the speaker is slow to retrieve. Indeed, it has been estimated that as much as 40% to 50% of speaking time in everyday discourse is occupied by such pauses (Butterworth, 1989; Goldman-Eisler, 1968). In consequence, there may be periods in which a target speaker’s voice is acoustically masked, alternating with periods during which the masking level is reduced or absent.

In contrast to energetic masking, informational masking occurs when the background noise itself is meaningful speech. In this case, age-related cognitive factors take on special importance. For example, Tun and Wingfield (1999) found that older adults’ ability to recall sentences that had been heard with another talker in the background was predicted by the listener’s speed on a complex processing task, suggesting that individual differences in processing speed contribute to the ability to utilize the acoustic “windows” afforded when natural pauses occur in the competing speech. This is consistent with the “glimpsing” models of speech perception in noise proposed by, for example, Cooke (2006) and Moore (2003).

Independent of utilizing such gaps and dips in background noise, speech processing difficulties in the presence of a second speaker are due to more than the intelligibility problems caused by energetic masking (Conway, Cowan, & Bunting, 2001; Tun, O’Kane, & Wingfield, 2002; Tun & Wingfield, 1999). Related to the ability to attend to a single speaker against a background of other speakers, sometimes referred to as selective listening, is the ability to “filter” (Broadbent, 1971), or inhibit (Hasher & Zacks, 1988), non-target speech. This suppression could occur either at an early perceptual level or later, in the form of preventing off-task information from entering working memory. There are thus at least two kinds of inhibition: the inhibition necessary to keep the target speech stream segregated from the background, and a higher-level inhibition required when the listener is dealing with a background of meaningful speech. Both selective listening and the ability to rapidly switch attention between multiple speakers are especially challenging for the older adult (Tun et al., 2002).

Effects of Speech Rate and Restoring Lost Processing Time

The cognitive aging literature has isolated a number of hallmarks of the aging process, many of which would be expected to have an impact on older adults’ ability to perceptually process, and extract meaning from, natural speech. Among the earliest and most frequently cited of these hallmarks is a general slowing across a variety of perceptual, cognitive, and motor domains (Birren, 1964; Cerella, 1994; Salthouse, 1991, 1996; Welford, 1958). One would thus expect that perceptual operations that require rapid processing should be especially vulnerable in adult aging. A prime example of such rapid input, and one that presents a special challenge to the aging auditory system, is the rapidity of natural speech.

Natural speech rates vary widely, ranging from a “slow” rate of 90 words per minute (wpm) in thoughtful conversation to well over 210 wpm for a radio or TV newsreader working from a prepared script (Stine, Wingfield, & Myers, 1990). Typical conversational speech rates average between 140 wpm and 180 wpm (Miller, Grosjean, & Lomanto, 1984). In reading, one can use eye movements to control the rate of input, or reread a section of text. In the case of speech, however, any processing operations that cannot be accomplished as the speech arrives in real time must be conducted on a trace of that speech in memory.

A common way to explore the effects of speech rate on speech perception has been to use computer programs that time-compress the speech signal. The advantage of time compression of speech lies in tight experimental control, allowing investigators the ability to systematically increase speech rate in a controlled way. The alternative, asking a talker to speak rapidly, tends to reduce articulatory clarity of the speech and to disrupt normal temporal patterning, such that some studies have shown that time-compressed speech is more intelligible than equivalent speech rates produced by individuals attempting to speak rapidly (Ernestus & Warner, 2011; Gordon-Salant, Zion, & Espy-Wilson, 2014).

The sampling method of time compression is based on computer programs that periodically delete small, unnoticed segments of the overall speech signal, with the remaining segments then abutted in time. By removing these small segments to a proportionally equal degree from both the words and the linguistically-determined pauses in the speaker’s utterances, one maintains the relative temporal pattern of the sentences and discourse. Removing steady-state portions of words (e.g., vowels), while leaving rapid transients in the speech signal (e.g., stop consonants) intact, can yield better word intelligibility (Gordon et al., 2009; Schneider, Daneman, & Murphy, 2005). An advantage of uniform compression, however, is that it retains relative timing features such as the previously noted lengthening of clause-final words that signal the arrival of a major linguistic boundary in English (Shattuck-Hufnagel & Turk, 1996).
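
The sampling approach can be sketched as follows. This is a deliberately simplified uniform-deletion scheme (not the actual software used in the studies cited, which smooths the joins between retained segments): a fixed fraction of every short segment is discarded and the remainders are abutted, so words and pauses shorten proportionally and relative timing is preserved.

```python
import numpy as np

def time_compress(samples, rate, compression=0.85, segment_ms=10):
    """Uniformly time-compress a 1-D signal by keeping only the first
    `compression` fraction of each short segment and abutting the
    remainders in time. `rate` is the sampling rate in Hz."""
    seg_len = int(rate * segment_ms / 1000)   # samples per segment
    keep = int(seg_len * compression)         # samples kept per segment
    segments = [samples[i:i + seg_len][:keep]
                for i in range(0, len(samples), seg_len)]
    return np.concatenate(segments)

# One second of (placeholder) audio compressed to 85% of its duration:
rate = 16000
signal = np.random.randn(rate)
fast = time_compress(signal, rate, compression=0.85)
```

Because the deletion is applied equally to speech and to linguistically determined pauses, a sentence compressed to 85% of its duration keeps the same relative temporal pattern at a proportionally faster rate.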

Studies using time-compressed speech have shown, first, that speech rates can be increased by 10% to 15% or more without affecting comprehension for either young or older adults. As speech rates are further increased, although young adults’ ability to report speech content progressively declines, the rate of decline is differentially greater for older than younger adults (Gordon-Salant & Fitzgibbons, 1993; Konkle, Beasley, & Bess, 1977). It has also been shown that the more linguistically complex the speech, the greater will be the age separation with increasing speech rates (Stine, Wingfield, & Poon, 1986; Wingfield, Peelle, & Grossman, 2003).

In part, older adults’ vulnerability to time-compressed speech can be attributed to the reduced acoustic richness of the individual words, which challenges the older adult at the perceptual level (e.g., Heiman, Leo, Leighbody, & Bowler, 1986). In large measure, however, the older adults’ deficit lies in the loss of ordinarily available processing time at the linguistic level. This latter point was illustrated by Wingfield, Tun, Koh, and Rosen (1999), who inserted brief pauses after sentences and linguistic clauses in compressed passages.

To respect the relative importance of the linguistic boundaries in the passages, the durations of the silent periods were adjusted to be proportionally longer between sentences than between clauses within sentences. It was shown that, as long as the compression ratio was not too great, older adults’ recall of time-compressed speech returned to their performance baseline for normal, non-compressed speech. It was also the case, however, that with very fast rates (e.g., 300 wpm) recovery was not possible, even with the insertion of pauses at clause and sentence endings. Importantly, it was also shown that where one inserts the pauses (i.e., at the ends of major clauses within sentences and between sentences in discourse) is more important than the duration of the pauses (Wingfield et al., 1999).
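
The logic of the time-restoration manipulation can be sketched as follows; the boundary positions and pause durations here are hypothetical illustrations, not the values used by Wingfield et al. (1999). Silence is spliced into the compressed signal at clause and sentence boundaries, with proportionally longer pauses at sentence endings.

```python
import numpy as np

def insert_pauses(samples, rate, boundaries):
    """Insert silent pauses into a time-compressed signal at linguistic
    boundaries. `boundaries` is a list of (sample_index, pause_ms) pairs,
    with longer pauses assigned to sentence than to clause endings."""
    pieces, prev = [], 0
    for idx, pause_ms in sorted(boundaries):
        pieces.append(samples[prev:idx])
        pieces.append(np.zeros(int(rate * pause_ms / 1000)))  # silence
        prev = idx
    pieces.append(samples[prev:])
    return np.concatenate(pieces)

rate = 16000
speech = np.random.randn(2 * rate)  # 2 s of (placeholder) compressed speech
# Hypothetical boundaries: a clause ending (shorter pause) and a
# sentence ending (proportionally longer pause)
restored = insert_pauses(speech, rate, [(rate, 250), (2 * rate, 500)])
```

The key experimental finding was that restoring time at such linguistically motivated points, rather than the sheer amount of silence added, is what returned older adults’ recall to baseline.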

Listeners’ intuitive understanding of the importance of linguistic clauses as natural processing points can be revealed when young and older adults are allowed to interrupt a recorded speech passage at points of their choosing, with the goal of accurately recalling what has been heard. Whether listening to normal or time-compressed speech, young and older adults spontaneously interrupt the speech input for recall after the same major linguistic constituents. As would be implied from the above discussion, however, older adults’ recall of the segments once selected is differentially poorer than the young adults’, especially with increasing degrees of time compression (Wingfield & Lindfield, 1995; see also Fallon, Kuchinsky, & Wingfield, 2004).

As previously noted, depending on the speech materials, both younger and older adults’ speech recognition and comprehension of sentence meanings may show little loss when the speech has been compressed by 10% to 15% from its original rate. This mirrors the fact that speech recognition is ordinarily quite robust, such that both young and older adults deal successfully on a daily basis with speech that often varies both within the space of a single conversation by a single speaker, and across individuals who can speak at very different average rates, albeit typically slower than the rates used in many studies using time-compressed speech (cf. Koch & Janse, 2016; Wingfield et al., 2003).

The Impact of Age-Related Hearing Loss

Hearing loss, whether mild, moderate, or more severe, is the third most prevalent chronic medical condition among older adults, exceeded only by arthritis and hypertension (Lethbridge-Ceijku, Schiller, & Bernadel, 2004). Although many older adults retain good hearing, 40% to 45% of adults over the age of 65 show some degree of hearing impairment, with this figure rising to 83% in the population over the age of 70 (Cruickshanks et al., 1998; Goman, Reed, & Lin, 2017). For older adults with hearing loss, even when speech recognition is successful, perceptual effort is a constant in their lives, with many older adults reporting mental fatigue at the end of a day of effortful listening (Fellinger, Holzinger, Gerich, & Goldberg, 2007).

Some years ago the British psychologist Patrick Rabbitt introduced an intriguing argument: what may be called an effortfulness hypothesis (Rabbitt, 1968, 1991). As developed over the years, this is the hypothesis that the increased effort required to identify unclear speech, whether due to background noise or reduced hearing acuity, requires the allocation of extra attentional resources, thus reducing the resources that might otherwise be available to support higher-level comprehension at the linguistic level, or encoding this information in memory (McCoy et al., 2005; Schneider & Pichora-Fuller, 2000; Surprenant, 1999; Tun, McCoy, & Wingfield, 2009; Wingfield, Tun, & McCoy, 2005). The critical point to this argument is that perceptual difficulty can contribute to comprehension or recall deficits, even when it can be demonstrated that the stimulus words themselves had been correctly identified, albeit with some effort.

Although this downstream consequence of perceptual effort can also affect young adults with hearing loss (Piquado, Benichov, Brownell, & Wingfield, 2012), its effect has been shown to be magnified for older adults. This is so both because older adults have more limited attentional and working memory resources (Pichora-Fuller et al., 2016; Wingfield et al., 2005), and because the processing itself is more effortful, especially for those with reduced hearing acuity (Ayasse et al., 2017).

In addition to these issues, there is evidence from both cross-sectional and longitudinal studies that untreated hearing loss is associated with accelerated cognitive decline, even when the data are statistically controlled for age, sex, education, presence of diabetes, smoking history, and hypertension (Humes, Busey, Craig, & Kewley-Port, 2013; Lin, 2011; Lin et al., 2013; see also Gates, Anderson, McCurry, Feeney, & Larson, 2011). It should be noted that the correlations obtained in these and related studies, although statistically significant, tend to be small in magnitude, with an analysis conducted by Humes and Young (2016) suggesting that a relatively small percentage of the variance in cognitive function is explained by hearing loss. This relationship, however, should not be ignored.

One may speculate as to the degree to which the relationship between hearing loss and cognitive decline is due to continuous cognitive effort in hearing speech that takes a toll on cognitive reserves, the degree to which it may be caused or exacerbated by social isolation that often accompanies more serious hearing loss, or whether a concurrent decline in hearing acuity and cognitive function are both a reflection of an aging nervous system (see Lin, 2011, for a discussion). Whatever the proportional contribution of these or other factors, it is a public health concern that the hearing aid adoption rate among those who would benefit from hearing aids is well under 50%, even in countries where cost is fully subsidized (Kochkin, 1999; Valente & Amlani, 2017).

Conclusion


Conclusions

The aging process includes a number of cognitive changes (e.g., reduced working memory capacity, slowed processing, reduced inhibition effectiveness) that can affect older adults’ ability to comprehend speech. Speech processing occurs on multiple levels (phonemic, word, sentence, and discourse), each accompanied by a unique set of difficulties for the older listener. At the sensory level, adult aging is also often accompanied by reduced hearing acuity, as well as particular difficulty listening to speech against a noisy background. As a consequence, even when speech recognition is successful, this success may come at the cost of cognitive resources that might otherwise be available for higher-level comprehension of sentences and discourse, especially when the meaning is expressed with complex syntax. Thus, age-related effects appear primarily when speech comprehension is made more difficult and processing demands are increased by, for example, degraded acoustic features, increased syntactic complexity, or decreased context.

Although the above limitations are very real, linguistic knowledge acquired, maintained, and increased through decades of experience can to a large degree compensate for the cognitive and sensory declines that hinder the task of speech comprehension for the older adult. As in many aspects of adult aging, speech comprehension reflects a balance between loss and preservation. Barring serious neuropathology, this balance is favorable for older adults, who, with the above caveats, can be expected to maintain spoken language comprehension with a high rate of success.


Acknowledgments

The authors acknowledge support from National Institutes of Health grants R01 AG019714 from the National Institute on Aging (A.W.), T32 GM084907 (N.D.A.), and T32 NS007292 (A.R.J.).


References

Abada, S. H., Baum, S. R., & Titone, D. (2008). The effects of contextual strength on phonetic identification in younger and older listeners. Experimental Aging Research, 34, 232–250.

Adank, P., & Janse, E. (2010). Comprehension of a novel accent by young and older listeners. Psychology and Aging, 25, 736–740.

Allen, J. S., Miller, J. L., & DeSteno, D. (2003). Individual talker differences in voice onset-time. Journal of the Acoustical Society of America, 113, 544–552.

Amichetti, N. M., White, A. G., & Wingfield, A. (2016). Multiple solutions to the same problem: Strategies of sentence comprehension by older adults with impaired hearing. Frontiers in Psychology, 7(789).

Avivi-Reich, M., Daneman, M., & Schneider, B. A. (2014). How age and linguistic competence alter the interplay of perceptual and cognitive factors when listening to conversations in a noisy environment. Frontiers in Systems Neuroscience, 8(21).

Ayasse, N. A., Lash, A., & Wingfield, A. (2017). Effort not speed characterizes comprehension of spoken sentences by older adults with mild hearing impairment. Frontiers in Aging Neuroscience, 8(329).

Balota, D. A., Yap, M. J., Cortese, M. J., Hutchison, K. A., Kessler, B., Loftis, B., . . . Treiman, R. (2007). The English Lexicon Project. Behavior Research Methods, 39, 445–459.

Baum, S. R. (2003). Age differences in the influence of metrical structure on phonetic identification. Speech Communication, 39, 231–242.

Ben-David, B. M., Chambers, C. G., Daneman, M., Pichora-Fuller, M. K., Reingold, E. M., & Schneider, B. A. (2011). Effects of aging and noise on real-time spoken word recognition: Evidence from eye movements. Journal of Speech, Language, and Hearing Research, 54, 243–262.

Benichov, J., Cox, L. C., Tun, P. A., & Wingfield, A. (2012). Word recognition within a linguistic context: Effects of age, hearing acuity, verbal ability and cognitive function. Ear and Hearing, 33, 250–256.

Birren, J. (1964). The psychology of aging. Englewood Cliffs, NJ: Prentice Hall.

Broadbent, D. E. (1971). Decision and stress. New York, NY: Academic Press.

Brysbaert, M., & New, B. (2009). Moving beyond Kučera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods, 41, 977–990.

Brysbaert, M., Stevens, M., Mandera, P., & Keuleers, E. (2016). How many words do we know? Practical estimates of vocabulary size dependent on word definition, the degree of language input and the participant’s age. Frontiers in Psychology, 7(1116).

Burke, D. M., MacKay, D. G., Worthley, J. S., & Wade, E. (1991). On the tip of the tongue: What causes word finding failures in young and older adults? Journal of Memory and Language, 30, 542–579.

Butterworth, B. (1989). Lexical access in speech production. In W. Marslen-Wilson (Ed.), Lexical representation and process (pp. 108–135). Cambridge, MA: MIT Press.

Cerella, J. (1990). Aging and information-processing rate. In J. E. Birren & K. W. Schaie (Eds.), Handbook of the psychology of aging (3rd ed., pp. 201–221). New York, NY: Academic Press.

Cerella, J. (1994). Generalized slowing and Brinley plots. Journal of Gerontology: Psychological Sciences, 49, P65–P71.

Chapanis, A. (1954). The reconstruction of abbreviated printed messages. Journal of Experimental Psychology, 48, 496–510.

Chen, Q., & Mirman, D. (2015). Interaction between phonological and semantic representations: Time matters. Cognitive Science, 39, 538–558.

Christianson, K., Williams, C. C., Zacks, R. T., & Ferreira, F. (2006). Younger and older adults’ “good-enough” interpretations of garden-path sentences. Discourse Processes, 42(2), 205–238.

Cohen, G. (1979). Language comprehension in old age. Cognitive Psychology, 11, 412–429.

Cohen, G., & Faulkner, D. (1983). Word recognition: Age differences in contextual facilitation effects. British Journal of Psychology, 74, 239–251.

Connine, C. M., Blasko, D. G., & Hall, M. (1991). Effects of subsequent sentence context in auditory word recognition: Temporal and linguistic constraints. Journal of Memory and Language, 30, 234–250.

Conway, A. R. A., Cowan, N., & Bunting, M. F. (2001). The cocktail party phenomenon revisited: The importance of working memory capacity. Psychonomic Bulletin and Review, 8, 331–335.

Cooke, A., Zurif, E. B., DeVita, C., Alsop, D., Koenig, P., Detre, J., . . . Grossman, M. (2002). Neural basis for sentence comprehension: Grammatical and short-term memory components. Human Brain Mapping, 15, 80–94.

Cooke, M. (2006). A glimpsing model of speech perception in noise. Journal of the Acoustical Society of America, 119, 1562–1573.

Cruickshanks, K. J., Wiley, T. L., Tweed, T. S., Klein, B. E. K., Klein, R., Mares-Perlman, J. A., & Nondahl, D. M. (1998). Prevalence of hearing loss in older adults in Beaver Dam, Wisconsin: The epidemiology of hearing loss study. American Journal of Epidemiology, 148, 879–886.

Daniloff, R., & Hammarberg, R. (1973). On defining coarticulation. Journal of Phonetics, 1, 185–194.

DeCaro, R., Peelle, J. E., Grossman, M., & Wingfield, A. (2016). The two sides of sensory-cognition interactions: Effects of age, hearing acuity, and working memory span on sentence comprehension. Frontiers in Psychology, 7(236).

Dodd, B. (1977). The role of vision in the perception of speech. Perception, 6, 31–40.

Dubno, J. R., Horwitz, A. R., & Ahlstrom, J. B. (2002). Benefit of modulated maskers for speech recognition by younger and older adults with normal hearing. Journal of the Acoustical Society of America, 111, 2897–2907.

Eisenberg, L. S., Dirks, D. D., & Bell, T. S. (1995). Speech recognition in amplitude modulated noise in listeners with normal and listeners with impaired hearing. Journal of Speech and Hearing Research, 38, 222–233.

Ernestus, M., & Warner, N. (2011). An introduction to reduced pronunciation variants. Journal of Phonetics, 39, 253–260.

Fallon, M., Kuchinsky, S., & Wingfield, A. (2004). The salience of linguistic clauses in young and older adults’ running memory for speech. Experimental Aging Research, 30, 359–371.

Fallon, M., Peelle, J. E., & Wingfield, A. (2006). Spoken sentence processing in young and older adults modulated by task demands: Evidence from self-paced listening. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 61(1), P10–P17.

Fellinger, J., Holzinger, D., Gerich, J., & Goldberg, D. (2007). Mental distress and quality of life in the hard of hearing. Acta Psychiatrica Scandinavica, 115, 243–245.

Ferreira, F. (2003). The misinterpretation of noncanonical sentences. Cognitive Psychology, 47, 164–203.

Ferreira, F., Bailey, K. G., & Ferraro, V. (2002). Good-enough representations in language comprehension. Current Directions in Psychological Science, 11, 11–15.

Ferreira, F., Henderson, J. M., Anes, M. D., Weeks, P. A., & McFarlane, D. K. (1996). Effects of lexical frequency and syntactic complexity in spoken-language comprehension: Evidence from the auditory moving-window technique. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(2), 324–335.

Ferreira, F., & Patson, N. (2007). The “good enough” approach to language comprehension. Language and Linguistics Compass, 1, 71–83.

Fisk, A. D., & Fisher, D. L. (1994). Brinley plots and theories of aging: The explicit, muddled, and implicit debates. Journal of Gerontology: Psychological Sciences, 49, P81–P89.

Frank, S. L., & Bod, R. (2011). Insensitivity of the human sentence-processing system to hierarchical structure. Psychological Science, 22, 829–834.

Ganong, W. F. (1980). Phonetic categorization in auditory word perception. Journal of Experimental Psychology: Human Perception and Performance, 6, 110–125.

Gates, G. A., Anderson, M. L., McCurry, S. M., Feeney, M. P., & Larson, E. B. (2011). Central auditory dysfunction as a harbinger of Alzheimer dementia. Archives of Otolaryngology-Head and Neck Surgery, 137, 390–395.

Gibson, E., Bergen, L., & Piantadosi, S. T. (2013). Rational integration of noisy evidence and prior semantic expectations in sentence interpretation. Proceedings of the National Academy of Sciences, 110, 8051–8056.

Goldman-Eisler, F. (1968). Psycholinguistics: Experiments in spontaneous speech. New York, NY: Academic Press.

Goman, A. M., Reed, N. S., & Lin, F. R. (2017). Addressing estimated hearing loss in 2060. JAMA Otolaryngology-Head and Neck Surgery, 143, 733–734.

Gordon, M. S., Daneman, M., & Schneider, B. A. (2009). Comprehension of speeded discourse by younger and older listeners. Experimental Aging Research, 35, 277–296.

Gordon-Salant, S. (1987). Age-related differences in speech recognition performance as a function of test format and paradigm. Ear and Hearing, 8, 270–276.

Gordon-Salant, S., & Fitzgibbons, P. J. (1993). Temporal factors and speech recognition performance in young and elderly listeners. Journal of Speech and Hearing Research, 36, 1276–1285.

Gordon-Salant, S., Yeni-Komshian, G. H., Fitzgibbons, P. J., & Barrett, J. (2006). Age-related differences in identification and discrimination of temporal cues in speech segments. Journal of the Acoustical Society of America, 119, 2455–2466.

Gordon-Salant, S., Zion, D. J., & Espy-Wilson, C. (2014). Recognition of time-compressed speech does not predict recognition of natural fast-rate speech by older listeners. Journal of the Acoustical Society of America, 136, EL268–EL274.

Grosjean, F. (1980). Spoken word recognition processes and the gating paradigm. Perception and Psychophysics, 28, 267–283.

Grosjean, F. (1985). The recognition of words after their acoustic offset: Evidence and implications. Perception and Psychophysics, 38, 299–310.

Hale, S., & Myerson, J. (1996). Experimental evidence for differential slowing in the lexical and nonlexical domains. Aging, Neuropsychology, & Cognition, 3, 154–165.

Hasher, L., & Zacks, R. T. (1988). Working memory, comprehension, and aging: A review and a new view. The Psychology of Learning and Motivation, 22, 193–225.

Heiman, G. W., Leo, R. J., Leighbody, G., & Bowler, K. (1986). Word intelligibility decrements and the comprehension of time-compressed speech. Perception and Psychophysics, 40, 407–411.

Hillenbrand, J., Getty, L. A., Clark, M. J., & Wheeler, K. (1995). Acoustic characteristics of American English vowels. Journal of the Acoustical Society of America, 97, 3099–3111.

Howes, D. (1957). On the relation between the intelligibility and frequency of occurrence of English words. Journal of the Acoustical Society of America, 29, 296–305.

Howes, D. H., & Solomon, R. L. (1951). Visual duration threshold as a function of word-probability. Journal of Experimental Psychology, 41, 401–410.

Hoyte, K. J., Brownell, H., & Wingfield, A. (2009). Components of speech prosody and their use in detection of syntactic structure by older adults. Experimental Aging Research, 35, 129–151.

Humes, L. E. (1996). Speech understanding in the elderly. Journal of the American Academy of Audiology, 7, 161–167.

Humes, L. E., Busey, T. A., Craig, J., & Kewley-Port, D. (2013). Are age-related changes in cognitive function driven by age-related changes in sensory processing? Attention, Perception, and Psychophysics, 75, 508–524.

Humes, L. E., & Young, L. A. (2016). Sensory-cognitive interactions in older adults. Ear and Hearing, 37, 52S–61S.

Jacquemot, C., & Scott, S. K. (2006). What is the relationship between phonological short-term memory and speech processing? Trends in Cognitive Sciences, 10(11), 480–486.

Janse, E., & Ernestus, M. (2011). The roles of bottom-up and top-down information in the recognition of reduced speech: Evidence from listeners with normal and impaired hearing. Journal of Phonetics, 39(3), 330–343.

Janse, E., & Jesse, A. (2014). Working memory affects older adults’ use of context in spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 1842–1862.

Janse, E., & Newman, R. S. (2013). Identifying nonwords: Effects of lexical neighborhoods, phonotactic probability, and listener characteristics. Language and Speech, 56, 421–441.

Johns, A. R., Myers, E. B., & Skoe, E. (2018). Sensory and cognitive contributions to age‐related changes in spoken word recognition. Language and Linguistics Compass, 12(2), e12272.

Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99, 122–149.

Kempler, D., & Zelinski, E. M. (1994). Language in dementia and normal aging. In F. A. Huppert, C. Brayne, & D. W. O’Connor (Eds.), Dementia and normal aging (pp. 331–365). New York, NY: Cambridge University Press.

Kintsch, W. (1994). Text comprehension, memory, and learning. American Psychologist, 49, 294–303.

Kjelgaard, M. K., Titone, D., & Wingfield, A. (1999). The influence of prosodic structure on the interpretation of temporary syntactic ambiguity by young and elderly listeners. Experimental Aging Research, 25, 187–207.

Koch, X., & Janse, E. (2016). Speech rate effects on the processing of conversational speech across the adult lifespan. Journal of the Acoustical Society of America, 139, 1618–1636.

Kochkin, S. (1999). “Baby Boomers” spur growth in potential market, but penetration rate declines. Hearing Journal, 52, 33–48.

Konkle, D. F., Beasley, D. S., & Bess, F. H. (1977). Intelligibility of time-compressed speech in relation to chronological aging. Journal of Speech and Hearing Research, 20, 108–115.

Lahar, C. J., Tun, P. A., & Wingfield, A. (2004). Sentence-final word completion norms for young, middle-aged, and older adults. Journal of Gerontology: Psychological Sciences, 59B(1), P7–P10.

Lash, A., Rogers, C. S., Zoller, A., & Wingfield, A. (2013). Expectation and entropy in spoken word recognition: Effects of age and hearing acuity. Experimental Aging Research, 39, 235–253.

Lash, A., & Wingfield, A. (2014). A Bruner-Potter effect in audition? Spoken word recognition in adult aging. Psychology and Aging, 29, 907.

Lethbridge-Ceijku, M., Schiller, J. S., & Bernadel, L. (2004). Summary health statistics for U.S. adults: National Health Interview Survey. Vital Health Statistics, 10, 1–151.

Levy, R. (2008). Expectation-based syntactic comprehension. Cognition, 106, 1126–1177.

Liberman, A. M., Harris, K. S., Hoffman, H. S., & Griffith, B. C. (1957). The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54, 358–368.

Lin, F. R. (2011). Hearing loss and cognition among older adults in the United States. Journal of Gerontology: Medical Sciences, 66A, 1131–1136.

Lin, F. R., Yaffe, K., Xia, J., Xue, Q.-L., Harris, T. B., Purchase-Helzner, E., . . . Simonsick, E. M. (2013). Hearing loss and cognitive decline in older adults. JAMA Internal Medicine, 173, 293–299.

Lindblom, B., Brownlee, S., Davis, B., & Moon, S. J. (1992). Speech transforms. Speech Communication, 11, 357–368.

Lisker, L., & Abramson, A. S. (1964). A cross-language study of voicing in initial stops: Acoustical measurements. Word, 20, 384–422.

Luce, P. A., & Pisoni, D. B. (1998). Recognizing spoken words: The neighborhood activation model. Ear and Hearing, 19, 1–36.

Lustig, C., Hasher, L., & Zacks, R. T. (2007). Inhibitory deficit theory: Recent developments in a “new view.” In D. S. Gorfein & C. M. MacLeod (Eds.), The place of inhibition in cognition (pp. 145–162). Washington, DC: American Psychological Association.

Mann, V. A., & Repp, B. H. (1980). Influence of vocalic context on perception of the [f]-[s] distinction. Perception and Psychophysics, 28, 213–228.

Mattys, S. L., & Scharenborg, O. (2014). Phoneme categorization and discrimination in younger and older adults: A comparative analysis of perceptual, lexical, and attentional factors. Psychology and Aging, 29, 150–162.

May, C. P., Zacks, R. T., Hasher, L., & Multhaup, K. S. (1999). Inhibition in the processing of garden-path sentences. Psychology and Aging, 14(2), 304–313.

Maylor, E. A. (1990). Recognizing and naming faces: Aging, memory retrieval, and the tip of the tongue state. Journal of Gerontology, 7, 317–323.

McCabe, D. P., Roediger, H. L., III., McDaniel, M. A., Balota, D. A., & Hambrick, D. Z. (2010). The relationship between working memory capacity and executive functioning: Evidence for a common executive attention construct. Neuropsychology, 24, 222–243.

McClelland, J. L., & Elman, J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18(1), 1–86.

McCoy, S. L., Tun, P. A., Cox, L. C., Colangelo, M., Stewart, R. A., & Wingfield, A. (2005). Hearing loss and perceptual effort: Downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology, 58A, 22–33.

Meier, R. P. (1991). Language acquisition by deaf children. American Scientist, 79, 60–70.

Miller, G. A. (1951). Language and communication. New York, NY: McGraw-Hill.

Miller, J. L., Grosjean, F., & Lomanto, C. (1984). Articulation rate and its variability in spontaneous speech: A reanalysis and some implications. Phonetica, 41, 215–225.

Miller, L. M. S., Cohen, J. A., & Wingfield, A. (2006). Knowledge reduces demands on working memory during reading. Memory and Cognition, 34, 1355–1367.

Moore, B. (2003). Temporal integration and context effects in hearing. Journal of Phonetics, 31, 563–574.

Morton, J. (1969). Interaction of information in word recognition. Psychological Review, 76, 165–178.

Murphy, D. R., Daneman, M., & Schneider, B. A. (2006). Why do older adults have difficulty following conversations? Psychology and Aging, 21(1), 49.

Myerson, J., Adams, D. R., Hale, S., & Jenkins, L. (2003). Analysis of group differences in processing speed: Brinley plots, Q-Q plots, and other conspiracies. Psychonomic Bulletin and Review, 10, 224–237.

Nicholas, M., Barth, C., Obler, L. K., Au, R., & Albert, M. L. (1997). Naming in normal aging and dementia of the Alzheimer’s type. In H. Goodglass & A. Wingfield (Eds.), Anomia: Neuroanatomical and cognitive correlates (pp. 166–188). San Diego, CA: Academic Press.

Novick, J. M., Trueswell, J. C., & Thompson-Schill, S. L. (2005). Cognitive control and parsing: Re-examining the role of Broca’s area in sentence comprehension. Cognitive, Affective, & Behavioral Neuroscience, 5, 263–281.

Obler, L. K., Fein, D., Nicholas, M., & Albert, M. L. (1991). Auditory comprehension in aging: Decline in syntactic processing. Applied Psycholinguistics, 12, 433–452.

Oldfield, R. C. (1966). Things, words and the brain. Quarterly Journal of Experimental Psychology, 18, 340–353.

Padó, U., Crocker, M. W., & Keller, F. (2009). A probabilistic model of semantic plausibility in sentence processing. Cognitive Science, 33, 794–838.

Payne, B. R., Gross, A. L., Parisi, J. M., Sisco, S. M., Stine-Morrow, E. A., Marsiske, M., & Rebok, G. W. (2014). Modelling longitudinal changes in older adults’ memory for spoken discourse: Findings from the ACTIVE cohort. Memory, 22, 990–1001.

Peelle, J. E., Troiani, V., Wingfield, A., & Grossman, M. (2010). Neural processing during older adults’ comprehension of spoken sentences: Age differences in resource allocation and connectivity. Cerebral Cortex, 20, 773–782.

Perry, A. R., & Wingfield, A. (1994). Contextual encoding by young and elderly adults as revealed by cued and free recall. Aging and Cognition, 1, 120–139.

Peterson, G. E., & Barney, H. L. (1952). Control methods used in a study of the vowels. The Journal of the Acoustical Society of America, 24, 175–184.

Pichora-Fuller, M. K., Kramer, S. E., Eckert, M. A., Edwards, B., Hornsby, B. W., Humes, L. E., . . . Naylor, G. (2016). Hearing impairment and cognitive energy: The framework for understanding effortful listening (FUEL). Ear and Hearing, 37, 5S–27S.

Pichora-Fuller, M. K., Schneider, B. A., & Daneman, M. (1995). How young and old adults listen to and remember speech in noise. The Journal of the Acoustical Society of America, 97, 593–608.

Piquado, T., Benichov, J. I., Brownell, H., & Wingfield, A. (2012). The hidden effect of hearing acuity on speech recall, and compensatory effects of self-paced listening. International Journal of Audiology, 51, 576–583.

Pollack, I., & Pickett, J. M. (1963). The intelligibility of excerpts from conversation. Language and Speech, 6, 165–171.

Postle, B. R. (2006). Working memory as an emergent property of the mind and brain. Neuroscience, 139, 23–38.

Rabbitt, P. (1968). Channel capacity, intelligibility, and immediate memory. Quarterly Journal of Experimental Psychology, 20, 241–248.

Rabbitt, P. (1991). Mild hearing loss can cause apparent memory failures which increase with age and reduce with IQ. Acta Otolaryngologica, 476, 167–176.

Revill, K. P., & Spieler, D. H. (2012). The effect of lexical frequency on spoken word recognition in young and older listeners. Psychology and Aging, 27, 80–87.

Rogers, C. S., Jacoby, L. L., & Sommers, M. S. (2012). Frequent false hearing by older adults: The role of age differences in metacognition. Psychology and Aging, 27, 33–45.

Rogers, C. S., & Wingfield, A. (2015). Stimulus-independent semantic bias misdirects word recognition in older adults. Journal of the Acoustical Society of America, 138, EL26.

Rönnberg, J., Lunner, T., Zekveld, A., Sörqvist, P., Danielsson, H., Lyxell, B., . . . Rudner, M. (2013). The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Frontiers in Systems Neuroscience, 7(31).

Salthouse, T. A. (1991). Theoretical perspectives on cognitive aging. Hillsdale, NJ: Erlbaum.

Salthouse, T. A. (1994). The aging of working memory. Neuropsychology, 8, 535–543.

Salthouse, T. A. (1996). The processing-speed theory of adult age differences in cognition. Psychological Review, 103, 403–428.

Schneider, B. A., Daneman, M., & Murphy, D. R. (2005). Speech comprehension difficulties in older adults: Cognitive slowing or age-related changes in hearing? Psychology and Aging, 20, 261–271.

Schneider, B. A., Daneman, M., Murphy, D. R., & See, S. K. (2000). Listening to discourse in distracting settings: The effects of aging. Psychology and Aging, 15, 110–125.

Schneider, B. A., Daneman, M., & Pichora-Fuller, M. K. (2002). Listening in aging adults: From discourse comprehension to psychoacoustics. Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale, 56, 139–152.

Schneider, B. A., & Pichora-Fuller, M. K. (2000). Implications of perceptual deterioration for cognitive aging research. In F. I. M. Craik & T. A. Salthouse (Eds.), Handbook of aging and cognition (2nd ed., pp. 155–220). Mahwah, NJ: Erlbaum.

Senghas, A., & Coppola, M. (2001). Children creating language: How Nicaraguan Sign Language acquired a spatial grammar. Psychological Science, 12(4), 323–328.

Shannon, C. E. (1951). Prediction and entropy in printed English. Bell System Technical Journal, 30, 50–64.

Shattuck-Hufnagel, S., & Turk, A. E. (1996). A prosody tutorial for investigators of auditory sentence processing. Journal of Psycholinguistic Research, 25, 193–247.

Sommers, M. S. (1996). The structural organization of the mental lexicon and its contribution to age-related declines in spoken-word recognition. Psychology and Aging, 11, 333–341.

Sommers, M. S., & Danielson, S. M. (1999). Inhibitory processes and spoken word recognition in young and older adults: The interaction of lexical competition and semantic context. Psychology and Aging, 14, 458–472.

Stevenson, R. A., Nelms, C. E., Baum, S. H., Zurkovsky, L., Barense, M. D., Newhouse, P. A., & Wallace, M. T. (2015). Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition. Neurobiology of Aging, 36, 283–291.

Stine, E. A. L., Wingfield, A., & Myers, S. D. (1990). Age differences in processing information from television news: The effects of bisensory augmentation. Journal of Gerontology: Psychological Sciences, 45, P1–P8.

Stine, E. L., Wingfield, A., & Poon, L. W. (1986). How much and how fast: Rapid processing of spoken language in later adulthood. Psychology and Aging, 1, 303–311.

Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26, 212–215.

Surprenant, A. M. (1999). The effect of noise on memory for spoken syllables. International Journal of Psychology, 34, 328–333.

Titone, D., Prentice, K. J., & Wingfield, A. (2000). Resource allocation during spoken discourse processing: Effects of age and passage difficulty as revealed by self-paced listening. Memory & Cognition, 28, 1029–1040.

Treisman, M., Faulkner, A., Naish, P. L., & Rosner, B. S. (1995). Voice-onset time and tone-onset time: The role of criterion-setting mechanisms in categorical perception. The Quarterly Journal of Experimental Psychology, 48, 334–366.

Tun, P. A., McCoy, S., & Wingfield, A. (2009). Aging, hearing acuity, and the attentional costs of effortful listening. Psychology and Aging, 24, 761–766.

Tun, P. A., O’Kane, G., & Wingfield, A. (2002). Distraction by competing speech in young and older adult listeners. Psychology and Aging, 17, 453–467.

Tun, P. A., & Wingfield, A. (1999). One voice too many: Adult age differences in language processing with different types of distracting sounds. Journal of Gerontology: Psychological Sciences, 54B, P317–P327.

Tye-Murray, N., Sommers, M., Spehar, B., Myerson, J., Hale, S., & Rose, N. S. (2008). Auditory-visual discourse comprehension by older and young adults in favorable and unfavorable conditions. International Journal of Audiology, 47(Suppl. 2), S31–S37.

Tye-Murray, N., Spehar, B., Myerson, J., Hale, S., & Sommers, M. (2016). Lipreading and audiovisual speech recognition across the adult lifespan: Implications for audiovisual integration. Psychology and Aging, 31, 380–389.

Valente, M., & Amlani, A. M. (2017). Cost as a barrier for hearing aid adoption. JAMA Otolaryngology-Head and Neck Surgery, 143, 647–648.

Van Dijk, T. A., & Kintsch, W. (1983). Strategies of discourse comprehension. New York, NY: Academic Press.

Van Engen, K. J., & Peelle, J. E. (2014). Listening effort and accented speech. Frontiers in Human Neuroscience, 8(577).

Verhaeghen, P. (2003). Aging and vocabulary score: A meta-analysis. Psychology and Aging, 18, 332–339.

Verschuure, J., & Brocaar, M. P. (1983). Intelligibility of interrupted meaningful and nonsense speech with and without intervening noise. Perception and Psychophysics, 33, 232–240.

Wagener, K. C., Brand, T., & Kollmeier, B. (2006). The role of silent intervals for sentence intelligibility in fluctuating noise in hearing-impaired listeners. International Journal of Audiology, 45, 26–33.

Wang, X., & Humes, L. E. (2010). Factors influencing recognition of interrupted speech. Journal of the Acoustical Society of America, 128, 2100–2111.

Welford, A. T. (1958). Ageing and human skill. Oxford, UK: Oxford University Press.

Wingfield, A. (2016). The evolution of models of working memory and cognitive resources. Ear and Hearing, (Suppl. 1), 35S–43S.Find this resource:

Wingfield, A., Aberdeen, J. S., & Stine, E. A. L. (1991). Word onset gating and linguistic context in spoken word recognition by young and elderly adults. Journal of Gerontology: Psychological Sciences, 46, P127–P129.

Wingfield, A., Alexander, A. H., & Cavigelli, S. (1994). Does memory constrain utilization of top-down information in spoken word recognition? Evidence from normal aging. Language and Speech, 37, 221–235.

Wingfield, A., & Grossman, M. (2006). Language and the aging brain: Patterns of neural compensation revealed by functional brain imaging. Journal of Neurophysiology, 96, 2830–2839.

Wingfield, A., & Lash, A. (2016). Audition and language comprehension in adult aging: Stability in the face of change. In K. W. Schaie & S. L. Willis (Eds.), Handbook of the psychology of aging (8th ed., pp. 165–185). London, UK: Elsevier.

Wingfield, A., & Lindfield, K. C. (1995). Multiple memory systems in the processing of speech: Evidence from aging. Experimental Aging Research, 21, 101–121.

Wingfield, A., McCoy, S. L., Peelle, J. E., Tun, P. A., & Cox, L. C. (2006). Effects of adult aging and hearing loss on comprehension of rapid speech varying in syntactic complexity. Journal of the American Academy of Audiology, 17, 487–497.

Wingfield, A., Peelle, J. E., & Grossman, M. (2003). Speech rate and syntactic complexity as multiplicative factors in speech comprehension by young and older adults. Aging, Neuropsychology, and Cognition, 10, 310–322.

Wingfield, A., & Stine-Morrow, E. A. L. (2000). Language and speech. In F. I. M. Craik & T. A. Salthouse (Eds.), Handbook of aging and cognition (2nd ed., pp. 359–416). Mahwah, NJ: Erlbaum.

Wingfield, A., Tun, P. A., Koh, C. K., & Rosen, M. J. (1999). Regaining lost time: Adult aging and the effect of time restoration on recall of time-compressed speech. Psychology and Aging, 14, 380–389.

Wingfield, A., Tun, P. A., & McCoy, S. L. (2005). Hearing loss in older adulthood: What it is and how it interacts with cognitive performance. Current Directions in Psychological Science, 14, 144–148.

Yeni-Komshian, G. H., & Soli, S. D. (1981). Recognition of vowels from information in fricatives: Perceptual evidence of fricative-vowel coarticulation. Journal of the Acoustical Society of America, 70, 966–975.

Zacks, R. T., Hasher, L., Doren, B., Hamm, V., & Attig, M. S. (1987). Encoding and memory of explicit and implicit information. Journal of Gerontology, 42(4), 418–422.

Zacks, R. T., Hasher, L., & Li, K. Z. H. (1999). Human memory. In F. I. M. Craik & T. A. Salthouse (Eds.), Handbook of aging and cognition (pp. 200–230). Mahwah, NJ: Erlbaum.