Author Archives: Peter Richtsmeier

About Peter Richtsmeier

Postdoctoral researcher in Speech, Language, and Hearing Sciences at Purdue University.

Menn and Matthei (1992) The “two-lexicon” account of child phonology (Part 2)

In the previous post, I described Menn and Matthei’s (hereafter MM) assessment of progress on the two-lexicon model. They highlight several advantages of the model, but also note problems, including the apparent competition between children’s “selection rules” (rules specific to the output lexicon), as well as non-deterministic cross-word patterns. To combat these and other problems, MM suggest that the formalism of the two-lexicon model migrate from a generative perspective to a more connectionist one. At this point, they make a very handy list of the key generalizations they would like to capture with a revised, connectionist two-lexicon model, or with any model of child speech production for that matter. I have restated them here, while keeping MM’s original groupings.

Reduction of Information

  1. Children recognize more words than they can say
  2. Children recognize more phonemic contrasts than they can realize in speech
  3. Early productions tend to cluster together in terms of phonetic properties
  4. Early productions also tend to contain a limited set of phonetic elements

Mapping

  1. Children’s productions appear to be simplified (compared to adult forms) and often appear systematic (many words share a pattern)

Inertia of the System

  1. Early, frequently produced words may retain a high level of fidelity, resulting in “phonological idioms” that are more accurate than more recently acquired production forms
  2. Changes in systematic productions tend to happen to newly acquired words; more established words are more resistant to change

We could also add to this list MM’s frequent observation that imitated production forms tend to be much more like adult forms.

To provide a general feel for a connectionist model of early speech production, MM lay out the “initial settings” for such a model. With respect to connections, MM posit simultaneous and sequential connections. Simultaneous connections link the speech modalities of motor commands, auditory percepts, and kinesthetic sensation (of one’s own productions). The three modalities, motor/auditory/kinesthetic or MAK, must be wired together efficiently by learning. Sequential connections are within-modality connections that represent change over time. So, a simultaneous connection might link together the feeling, action plan, and acoustic record of a [b], while sequential acoustic connections might link the [b] burst to the following formants of an [a] vowel in the syllable [ba]. Although MM do not make this explicit, it appears that sequences of connections also represent stored forms, or words.
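
To make the two kinds of connections concrete, here is a minimal sketch in Python of how they might be represented. MM give no formal specification, so the node structure, the labels, and the linking functions below are my own illustrative guesses, not anything from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A unit in one speech modality: motor (M), auditory (A), or kinesthetic (K)."""
    modality: str
    label: str
    simultaneous: list = field(default_factory=list)  # cross-modality links
    sequential: list = field(default_factory=list)    # within-modality links over time

def link_simultaneous(nodes):
    """Wire together co-occurring events across the MAK modalities."""
    for a in nodes:
        for b in nodes:
            if a is not b and a.modality != b.modality:
                a.simultaneous.append(b)

def link_sequential(earlier, later):
    """A within-modality link representing change over time."""
    assert earlier.modality == later.modality
    earlier.sequential.append(later)

# The syllable [ba]: a simultaneous MAK bundle for [b], followed
# sequentially, within the auditory modality, by the [a] formants.
b_motor = Node('M', 'lip-closure gesture')
b_audio = Node('A', '[b] burst')
b_kines = Node('K', 'lip-contact sensation')
a_audio = Node('A', '[a] formants')

link_simultaneous([b_motor, b_audio, b_kines])
link_sequential(b_audio, a_audio)
```

On this picture, a stored word would just be a chain of sequential links anchored to simultaneous MAK bundles, which is how I read MM’s implicit treatment of lexical storage.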

Next, MM lay out a series of what I will call linking mechanisms. First, sequential auditory patterns can be stored and learned by attending to adult speech. Second, there is an internal feedback loop, which MM relate to babbling; its basic predictive property allows the model to guess how a sequential motor pattern will sound, modify the pattern, and observe whether the result is the same or different (essentially a supervised learning component, with the stored, “correct” adult forms serving as the training signal). Third, imitation will result in links between stored adult-produced auditory sequences and the child’s own MAK sequences. Fourth, stored adult sequences will be associated with real-world states (meanings), which then leads to associations between the child’s own MAK sequences and real-world states.

MM give a fair amount of attention to the idea that adults might assist in the development of a child’s MAK sequences. The basic idea is that an adult mimics the phonetic properties of a child’s utterance (absolute pitch, formant values, etc.). Here’s an explanatory quote: “A purely sound-based imitation of the child by the adult…will produce links between the child’s internal MAK associations and the sound of the adult’s voice, the child’s innate normalization abilities should be enhanced.”

Once normalization is established (although I’m not sure why it needs to be established first in this proposal), the child might seek to produce words in a more adult-like fashion. MM propose that social factors like semantically contingent responding by parents (Snow, 1977) could provide such a mechanism. MM conclude by saying that their connectionist model is not fully developed, and that many attractive qualities of the old two-lexicon model, like the selection rules, have been replaced by vaguer concepts. However, they believe that the absolute boundaries of the input and output lexicons in the original model simply do not serve us, and we should abandon them.

My primary concern with the connectionist model that MM propose is that it seems to completely abandon the original problem that the two-lexicon model addresses. Looking back at their list of key generalizations, I would single out two, neither of which the connectionist model clearly addresses. First, how is it that children can recognize more words/sounds than they can produce? Second, why are children’s early productions both simplified and systematic?

It’s difficult to see how the proposed connectionist model makes headway on these problems. In fact, it seems as if they have been replaced with several other problems in the study of child speech. The discussion of speech normalization is a perfect example. Given general agreement that toddlers have a good understanding of the perceptual form of their native language, this problem can be assumed to be solved by the time production begins. For example, I know of no evidence that children ever attempt to imitate the absolute values of any acoustic property of adult forms, which seems to be a major problem for any proposal built around normalization.

To conclude, I generally see the box-and-arrow iteration of the two-lexicon model as preferable, if only for its specificity. Although I agree with MM that the box-and-arrow model could be replaced advantageously by a connectionist model, the advantages are simply not clear enough here. In the future, I will present a more recent attempt at a connectionist network by Menn and colleagues, which may address the perception-production disparity more directly.

 

REFERENCES

Snow, C. E. (1977). The development of conversation between mothers and babies. Journal of Child Language, 4, 1-13.

Menn and Matthei (1992) The “two-lexicon” account of child phonology (Part 1)

Menn and Matthei (hereafter MM) begin with some information about the historical development of the two-lexicon model. They quote a paper by Ferguson, Peizer, and Weeks (1973), who noted a general human tendency to know more words than are typically said. That is, both children and adults know words that they rarely or never say. Thus, there seems to be a set of lexical representations for which the details of production are either murky or nonexistent, and we might hypothesize a split between input and output representations (Ingram, 1974), in other words, two separate lexicons.

So long as there is consistency in children’s pronunciations, however, separate lexicons are unnecessary. If there is a regular mapping between the input representation (presumed to be identical to the adult form) and the output representation, then a set of rewrite rules that capture the mapping is sufficient, and no output lexicon is needed. However, children are rarely consistent, and MM provide the example of two words (“down” and “stone”) that move in and out of a nasal harmony rule: They start out with no harmony ([dawn] and [don], respectively); the harmony rule then applies to other words (/binz/ –> [minz] and /dæns/ –> [næns]); finally, the harmony rule overtakes “down” and “stone”. With inconsistent mapping across similar words, rewrite rules are not helpful, or at least they require arbitrary exceptions. Granted, two-lexicon models must also have lexical exceptions, but there are other advantages.
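
To see why inconsistent mappings are troublesome, here is a toy Python sketch of the one-lexicon alternative: adult-like input forms plus a rewrite rule, with the hold-out words handled as arbitrary lexical exceptions. The ASCII transcriptions and the exception set are my simplifications of MM’s example.

```python
import re

def nasal_harmony(form):
    """Toy rewrite rule: an initial oral stop becomes the corresponding
    nasal when a nasal follows later in the word (binz -> minz)."""
    oral_to_nasal = {'b': 'm', 'd': 'n'}
    if form[0] in oral_to_nasal and re.search(r'[mn]', form[1:]):
        return oral_to_nasal[form[0]] + form[1:]
    return form

# While "down" and "stone" resist the rule, the one-lexicon account
# must list them as exceptions; once the rule overtakes them, the
# exception list has to be emptied again. Nothing principled decides
# which words are in the list at any given moment.
EXCEPTIONS = {'dawn', 'don'}

def produce(form):
    return form if form in EXCEPTIONS else nasal_harmony(form)

print(produce('binz'))  # minz
print(produce('dans'))  # nans
print(produce('dawn'))  # dawn, for now
```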

One of these advantages is that arbitrary exceptions in a one-lexicon system lead to more serious problems. The example is from Smith (1973) as interpreted by Macken (1980). The data come from the child Amahl, who displayed a pattern of velar harmony (/tr^k/ –> [kr^k]). Eventually, the pattern gave way to accurate production of alveolars, but one word, “took”, persisted as a regressive idiom, [gUk].

Macken assumes that this is possible because Amahl must have learned /gUk/ as the underlying form. Thus, when the harmony rule disappeared, /gUk/ would still surface as if harmony applied. As MM point out, however, this assumes that the child perceives “took” as /gUk/, which would lead us to expect that Amahl would not understand “took” as produced correctly. This seems highly unlikely, especially given our present-day understanding of children’s perceptual abilities. Furthermore, the example above with “down” and “stone” resisting a nasal harmony rule does not make sense if we assume exceptions are cases where the child has learned his own productions as underlying forms. At the very least, it would suggest that the underlying forms of words where nasal harmony does apply are perceived as if they had initial nasals. That defeats the advantage of the one-lexicon model, however, where we assume child and adult underlying forms are the same.

An output lexicon is helpful in this case because it provides a space for pronunciation representations that may be linked by a rule that operates across words or by arbitrary connections between input and output forms. Just as importantly, the output lexicon still allows children to be able to accurately perceive those words. That is, the output lexicon provides a storage facility for consistent or variable output representations while allowing for stable and accurate perception.

Despite the advantages, MM detail several problems they see with the two-lexicon model. First, it appears that selection rules—or the rules that lead to childlike forms in the output lexicon—sometimes operate over two words. This is problematic, however, if we adopt the standard assumption that combining words is the job of the syntax and that word combinations do not exist in the lexicon.

Another problem is that selection rules may sometimes be in competition with one another for a given word. MM give the example of productions by the child Daniel (also discussed by Menn in previous papers, I believe) of “boot” and “boat”, which are variably produced as [bup-dut] and [bop-dot], respectively. Thus, there appear to be separate labial harmony and alveolar harmony rules that compete for the realization of the same word. MM point out that there isn’t any formalism in the two-lexicon model that allows for rule competition.

Other problems are given through the examination of daily changes in a couple of diary studies. For example, a child, Jacob, exhibited something like a vowel convergence, where [i] was produced like [ε]. So “tea” was first produced as [di] and then as [dεi]. “Key” was produced first as [ki], then as [xiε], and finally as [xε]. At the same time, words with a mid front vowel switched between a low and high specification: “tape” was produced with both [i] and [e]. Ultimately, MM conclude that these similar words must be influencing each other in terms of production, but in a very unruly way. Similar cases are given for stress placement on two-syllable words beginning with [k] and for over-application of the plural/3rd-singular/possessive morpheme.

I’ll stop here for now. My next post will summarize what MM want to explain and then review the connectionist model that MM propose as a revised two-lexicon system.

 

REFERENCES

Ferguson, C. A., Peizer, D. B., & Weeks, T. A. (1973). Model-and-replica phonological grammar of a child’s first words. Lingua, 31, 35-65.

Ingram, D. (1974). Phonological rules in young children. Journal of Child Language, 1, 49-64.

Macken, M. A. (1980). The child’s lexical representation: The ‘puzzle-puddle-pickle’ evidence. Journal of Linguistics, 16, 1-17.

Smith, N. V. (1973). The Acquisition of Phonology: A Case Study. Cambridge: Cambridge University Press.

N. Hewlett (1990) Processes of development and production (Part 2)

Hewlett begins his discussion of dual lexicon models with the basic premise that, if children have accurate perception but inaccurate production, then “there is not just a single, modality-independent lexicon in which phonological representations are stored” (p. 28). Hewlett lists several advantages to this basic framework. First, lexical avoidance (Schwartz & Leonard, 1982) is easily explained. Second, the “rules” like fronting and gliding that apply to child speech do not need to occur in real time. In many ways, this is helpful for explaining why the rules apply to environments, rather than to particular words. Exceptions abound, however! These exceptions include regressive idioms, where a child produces a word incorrectly even though similar words are generally produced correctly, and progressive idioms, where a child produces one word correctly when similar words are produced incorrectly. The problem of idioms is where Hewlett strikes out on his own, proposing a revised dual lexicon model.

It seems likely that reproducing the box-and-arrow model from the chapter would be a violation of copyright, so I will do my best to provide verbal descriptions for now. There are four key boxes in the model (clockwise from upper left): the input lexicon, the output lexicon, a motor processor, and a motor programmer. The input lexicon is where incoming acoustic signals are matched to stored lexical items. Hewlett states explicitly that, “The input lexicon contains perceptual representations in terms of auditory-perceptual features.”

Realization rules link the input lexicon to the output lexicon, which contains articulatory representations. From there, an articulatory representation can be sent to the motor processor, where a motor plan is assembled using syllabic units. There is an alternative route, however, going through the motor programmer. If a realization rule does not exist, or if there is cause to eschew the realization rule, then the perceptual representation is sent to the motor programmer, where a motor representation is built from scratch. From there, it can either go directly to the motor processing component for implementation, or it can go to the output lexicon for storage, or probably both. Additional levels of the production mechanism follow motor processing, including a segmental level of motor processing (which is acquired after the onset of speech), a motor execution level where muscle contractions are planned, and finally the signal sent to the vocal tract, representing the actual articulations.
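
Since I can’t reproduce the diagram, here is a minimal Python sketch of the two production routes as I read them. The function names, the None-signal for an inapplicable rule, and the decision to always store the programmer’s output are my own guesses at details the chapter leaves open.

```python
def produce(word, input_lex, output_lex, realization_rule):
    """Usual route: input lexicon -> realization rule -> output lexicon
    -> motor processor. Alternative route: input lexicon -> motor
    programmer -> motor processor, with the fresh motor form also
    stored in the output lexicon."""
    percept = input_lex[word]                     # auditory-perceptual features
    articulatory = realization_rule(percept)      # None if no rule applies
    if articulatory is None:
        articulatory = motor_programmer(percept)  # built from scratch
    output_lex[word] = articulatory               # stored articulatory form
    return motor_processor(articulatory)

def motor_programmer(percept):
    """Build a motor representation directly from a perceptual one."""
    return ('motor-plan-from', percept)

def motor_processor(articulatory):
    """Assemble the motor plan out of syllable-sized units."""
    return ('syllabified', articulatory)

# A neutralizing realization rule (/r/ -> [w]) maps distinct input
# entries like 'rock' and 'walk' onto the same output entry.
def r_to_w(percept):
    return percept.replace('r', 'w') if 'r' in percept else None

input_lex = {'rock': 'rak', 'walk': 'wak'}
output_lex = {}
print(produce('rock', input_lex, output_lex, r_to_w))  # ('syllabified', 'wak')
print(produce('walk', input_lex, output_lex, r_to_w))  # motor programmer route
```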

How well does Hewlett’s model handle the data discussed in my last post? First, lexical avoidance is explained by postulating an entry in the input lexicon that has no corresponding motor plan (Hewlett is unclear here, but I think he means there is no corresponding entry in the output lexicon). Realization rules in which sound contrasts are neutralized (fronting, gliding, etc.) are the result of multiple input lexicon entries being mapped to the same output entry. Improvement in speech accuracy over time is handled by various forms of feedback, including the revision of output lexicon forms by passing input forms through the motor programmer.

There are many positive aspects of Hewlett’s model, and it does improve on the model proposed by Kiparsky and Menn (1977). However, the empirical coverage of the model is still quite limited. Here are a few examples. First, although Hewlett is careful to point out how important phonology is for explaining paradigmatic phonological rules, his model does not include a robust phonological grammar. The input and output lexicons are connected by an arrow, but this obscures what a difficult relationship this must be. How, for example, are output lexical items merged when they remain distinct in the input lexicon (e.g., when the words ‘rock’ and ‘walk’ are pronounced identically, or when /r/ and /w/ are pronounced identically in general)? What mechanism is responsible for the merger? Notice that previous generative approaches are not helpful here because part of the challenge is to show how the input lexicon (including words like ‘rock’ and ‘walk’) links to the output lexicon (where ‘rock’ and ‘walk’ become merged). Grammars that do not split the lexicon into input and output components are therefore shielded from this problem. Progressive and regressive idioms are also unexplained by the single arrow between the input and output lexicons. The model has no way of explaining why some words might not follow an otherwise consistent grammatical pattern.

Second, how do articulatory representations develop? Consider how a child comes to produce their first word. Based on Hewlett’s model, we can reasonably assume that the child has an accurate perceptual representation of the word in their input lexicon. How is that word then matched up to a motor representation? Presumably, babbling plays some role in the developmental process, but this is not discussed outside of input from the motor programmer. We might look to work by Guenther to solve this problem (e.g., Guenther, 2006), but Hewlett leaves the process unspecified.

Finally, Lise Menn consistently mentions the importance of explaining why speech accuracy improves during imitation, but Hewlett’s model is not specific enough to account for this fact.

Overall, Hewlett’s chapter provides an outstanding review of much of the work on child speech production and phonology up to 1990. His model offers several advances compared to similar models proposed by Menn (Kiparsky & Menn, 1977; Menn, 1983), but many facts about speech development remain unexplained.

N. Hewlett (1990) Processes of development and production (Part 1)

I’m following up on my review of Kiparsky and Menn (1977) with a review of Hewlett (1990), which extends the dual-lexicon model in several interesting ways, including a more detailed production component and an updated literature review. Unfortunately, the chapter is so long that it doesn’t really seem appropriate to review it all at once. In fact, this post will probably be too long. If you’d prefer shorter posts, let me know!

-Peter

******************************

Hewlett reviews major findings in normal and disordered phonological/speech development, with the goal of motivating a model of early speech production building on previous work [1, 2]. The coverage in the manuscript is extensive, and the criticism is often very insightful. Below is a short description of the findings that Hewlett covers.

Hewlett begins his review with very early speech development, including babbling.* Babbled sounds are typically the same sounds found in early words, and babbling usually overlaps with the first real word productions [3]. Relevant work not discussed by Hewlett includes research from Boysson-Bardies and colleagues showing that babbling sounds are language dependent, and that even sounds that are common in babbling around the world often have language-specific phonetic characteristics [4, 5].

When word production begins in earnest, Hewlett argues that certain aspects of early speech are consistent. First, early ‘proto-words’ [6] are highly variable in their form. Thus, although the child’s production goal might be consistent—for example, they are always referring to ‘milk’—the form is entirely inconsistent. Second, early words are generally single words or unanalyzed phrases (the parts of the phrase don’t recombine).

Hewlett argues that a separate stage can be identified around 1;6 (years;months), which roughly corresponds to what is often called the ‘word spurt’. Hewlett further elaborates on phonological systematicity during early word production. Young children apply systematic patterns to their speech. These patterns might include consonant cluster reduction (‘snow’ is pronounced [no]), or application of a child-language-specific rewrite rule (/r/ –> [w] word-initially and word-medially), or application of a prosodic template, such as a [CVjVC] template [7]. Hewlett writes, “The important implication of this is that the child’s pronunciation patterns exhibit regularities which yield to a systematic description within a phonological framework.” (p. 19) Thus, the enterprise of child phonology has been either to 1) describe the child’s phonological inventory, including contrasts and phonotactic restrictions, or 2) write rules that describe how children get from the adult form, which they are presumed to know based on their perceptual abilities, to their own productions. I will not go into great detail about these proposals, but Hewlett reviews well-known rules such as /r/ –> [w]. Finally, although Hewlett discusses the issue later in the paper, this stage of phonological development includes many examples of ‘lexical avoidance’, or cases in which children avoid words with particular sounds [8].

At this point, Hewlett reviews models of phonological development, including proposals by Jakobson [9], Stampe [10], and Menn ([2]; the dual-lexicon model, also described in [1], which I reviewed in a previous posting). He then goes on to describe children’s perceptual abilities, which are generally agreed to be quite good. And, of course, the explosion of the infant literature starting in the early 1990s confirms that infants are very good at learning linguistic/phonological patterns before they begin to speak.

As a sort of contrasting section to ‘phonological development’ as described above, Hewlett reviews ‘phonetic development’, in which he focuses on the measurement of speech production. Several findings are noteworthy. First, children’s speech is known to differ from adults’, with longer durations for linguistic targets and greater variability. Regarding variability, recent work by my current mentor, Lisa Goffman, and her collaborations with her mentor, Anne Smith, have greatly added to our understanding of speech motor variability in children. Some examples: [11] showed that oral-motor stability is below adult levels even at 14 years of age. [12] showed that, contrary to what one might expect from a frequency-based explanation, native English-speaking children and adults produce iambs with more stability compared to trochees.

Continuing with Hewlett’s discussion of phonetic development, children’s formants tend to be more variable than adults’ formants [13]. Hewlett discusses the issue of whether children show more or less coarticulation than adults. A number of researchers, Susan Nittrouer being one example [14], have claimed that children actually show greater amounts of coarticulation. The implication is that children have less segmentalized speech, and that their early speech therefore consists of unanalyzed whole words. This claim has been hotly debated (or was hotly debated 20 years ago), but it appears that coarticulation is often just different in children [15], without there being either more or less coarticulation in child speech.

Hewlett also discusses the issue of ‘covert contrasts’ or ‘incomplete neutralization’—cases where children appear to be producing two sounds the same but are actually producing them distinctly. For example, both /r/ and /w/ might be realized as something like a [w], but in fact, the productions are distinct, and children can reliably identify which word they intended from their own productions [16]. Elsewhere, I have argued that this is a systemic problem with analyses of child phonology. Because so much of the literature on ‘phonological processes’ in child speech is based on transcription data, it is unclear whether these cases reflect phonological processes or covert contrasts (in which case, ‘phonological’ must mean something entirely different than what it is usually taken to mean).

Hewlett concludes his review of phonetic development with three findings. First, sounds that appear in babbling may disappear from a child’s sound inventory after the onset of word production. Second, although adults are very good at compensating for a bite block and hitting acoustic targets, children may be less good at this [17]. Third, Hewlett notes that children seem readily able to acquire a foreign accent as well as a foreign language (although some more recent work [18] suggests that accent acquisition generally falls on a continuum based on age of acquisition). Regarding the last two findings, Hewlett concludes that children must be better than adults at learning to produce new sounds.

References

[1] Kiparsky, P. & Menn, L. (1977). On the acquisition of phonology. In Language Learning and Thought, J. Macnamara (Ed.). New York: Academic Press.

[2] Menn, L. (1983). Development of articulatory, phonetic, and phonological capabilities. In Language Production, Vol. II, B. Butterworth (Ed.). London: Academic Press.

[3] Locke, J. L. (1983). Phonological Acquisition and Change. New York: Academic Press.

[4] Boysson-Bardies, B. d., Halle, P., Sagart, L., & Durand, C. (1989). A crosslinguistic investigation of vowel formants in babbling. Journal of Child Language, 16(1), 1-17.

[5] Boysson-Bardies, B. d., & Vihman, M. M. (1991). Adaptation to language: Evidence from babbling and first words in four languages. Language, 67(2), 297-319.

[6] Menyuk, P. & Menn, L. (1979). Early strategies for the perception and production of words and sounds. In Language Acquisition, P. Fletcher & M. Garman (Eds.). Cambridge, UK: Cambridge University Press. pp. 49-70.

[7] Priestly, T. M. S. (1977). One idiosyncratic strategy in the acquisition of phonology. Journal of Child Language, 4, 45-66.

[8] Schwartz, R. G., & Leonard, L. B. (1982). Do children pick and choose? An examination of phonological selection and avoidance in early lexical acquisition. Journal of Child Language, 9, 319-336.

[9] Jakobson, R. (1968). Child Language, Aphasia and Phonological Universals. The Hague: Mouton.

[10] Stampe, D. (1969). The acquisition of phonetic representation. Papers from the 5th Regional Meeting of the Chicago Linguistic Society, 443-454.

[11] Smith, A. & Zelaznik, H. (2004) Development of functional synergies for speech motor coordination in childhood and adolescence. Developmental Psychobiology, 45, 22-33.

[12] Goffman, L. (1999). Prosodic influences on speech production in children with specific language impairment and speech deficits: Kinematic, transcription, and acoustic evidence. Journal of Speech, Language, and Hearing Research, 42, 1499-1517.

[13] Eguchi, S. & Hirsh, I. J. (1969). Development of speech sounds in children. Acta Oto-Laryngologica, Supplement 257.

[14] Nittrouer, S., Studdert-Kennedy, M., & McGowan, R. S. (1989). The emergence of phonetic segments: Evidence from the spectral structure of fricative-vowel syllables spoken by children and adults. Journal of Speech and Hearing Research, 32, 120-132.

[15] Goodell, E. W. & Studdert-Kennedy, M. (1993). Acoustic evidence for the development of gestural coordination in the speech of 2-year-olds: A longitudinal study. Journal of Speech and Hearing Research, 36, 707-727.

[16] Kornfeld, J. R., & Goehl, H. (1974). A new twist to an old observation: Kids know more than they say. Chicago, IL: Chicago Linguistic Society.

[17] Oller, D. K. & MacNeilage, P. F. (1983). Development of speech production: Perspectives from natural and perturbed speech. In The Production of Speech, P. F. MacNeilage (Ed.). New York: Springer Verlag, pp. 91-108.

[18] Flege, J. E., Munro, M. J. & MacKay, I. (1995). Factors affecting degree of perceived foreign accent in a second language, Journal of the Acoustical Society of America, 97, 3125-3134.

Kiparsky and Menn (1977).

Kiparsky, Paul, and Menn, Lise. (1977). On the acquisition of phonology. In John Macnamara (Ed.), Perspectives in Neurolinguistics and Psycholinguistics. New York, NY: Academic Press. pp. 47-78.

Kiparsky and Menn (hereafter KM) present a theoretical argument for children as active discoverers of grammar, building structural representations based on evidence from the ambient language. In the process, KM propose a dual lexicon. The split includes one path between phonetic and phonological forms (i.e., some phonological processes map acoustic forms to the underlying phonological representations that link related words) and another path between incoming phonetic forms and the phonetic output that children create.

The chapter begins with “The Learning of the Phonetic Repertoire”, a discussion of the two major proposals for child phonology that existed in 1977. The first was Roman Jakobson’s proposal that phonology develops according to a universal system of contrasts, with contrasts learned by children in the order of most to least universal. For example, children should contrast /d/ and /g/ before they contrast /d/ and /b/ (pp. 48-49). The problem with Jakobson’s approach is that it says nothing about the order in which the sounds themselves will be acquired. Furthermore, the absence of a contrast may indicate that children are intentionally, or selectively, avoiding a particular sound, but Jakobson says nothing about this or about why sound avoidance should happen. Therefore, KM consider Jakobson’s theory to be difficult to falsify.

Stampe’s theory is more specific about when sounds will be acquired; it makes a distinction between phonological rules and phonological processes. Rules are the grammatical means by which speakers convert phonological word forms into phonetic ones, such as the flapping or homorganic nasal cluster rules. Processes, on the other hand, are innate rule-like conversions that explain the kinds of errors that children make. For example, children produce voiced word-final stops without voicing (/d/ –> [t]/__#) because of a devoicing process. Speakers of languages like English, which do voice final stops, must overcome these processes.
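
As a worked example of a process, here is Stampe-style final devoicing in Python; the forms and the stop inventory are just for illustration.

```python
def final_devoicing(form):
    """Innate process /d/ -> [t] / __# (and likewise for the other
    voiced stops): devoice a word-final stop."""
    devoice = {'b': 'p', 'd': 't', 'g': 'k'}
    return form[:-1] + devoice.get(form[-1], form[-1])

# A child aiming at adult /bæd/ produces [bæt]; on Stampe's account,
# a learner of English, which keeps final stops voiced, must suppress
# this process to arrive at the adult form.
print(final_devoicing('bæd'))  # bæt
```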

KM describe several problems with this view. First, it appears that Stampe’s theory requires children to learn phonological rules in the same order as they would unlearn phonological processes. This is an empirical but unstudied question.* Second, KM find no reason to assume that adult speakers maintain rules on the one hand and processes on the other (i.e., German speakers do not appear to be stuck in a word-final devoicing process, and regardless, they must still learn the allomorphy that relates forms with voiced and voiceless final stops).

KM also criticize both Jakobson and Stampe as being overly deterministic and not allowing for the kind of variability inherent to child language learners. As evidence, they point to the fact that children break up consonant clusters in a variety of ways, and to the fact that children often produce phonological idioms, words that are produced more accurately than the phonological processes apparently at work in their language would predict. In sum, KM state that we need a new model of phonological development. However, they do not focus much on the development of sounds or sound contrasts. Instead, they focus on the fact that children’s production abilities lag behind their perceptual abilities.

KM propose the dual lexicon to account for a distinction between cognitive grammar learning and articulatory sound implementation. Children may learn the cognitive grammar at whatever pace (KM describe it as going on over many years, although I think that today’s infant literature would generally contradict that**), but the development of a productive sound repertoire is separate from the cognitive grammar. Thus, we have two lexicons.

The second part of the chapter, “The Learning of Morphophonemics”, is somewhat orthogonal to the dual lexicon proposal, so I do not discuss it.

Here, I identify what I think are outstanding issues in the paper, some of which will be addressed in future posts. First, is the dual lexicon meant to be only a description of the grammar, or is it also a processing model? In other words, when formulating a message, does a child start with the phonological grammar, which is translated into a phonetic form, which is then translated into the child’s pronunciation? KM suggest that, in fact, there may be yet another step, in which physical limitations act on the message, as would be the case for a lisp. Second, KM propose that children do not have allomorphy. Is this really true? It seems to me that children could be learning meanings and linking related word forms at a fairly early age. However, I’m not familiar with the literature on this topic. Third, the logical dependency between the dual lexicon and KM’s view of the child as “language discoverer”*** is not clear to me.

The Dual Lexicon Model

Hello All,

I’ve lately become interested in the dual lexicon model, originally conceived of by Lise Menn and put in print in Kiparsky and Menn (1977). My basic interest is in what I consider to be an outstanding problem in phonological development, and the primary motivator of the dual lexicon model, namely, why children’s production abilities lag behind their perceptual abilities.

To satisfy my interests, I’ve started reading more about the model and its various instantiations, and I’m posting my notes to Phonoloblog. In a moment, I’ll post a review of Kiparsky and Menn (1977). Future posts will cover Hewlett (1990), Menn and Matthei (1992), Smolensky (1996), and a section of Hale and Reiss (2008). If you’d like to see a particular manuscript reviewed, let me know. I’d also love to get feedback about the proposals I’m reviewing (or about my reviews). Thanks!

–Peter

1st Poster Session


NOTE: I’ve edited this post less than the last one, so it may be harder to read.

Do articulatory constraints play a role in speech errors? (Slis & Van Lieshout) — Past research (by Goldstein and others) has shown that vowel context influences whether or not speech errors occur. Using EMA data, the authors showed that there is tongue tip movement during production of /k/ and tongue dorsum movement during production of /t/, at least in words that contain both of these consonants. Let’s call these non-matching articulations. They then looked at non-matching articulations in a variety of vowel contexts for English speakers. The amount of movement varied by vowel. The follow-up question is whether this variation is sometimes aberrant and whether (as we expect based on past research by Goldstein and others) these aberrant articulations occur more often with some vowels than with others. For example, sometimes the amount of non-matching articulation is much greater than something like a standard deviation from typical non-matching articulation. The basic idea is to build a system of automatic speech error recognition based on the kinematics and conforming to past research on errors being conditioned by vowels. This next step is currently ongoing and should be completed shortly.


Goldrick, Matthew. (in press). Utilizing psychological realism to advance phonological theory.

Hello All,

I just finished reading a draft of Matt Goldrick’s chapter from the upcoming Handbook of Phonological Theory (2nd Edition). I enjoyed it and found it helpful in the way it covers the relationship between theoretical work on generative grammar and psycholinguistic work. So, I wrote a short summary, which I’m posting below.

*****************

Goldrick essentially reviews the role of phonotactics in psycholinguistic literature, but takes as a starting point the term “psychological reality” as it was used by Sapir (1933) to refer to the cognitive status of a grammar. Goldrick argues that it is vital for linguists to approach their research with at least some understanding of psychological reality—how things happen in real time, for instance—and that theories of grammar can only be improved by consideration of related data from the psycholinguistic literature.

As an example, Goldrick discusses the division between pre-lexical processing and lexical processing in the speech perception literature. Pre-lexical processing refers to a cognitive function which takes in fine-grained acoustic information (something like the signal sent along the auditory nerve) and spits out a pre-lexical but phonologically detailed representation. Lexical processing then takes this representation that has been passed to it and finds the corresponding entry in the mental lexicon. The term ‘function’ is used consistently to refer to a theoretical mapping of inputs (say, the signal from the auditory nerve) to outputs (phonemic representations). Following Marr (1982) and Smolensky (2006), Goldrick contrasts these levels of description with a higher algorithmic level—which details how a function is computed—and a lower neural level—which explains how the brain achieves a function. Of course, description at all three levels is necessary.

Goldrick works through evidence that categorical and gradient phonotactics influence both pre-lexical and lexical processing stages within the larger cognitive task of single word recognition. As an example, identification tasks show that listeners erroneously hear ill-formed sequences as well-formed ones; discrimination tasks show that listeners have difficulty keeping separate words with ill-formed sequences and well-formed words that contain the likely repairs of those ill-formed sequences. Importantly, identification and discrimination errors do not always lead to real words, so they are arguably pre-lexical processing effects. More broadly, the reviewed psycholinguistic literature supports the existence of phonotactic representations apart from lexical ones, and it appears the representations are actively engaged in multiple cognitive functions, including both pre-lexical and lexical processing.

As something of a cautionary tale for linguistics, Goldrick talks about what can be gleaned from studies of wordlikeness judgments. First, he points out that the cognitive mechanisms employed in a judgment task are poorly defined (as Rob Fiorentino would say, it’s a very offline task), so it’s difficult to say what in the task reflects grammar (a mapping between surface and underlying forms) and what reflects other cognitive functions. We know, for example, that lexical neighborhood effects influence judgments (Bailey & Hahn, 2001). We also know from Luce and Vitevitch’s work (see refs below) that having real words in other tasks with nonwords increases the effects of lexical neighborhoods, and recently Shademan (2006, 2007) has shown that including real words in a judgment task does the same thing. Albright (2009) argues that the distribution of phonotactic probabilities within a nonword set also influences the relative roles of lexical and phonotactic effects on judgments. Finally, Goldrick notes that judgments may be the result of prior processes, such as perceptual effects that warp the percept. For example, Dupoux, Kakehi, Hirose, Pallier, and Mehler (1999) showed that Japanese listeners “repair” illegal consonant clusters by inserting an epenthetic vowel (cf. Berent et al., 2008, in PNAS).

For linguists, all the literature above means that claims to study competence apart from performance are not tenable, at least if our data are from wordlikeness tasks. We can’t study competence from these tasks because we know that the judgments are influenced by extra-grammatical factors. Therefore, Goldrick’s initial goal of emphasizing the importance of psychological reality within the study of linguistics holds. Beyond that, Goldrick offers several steps for future research, including some ideas for the study of the interaction of phonotactic and lexical knowledge.

Representative References

Albright, Adam (2009). Feature-based generalisation as a source of gradient acceptability. Phonology 26: 9-41.

Bailey, Todd M. and Ulrike Hahn (2001). Determinants of wordlikeness: Phonotactics or lexical neighborhoods? Journal of Memory and Language 44: 568-591.

Berent, Iris, Tracy Lennertz, Jongho Jun, Miguel A. Moreno, & Paul Smolensky (2008). Language universals in human brains. PNAS 105: 5321-5325.

Dupoux, Emmanuel, Kazuhiko Kakehi, Yuki Hirose, Christophe Pallier, and Jacques Mehler (1999). Epenthetic vowels in Japanese: A perceptual illusion? Journal of Experimental Psychology: Human Perception and Performance 25: 1568-1578.

Marr, David (1982). Vision. San Francisco: W. H. Freeman and Company.

Sapir, Edward (1933). La Réalité psychologique des phonèmes. Journal de Psychologie Normale et Pathologique 30: 247-265. English translation reprinted in David G. Mandelbaum (ed.) (1949), Selected Writings of Edward Sapir in Language, Culture and Personality 46-60. Berkeley, CA: University of California Press.

Shademan, Shabnam (2006). Is phonotactic knowledge grammatical knowledge? In Donald Baumer, David Montero, and Michael Scanlon (eds.) Proceedings of the 25th West Coast Conference on Formal Linguistics 371-379. Somerville, MA: Cascadilla Press.

Smolensky, Paul (2006). Computational levels and integrated connectionist/symbolic explanation. In Paul Smolensky and Géraldine Legendre The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar (Vol. 2, Linguistic and Philosophical Implications) 503-592. Cambridge, MA: MIT Press.

Vitevitch, Michael S. (2003). The influence of sublexical and lexical representations in the processing of spoken words in English. Clinical Linguistics & Phonetics 17: 487-499.

Vitevitch, Michael S. and Paul A. Luce (1999). Probabilistic phonotactics and neighborhood density in spoken word recognition. Journal of Memory and Language 40: 374-408.

*****************

How do we feel about acronyms?

I’ve been sitting in on a class on developmental language disorders here at Purdue. The course instructor, Larry Leonard, was describing the Rice/Wexler Test of Early Grammatical Impairment, which “assesses the use of tense and agreement morphology by children ages 3 through 8 years” (from my handout). Apparently, some people know this test by its initials, TEGI, and some subset of those people use TEGI as an acronym pronounced /tigi/. Larry went on to say that, amongst certain circles of the speech and hearing world, /tigi/ is looked down upon. This reminded me of a rant by NPR sports contributor Frank Deford about the pronunciation of the baseball term RBI as /rɪbi/.

So, here’s my question: do acronyms generally have social stigma when compared to a competing initialism (or alphabetization, as I was taught)? Is this a case of a prescriptively bad phonological process?

Note: A brief search didn’t turn up discussion of this issue on Language Log, although the acronym/initialism distinction seems well covered. And here’s a sampling of pronunciations from Nintendo fans!

A phonologist’s notes from the Neurobiology of Language Conference

Hello, Phonologists! A quick introduction—I’m Peter Richtsmeier. I have a Ph.D. in Linguistics from the University of Arizona, with expertise in phonological acquisition and learning theory, and I’m currently working as a postdoctoral fellow in the Speech, Language, and Hearing Sciences Department at Purdue.

I’m posting some scattered notes from last week’s Neurobiology of Language Conference (Thurs, Oct 15 – Fri, Oct 16, 2009; Chicago, IL). These are largely idiosyncratic, as I’m not a neuroscientist and, for many presentations and almost all posters, I didn’t take detailed notes. If there are others out there who attended, you may want to supplement this posting. Well, here we go!

Panel Discussion: Motor Contribution to Speech Perception: Essential or Ancillary?
Speakers: Luciano Fadiga (U Ferrara, Italy) and Gregory Hickok (UC Irvine, US)

Summary: The panel discussions were essentially debates with additional input from moderators and the audience. This panel discussion was in many ways a discussion about the Motor Theory of speech perception (Liberman & Mattingly, 1985) and the revival this theory has seen following the discovery of mirror neurons. Luciano argued for something like an updated Motor Theory: “Our hypothesis is that the motor system [specifically, the motor cortex and mirror neurons therein] provides fundamental information to perceptual processing of speech sounds and that this contribution becomes fundamental to focus attention on others’ speech” (from the abstract, prose in brackets was added by me). Greg argued that neuroscientific data does not support Motor Theory. In particular, the fact that lesions to the motor cortex do not prevent accurate speech perception fundamentally undermines any claim about the “necessity” of motor areas for speech perception and, by extension, the lesion data undermines Motor Theory.

My personal bias here is in opposition to Motor Theory. Rather than belaboring the point, I will refer you to Greg’s blog, Talking Brains (co-managed by David Poeppel), where he has posted extensively over the past few months about the shortcomings of both Motor Theory and claims about the importance of mirror neurons in speech perception. In fact, it’s worth noting that everyone at the conference was in agreement that there is relatively poor documentation regarding the mere existence of mirror neurons in humans (cf. recent polemic article by Caramazza and colleagues). They also agreed that mirror neurons are probably there, but it seems premature to make a very strong claim about how these neurons might affect speech perception at this time, especially when auditory models of speech perception are, well, kind of obvious. And good.

A final personal note: Phonology is constructed from perception in many ways.

Panel Discussion Highlights:

  • Luciano distances himself from what he calls mirror neuron “trash”, including the Magical Tapping Bears (40£ a bear!!! omg!!!)
  • Attendee Tom Bever claims that, contrary to popular belief, he and moderator Michael Arbib are not old enough to have known William James. Michael responds that he knew William James.
  • Luciano makes to end the session by saying that he really needs a cigarette. Moderator Michael Arbib concludes the session by saying, “Well folks, I guess it’s all been a lot of smoke and mirrors.”

Keynote Lecture: What can Brain Imaging Tell Us about Developmental Disorders of Speech and Language?
Speaker: Kate Watkins (U Oxford, UK)

Summary: Kate gave the only developmental keynote address, so naturally I was most engaged here. She’s fairly well known for her work with the KE family (Note that the KE family provided us with evidence that some language functioning depends on the FOXP2 gene. Some of the seminal research on this gene was done by Simon Fisher, another keynote speaker at the conference). Recently, Kate has branched out to neuroimaging studies of children with Specific Language Impairment (SLI) and developmental stuttering. This was not entirely clear to me before I heard her talk, but just in case anyone else out there is confused, developmental disorders such as SLI and stuttering rarely arise from lesions. Rather, they appear to result from myriad issues of neuronal size and number, as well as myelination. Kate’s research has shown that there are some interesting neurological correlates to these disorders, however. For example, children with SLI, like members of the KE family, have less gray matter in the caudate nucleus, a subcortical region implicating a motor deficit. Siblings of children with SLI also have diminutive caudate nuclei, suggesting that the size of this region primarily reflects a risk factor, and that many of the disorder’s sequelae must arise from something more complicated than a lone impaired region.

The other finding I thought worth mentioning is that children with SLI also show cortical areas with greater gray matter mass than their normally developing peers (but also reduced neural activity), including in the left frontal opercular cortex (posterior half of Broca’s area). Kate didn’t really discuss the behavioral outcomes of increased gray matter, but she suggested that the increase was likely the result of abnormal gyrification, or brain folding. Cool.

Personal note: One of my advisors here at Purdue, Larry Leonard, wrote the book on SLI.

Highlights:

  • Kate is the only female keynote speaker, bringing some relief to what often felt like a boys-only club
  • The presentation starts with Kate appearing to be a pleasant but disorganized British academic type who can’t seem to figure out how to get her slides to project. Oops! Turns out that the A/V staff hadn’t turned the projector on!

I’m finding that just covering these two sections has exhausted me, so this’ll be all for now. I may review some of the posters I liked sometime in the coming week, but some encouragement might be helpful to make it happen.

Peter