Category Archives: Papers

Menn and Matthei (1992) The “two-lexicon” account of child phonology (Part 2)

In the previous post, I described Menn and Matthei’s assessment of progress on the two-lexicon model. They highlight several advantages of the model, but also note problems, including the apparent competition between children’s “selection rules” (or rules specific to the output lexicon), as well as non-deterministic cross-word patterns. To combat these and other problems, MM suggest that the formalism of the two-lexicon model migrate from a generative perspective to a more connectionist one. At this point, they make a very handy list of the key generalizations they would like to capture with a revised, connectionist two-lexicon model, or with any model of child speech production for that matter. I have restated them here, while keeping MM’s original groupings.

Reduction of Information

  1. Children recognize more words than they can say
  2. Children recognize more phonemic contrasts than they can realize in speech
  3. Early productions tend to cluster together in terms of phonetic properties
  4. Early productions also tend to contain a limited set of phonetic elements


  1. Children’s productions appear to be simplified (compared to adult forms) and often appear systematic (many words share a pattern)

Inertia of the System

  1. Early, frequently produced words may retain a high level of fidelity, resulting in “phonological idioms” compared to more recently acquired production forms
  2. Changes in systematic productions tend to happen to newly acquired words; more established words are more resistant to change

We could also add to this list MM’s frequent observation that imitated production forms tend to be much more like adult forms.

To provide a general feel for a connectionist model of early speech production, MM lay out the “initial settings” for such a model. With respect to connections, MM posit simultaneous and sequential connections. Simultaneous connections link the speech modalities of motor commands, auditory percepts, and kinesthetic sensation (of one’s own productions). The three modalities, motor/auditory/kinesthetic or MAK, must be wired together efficiently by learning. Sequential connections are within-modality connections that represent change over time. So, a simultaneous connection might link together the feeling, action plan, and acoustic record of a [b], while sequential acoustic connections might link together the [b] burst to the following formants of an [a] vowel in the syllable [ba]. Although MM do not make this explicit, it appears that sequences of connections also represent stored forms, or words.

Next, MM lay out a series of what I will call linking mechanisms. First, sequential auditory patterns can be stored and learned through attention to adult speech. Second, there is an internal feedback loop, which MM relate to babbling, with a basic predictive property: the model guesses how a sequential motor pattern might sound, produces it, and observes whether the result matches (essentially a supervised learning component provided by the stored, “correct” adult forms). Third, imitation results in links between stored adult-produced auditory sequences and the child’s own MAK sequences. Fourth, stored adult sequences are associated with real-world states (meanings), which then leads to associations between the child’s own MAK sequences and real-world states.
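MM’s feedback loop can be caricatured as a forward model that predicts the auditory result of a motor pattern and checks it against a stored adult form. The sketch below is entirely my own stand-in (the `forward_model` mapping and the motor-pattern labels are invented), meant only to show the predict-and-compare logic:

```python
# Toy sketch of MM's internal feedback loop. The "forward model"
# mapping from motor patterns to predicted sounds is invented for
# illustration; nothing here is MM's actual formalism.
forward_model = {"close_lips+voice": "ba", "tongue_tip+voice": "da"}

def babble_and_compare(motor_pattern: str, adult_target: str) -> bool:
    """Predict the sound of a motor pattern; report whether it
    matches the stored adult target (the supervised signal)."""
    predicted_sound = forward_model.get(motor_pattern)
    return predicted_sound == adult_target

# The loop lets the model discover which motor pattern yields "ba":
assert babble_and_compare("close_lips+voice", "ba") is True
assert babble_and_compare("tongue_tip+voice", "ba") is False
```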

MM give a fair amount of attention to the idea that adults might assist in the development of a child’s MAK sequences. The basic idea is that an adult mimics the phonetic properties of a child’s utterance (absolute pitch, formant values, etc.). Here’s an explanatory quote: “A purely sound-based imitation of the child by the adult…will produce links between the child’s internal MAK associations and the sound of the adult’s voice, the child’s innate normalization abilities should be enhanced.”

Once normalization is established (although I’m not sure why it needs to be established first in this proposal), the child might seek to produce words in a more adult-like fashion. MM propose that social factors like semantically contingent responding by parents (Snow, 1977) could provide such a mechanism. MM conclude by saying that their connectionist model is not fully developed, and that many attractive qualities of the old two-lexicon model, like the selection rules, have been replaced by vaguer concepts. However, they believe that the absolute boundaries of the input and output lexicons in the original model simply do not serve us, and we should abandon them.

My primary concern with the connectionist model that MM propose is that it seems to completely abandon the original problem that the two-lexicon model addresses. Looking back at their list of key generalizations, I would single out two, but the connectionist model does not clearly address either. First, how is it that children can recognize more words/sounds than they can produce? Second, why are children’s early productions both simplified and systematic?

It’s difficult to see how the proposed connectionist model makes headway on these problems. In fact, it seems as if they have been replaced with several other problems in the study of child speech. The discussion of speech normalization is a perfect example. Given general agreement that toddlers have a good understanding of the perceptual form of their native language, this problem could be assumed to be solved at the time that production begins. For example, I know of no evidence that children ever attempt to imitate the absolute values of any acoustic property of adult forms, which seems to be a major problem if we want to address normalization.

To conclude, I generally see the box-and-arrow iteration of the two-lexicon model as being preferable, if only for specificity. Although I agree with MM that the box-and-arrow model could be replaced advantageously by a connectionist model, the advantages are simply not clear enough here. In the future, I will present a more recent attempt at a connectionist network by Menn and colleagues, which may address the perception-production disparity more directly.



Snow, C. E. (1977). The development of conversation between mothers and babies. Journal of Child Language, 4, 1-13.

Menn and Matthei (1992) The “two-lexicon” account of child phonology (Part 1)

Menn and Matthei (hereafter MM) begin with some information about the historical development of the two-lexicon model. They quote a paper by Ferguson, Peizer, and Weeks (1973), who noted a general human tendency to know more words than are typically said. That is, both children and adults know words that they rarely or never say. Thus, there seems to be a set of lexical representations for which the details of production are either murky or nonexistent, and we might hypothesize a split between input and output representations (Ingram, 1974), in other words, two separate lexicons.

So long as there is consistency in children’s pronunciations, however, separate lexicons are unnecessary. If there is a regular mapping between the input representation (presumed to be identical to the adult forms) and the output representation, then a set of rewrite rules that captures the mapping is sufficient, and no output lexicon is needed. However, children are rarely consistent, and MM provide the example of two words (“down” and “stone”) that move in and out of a nasal harmony rule: They start out with no harmony ([dawn] and [don], resp.); the harmony rule then applies to other words (/binz/ –> [minz] and /dæns/ –> [næns]); finally, the harmony rule overtakes “down” and “stone”. With inconsistent mapping across similar words, rewrite rules are not helpful, or at least require arbitrary exceptions. Granted, two-lexicon models must also have lexical exceptions, but there are other advantages.
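To see why inconsistent data defeats a single-lexicon rewrite-rule account, here is a toy sketch (my own, not MM’s formalism) of the nasal harmony rule, together with the exception set that “down” and “stone” would require:

```python
# Toy sketch of a single-lexicon rewrite-rule system, using MM's
# nasal harmony example: a nasal later in the word nasalizes a
# word-initial stop (/binz/ -> [minz], /dæns/ -> [næns]).
NASALIZE = {"b": "m", "d": "n", "g": "ŋ"}
NASALS = "mnŋ"

def nasal_harmony(adult_form: str) -> str:
    """Apply regressive nasal harmony to a word-initial stop."""
    first, rest = adult_form[0], adult_form[1:]
    if first in NASALIZE and any(c in NASALS for c in rest):
        return NASALIZE[first] + rest
    return adult_form

# The rule captures the regular cases:
assert nasal_harmony("binz") == "minz"
assert nasal_harmony("dæns") == "næns"

# But "down" and "stone" resist the rule at first, forcing
# per-word exceptions that the rule notation cannot express:
EXCEPTIONS = {"dawn", "don"}

def child_form(adult_form: str) -> str:
    if adult_form in EXCEPTIONS:
        return adult_form  # arbitrary lexical exception
    return nasal_harmony(adult_form)

assert child_form("dawn") == "dawn"   # no harmony (yet)
assert child_form("binz") == "minz"   # harmony applies
```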

One of these advantages is that arbitrary exceptions in a one-lexicon system lead to more serious problems. The example is from Smith (1973) as interpreted by Macken (1980). The data comes from the child, Amahl, who displayed a pattern of velar harmony (/tr^k/ –> [kr^k]). Eventually, the pattern gave way to accurate production of alveolars, but one word, “took”, persisted as a regressive idiom, [gUk].

Macken assumes that this is possible because Amahl must have learned /gUk/ as the underlying form. Thus, when the harmony rule disappeared, /gUk/ would still surface as if harmony applied. As MM point out, however, this assumes that the child perceives “took” as /gUk/, which would lead us to expect that Amahl would not understand “took” as produced correctly. This seems highly unlikely, especially given our present-day understanding of children’s perceptual abilities. Furthermore, the example above with “down” and “stone” resisting a nasal harmony rule does not make sense if we assume exceptions are cases where the child has learned his own productions as underlying forms. At the very least, it would suggest that the underlying forms of words where nasal harmony does apply are perceived as if they had initial nasals. That defeats the advantage of the one-lexicon model, however, where we assume child and adult underlying forms are the same.

An output lexicon is helpful in this case because it provides a space for pronunciation representations that may be linked by a rule that operates across words or by arbitrary connections between input and output forms. Just as importantly, the output lexicon still allows children to be able to accurately perceive those words. That is, the output lexicon provides a storage facility for consistent or variable output representations while allowing for stable and accurate perception.
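To make the division of labor concrete, here is a minimal sketch of my own (all forms are illustrative, using the post’s rough ASCII transcriptions like [gUk]): perception consults the input lexicon while production consults the output lexicon, so a regressive idiom can coexist with accurate perception.

```python
# Toy two-lexicon setup. Perception matches acoustic input against
# the input lexicon (adult-like forms); production retrieves stored
# pronunciations from a separate output lexicon. Forms illustrative.
input_lexicon = {"took": "tUk", "truck": "tr^k"}    # perceptual forms
output_lexicon = {"took": "gUk", "truck": "kr^k"}   # production forms

def perceive(acoustic_form: str):
    """Match incoming adult speech to an input-lexicon entry."""
    for word, form in input_lexicon.items():
        if form == acoustic_form:
            return word
    return None

def produce(word: str) -> str:
    """Retrieve the stored production form, not the perceptual one."""
    return output_lexicon[word]

# The child perceives adult "took" accurately...
assert perceive("tUk") == "took"
# ...yet produces the regressive idiom:
assert produce("took") == "gUk"
```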

Despite the advantages, MM detail several problems they see with the two-lexicon model. First, it appears that selection rules—or the rules that lead to childlike forms in the output lexicon—sometimes operate over two words. This is problematic, however, if we take up the very standard assumption that combining words is done by the syntax and word combinations do not exist in the lexicon.

Another problem is that selection rules may sometimes be in competition with one another for a given word. MM give the example of productions by the child Daniel (also discussed by Menn in previous papers, I believe) of “boot” and “boat”, which are variably produced as [bup-dut] and [bop-dot] respectively. Thus, there appear to be separate labial harmony and alveolar harmony rules that compete in terms of realization of the same word. MM point out that there isn’t any sort of formalism in the two-lexicon model that allows for rule competition.

Other problems are given through the examination of daily changes in a couple of diary studies. For example, a child Jacob exhibited something like a vowel convergence, where [i] was produced like [ε]. So “tea” is first produced as [di] and then as [dεi]. “Key” was produced first as [ki], then as [xiε], and finally as [xε]. At the same time words with a mid front vowel switched between a low and high specification: “tape” was produced with both [i] and [e]. Ultimately, MM conclude that these similar words must be influencing each other in terms of production, but in a very unruly way. Similar cases are given for stress placement on two-syllables words beginning with [k] and over-application of the plural/3rd singular/possessive morpheme.

I’ll stop here for now. My next post will summarize what MM want to explain and then review the connectionist model that MM propose as a revised two-lexicon system.



Ferguson, C. A., Peizer, D. B., & Weeks, T. A. (1973). Model-and-replica phonological grammar of a child’s first words. Lingua, 31, 35-65.

Ingram, D. (1974). Phonological rules in young children. Journal of Child Language, 1, 49-64.

Macken, M. A. (1980). The child’s lexical representation: The ‘puzzle-puddle-pickle’ evidence. Journal of Linguistics, 16, 1-17.

Smith, N. V. (1973). The Acquisition of Phonology: A Case Study. Cambridge: Cambridge University Press.

N. Hewlett (1990) Processes of development and production (Part 2)

Hewlett begins his discussion of dual lexicon models with the basic premise that, if children have accurate perception but inaccurate production, then “there is not just a single, modality-independent lexicon in which phonological representations are stored.” (p. 28) Hewlett lists several advantages of this basic framework. First, lexical avoidance (Schwartz & Leonard, 1982) is easily explained. Second, the “rules” like fronting and gliding that apply to child speech do not need to occur in real time. In many ways, this is helpful for explaining why the rules apply to environments, rather than to particular words. Exceptions abound, however! These exceptions include regressive idioms, where a child produces a word incorrectly even though similar words are generally produced correctly; and progressive idioms, where a child produces one word correctly when similar words are produced incorrectly. The problem of idioms is where Hewlett strikes out on his own, proposing a revised dual lexicon model.

It seems likely that reproducing the box-and-arrow model from the chapter would be a violation of copyright, so I will do my best to provide verbal descriptions for now. There are four key boxes in the model (clockwise from upper left): the input lexicon, the output lexicon, a motor processor, and a motor programmer. The input lexicon is where incoming acoustic signals are matched to stored lexical items. Hewlett states explicitly that, “The input lexicon contains perceptual representations in terms of auditory-perceptual features.”

Realization rules link the input lexicon to the output lexicon, which contains articulatory representations. From there, an articulatory representation can be sent to the motor processor, where a motor plan is assembled using syllabic units. There is an alternative route, however, going through the motor programmer. If a realization rule does not exist, or if there is cause to eschew the realization rule, then the perceptual representation is sent to the motor programmer, where a motor representation is built from scratch. From there, it can either go directly to the motor processing component for implementation, or it can go to the output lexicon for storage, or probably both. Additional levels of production mechanism follow motor processing, including a segmental level of motor processing (which is acquired after the onset of speech), a motor execution level where muscle contractions are planned, and finally the signal sent to the vocal tract, representing the actual articulations.
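As I read Hewlett’s description, the two production routes might be caricatured as follows. The function names and the trivial `PLAN(...)` representation are my own stand-ins, not anything in the chapter:

```python
# Toy sketch of Hewlett's two production routes: a stored articulatory
# form goes straight to the motor processor, while a novel word is
# routed through the motor programmer, which builds a plan from the
# perceptual form and stores it. All names and forms are invented.
input_lexicon = {"ball": "bOl"}   # word -> perceptual representation
output_lexicon = {}               # word -> articulatory representation

def motor_programmer(perceptual_form: str) -> str:
    """Build a motor plan from scratch (grossly simplified)."""
    return "PLAN(" + perceptual_form + ")"

def produce(word: str) -> str:
    if word in output_lexicon:
        # Usual route: stored articulatory form -> motor processor.
        return output_lexicon[word]
    # Alternative route: perceptual form -> motor programmer,
    # with the resulting plan stored in the output lexicon.
    plan = motor_programmer(input_lexicon[word])
    output_lexicon[word] = plan
    return plan

first = produce("ball")    # built by the motor programmer
second = produce("ball")   # now retrieved from the output lexicon
assert first == second == "PLAN(bOl)"
```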

How well does Hewlett’s model handle the data discussed in my last post? First, lexical avoidance is explained by postulating an entry in the input lexicon that has no corresponding motor plan (Hewlett is unclear here, but I think he means there is no corresponding entry in the output lexicon). Realization rules in which sound contrasts are neutralized (fronting, gliding, etc.) are the result of multiple input lexicon entries being mapped to the same output entry. Improvement in speech accuracy over time is handled by various forms of feedback, including the revision of output lexicon forms by passing input forms through the motor programmer.

There are many positive aspects of Hewlett’s model, and it does improve on the model proposed by Kiparsky and Menn (1977). However, the empirical coverage of the model is still quite limited. Here are a few examples. First, although Hewlett is careful to point out how important phonology is for explaining paradigmatic phonological rules, his model does not include a robust phonological grammar. The input and output lexicons are connected by an arrow, but this obscures what a difficult relationship this must be. How, for example, are output lexical items merged when they remain distinct in the input lexicon (e.g., when the words ‘rock’ and ‘walk’ are pronounced identically, or when /r/ and /w/ are pronounced identically, in general)? What mechanism is responsible for the merger? Notice that previous generative approaches are not helpful here because part of the challenge is to show how the input lexicon–including words like ‘rock’ and ‘walk’–links to the output lexicon–where ‘rock’ and ‘walk’ become merged. Grammars which do not split the lexicon into input and output components are therefore shielded from this problem. Progressive and regressive idioms are also unexplained by the single arrow between the input and output lexicons. The model has no way of explaining why some words might not follow an otherwise consistent grammatical pattern.

Second, how do articulatory representations develop? Consider how a child comes to produce their first word. Based on Hewlett’s model, we can reasonably assume that the child has an accurate perceptual representation of the word in their input lexicon. How is that word then matched up to any motor representation? Presumably, babbling plays some role in the developmental process, but this is not discussed outside of input from the motor programmer. We might look to work by Guenther to solve this problem (e.g., Guenther, 2006), but Hewlett leaves the process unspecified.

Finally, Lise Menn consistently mentions the importance of explaining why speech accuracy improves during imitation, but Hewlett’s model is not specific enough to account for this fact.

Overall, Hewlett’s chapter provides an outstanding review of much of the work on child speech production and phonology up to 1990. His model offers several advances compared to similar models proposed by Menn (Kiparsky & Menn, 1977; Menn, 1983), but many facts about speech development remain unexplained.

N. Hewlett (1990) Processes of development and production (Part 1)

I’m following up on my review of Kiparsky and Menn (1977) with a review of Hewlett (1990), which extends the dual-lexicon model in several interesting ways, including a more detailed production component and an updated literature review. Unfortunately, the chapter is so long that it doesn’t really seem appropriate to review it all at once. In fact, this post will probably be too long. If you’d prefer shorter posts, let me know!



Hewlett reviews major findings in normal and disordered phonological/speech development, with the goal of motivating a model of early speech production building on previous work [1, 2]. The coverage in the manuscript is extensive, and the criticism is often very insightful. Below is a short description of the findings that Hewlett covers.

Hewlett begins his review with very early speech development, including babbling.* Babbled sounds are typically the same sounds found in early words, and babbling usually overlaps with the first real word productions [3]. Relevant work not discussed by Hewlett includes research from Boysson-Bardies and colleagues showing that babbling sounds are language dependent and that even sounds common in babbling around the world often have language-specific phonetic characteristics [4, 5].

When word production begins in earnest, Hewlett argues that certain aspects of early speech are consistent. First, early ‘proto-words’ [6] are highly variable in their form. Thus, although the child’s production goal might be consistent—for example, they are always referring to ‘milk’—the form is entirely inconsistent. Second, early words are generally single words or unanalyzed phrases (the parts of the phrase don’t recombine).

Hewlett argues that a separate stage can be identified around 1;6 (years; months), which roughly corresponds to what is often called the ‘word spurt’. Hewlett further elaborates on phonological systematicity during early word production. Young children apply systematic patterns to their speech. These patterns might include consonant cluster reduction (‘snow’ is pronounced [no]), application of a child-language-specific rewrite rule (/r/ → [w] word-initially and word-medially), or application of a prosodic template, such as a [CVjVC] template [7]. Hewlett writes, “The important implication of this is that the child’s pronunciation patterns exhibit regularities which yield to a systematic description within a phonological framework.” (p. 19) Thus, the enterprise of child phonology has been either to 1) describe the child’s phonological inventory, including contrasts and phonotactic restrictions, or 2) write rules that describe how children get from the adult form, which children are presumed to know based on their perceptual abilities, to their own productions. I will not go into great detail about these proposals, but Hewlett reviews well-known rules such as /r/ → [w]. Finally, although Hewlett discusses the issue later in the paper, this stage of phonological development includes many examples of ‘lexical avoidance’, or cases in which children avoid words with particular sounds [8].

At this point, Hewlett reviews models of phonological development, including proposals by Jakobson [9], Stampe [10], and Menn ([2]; the dual-lexicon model, also described in [1], which I reviewed in a previous posting). He then goes on to describe children’s perceptual abilities, which are generally agreed to be quite good. And, of course, the explosion of the infant literature starting in the early 1990s confirms that infants are very good at learning linguistic/phonological patterns before they begin to speak.

As a sort of contrasting section to ‘phonological development’ as described above, Hewlett reviews ‘phonetic development’, in which he focuses on the measurement of speech production. Several findings are noteworthy. First, children’s speech is known to be slower and more variable, with longer durations for linguistic targets. Regarding variability, recent work by my current mentor Lisa Goffman, and her collaborations with her mentor Anne Smith, have greatly added to our understanding of speech motor variability in children. Some examples: [11] showed that oral-motor stability is below adult levels even at 14 years of age. [12] showed that, contrary to what one might expect from a frequency-based explanation, native English-speaking children and adults produce iambs with more stability compared to trochees.

Continuing with Hewlett’s discussion of phonetic development, children’s formants tend to be more variable than adults’ formants [13]. Hewlett discusses the issue of whether children show more or less coarticulation than adults. A number of researchers, Susan Nittrouer being one example [14], have claimed that children actually show greater amounts of coarticulation. The implication is that children have less segmentalized speech, and therefore their early speech consists of unanalyzed whole words. This claim has been hotly debated (or was hotly debated 20 years ago), but it appears that coarticulation is often just different in children [15], without there being either more or less coarticulation in child speech.

Hewlett also discusses the issue of ‘covert contrasts’ or ‘incomplete neutralization’—cases where children appear to be producing two sounds the same but are actually producing them distinctly. For example, both /r/ and /w/ might be realized as something like a [w], but in fact, the productions are distinct, and children can reliably identify which word they intended from their own productions [16]. Elsewhere, I have argued that this is a systemic problem with analyses of child phonology. Because so much of the literature on ‘phonological processes’ in child speech is based on transcription data, it is unclear whether these cases reflect phonological processes or covert contrasts (in which case, ‘phonological’ must mean something entirely different than what it is usually taken to mean).

Hewlett concludes his review of phonetic development with three findings. First, sounds that appear in babbling may disappear from a child’s sound inventory after the onset of word production. Second, although adults are very good at compensating for a bite block and hitting acoustic targets, children may be less good at this [17]. Third, Hewlett notes that children seem readily able to acquire a foreign accent as well as a foreign language (although some more recent work [18] suggests that accent acquisition generally falls on a continuum based on age of acquisition). Regarding the last two findings, Hewlett concludes that children must be better than adults at learning to produce new sounds.


[1] Kiparsky, P. & Menn, L. (1977). On the acquisition of phonology. In Language Learning and Thought, J. Macnamara (Ed.). New York: Academic Press.

[2] Menn, L. (1983). Development of articulatory, phonetic, and phonological capabilities In Language Production, Vol II, B Butterworth (Ed.). London: Academic Press

[3] Locke, J. L. (1983). Phonological Acquisition and Change. New York: Academic Press.

[4] Boysson-Bardies, B. d., Halle, P., Sagart, L., & Durand, C. (1989). A crosslinguistic investigation of vowel formants in babbling. Journal of Child Language, 16(1), 1-17.

[5] Boysson-Bardies, B. d., & Vihman, M. M. (1991). Adaptation to language: Evidence from babbling and first words in four languages. Language, 67(2), 297-319.

[6] Menyuk P. & Menn, L. (1979). Early strategies for the perception and production of words and sounds. In Language Acquisition, P. Fletcher, M. Garman (Eds.). Cambridge, UK: Cambridge University Press. pp. 49-70.

[7] Priestly, T. M. S. (1977). One idiosyncratic strategy in the acquisition of phonology. Journal of Child Language, 4, 45-66.

[8] Schwartz, R. G., & Leonard, L. B. (1982). Do children pick and choose? An examination of phonological selection and avoidance in early lexical acquisition. Journal of Child Language, 9, 319-336.

[9] Jakobson, R. (1968). Child Language, Aphasia and Phonological Universals. The Hague: Mouton.

[10] Stampe, D. (1969). The acquisition of phonetic representation. Papers from the 5th Regional Meeting of the Chicago Linguistic Society, 443-454.

[11] Smith, A. & Zelaznik, H. (2004) Development of functional synergies for speech motor coordination in childhood and adolescence. Developmental Psychobiology, 45, 22-33.

[12] Goffman, L. (1999). Prosodic influences on speech production in children with specific language impairment and speech deficits: Kinematic, transcription, and acoustic evidence. Journal of Speech, Language, and Hearing Research, 42, 1499-1517.

[13] Eguchi, S. & Hirsh, I. J. (1969). Development of speech sounds in children. Acta Oto-Laryngologica, Supplementum 257.

[14] Nittrouer, S., Studdert-Kennedy, M., & McGowan, R. S. (1989). The emergence of phonetic segments: Evidence from the spectral structure of fricative-vowel syllables spoken by children and adults. Journal of Speech and Hearing Research, 32, 120-132.

[15] Goodell, E. W. & Studdert-Kennedy, M. (1993). Acoustic evidence for the development of gestural coordination in the speech of 2-year-olds: A longitudinal study. Journal of Speech and Hearing Research, 36, 707-727.

[16] Kornfeld, J. R., & Goehl. (1974). A new twist to an old observation: Kids know more than they say. Chicago, IL: Chicago Linguistic Society.

[17] Oller, D. K. & MacNeilage, P. F. (1983). Development of speech production: Perspectives from natural and perturbed speech. In The Production of Speech, P. F. MacNeilage (Ed.). New York: Springer Verlag, pp. 91-108.

[18] Flege, J. E., Munro, M. J. & MacKay, I. (1995). Factors affecting degree of perceived foreign accent in a second language, Journal of the Acoustical Society of America, 97, 3125-3134.

Kiparsky and Menn (1977).

Kiparsky, Paul, and Menn, Lise. (1977). On the acquisition of phonology. In John Macnamara (Ed.), Perspectives in Neurolinguistics and Psycholinguistics. New York, NY: Academic Press. pp. 47-78.

Kiparsky and Menn (hereafter KM) present a theoretical argument for children as active discoverers of grammar, building structural representations based on evidence from the ambient language. In the process, KM propose a dual lexicon. The split includes one path between phonetic and phonological forms (i.e., some phonological processes map acoustic forms to the underlying phonological representations that link related words) and another path between incoming phonetic forms and the phonetic output that children create.

The chapter begins with “The Learning of the Phonetic Repertoire”, a discussion of the two major proposals for child phonology that existed in 1977. The first is Roman Jakobson’s, who proposed that phonology develops according to a universal system of contrasts, with contrasts learned by children in order from most to least universal. For example, children should contrast /d/ and /g/ before they contrast /d/ and /b/ (pp. 48-49). The problem with Jakobson’s approach is that it says nothing about the order in which the sounds themselves will be acquired. Furthermore, the absence of a contrast may indicate that children are intentionally, or selectively, avoiding a particular sound, but Jakobson says nothing about why such sound avoidance should happen. Therefore, KM consider Jakobson’s theory difficult to falsify.

Stampe’s theory is specific about when sounds will be acquired, and makes a distinction between phonological rules and phonological processes. Rules are the grammatical means by which speakers convert phonological word forms into phonetic ones, such as the flapping or homorganic nasal cluster rules. Processes, on the other hand, are innate rule-like conversions that explain the kinds of errors that children make. For example, children produce word-final voiced stops without voicing (/d/ –> [t]/__#) because of a devoicing process. Speakers of languages like English, which do voice final stops, must overcome these processes.
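Stampe’s processes can be caricatured as innate default mappings that learning must suppress. The sketch below uses the chapter’s final-devoicing example, though the suppression mechanism is my own simplification:

```python
# Toy sketch of Stampe-style processes as innate defaults that the
# learner must suppress. The final-devoicing example (/d/ -> [t]
# word-finally) is from the chapter; the mechanism is invented.
DEVOICE = {"b": "p", "d": "t", "g": "k"}

def apply_processes(form: str, suppressed: set) -> str:
    """Apply innate processes, minus those the learner has overcome."""
    if "final_devoicing" not in suppressed and form[-1] in DEVOICE:
        return form[:-1] + DEVOICE[form[-1]]
    return form

# An early English-learning child devoices final stops...
assert apply_processes("bæd", suppressed=set()) == "bæt"
# ...and later suppresses the process, as English requires:
assert apply_processes("bæd", suppressed={"final_devoicing"}) == "bæd"
```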

KM describe several problems with this view. First, it appears that Stampe’s theory requires children to learn phonological rules in the same order as they would unlearn phonological processes. This is an empirical but unstudied question.* Second, KM find no reason to assume that adult speakers maintain rules on the one hand and processes on the other (i.e., German speakers do not appear to be stuck in a word-final devoicing process, and regardless, they must still learn the allomorphy that relates allomorphs with voiced and voiceless final stops).

KM also criticize both Jakobson and Stampe as being overly deterministic and not allowing for the kind of variability inherent to child language learners. As evidence, they point to the fact that children break up consonant clusters in a variety of ways, and to the fact that children often produce phonological idioms, words that are produced more accurately than the phonological processes apparently at work in their language would predict. In sum, KM state that we need a new model of phonological development. However, they do not focus much on the development of sounds or sound contrasts. Instead, they focus on the fact that children’s production abilities lag behind their perceptual abilities.

KM propose the dual lexicon to account for a distinction between cognitive grammar learning and articulatory sound implementation. Children may learn the cognitive grammar at whatever pace (KM describe it as going on over many years, although I think that today’s infant literature would generally contradict that**), but the development of a productive sound repertoire is separate from the cognitive grammar. Thus, we have two lexicons.

The second part of the chapter, “The Learning of Morphophonemics”, is somewhat orthogonal to the dual lexicon proposal, so I do not discuss it.

Here, I identify what I think are outstanding issues in the paper, some of which will be addressed in future posts. First, is the dual lexicon meant to be only a description of the grammar, or is it also a processing model? In other words, when formulating a message, does a child start with the phonological grammar, which is translated into a phonetic form, which is then translated into the child’s pronunciation? KM suggest that, in fact, there may be yet another step, in which physical limitations act on the message, as would be the case for a lisp. Second, KM propose that children do not have allomorphy. Is this really true? It seems to me that children could be learning meanings and linking related word forms at a fairly early age. However, I’m not familiar with the literature on this topic. Third, the logical dependency between the dual lexicon and KM’s view of the child as “language discoverer”*** is not clear to me.

Primacy of the base

This is a follow-up to a quick comment I left in the Reading Group thread. I am not entirely up on the history of the field, so maybe these points are trivial. If so, excuse me.

I found the discussion of rule ordering in section 5 to be interesting. There seem to be a couple of issues that popped up with regard to rule ordering in the 1940s. One is historicity–how seriously are we going to take the time/motion metaphor? Another is the issue of primacy–if a, b, and c are derivable from one source, which one, if any, is primary? And a third is Harris’ claim that extrinsic rule ordering masks natural relationships between classes of derivations.

The first and last issues seem especially interesting after the Mr. Verb Kerfluffle. One of the things that was suggested there was that if you have rules, rule ordering is natural. Goldsmith shows that for some phonologists in the 1940s, rule ordering wasn’t a natural step at all. And it seems to me that a lot of phonology after SPE was concerned with addressing that last bit–making the rule ordering natural (there might be something about the Elsewhere Condition here, but I don’t feel qualified to talk about it).

What put me in mind of the richness of the base (RoB) was the middle part about primacy. RoB is the OT claim that the set of possible inputs to the grammar is universal, thus getting rid of the issue of primacy. In the hypothetical case of a, b, and c, the grammar has to make sure that whatever the input /a/, /b/, /c/, etc., nothing maps to b in an environment where b is disallowed. Although RoB doesn’t rule out the use of archiphonemes (or underspecification), it does make them seem unnecessary, since you can construct a grammar that will always map a and b to c in the appropriate context, for example.
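The RoB logic above can be sketched in a few lines of code. This is my own toy illustration, not anything from the post or the OT literature: the hypothetical ban is “no b word-finally”, and the point is that the grammar repairs *any* input that violates it, so no particular underlying form (and no archiphoneme) has to be singled out as primary.

```python
# Toy sketch of "richness of the base": the grammar must map ANY input
# to a well-formed output. The ban and the repair below are hypothetical.

BANNED_FINAL = "b"
REPAIR = {"b": "p"}  # hypothetical neutralization: final /b/ surfaces as [p]

def surface(underlying: str) -> str:
    """Map any underlying form to a licit surface form."""
    if underlying and underlying[-1] == BANNED_FINAL:
        return underlying[:-1] + REPAIR[BANNED_FINAL]
    return underlying

# Distinct inputs /tab/ and /tap/ neutralize to the same surface form,
# so it doesn't matter which one we posit as "primary":
print(surface("tab"))  # tap
print(surface("tap"))  # tap
```

Since every possible input comes out well formed, the question of which underlying form is primary simply doesn’t arise.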

Automatic alternations and conspiracies

Last week I suggested some of us read and discuss John Goldsmith‘s recent paper in Phonology 25.1 (“Generative phonology in the late 1940s“, doi:10.1017/S0952675708001395). I’m not really sure what’s the best way to go about this, so I’ll just suggest the following: anyone interested can pick a point of discussion and write a post about it, and anyone interested in responding to that point can comment specifically on that post.

OK, now that I’ve written that out, that just sounds like plain old blogging. I guess what I’m trying to suggest is that we don’t limit the discussion to just one post and its associated comments: if the point of discussion that you want to pick is sufficiently different from what’s already been posted, then I encourage you to start a new post rather than to comment on the old one. We can maybe tie all the threads together later.

OK, that still just sounds like plain old blogging. Forget I ever said anything. Let’s just move on to my (first?) suggested point of discussion, focusing on §2 of the paper (pp. 40-42 of the published version, pp. 4-6 of the preprint).

Continue reading

phonoloblog reading group

In my last post I mentioned wanting to read the following paper just published in Phonology:

Generative phonology in the late 1940s (pp. 37-59)
John A. Goldsmith

I’ve now read it, and I’d like to suggest that the two or three people who might be reading these words read it, too, so we can have a little online discussion about it. If you don’t have access to the journal, you can find a pre-print here (a quick skim reveals it to be about 95% identical in content to the published version). You might also want to heed the encouragement that Goldsmith offers in the next-to-last paragraph:

Needless to say, I encourage the reader to read Wells’ paper for himself, and to judge whether it is not a cautious and careful exegesis of the benefits that can be reaped from derivational analysis, aimed at an audience that was leery of confusing synchronic and diachronic analysis. As a phonologist working at the beginning of the 21st century, I would argue that we should not characterise the work of linguists such as Wells, Harris and Hockett as the last gasp of a dying structuralism, but as a body of scholarship out of which generative phonology was a natural development.
Surely this conclusion is reasonable and, ultimately, not at all surprising. My admiration for generative phonology is in no way diminished by the realisation that its key ideas were being considered and developed by the mid 1940s. It is, after all, the ideas that matter to us now.

(And if that JSTOR link doesn’t work for ya, try this.)

OK, we’ll reconvene sometime next week. I’ll plan to start, but if anyone feels like chiming in before I do, please feel free.

UMass paper archive (and lingBuzz, too)

This post on Kai von Fintel’s Semantics etc. blog reminds me that there’s a little-publicized archive of UMass linguistics papers, searchable and browsable by subject area. Here’s the phonology area, and here’s the phonetics area; there are quite a few other areas, almost all of them populated by several papers.

Kai’s link to Kratzer & Selkirk on Spellout does not go to this archive, but rather to lingBuzz, which I first mentioned on phonoloblog just over a year ago. Continue reading

Remarks and replies

In case you haven’t been following this virtual thread:

  1. Bill Idsardi‘s six-page paper “A simple proof that Optimality Theory is computationally intractable” appeared in the latest issue of LI (vol. 37, 271-275).
  2. András Kornai has a one-page reply (“Is OT NP-hard?”) on ROA.
  3. Idsardi has a three-page rejoinder (“Misplaced optimism”), also on ROA.
  4. Update: And another by Kornai (“Guarded optimism”).

This is exactly the sort of thing that should be happening on ROA (and, I would hope, also here on phonoloblog).

Small paper, big names

The New York Observer, a small paper in New York City, has an article today on the “City Girl Squawk“. The particular dialect features they’re discussing don’t come across very well in the article, but at least they played clips when getting quotes from the prominent linguists they interviewed: Bert Vaux, John Singler, Bill Labov, and Walt Wolfram.

At NYU, we sometimes get requests from the media to talk about different aspects of linguistics (e.g. why some names, like “Bennifer” or “Brangelina”, make good blends). Because these requests have come from outlets like New York Newsday or even Fox News, I think we’re sometimes wary about being portrayed negatively or in a “gee whiz, look at that stuff they study!” kind of way. But this article does a good job of using experts to shed light on a pop culture phenomenon that intersects with the academic world.

continuing phonetics-phonology discussion

I’m adding this post in light of Eric’s plea regarding comments and posts – many comments on recent posts in phonoloblog have been quite involved, enough for Eric to suggest that contributors make new posts instead of long comments. Marc and Travis have taken this advice, but (so far) I have not – I added a long comment to Marc’s post regarding Port & Leary’s Language article, only because it directly follows up on comments from both Port and Leary.

To make up for it, I’ve made this post just to alert readers that comment threads are continuing in some of these recent phonoloblog posts.

Whistled languages: phonology and Unesco

The most recent issue of Phonology (22.2) contains an article by Annie Rialland about the phonetics and phonology of a number of so-called ‘whistled languages’ (Rialland’s website has a prefinal version as a pdf).

In some sense, whistled languages use the phonology of a spoken language, such as Spanish in the case of the most well-known instance of this type of language, Silbo Gomero from one of the Canary Islands, La Gomera. Yet they implement this phonology in a radically different way — by whistling rather than moving organs in the vocal tract. Since this special type of articulatory phonetics is more limited than the usual one, this in turn influences the phonology somewhat. All of this can be found in Rialland’s fascinating article.

The topic of whistled languages is also very suitable for explaining some basic principles of the phonetics-phonology interface. When I needed to write something for a Dutch popular science website for adolescents, I therefore took Rialland’s article as my basis. Spanish has a five vowel system, and Rialland shows that these vowels can be distinguished on the basis of F2 alone; it is the F2 which is whistled in Silbo Gomero. This fact can be used as a handle to explain what formants are, and what a vowel system is; here is the article I wrote (in Dutch, obviously).
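To see how a single whistled frequency could carry a five-vowel contrast, here is a rough sketch of my own (not from Rialland’s paper): each Spanish vowel occupies a distinct F2 band, so a listener can in principle recover the vowel from whistled pitch alone. The Hz ranges below are ballpark textbook values, not measurements from the article.

```python
# Hypothetical, approximate F2 bands (Hz) for the five Spanish vowels.
# Real values vary by speaker; these are only illustrative.
VOWEL_F2_RANGES = [
    ("u", 600, 900),
    ("o", 900, 1200),
    ("a", 1200, 1600),
    ("e", 1600, 2100),
    ("i", 2100, 2800),
]

def classify_vowel(f2_hz: float) -> str:
    """Guess which Spanish vowel a whistled frequency encodes."""
    for vowel, lo, hi in VOWEL_F2_RANGES:
        if lo <= f2_hz < hi:
            return vowel
    return "?"  # outside the assumed vowel space

print(classify_vowel(2400))  # i
print(classify_vowel(1300))  # a
```

Because the five bands don’t overlap, one dimension of pitch is enough to keep the whole vowel system distinct — which is exactly what makes F2 a good handle for explaining formants and vowel systems.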

I notified Rialland of the fact that I published this piece, and here is what she answered:

This paper will also serve an unexpected function for you: the Government of the Canary Islands is currently trying to get a recognition of Silbo (and also other whistled languages) as a patrimony of humanity by UNESCO. All of the papers in scientific journals (of any age) will help.

Explaining phonology to young people can have unexpected political consequences.

More on ordering and such

Re-reading just the opening few pages of Odden’s “Ordering” paper (bits of which were originally discussed here) reveals at least two more mischaracterizations of both Optimality Theory and rule-ordering theory. These mischaracterizations would just be funny if it weren’t for the fact that they are being perpetrated by a phonologist who has arguably made a career out of poking holes in theories (or socks). The fact that I have to waste my time (and yours, if you continue reading this) poking holes in the hole-poking is just plain sad.

Continue reading