A leap of faith?

Back in October 2005, there was a discussion about the anti-OT bias of some derivational phonologists. In his book manuscript, Andrea Calabrese had alleged that “magical thinking” is especially common among those OT practitioners who would “attempt to provide a synchronic explanation to all aspects of the phonology of a language.” It was pointed out in the discussion that the magical thinking actually dates back to SPE and is still present to some degree in Calabrese’s own work.

I guess it comes as no surprise that it is possible to find a similar bias against OT among phoneticians. Consider the following quote from Joaquín Romero’s recent review of Laboratory Approaches to Spanish Phonology (Mouton de Gruyter, 2004), edited by Timothy Face:

As is unfortunately rather common in some of the work currently being done on laboratory phonology, it seems that sometimes the experimental data is viewed simply as a complement to the theoretical phonological analysis. […] In other instances, as in those papers that present OT analyses of the data, the distance between the experimental results and the assumptions made in the theoretical analysis represents such a leap of faith and such a simplification of the physical reality that one wonders why the authors bothered to gather experimental data at all.

The same sentiment is echoed in Geoff Morrison’s review of Phonetically-Based Phonology (Cambridge University Press, 2004), edited by Bruce Hayes, Robert Kirchner, and Donca Steriade:

Several papers present detailed arguments with respect to how the phonology is based on articulatory or perceptual factors, then follow this up with an implementation couched in OT mechanics. I felt that the latter was generally unnecessary. The important arguments had been made and the OT tableaux added nothing to my understanding. They seemed to be an abstraction which actually took me further away from the phonetically-based reality that the authors espoused.

I suspect that what we see here is not a specific bias against a framework in which input-output mappings are governed by interacting constraints. Rather, it’s a more general bias against all formal approaches to phonology [CORRECTION: possibly a general predisposition against phonological formalism–see ACW’s comment below], even if the analysis happens to incorporate more phonetic detail than is usually included. Just for the sake of argument, an absurdly extremist extension of this predisposition (which admittedly few people are likely to hold) would seem to entail that phonology is not a problem-solving system, in the sense discussed by Donca Steriade at the end of her 2001 paper The Phonology of Perceptibility Effects: the P-map and its consequences for constraint organization:

At the most basic level one can dispute the premise this account shares with most modern phonology, namely that phonology is a problem-solving system, or – as Goldsmith (1993) puts it – “an intelligent system”. If the phonotactic [constraint against word-final voiced obstruents–TGB] is not viewed as a problem to be solved, or as a standard of well-formedness that is independent of the lexicon’s contents, but rather as a static generalization over the words that happen to be attested in one’s language, then no Too-Many-Solutions problem arises: learners, on this view, do not seek to find a solution to [the phonotactic constraint–TGB] but to learn whatever patterns happen to be instantiated by their lexicon.

If phonology is not a problem-solving system, then I suppose that one could put everything in the lexicon and call it a day. I’m sure many phonoloblog readers would agree that this move would be a simplification of the phonological reality which would add nothing to our understanding of how phonological systems work.

Reference

Goldsmith, John. 1993. The Last Phonological Rule. University of Chicago Press.

9 thoughts on “A leap of faith?”

  1. ACW

    I don’t understand how you make the leap to “a more general bias against all formal approaches to phonology”. I agree that, were that the opposing viewpoint, it would be easy to demolish. But I think it’s a simplification if not a complete straw-man. Surely there is defensible middle ground between believing that all observed phonological regularities are the result of active synchronic phonology, and believing that none are.

  2. Travis Bradley

    Thanks, ACW (corrections made above). It is indeed oversimplifying to speak in such absolute terms, and also inaccurate (and unintended) to have implied that the reviewers actually hold the extreme position as opposed to some more reasonable middle-ground. Romero does see “the excessive dependence on formalism” as a pending issue to be resolved in future laboratory phonology work, but of course this is not the same thing as calling for a ban on phonological formalism altogether.

    The main point I was trying to make was in response to Romero’s claim that the OT analyses in question require such a “leap of faith” to go from experimental results to theoretical formalism, and to Morrison’s claim that phonetically-based OT analyses add nothing to our understanding of the role of articulatory and perceptual factors in phonology. The question is whether these authors find fault with something inherent in a specifically OT implementation of the phonetic factors that drive sound patterns. Or is it the case that any formal phonological implementation–OT or otherwise–would require the same leap of faith or be seen as an additional and unnecessary abstraction?

  3. an agnostic

    I’m agnostic on the derivational vs OT debate, but both are formal, a great thing. Re: “magical” thinking in phonology — one instance that’s cleared up is whether there are intermediate forms (IFs) / derivations — there are. As Potts & Pullum show (2002, Model theory…), to capture opaque effects, the GEN function must return not the infinite set of forms which vary from the base form (BF) but rather the infinite set of derivations based on the BF. IFs exist to the extent that an IF is defined as a form related to two distinct other forms by correspondences: the “sympathy” form in McCarthy’s model, or the “turbid” form which is projected but not yet pronounced in Goldrick’s model, each related to BF & surface form (SF, i.e. the form related to only one other form, & which is not the BF).

    The EVAL function therefore evaluates candidate derivations containing multiple related forms (at least ordered pairs), not singleton forms. Because each infinite set of IFs & SFs is countable, their cross-product is as well; so it’s not as bad as it would seem, yet novel constraints must be added to assess IF-SF correspondences. Whether the transderivational benefits outweigh the costs remains to be seen, but wishing away IFs & derivations is more “magical” thinking in linguistics, something all sides practice sometimes (e.g., BF of “run” = /rIn/ in SPE). We note, though, that we’ve traded in a single derivation for infinitely many, and a finite number of IFs for infinitely many.
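    The point that EVAL compares whole derivations (ordered tuples of related forms) rather than singleton output forms can be sketched as a toy simulation. Everything below, the forms, the constraints, and the ranking, is invented purely for illustration and is not drawn from any published analysis:

```python
# Toy sketch: EVAL over candidate *derivations*, here (BF, IF, SF)
# triples, rather than over singleton surface forms. Forms and
# constraints are invented; this models no particular language.

# Each candidate derivation is a (base, intermediate, surface) triple.
candidates = [
    ("tab", "tab", "tap"),  # devoice only at the surface
    ("tab", "tap", "tap"),  # devoice "early", then stay faithful
    ("tab", "tab", "tab"),  # fully faithful throughout
]

# Constraints assign violation counts to a whole derivation.
def no_final_voiced_obstruent(cand):
    # Markedness: penalize a voiced obstruent at the end of SF.
    _, _, sf = cand
    return 1 if sf.endswith("b") else 0

def ident_if_sf(cand):
    # Faithfulness between intermediate form and surface form.
    _, if_, sf = cand
    return sum(a != b for a, b in zip(if_, sf))

def ident_bf_if(cand):
    # Faithfulness between base form and intermediate form.
    bf, if_, _ = cand
    return sum(a != b for a, b in zip(bf, if_))

# A strict ranking: markedness >> IF-SF faith >> BF-IF faith.
ranking = [no_final_voiced_obstruent, ident_if_sf, ident_bf_if]

def eval_candidates(cands, ranking):
    # Lexicographic comparison of violation-count tuples implements
    # strict domination: one violation of a higher-ranked constraint
    # outweighs any number of violations of lower-ranked ones.
    return min(cands, key=lambda c: tuple(con(c) for con in ranking))

winner = eval_candidates(candidates, ranking)
```

    Note that the constraint on IF-SF correspondence only makes sense once candidates contain an intermediate form, which is exactly the point: evaluating ordered tuples requires novel correspondence constraints that singleton-form evaluation cannot state.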

    It’s equally magical to label opaque effects as marginal conceptually just b/c they are marginal numerically. The opaque : non-opaque ratio tells us little about how important the former are in deciding between rival theories. To analogize from biology, the vestigial : functional ratio for organs is extremely low, yet the mere existence of a few glaring vestigial organs is sufficient to posit a sequence of forms leading up to the present form. For example, first whales were land-roaming tetrapods, then they moved into the water, and now their back leg bones are vestigial. Only creationists magically assert these forms truly are functional (along w/ the human appendix, etc.), when they clearly aren’t motivated by current selection pressures (~ surface constraints); their existence can be illuminated only by examining their function during an intermediate stage.

  4. Adam Ussishkin and Natasha Warner

    One way to summarize Travis Bradley’s posting would be as follows:

    1. Some derivational phonologists are anti-OT.
    2. Some phoneticians are anti-OT too (because they feel OT analyses add little to the prose discussion of experimental results).
    3. These phoneticians are probably against all formalisms, not just OT.
    4. This is because they don’t view phonology as a “problem-solving system.”
    5. If phonology is not a problem-solving system, then everything should just be listed in the lexicon, with no generalizations formed.
    6. That would be bad.

    There exist a fair number of papers where people have done an interesting experiment, discussed the interesting implications of the finding, and then added a theoretical discussion involving constraints and tableaux in order to make it a phonology paper. We suspect that this sometimes occurs purely in the interests of the job market. Such an OT analysis often doesn’t add much to the questions one is actually investigating, except to demonstrate that whatever the finding is, it could also be depicted/modeled in the OT formalism.

    There also exist papers of a different sort, where the writer has a formal phonological analysis of some formal phonological question. They then add a small amount of experimental data or cite someone else’s experimental data (possibly overgeneralizing from it), in order to have the formal theory backed up by phonetic experimental evidence. This is formal phonology with an overlay of phonetic data, and it may also occur in the interests of the job market sometimes.

    Step 2 above (“some phoneticians don’t like OT”) probably relates to both of these types of work. If these same types of work were done using some other formalism instead of OT, we suspect step 3 above (“phoneticians who don’t like OT don’t like other formalisms either”) would hold.

    This is likely separate from the issue in step 4 (“is phonology a problem solving system?”). Researchers might be complaining about the two types of papers above, but not mean that there couldn’t be a good paper with a formalism used in it. For us, it’s a matter of what kind of question a researcher is trying to answer. We delineate three types of questions (A-C) one might ask below.

    A. If you’re trying to answer a question about how various forms of morphemes (surface or underlying) are related, such as “how are reduplicated forms in language X related to their bases?”, it seems like some kind of formalism is likely to be helpful. If comparisons across many languages and forms lead to some generalizations, that may be a theory, and maybe a particular formalism is especially good at modeling those generalizations. We’re not convinced that any one formalism has ever turned out to be especially good at modeling all phonological patterns. Many have turned out to be especially well-suited to a few patterns (e.g. autosegments for tones).

    B. If you’re trying to answer questions like “which phonological patterns appear to be caused by facts about speech perception?” (referring to patterns either in the lexicon or in the mapping between UR and surface or in historical sound changes), then experimental data on speech perception is likely to be necessary and a formalism may not add much. Another question of this sort might be “how are listeners affected by phonological patterns during spoken word recognition?” (a psycholinguistic question).

    C. Finally, if your question is “can and should generalizations based in phonetics be represented in a phonological formalism?” then obviously you’re going to need both experimental data and a formalism, to see how it works, but it may not add much to answering any other question. (Of course, an individual paper might address this type of question as well as one of the other types above.)

    So it seems that if a person is mostly interested in Type A questions, they need a formalism, but if they’re mostly interested in Type B questions, they may not. Type B questions are not about the “problem-solving system” aspect of phonology, but they’re interesting questions about phonology anyway, we think.

    Returning to our analysis of the Bradley posting, we think it is important to note that step 3 above (“some phoneticians don’t like any formalism”) is not necessarily motivated by step 4 above (“such phoneticians don’t think phonology is a problem-solving system”). Step 4 does not necessarily hold, and it also does not necessarily lead to step 5 (“everything would have to be listed in the lexicon”). What we argue above is that formalism is fine for the problem-solving aspect of phonology (relating various forms of morphemes and/or UR), but often not necessary for other types of phonologically related questions.

    As for step 5 (“everything would be listed in the lexicon”), the jump from the previous step to this conclusion leaves out many possibilities. Detailed information (information traditionally considered as derivable and therefore not listed) could be listed in the lexicon, and yet speakers’ and listeners’ cognitive systems could still form generalizations over that information. We believe that there is much to learn through psycholinguistic and other methods about how humans organize information and form generalizations. We do not think that this is a question that can be decided quickly based on one’s preference for or against the use of formalisms to represent phonological patterns.

  5. Eric Bakovic

    Just a quick comment on the very last paragraph of Adam & Natasha’s comment above. I took Travis to be using the phrase “listed in the lexicon” in a very specific sense that includes not “form[ing] any generalizations over that information”. Otherwise, “listed in the lexicon” would just mean “specifying information you don’t have to”, which may have been an interesting issue when memory/storage was thought to be at a premium, but is no longer something that cognitive scientists seem to worry about, at least not in the same way.

    Although we formally-inclined types still tend to present our analyses based on underlying representations that contain little or no predictable information, I think (though I may be wrong) that there are few of us who would stubbornly subscribe to the notion that such information can’t be specified. There are certainly no good formal reasons to subscribe to such a notion (your rules/constraints should work no matter what you throw at them, the extreme of this view being OT’s Richness of the Base hypothesis), and there seems to me to be much “external evidence” (much of it arrived at experimentally) that such a notion is probably wrong. I can’t speak for Travis, of course, but I think there’s at least a more charitable interpretation of what he meant in his post in this case.

  6. Greg Kochanski

    A basic problem is that no one has any realistic proposal as to how the brain might implement OT. (Or, I might add, how the brain might implement other formal approaches to phonology.)

    There are really two ways of looking at OT:
    1) OT is a theory of how the brain actually works. If so, one needs to think about connections to the neurobiology. Such connections seem dubious.

    2) OT isn’t a theory of how the brain actually works. If so, it’s useful if it simplifies things, or expresses things compactly or makes useful generalizations. Theorists haven’t tended to address these questions. For instance, is it more compact to represent all English stress assignments with a set of OT rules, or is it more compact to simply store that part of the dictionary?

    So, yes, some of us phoneticians don’t see a use for OT. I don’t consider that a bias, so much as a healthy skepticism. Until OT proves its value, why does it deserve more than minimal support?

    Mind you, by “value”, I mean value to someone who is not a phonologist.

  7. Eric Bakovic

    Greg Kochanski artificially narrows the range of options by writing that “[t]here are really two ways of looking at OT”:

    1) OT is a theory of how the brain actually works. If so, one needs to think about connections to the neurobiology. Such connections seem dubious.
    2) OT isn’t a theory of how the brain actually works. If so, it’s useful if it simplifies things, or expresses things compactly or makes useful generalizations. Theorists haven’t tended to address these questions. […]

    It’s not clear to me why Kochanski thinks that figuring out “how the brain actually works” and “mak[ing] useful generalizations” are mutually exclusive theoretical endeavors.

    Re: 1) — The way I understand it, Paul Smolensky and Géraldine Legendre are people who have indeed thought deeply about these connections for quite a long time, and have recently published two tomes on the topic. Is Kochanski responding to this work, or dismissing it out of hand?

    Re: 2) — This depends on what Kochanski means by “useful” (which I imagine is closely related to what he means by “value”). The OT literature that I like to read and attempt to contribute to is all about making generalizations that are useful to (some) phonological theorists (including, e.g., Smolensky).
