Possible and probable languages

This new book (just announced on LINGUIST List) is not about phonology (at least I don’t think it is, given who wrote it and what I can tell from the blurb). But I think it’s of particular relevance to (present-day) phonologists.

(I’m hoping that my semi-random thoughts on this below will generate some discussion here, especially if someone (else) decides to read the book.)

The book’s blurb says:

[Newmeyer] considers why some language types are impossible and why some grammatical features are more common than others. The task of trying to explain typological variation among languages has been mainly undertaken by functionally-oriented linguists. Generative grammarians entering the field of typology in the 1980s put forward the idea that cross-linguistic differences could be explained by linguistic parameters within Universal Grammar, whose operation might vary from language to language. Unfortunately, this way of looking at variation turned out to be much less successful than had been hoped for. Professor Newmeyer’s alternative to parameters combines leading ideas from functionalist and formalist approaches which in the past have been considered incompatible.

I’m especially curious about what is meant by “typological variation” (in the second sentence) and “cross-linguistic differences” (in the third sentence); specifically, I wonder whether these are meant to refer to something about the relative commonality of grammatical features (in the first sentence). Based on the rest of the blurb, my suspicion is that Newmeyer is talking more about grammatical commonalities (or ‘linguistic tendencies’) than about “typological variation” or “cross-linguistic differences”. To me, “cross-linguistic differences” simply refers to the fact that different languages do different things, and “typological variation” refers to the fact that one can group languages according to properties they share and talk about differences across those groups (language types). Although the study of differences among languages and language types can lead to discoveries of, or insights into, linguistic tendencies, I don’t see the latter as being the same thing at all. To me, linguistic tendencies are things that seem to recur in languages to a degree that is in some significant sense higher than chance would predict, but are not absolute.

In his review of the book (found below the blurb on the book’s OUP page), David Adger writes:

[Newmeyer] argues, with characteristic clarity and verve, that, although Universal Grammar underlies much of human language, it is irrelevant to explaining typological generalisations. For that, we must look to performance, rather than competence.

Now here’s the term “typological generalisations”, which is a little more ambiguous (to me) than the other terms above, and perhaps encompasses all three of them. I’m very curious now: does anyone who accepts some distinction between competence and performance seriously question whether our theories of competence should deal with cross-linguistic differences? Surely a theory of competence should allow for some difference in the analysis of the internalized grammar of the speaker of one language vs. the internalized grammar of the speaker of another language.

Should our theories of competence deal with typological variation? Depending on what exactly this means, this question is perhaps a little more debatable than the previous one, though this may be true only to the extent that the two questions are distinguishable in the first place. For example, OT is often said to be “inherently typological” because its adherents typically assume a relatively simple theory of how different grammars can differ from each other: only through differences in constraint ranking. Does this make OT a theory of typological variation? I’m not sure, but that’s likely due to the fact that I’m not a typologist.
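A brief aside for the computationally inclined: here’s a minimal sketch of how that “differ only in constraint ranking” assumption yields a predicted typology. This is purely my own toy illustration in Python, with hypothetical constraints (NoCoda, Max, Dep) and made-up candidates for a hypothetical input /pat/; it isn’t drawn from Newmeyer or from any particular OT analysis. The point is just that, once grammars can differ only in ranking, enumerating every total ranking of a fixed constraint set and recording which candidate wins under each gives the full set of grammars the theory predicts (a “factorial typology”).

```python
from itertools import permutations

# Hypothetical constraint set (toy example, not from the post or any particular analysis).
CONSTRAINTS = ["NoCoda", "Max", "Dep"]

# Violation counts for three made-up candidates for a hypothetical input /pat/:
# the faithful form, a deletion candidate, and an epenthesis candidate.
CANDIDATES = {
    "pat":   {"NoCoda": 1, "Max": 0, "Dep": 0},  # keeps the coda consonant
    "pa":    {"NoCoda": 0, "Max": 1, "Dep": 0},  # deletes the coda consonant
    "pa.ti": {"NoCoda": 0, "Max": 0, "Dep": 1},  # epenthesizes a vowel
}

def winner(ranking):
    """Standard OT evaluation: the winner is the candidate whose violation
    vector is lexicographically smallest under the given constraint ranking."""
    return min(CANDIDATES, key=lambda cand: tuple(CANDIDATES[cand][c] for c in ranking))

# Enumerate all 3! = 6 total rankings and group them by the output they select.
typology = {}
for ranking in permutations(CONSTRAINTS):
    typology.setdefault(winner(ranking), []).append(" >> ".join(ranking))

for output, rankings in typology.items():
    print(output, "wins under:", "; ".join(rankings))
```

Run it and the six rankings collapse into just three distinct predicted outputs (pat, pa, and pa.ti), which is the sense in which the constraint set alone delimits the space of possible grammars in this toy setup.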

The real question seems to be whether our theories of competence should deal with linguistic tendencies, a question that Newmeyer appears to be answering in the negative. As a question worth investigating, it seems to me, this is hardly a surprise to (most) present-day phonologists. There has long been a back-and-forth in phonology about the relative merits of formal explanation vs. explanation-from-substance. Lately, though, the debate seems to be not so much about whether there is a substantive basis to (some) phonological phenomena, but rather about where the substance is located: is it “in the grammar” or somewhere “out there”? (The question is vague or subtle enough that I suspect many more of us agree on what we think about it than we realize; we just don’t know it because we talk past each other.)

One area in which this question is of particular interest to me is the idea of “rule naturalness”. For example, the observation (or perception?) that assimilation (e.g., nasal place assimilation, /np/ → [mp]) is a very common phonological process, and that other feature-changing rules are not as common, was one of the major arguments for the abandonment of “linear” (SPE-style) rules in favor of “nonlinear” (autosegmental) rules, where an assimilation is the mere addition of an association line (plus the delinking of the original feature value of the assimilation target, but that’s meant to be automatic in some way). But the idea that more-simply-stated rules are favored in grammars (and are thus more common) is inextricably tied to a grammar-evaluation metric that, in my view, hasn’t been well enough justified, at least not outside of these very types of theory-internal considerations. (I’d even venture the claim that most phonologists don’t think about the evaluation metric much beyond its usefulness as a methodological rule of thumb that favors generally “simpler” analyses over “more complex” ones.) I suspect that the explanation for the “naturalness” of assimilation is more substantive than formal; indeed, one of the other strong arguments for autosegmental representations over SPE-style representations was (and still is?) their more transparent relation to that phonological substance of substances, phonetics.

OK, I’ve gone on enough. I hope others will want to contribute their two cents or more.

2 thoughts on “Possible and probable languages”

  1. Ed

    I recently read a paper where I think someone argued that a particular type of ranking would be more common than another for some reason or other… oh crap, I’ve got to go look that up to make more sense of it. Anyway, it sounds like the kind of argument you are talking about, transposed into OT.

  2. Pavel Iosad

    From what I’ve seen of Newmeyer’s work, he is clearly much more interested in syntax and in well-known typological statements about it, such as those of word-order typology. What Adger seems to be referring to is work like that of Hawkins, who argued that strictly left- and right-branching structures arise from ease-of-parsing considerations (which look more like performance), as opposed to an explanation that relies on a “hard-wired” parameter that is part of UG (= competence).

    Is OT a theory of typological variation? That depends on what you mean by “theory”, of course (there was a paper by Matthew Dryer about that). It is a theory of variation in the sense that, given a (full) list of constraints, it is a matter of simple combinatorics to compute all possible natural-language grammars and show which ones are impossible (which makes OT falsifiable). It isn’t a theory of variation if we refuse to accept that a statement like “X is a fact of language L because constraint ranking R, which predicts X, is part of the grammar of L” is a good explanation of why X is a fact of L (and then we have to say what an explanation really is). In the former sense, at least, OT is clearly a competence-oriented theory of typological variation.

    “Simplicity” as a means of evaluating competing theories should only be used ceteris paribus; anyway, it appears that there is (always?) a trade-off between the number and “quality” of assumptions you have to make and the simplicity of the rules you formulate within those assumptions. Thus, in order to be able to simply add and erase association lines, you have to assume they “exist” in some way outside your diagram, something that clearly requires elaboration.

    As for the main question (“Should our theories of competence deal with tendencies?”), the answer looks pretty simple to me (though hardly very explanatory): only if there is no other way to account for them. In this sense, it looks like research agendas like the one started by John Ohala, or Juliette Blevins’ Evolutionary Phonology, hold much promise, in that they may be able to eliminate many of the facts that would otherwise be explananda for a theory of competence.
