This new book (just announced on LINGUIST List) is not about phonology (at least I don’t think it is, given who wrote it and from what I can tell from the blurb). But I think it’s of particular relevance to (present-day) phonologists.
(I’m hoping that my semi-random thoughts on this below will generate some discussion here, especially if someone (else) decides to read the book.)
The book’s blurb says:
[Newmeyer] considers why some language types are impossible and why some grammatical features are more common than others. The task of trying to explain typological variation among languages has been mainly undertaken by functionally-oriented linguists. Generative grammarians entering the field of typology in the 1980s put forward the idea that cross-linguistic differences could be explained by linguistic parameters within Universal Grammar, whose operation might vary from language to language. Unfortunately, this way of looking at variation turned out to be much less successful than had been hoped for. Professor Newmeyer’s alternative to parameters combines leading ideas from functionalist and formalist approaches which in the past have been considered incompatible.
I’m especially curious about what is meant by “typological variation” (in the second sentence) and “cross-linguistic differences” (in the third sentence); specifically, I wonder whether this is meant to refer to something about the relative commonality of grammatical features (in the first sentence). Based on the rest of the blurb, my suspicion is that Newmeyer is talking more about grammatical commonalities (or ‘linguistic tendencies’) than about “typological variation” or “cross-linguistic differences”. To me, “cross-linguistic differences” simply refers to the fact that different languages do different things, and “typological variation” refers to the fact that one can group languages according to properties they share and talk about differences across those groups (language types). Although the study of differences among languages and language types can lead to discoveries of, or insights into, linguistic tendencies, I don’t see the latter as being the same thing at all. To me, linguistic tendencies are things that seem to recur in languages to a degree that is in some significant sense higher than chance would predict, but are not absolute.
[Newmeyer] argues, with characteristic clarity and verve, that, although Universal Grammar underlies much of human language, it is irrelevant to explaining typological generalisations. For that, we must look to performance, rather than competence.
Now here’s the term “typological generalisations”, which is a little more ambiguous (to me) than the other terms above, and perhaps encompasses all three of them. I’m very curious now: does anyone who accepts some distinction between competence and performance seriously question whether our theories of competence should deal with cross-linguistic differences? Surely a theory of competence should allow for some difference between the analysis of the internalized grammar of the speaker of one language and that of the speaker of another.
Should our theories of competence deal with typological variation? Depending on what exactly this means, this question is perhaps a little more debatable than the previous one, though only to the extent that the two questions are distinguishable at all. For example, OT is often said to be “inherently typological” because it comes with a relatively simple theory, typically assumed by OT adherents, of how different grammars can differ from each other: only through differences in constraint ranking. Does this make OT a theory of typological variation? I’m not sure, but that’s likely because I’m not a typologist.
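The “grammars differ only in constraint ranking” idea can be made concrete with a toy sketch. Everything here is invented for illustration (the constraints, the candidates, the violation counts); it is not meant as a serious OT analysis, just a demonstration of how reranking the same universal constraint set yields different grammars.

```python
# Toy OT evaluation: a "grammar" is nothing but an ordering (ranking) of
# one universal set of constraints; rerank, and a different candidate wins.

# Hypothetical constraints: each returns the number of violations a
# candidate output incurs for a given input.
def no_coda(inp, cand):
    """*CODA: penalize syllables ending in a consonant (toy version)."""
    return sum(1 for syll in cand.split(".") if syll and syll[-1] not in "aeiou")

def max_io(inp, cand):
    """MAX-IO: penalize deletion of input segments (toy version)."""
    return len(inp.replace(".", "")) - len(cand.replace(".", ""))

def evaluate(ranking, inp, candidates):
    """Winner = candidate whose violation profile is lexicographically best
    under the given ranking (higher-ranked constraints compared first)."""
    return min(candidates, key=lambda c: tuple(con(inp, c) for con in ranking))

candidates = ["pat", "pa"]  # faithful candidate vs. coda-deleting candidate

# Grammar A ranks *CODA over MAX-IO: codas are deleted.
print(evaluate([no_coda, max_io], "pat", candidates))  # -> "pa"

# Grammar B is the same constraint set, reranked: faithfulness wins.
print(evaluate([max_io, no_coda], "pat", candidates))  # -> "pat"
```

The point of the sketch is just that the two “grammars” share every ingredient except the order in which constraints are consulted, which is the sense in which OT’s factorial typology is built into the architecture.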
The real question seems to be whether our theories of competence should deal with linguistic tendencies, a question that Newmeyer appears to be answering in the negative. The question itself, at least as one worth investigating, will hardly surprise (most) present-day phonologists, it seems to me. There has long been a back-and-forth in phonology about the relative merits of formal explanation vs. explanation-from-substance. Lately, though, the debate seems to be not so much about whether there is a substantive basis to (some) phonological phenomena, but rather about where the substance is located: is it “in the grammar” or somewhere “out there”? (The question is vague or subtle enough that I suspect many more of us agree about it than we realize, but we don’t know it because we talk past each other.)
One area in which this question is of particular interest to me is the idea of “rule naturalness”. For example, the observation (or perception?) that assimilation is a very common phonological process, and that other feature-changing rules are not as common, was one of the major arguments for the abandonment of “linear” (SPE-style) rules in favor of “nonlinear” (autosegmental) rules, where an assimilation is the mere addition of an association line (plus the delinking of the original feature value of the assimilation target, but that’s meant to be automatic in some way). But the idea that more-simply-stated rules are favored in grammars (and are thus more common) is inextricably tied to a grammar-evaluation metric that, in my view, hasn’t been well enough justified, at least not outside of precisely these theory-internal considerations. (I’d even venture the claim that most phonologists don’t think of the evaluation metric much beyond its usefulness as a methodological rule of thumb that favors generally “simpler” analyses over “more complex” ones.) I suspect that the explanation for the “naturalness” of assimilation is more substantive than formal; indeed, one of the other strong arguments for autosegmental representations over SPE-style representations was (and still is?) their more transparent relation to that phonological substance of substances, phonetics.
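The “assimilation is just the addition of an association line” picture can be encoded in a deliberately minimal way. The encoding below is entirely hypothetical (a real analysis involves feature geometry, multiple tiers, conditions on spreading, etc.); it just shows nasal place assimilation as one line added between the segmental tier and the place tier, with the nasal’s original place feature delinked.

```python
# Toy autosegmental representation: a segmental tier and a place tier,
# linked by association lines (pairs of indices). Nasal place assimilation
# = add one association line from the nasal to the following segment's
# place node; the nasal's own place line is delinked "automatically".

def assimilate(seg_tier, place_tier, assoc):
    """seg_tier: list of segments; place_tier: list of place features;
    assoc: set of (segment_index, place_index) association lines.
    Returns the association lines after leftward place spreading onto 'N'."""
    assoc = set(assoc)
    for i in range(len(seg_tier) - 1):
        if seg_tier[i] == "N":  # a nasal targeted by assimilation
            old = {(s, p) for (s, p) in assoc if s == i}       # nasal's old line(s)
            nxt = next(p for (s, p) in assoc if s == i + 1)    # next C's place node
            assoc = (assoc - old) | {(i, nxt)}  # delink old, add the new line
    return assoc

seg_tier = ["N", "b"]
place_tier = ["coronal", "labial"]
assoc = {(0, 0), (1, 1)}  # N linked to coronal, b linked to labial

new_assoc = assimilate(seg_tier, place_tier, assoc)
# Both segments now share one place node, place_tier[1] = "labial":
print(sorted(new_assoc))  # -> [(0, 1), (1, 1)]
```

The appeal of the formalism, as the paragraph above notes, is that the rule is stated as a single added line rather than as a bundle of feature changes; the sketch makes visible that after spreading, the nasal and the following consonant literally share one place node.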
OK, I’ve gone on enough. I hope others will want to contribute their two cents or more.