Last week I suggested some of us read and discuss John Goldsmith’s recent paper in Phonology 25.1 (“Generative phonology in the late 1940s”, doi:10.1017/S0952675708001395). I’m not really sure what’s the best way to go about this, so I’ll just suggest the following: anyone interested can pick a point of discussion and write a post about it, and anyone interested in responding to that point can comment specifically on that post.
OK, now that I’ve written that out, that just sounds like plain old blogging. I guess what I’m trying to suggest is that we don’t limit the discussion to just one post and its associated comments: if the point of discussion that you want to pick is sufficiently different from what’s already been posted, then I encourage you to start a new post rather than to comment on the old one. We can maybe tie all the threads together later.
OK, that still just sounds like plain old blogging. Forget I ever said anything. Let’s just move on to my (first?) suggested point of discussion, focusing on §2 of the paper (pp. 40-42 of the published version, pp. 4-6 of the preprint).
Here is what I’m interested in:
Wells begins by noting that he is aware that there are some pitfalls in front of him, and that he has no intention of falling into them. It is clear that he knows that the motive force behind the dynamic change is, at least in many cases, the appearance of an illicit phonemic sequence, but he also is aware that we must be careful in how we deal with this fact. It would not do, for example, to say:
- When, by the placing of a morpheme in a certain phonemic environment, a phonemically non-occurrent sequence would arise, an alternation or change in this sequence is called automatic if it yields a phonemically occurring sequence (p. 102).
Wells unambiguously says (p. 102) that ‘we would be willing to regard gálakt and stómat as basic to automatic alternations if (a) their nominative singulars were gála and stóma, or (b) if they were gálakto and stómato, or (c) if they were both different from their basic alternants in any other way, provided that that way was the same or comparable in both cases and all other essentially similar ones; but not otherwise’. In contemporary terminology, Wells puts the requirement on the constraint-based theory that the change effected in order to satisfy the constraint must be the same in all cases — and in even more contemporary terms, he requires that the constraint violation triggers a specific rule. That is exactly what a generative phonological rule does.
This last quoted paragraph is followed by a footnote citing Sommerstein (1974) as “an extended argument against putting these two things [a constraint violation and a specific rule] together” (the footnote also cites “Goldsmith (1993, 1999) for discussion”). One of the lynchpins for Sommerstein’s argument is the existence of ‘conspiracies’, in the sense of Kisseberth (1970). (See note 1.)
As some of you know, I’m very fond of conspiracies. I wrote this poor-excuse-for-a-squib which was sort of about conspiracies, and this paper and this one both have substantial subsections (§6.3 and §14.3, respectively) demonstrating how my analysis of antigemination represents a sub-case of a conspiracy. I also mention ‘conspiracies’ whenever anyone mentions ‘opacity’. I’m a conspiracy theorist, you might say. (Well, somebody had to say it. And now that that’s out of the way…)
Back to the passage quoted above. It’s not clear to me that Wells did anything other than define the notion ‘automatic’ in one conceivable way — one that excludes conspiracy-like situations — as opposed to another, equally conceivable way — one that includes conspiracy-like situations.
Here’s what I mean more specifically. Take a hypothetical conspiracy like: high vowels glide and low vowels delete to avoid hiatus. This is infamously impossible to express in the formal terms of SPE without substantial modification of basic assumptions: separate rules of gliding and deletion are necessary, with hiatus avoidance being at best an accidental side-effect of their co-existence in the grammar.
But gliding and deletion are both ‘automatic’ in at least some conceivable sense of that term: their conditions are exactly and disjointly specifiable (high vowels in one case, low vowels in the other; or, an elsewhere interaction between the two), which is what makes separate obligatory phonological rules possible to state in the first place. So why does Wells pursue the definition of ‘automatic’ that excludes such cases, as opposed to the other that includes them?
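To make the two-rule picture concrete, here is a minimal sketch in Python (the segments, forms, and regex rule format are my own illustration, not Wells’s notation or actual SPE formalism). Each rule is independently statable, obligatory, and has a disjoint structural description; hiatus avoidance is at best a side-effect of their co-existence:

```python
import re

VOWEL = "[aeiou]"

def glide_high(form):
    # Rule 1 (gliding): a high vowel becomes the corresponding glide
    # before another vowel.
    form = re.sub(f"i(?={VOWEL})", "j", form)
    return re.sub(f"u(?={VOWEL})", "w", form)

def delete_low(form):
    # Rule 2 (deletion): a low vowel deletes before another vowel.
    return re.sub(f"a(?={VOWEL})", "", form)

def derive(underlying):
    # The two structural descriptions are disjoint (high V vs. low V
    # before a vowel), so order of application is immaterial here, and
    # nothing in either rule mentions hiatus avoidance as such.
    return delete_low(glide_high(underlying))

# Hypothetical underlying forms:
print(derive("tua"))  # gliding:  tua -> twa
print(derive("tai"))  # deletion: tai -> ti
```

Each rule applies whenever its condition is met, so both are ‘automatic’ in the conceivable sense at issue; what the grammar never states is that the two rules jointly eliminate vowel sequences.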
Personally, I think this is simply a case of the alternative definition being beyond the reach of the representational and derivational notations that Wells considers (which Goldsmith very convincingly argues are clear precursors to the basic representational and derivational assumptions of SPE). Besides, there’s another way out under these assumptions, and that is to simply deny the unity of hiatus avoidance, stating each ‘surface phonotactic’ (whatever that refers to in the theory) separately: avoid high-V hiatus, avoid low-V hiatus. (A whiff of reductio ad absurdum, perhaps, but you get the point.)
Constraint-based theories don’t necessarily falter in this way; take OT, for example (duh). The schematic OT analysis of a conspiracy captures both the unity and the complementarity of this hypothetical case (and arguably any case) of a conspiracy: hiatus avoidance (Onset, NoHiatus, whatever) is best-satisfied by gliding, but gliding of low vowels is independently blocked, so deletion steps in for those. Both processes are ‘automatic’ in an intuitively substantive sense, in that the application of each is completely predictable (though of course in some relevant sense, each process must be exceptionless).
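Schematically, that OT analysis can be sketched as a toy evaluator (all constraint names, rankings, candidate sets, and violation counts below are my own illustration, not from any published analysis). A single markedness constraint drives both repairs, and strict ranking decides which repair each input gets:

```python
# Constraints in ranked order (illustrative):
#   NoHiatus  - no adjacent vowels on the surface
#   *LowGlide - no glided counterpart of a low vowel (blocks gliding /a/)
#   Max       - no deletion
#   Ident     - no change of syllabicity (violated by gliding)
RANKING = ["NoHiatus", "*LowGlide", "Max", "Ident"]

# Candidate sets with violation counts per constraint (0 if omitted).
CANDIDATES = {
    "tua": {
        "tua": {"NoHiatus": 1},              # faithful
        "twa": {"Ident": 1},                 # glided
        "ta":  {"Max": 1},                   # deleted
    },
    "tai": {
        "tai": {"NoHiatus": 1},              # faithful
        "tAi": {"*LowGlide": 1, "Ident": 1}, # 'A' = hypothetical low glide
        "ti":  {"Max": 1},                   # deleted
    },
}

def optimal(input_form):
    # Standard strict-domination evaluation: compare candidates'
    # violation vectors lexicographically down the ranking; the
    # candidate that fares best on the highest-ranked constraint
    # distinguishing the set wins.
    cands = CANDIDATES[input_form]
    return min(cands, key=lambda c: [cands[c].get(k, 0) for k in RANKING])

print(optimal("tua"))  # -> twa  (gliding best-satisfies NoHiatus)
print(optimal("tai"))  # -> ti   (gliding blocked, deletion steps in)
```

The point of the sketch is that the unity of the conspiracy is stated exactly once, as NoHiatus in the grammar, rather than emerging as an accident of two separately stated rules.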
Of course, there’s always the possibility that I’m grossly misunderstanding something deeper than this, about what makes an alternation ‘automatic’. That’s why I suggested a reading group.
- And I can’t believe I never caught this before, but at the very end of the paper Sommerstein independently comes to the same conclusion as Kiparsky (1973): the unmarkedness of transparent rule orders predicts the unmarkedness of conspiracies, because each rule in a conspiracy contributes to the surface truth of itself and the other rules in the conspiracy. See Baković (2007: 244) for more related discussion. (Go back.)