Back in October 2005, there was a discussion about the anti-OT bias of some derivational phonologists. In his book manuscript, Andrea Calabrese had alleged that “magical thinking” is especially common among those OT practitioners who would “attempt to provide a synchronic explanation to all aspects of the phonology of a language.” It was pointed out in the discussion that the magical thinking actually dates back to SPE and is still present to some degree in Calabrese’s own work.
I guess it comes as no surprise that it is possible to find a similar bias against OT among phoneticians. Consider the following quote from Joaquín Romero’s recent review of Laboratory Approaches to Spanish Phonology (Mouton de Gruyter, 2004), edited by Timothy Face:
As is unfortunately rather common in some of the work currently being done on laboratory phonology, it seems that sometimes the experimental data is viewed simply as a complement to the theoretical phonological analysis. […] In other instances, as in those papers that present OT analyses of the data, the distance between the experimental results and the assumptions made in the theoretical analysis represents such a leap of faith and such a simplification of the physical reality that one wonders why the authors bothered to gather experimental data at all.
Several papers present detailed arguments with respect to how the phonology is based on articulatory or perceptual factors, then follow this up with an implementation couched in OT mechanics. I felt that the latter was generally unnecessary. The important arguments had been made and the OT tableaux added nothing to my understanding. They seemed to be an abstraction which actually took me further away from the phonetically-based reality that the authors espoused.
I suspect that what we see here is not a specific bias against a framework in which input-output mappings are governed by interacting constraints. Rather, it’s a more general bias against all formal approaches to phonology [CORRECTION: possibly a general predisposition against phonological formalism–see ACW’s comment below], even if the analysis happens to incorporate more phonetic detail than is usually included. Just for the sake of argument, an absurdly extremist extension of this predisposition (which admittedly few people are likely to hold) would seem to entail that phonology is not a problem-solving system, in the sense discussed by Donca Steriade at the end of her 2001 paper The Phonology of Perceptibility Effects: the P-map and its consequences for constraint organization:
At the most basic level one can dispute the premise this account shares with most modern phonology, namely that phonology is a problem-solving system, or – as Goldsmith (1993) puts it – “an intelligent system”. If the phonotactic [constraint against word-final voiced obstruents–TGB] is not viewed as a problem to be solved, or as a standard of well-formedness that is independent of the lexicon’s contents, but rather as a static generalization over the words that happen to be attested in one’s language, then no Too-Many-Solutions problem arises: learners, on this view, do not seek to find a solution to [the phonotactic constraint–TGB] but to learn whatever patterns happen to be instantiated by their lexicon.
If phonology is not a problem-solving system, then I suppose one could put everything in the lexicon and call it a day. I’m sure many phonoloblog readers would agree that this move would be such a simplification of the phonological reality that it would add nothing to our understanding of how phonological systems work.
Goldsmith, John. 1993. The Last Phonological Rule. University of Chicago Press.