Author Archives: marc

Filling ConCat

Almost exactly three years ago, I announced a plan on Phonoloblog: to establish a Wiki for OT constraints, which would slowly grow into a large online reference on all constraints that have been proposed in the literature. The plan was born during a conversation with Curt Rice during the legendary Bloomington PhonologyFest of 2006.

The advantages of a ConCat are evident. It’s useful to have a tool where you can look up what has been written about a certain constraint, how it has been defined, which constraints are related to it, and whether some constraint has already been proposed in a different form or with a different name. Eventually, it could be a tool in the development of a true theory of possible OT constraints.

We established a website for ConCat in 2006, but it has not really grown since then. Maybe it was too small to be really attractive as a tool. This summer, however, I was fortunate enough to find two enthusiastic students from the University of the Aegean in Greece (Anna Fragkiadaki and Sofia Kousi) who filled the database with over 340 constraints, mostly excerpted from the handbooks by Kager and McCarthy, but also from many other books and papers of the past 15 years.

I hope that in this way ConCat is becoming more useful, and that once you see its usefulness, you will start contributing.

Seminar Approaches to word accent in Leiden – April 2, 2009

Seminar: Approaches to word accent (word stress)

Organized by Rob Goedemans, Jeroen van de Weijer and Marc van Oostendorp (Leiden University)

April 2, 2009; Leiden University, Lipsius Building (http://www.visitors.leiden.edu/lipsius.jsp), room 235c

14:00 – 16:00 Harry van der Hulst (University of Connecticut): A new theory of word accentual structures (abstract below)
16:00 – Comments by Marc van Oostendorp and Jeroen van de Weijer, followed by discussion

Participation in this seminar is free for all. If possible, please announce your intention to come to Marc.van.Oostendorp@Meertens.KNAW.nl

A New Theory of Word Accentual Structures
Harry van der Hulst
University of Connecticut

The key insight of standard metrical theory (Liberman and Prince 1977, Vergnaud and Halle 1978, Hayes 1980, Halle and Vergnaud 1987, Idsardi 1990) is that syllables (or perhaps subsyllabic constituents such as skeletal positions, rhymes or moras) of words are organized into a layer of foot structure, each foot having a head. Primary accent is then derived by organizing the feet into a word structure in which one foot is the head. The head of the head foot, being a head at both levels, expresses primary accent. In this view, rhythmic accents are assigned first, while primary accent is regarded as the promotion of one of these rhythmic accents. In this seminar, I defend a different formal theory of word accent. The theory is non-metrical in that the account of primary accent location is not based on iterative foot structure. The theory separates the representation of primary and rhythmic accents, the idea being that the latter are accounted for with reference to the primary accent location. This means that rhythmic structure is either assigned later (in a derivational sense), or governed by constraints that are subordinate to the constraints that govern primary accent (as is possible in the approach presented in Prince and Smolensky 1993). The present approach has been called ‘a primary-accent first theory’ (see van der Hulst 1984, 1990, 1992, 1996, 1997, 1999, 2000a, 2002, 2009, van der Hulst and Kooij 1994, van der Hulst and Lahiri 1988 for earlier statements; see web page below for these and other references). I will demonstrate the workings of the theory using a variety of examples from bounded and unbounded (weight-sensitive and insensitive) systems taken from the StressTyp database developed by Rob Goedemans and Van der Hulst (http://stresstyp-test.leidenuniv.nl/).

Workshop on Phonological Voicing Variation

Location: Amsterdam and Leiden

Dates: September 11 and 12, 2008

The phonetic difference between b and p, or z and s, has been described as a difference in (the timing of) vocal fold vibration, but it is well known that there are subtle differences in the precise implementation of ‘voicing’, as well as in its function in the phonologies of the world’s languages. This workshop brings together researchers who study the phenomenon from a variety of perspectives, both theoretical and empirical, and both synchronic and diachronic. What is the right phonological interpretation of voicing? How does it interact with other phonological features? How do phonological processes involving voice — such as intervocalic voicing, devoicing and voicing assimilation — interact with other phonological processes?

The workshop takes place in Amsterdam and Leiden. The last talk is a Dutch-style inaugural address, followed by a party which is open to participants in the workshop. Participation is free, but please announce your presence beforehand to marc.van.oostendorp@meertens.knaw.nl.

The full programme and other details are here.

etsi este!

A crucial point in Wells’s argumentation against static approaches to alternation comes from Latin. Interestingly, his point seems at the same time to argue against rule ordering, although neither Wells nor Goldsmith mentions this.

In Latin, pat-tus becomes passus and met-tus becomes messus. This is very difficult to understand in a ‘static’ way (Wells even calls this ‘fatal’, as Goldsmith points out), for instance by using only output constraints. We cannot invoke a constraint *ts and/or a constraint *st, because words such as etsi and este stay unaffected. Only /t/’s which are adjacent to underlying /t/’s turn into [s]. As far as I can see, the only OT mechanism ever proposed which could handle this kind of analysis is two-level constraints (which I don’t think anybody is seriously working with).

On the other hand, we can deal with this phenomenon in a ‘dynamic’ way, by positing rules of the following type:

  • t->s / _ + t
  • t->s / t + _

But we can only do this if we do not order these rules but let them apply simultaneously. As soon as we order them, they no longer work, or the etsi/este problem arises again. That is why the two-level constraint approach is, as far as I can see, the only one that works: Sympathy, Stratal OT, Comparative Markedness, OT-CC, etc. are all too ‘derivational’.
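The difference is easy to make concrete with a small simulation (a sketch of my own in Python; the segment strings and the ‘+’ for the morpheme boundary are just notational assumptions). In simultaneous application, both rules check only the underlying string; in ordered application, the first rule's output feeds the second, and so destroys its context:

```python
import re

def apply_simultaneous(form):
    """Apply t -> s / _ + t and t -> s / t + _ at the same time:
    every /t/ adjacent to another /t/ across a morpheme boundary (+)
    is rewritten, with both rules checking only the INPUT string."""
    out = list(form)
    for i, seg in enumerate(form):
        if seg != 't':
            continue
        if form[i + 1:i + 3] == '+t':      # t -> s / _ + t
            out[i] = 's'
        if form[max(i - 2, 0):i] == 't+':  # t -> s / t + _
            out[i] = 's'
    return ''.join(out).replace('+', '')

def apply_ordered(form):
    """Apply the same two rules in sequence: the first rule's output
    is the second rule's input, so rule 1 bleeds rule 2."""
    form = re.sub(r't(?=\+t)', 's', form)   # t -> s / _ + t
    form = re.sub(r'(?<=t\+)t', 's', form)  # t -> s / t + _
    return form.replace('+', '')

for form in ['pat+tus', 'met+tus', 'etsi', 'este']:
    print(f'{form}: simultaneous {apply_simultaneous(form)}, '
          f'ordered {apply_ordered(form)}')
```

Simultaneous application yields passus and messus while leaving etsi and este untouched; the ordered version derives the incorrect *pastus and *mestus, because after the first rule has applied, the /t/ that the second rule needs as its context is gone.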

There also is no clear representational solution (changing a geminate /t/ to a geminate [s], leaving singletons unaffected), since it seems to be a crucial condition that there is a morpheme boundary between the /t/’s.

These are thus very important data, if they are real. Does anybody know about this? Has anybody ever tried to analyze this alternation?

Call for Papers: Workshop on Phonological Variation in Voicing

For most phonologists, the process of Final Devoicing, which we can observe in languages such as German, Dutch, Yiddish, Russian, Polish, Catalan and Turkish, long did not seem to deserve a lot of attention. One would write a rule of approximately the shape [-son] → [-voice] / __ #/$, and declare the issue resolved.
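The rule-based view really is that simple. A minimal sketch (the letter-based mapping and the Dutch-style example words are my own toy simplifications; a real analysis would operate on features, not letters):

```python
# Voiced obstruents and their voiceless counterparts (toy inventory).
DEVOICE = {'b': 'p', 'd': 't', 'g': 'k', 'z': 's', 'v': 'f'}

def final_devoicing(word):
    """[-son] -> [-voice] / __ #: devoice a word-final obstruent."""
    if word and word[-1] in DEVOICE:
        return word[:-1] + DEVOICE[word[-1]]
    return word

# Dutch-style underlying forms: /hond/ 'dog', /bed/ 'bed'
print(final_devoicing('hond'))  # hont
print(final_devoicing('bed'))   # bet
```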

However, recent years have seen a revived interest in phenomena surrounding devoicing, for a variety of reasons. One of them is developments in the formalism, such as OT. For one thing, it appears much easier to view devoicing as a rule than as the result of a constraint. There is no consensus yet as to what the constraint should be in OT (e.g. a general constraint against voicing *Voiced, dominated by a faithfulness constraint for onsets, a conjunction of NoCoda with *Voiced, a positional markedness constraint, etc.), and furthermore, Final Devoicing is one of the most famous cases of the so-called Too-Many-Solutions Problem: why would the relevant constraint always be satisfied by deletion of the voicing feature?

Furthermore, a lot of empirical work has come out which does not fit very easily with classical views of phonology (including most of OT). First, we find final devoicing not only in languages in which the relevant contrast is indeed [voice] (such as Catalan), but also in languages in which it rather involves [spread glottis] (like German), which raises the question of what these phenomena have in common from a phonological point of view. Secondly, there is a large body of work showing that final devoicing in many cases does not neutralize completely: there are phonetic traces of voicing in the acoustic signal, and listeners can to some extent detect these traces, at least under experimental conditions. Thirdly, it turns out that whether or not a given stem is subject to final devoicing is to a large extent predictable from lexical statistics.

Finally, it has become clear over the years that devoicing interacts with many other phonological processes in (varieties of) European languages, such as voicing assimilation, but also lexical tone. It has been claimed as well that certain dialects of French, for instance, have developed interesting phonological phenomena as a result of contact with West-Germanic final devoicing systems.

What is the place of devoicing and other voicing phenomena in phonological theory? Which phenomena need to be accounted for by our theory? Which phenomena CAN be understood by it? This will be the topic of a workshop at the Meertens Instituut in Amsterdam on September 11, 2008, and the University of Leiden on September 12, 2008. The workshop will end in a very big party. Participation (including the party) is free for all readers of Phonoloblog. Invited speakers will be Harry van der Hulst (University of Connecticut) and Ben Hermans (Meertens Instituut).

Please submit an abstract (2 pages max; does not need to be anonymous; pdf file) to Marc.van.Oostendorp@Meertens.KNAW.nl. Deadline: June 28.

Workshop on Segments and Tone

On June 7 and 8, 2007, the Meertens Instituut in Amsterdam and the Phonetics Institute of the University of Amsterdam jointly organize a workshop on segments and tone. Altogether 15 talks will be presented on many aspects of the relationship between consonantal and vocalic features and tone.

Participation in this workshop is free, but it would be appreciated if you announce your plans to come. A programme with all the abstracts can be found here.

OCP 4

The fourth edition of the Old-World Conference in Phonology (Συνέδριο Φωνολογίας της Γηραιάς Ηπείρου 4) will be held from Jan. 19-21, 2007 on the beautiful island of Rhodes, Greece. The conference will be preceded by a workshop on Vowel Harmony in the Languages of the Mediterranean on January 18. More information on OCP 4, including the programme and all abstracts, can be found on the website of the organisers.

OCP started in Leiden, the Netherlands, in January 2003, as a follow-up to the HILP Conferences in the 1990s. OCP2 took place in Tromsø, Norway, in January 2005, and OCP3 in Budapest, Hungary, in January 2006. Most probably, OCP5 will be organized in Toulouse, France, in January 2008.

Grounding the iambic/trochaic law

Trochees tend to be even, while iambs are usually uneven. Since Hayes (1985), it has been believed that this distinction has a basis in an extralinguistic principle of rhythmic grouping:

  • Elements contrasting in intensity naturally form groupings with initial
    prominence.
  • Elements contrasting in duration naturally form groupings with final
    prominence.

It is believed that this ‘iambic/trochaic law’ reflects a universal cognitive tendency. But new research in music theory seems to call this into question: adherence to the iambic/trochaic law seems to be partly dependent on the native language of the speaker. A group of researchers led by Aniruddh Patel found that speakers of (American) English conformed to the iambic/trochaic law, but speakers of Japanese did not (see this summary in New Scientist). They argue that this difference in judgement is based on a difference in the syntactic structures of the languages in question, and consequently that musical (rhythmic) perception is based at least partly on grammar. I suppose this calls into question the argument about the ‘groundedness’ of the iambic/trochaic law.

OTableau

OT tableaux seem to be designed with WYSIWYG editors such as Word, WordPerfect or OpenOffice in mind. They do not come as naturally to linguists who use e.g. LaTeX; it is too easy to make mistakes about where the asterisks should go, etc.

Julien Eychenne, a phonology student who is currently finishing his PhD thesis in Toulouse, has written a small WYSIWYG editor just for tableaux: OTableau. Besides generating LaTeX output, the programme also calculates ‘fatal’ violations, and places exclamation marks and shading (if desired) accordingly. It’s a nice little programme.
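The computation behind those exclamation marks is simple to state (this is a generic sketch of OT evaluation, not Eychenne's actual code, and the constraint names and violation counts in the example are made up): with constraints in ranked order, each candidate's violations form a vector, the winner is the lexicographic minimum, and a loser's fatal violation sits at the first ranked constraint where it does worse than the winner.

```python
def evaluate(candidates, constraints):
    """candidates: dict mapping each candidate to its list of violation
    counts, one per constraint, in ranking order. Returns the winner and,
    for each loser, the constraint where its fatal '!' goes."""
    # Comparing violation vectors as Python lists is exactly the
    # lexicographic comparison that strict ranking requires
    # (ties are broken arbitrarily in this sketch).
    winner = min(candidates, key=lambda c: candidates[c])
    fatal = {}
    for cand, viols in candidates.items():
        if cand == winner:
            continue
        for constraint, v, w in zip(constraints, viols, candidates[winner]):
            if v > w:              # first point of worse performance
                fatal[cand] = constraint
                break
    return winner, fatal

# A textbook-style tableau for input /tat/ with Onset >> NoCoda >> Max:
constraints = ['Onset', 'NoCoda', 'Max']
candidates = {'tat': [0, 1, 0], 'ta': [0, 0, 1], 'at': [1, 1, 0]}
print(evaluate(candidates, constraints))
```

Here ‘ta’ wins, ‘tat’ gets its fatal mark under NoCoda, and ‘at’ under Onset.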

ConCat: A catalogue of constraints

During the second week of the PhonologyFest, earlier this year in Bloomington, Indiana, I shared an apartment with Curt Rice. One night he told me that he had a plan: wouldn’t it be nice to have a catalogue of OT constraints as they have been proposed in the literature? The IPA Guide has a list of symbols, with explanations how they are used, etc.; wouldn’t it be convenient to have such a book for constraints as well? So that you could look up who first proposed a constraint, what the alternatives are, how the constraints had been formalized by various authors, whether there have been similar proposals outside the OT literature, etc.

Talking about this a little bit further, we decided it should be a Wiki rather than a book — a website where everybody can contribute, add constraints, add background information, etc.

During the summer I wrote a few lemmas; in particular, I wrote a first version of a page for the Onset constraint, plus several things which would be linked to such a page. In the meantime, Nathan Sanders, a graduate student at Indiana University, installed a Wiki server. We have now opened it.

Do you think this is a good idea? What are possible extensions? You can join ConCat and start building it with us.

Little Interface Library

Tobias Scheer (in Nice, France) is working on a book on the interface between phonology and morpho-syntax. For this, he has also reconstructed the history of phonological thinking about this topic, and read almost everything that has been written on it over the past 60 years — at least, that is what I believe.

This work has already produced an interesting result: Tobias’ Little Interface Library, a part of his website where he has collected pdf versions of many, many articles on the topic, including papers by Selkirk, Gussmann and Nespor & Ralli which are hard to find because they appeared in working papers or minor journals.

Topics in Nivkh Phonology

In order to obtain a PhD in the Netherlands one traditionally submits not just a dissertation but also a list of approximately ten stellingen, i.e. ‘theses’ or ‘propositions’ of one or two sentences each. A few of these summarise the main themes of the book, but there are also a few which have a broader outlook.

Toshi Shiraishi will defend his dissertation Topics in Nivkh Phonology next week in Groningen. His 7th thesis is:

  • Fieldwork linguists should be more occupied with theoretical linguistics, and theoretical linguists more with fieldwork linguistics.

Toshi’s own work is a good illustration of how fruitful it can be to do fieldwork with a solid theoretical background, and to work on theory with a good grasp of the problems with the data. He spent a lot of time on the island of Sakhalin, in the Russian Far East, where the language is still spoken by a few hundred older people. But he did not just randomly collect data: his dissertation contributes to at least two important debates in current theory, namely the precise representation of laryngeal contrasts and the phonology-syntax interface (since Nivkh has an interesting process of consonant mutation, which according to Shiraishi occurs at the edges of syntactic phrases).

The text of the dissertation is available here.

Whistled languages: phonology and Unesco

The most recent issue of Phonology (22.2) contains an article by Annie Rialland about the phonetics and phonology of a number of so-called ‘whistled languages’ (Rialland’s website has a prefinal version as a pdf).

In some sense, whistled languages use the phonology of a spoken language, such as Spanish in the case of the best-known instance of this type of language, Silbo Gomero from one of the Canary Islands, La Gomera. Yet they implement this phonology in a radically different way: by whistling rather than moving organs in the vocal tract. Since this special type of articulatory phonetics is more limited than the usual kind, it in turn influences the phonology somewhat. All of this can be found in Rialland’s fascinating article.

The topic of whistled languages is also very suitable for explaining some basic principles of the phonetics-phonology interface. When I needed to write something for a Dutch popular science website for adolescents, I therefore took Rialland’s article as my basis. Spanish has a five-vowel system, and Rialland shows that these vowels can be distinguished on the basis of F2 alone; it is the F2 which is whistled in Silbo Gomero. This fact can be used as a handle to explain what formants are, and what a vowel system is; here is the article I wrote (in Dutch, obviously).
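The idea is easy to demonstrate: if each of the five vowels occupies its own F2 region, a whistled pitch can be decoded by nearest match. The F2 values below are rough, textbook-style approximations of my own, not Rialland's measurements:

```python
# Approximate typical F2 values (Hz) for the five Spanish vowels.
F2 = {'i': 2300, 'e': 1900, 'a': 1400, 'o': 1000, 'u': 800}

def decode_whistle(hz):
    """Map a whistled frequency to the vowel with the nearest F2."""
    return min(F2, key=lambda vowel: abs(F2[vowel] - hz))

print(decode_whistle(2200))  # i
print(decode_whistle(950))   # o
```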

I notified Rialland of the fact that I published this piece, and here is what she answered:

This paper will also serve an unexpected function for you: the Government of the Canary Islands is currently trying to get a recognition of Silbo (and also other whistled languages) as a patrimony of humanity by UNESCO. All of the papers in scientific journals (of any age) will help.

Explaining phonology to young people can have unexpected political consequences.

Third Old-World Conference in Phonology

The third Old-World Conference in Phonology (OCP3) in Budapest has just finished. The programme had a number of very high-quality talks in a variety of different phonological frameworks, and the atmosphere was very good.

I will not give an overview of all talks (here is the conference website, including all the abstracts), but I felt that one could see two opposing trends in this OCP.

GLOW Phonology

Here is the programme for GLOW Phonology, March 29-31 in Geneva, Switzerland. There will be a workshop on Synchrony and Diachrony in Phonology, as well as a full day of talks in the Main Session. A few details might still change in the next few days, and a few abstracts still need to be added, but as you can see, the programme will certainly be worth the trip to Geneva.

Old-World Conference in Phonology

During the very successful second Old-World Conference in Phonology, last week in Tromsø, Norway, it was decided that from now on the OCP will be an annual, rather than a biennial, event. OCP3 (2006) will take place in Budapest, Hungary, and OCP4 (2007) on the island of Rhodes, Greece. Presumably the host for OCP5 (2008) will be Toulouse, France. There is now also a small website on past and future OCPs at the University of Leiden: http://www.ocp.leidenuniv.nl/.

Old World vs. New World phonology

By some trick of the human mind, Eric’s recent apology to the ‘Old World folks’ reminded me of Stephen Anderson. In his beautiful 1985 book Phonology in the Twentieth Century, Anderson wrote:

If a paper on ‘the morphosyntax of medial suffixes in Kickapoo’, bursting with unfamiliar forms and descriptive difficulties, is typical of American linguistics, its European counterpart is likely to be a paper on ‘l’arbitraire du signe’ whose factual basis is limited to the observation that tree means ‘tree’ in English, while arbre has essentially the same meaning in French.

This is obviously a caricature (of the way things were in the 1930s), and a funny one at that, but it is also fairly accurate even as a description of the current situation. A ‘typical’ American linguistics paper seems to be much more concerned with getting the facts right, whereas ‘typical’ European linguistics seems more interested in the overall structure of theories. The ‘typical’ American phonologist of today is studying brain scans, while his Old World colleague struggles with the definition of interconsonantal government. It is not clear a priori which of these approaches will turn out to be most fruitful, and there are of course exceptions to the rule — Alan Prince, for instance, has won an honorary citizenship of the European Union with his work of the past few years; and there are many fine linguists in many parts of the world who sometimes behave as Old World linguists and at other times as New World ones. It is a mystery to me what explains this difference in academic and intellectual culture, especially since it seems to have persisted for such a long time.

(The only linguistic fact in this post is about tree and arbre. My apologies, New World folks!)

[Update 04/09/23: The discussion is continued at Language Log]