Qi Cheng awarded NSF Doctoral Dissertation Research Improvement Grant

Graduate student Qi Cheng, a Ph.D. candidate in our department and a member of the Rachel Mayberry Lab for Multimodal Language Development, was recently awarded a National Science Foundation Linguistics Program – Doctoral Dissertation Research Improvement Grant (#1917922) for her dissertation work. Her research examines the biological foundations of human language, focusing on early language experience and linking observations from language learning, processing, and brain networks. Supported by the grant, she is currently conducting two psycholinguistic experiments exploring the sentence processing strategies of deaf late signers who experienced early language deprivation. She presented preliminary findings at the CUNY Conference on Human Sentence Processing and at Theoretical Issues in Sign Language Research (TISLR), and recently published a paper on the neural language pathways of deaf signers with and without early language in Frontiers in Human Neuroscience.

Cheng, Q., & Mayberry, R. "Word order or world knowledge? Effects of early language deprivation on simple sentence comprehension." Oral presentation at the 13th Conference on Theoretical Issues in Sign Language Research (TISLR 13), Hamburg, Germany, September 2019.

Cheng, Q., Roth, A., Halgren, E., & Mayberry, R. I. "Effects of early language deprivation on brain connectivity: Language pathways in deaf native and late first-language learners of American Sign Language." Frontiers in Human Neuroscience, 13, 320, 2019.

Nina Semushina awarded NSF Doctoral Dissertation Research Improvement Grant

Graduate student Nina Semushina, a Ph.D. candidate in our department and a member of the Rachel Mayberry Lab for Multimodal Language Development, was recently awarded a National Science Foundation Doctoral Dissertation Research Improvement Grant (Ling-DDRI) for the project "The development of numerical cognition and linguistic number use: Insights from sign languages". The project's goal is to study the effects of language deprivation on the acquisition of numeracy and linguistic number use in American Sign Language, taking into account modality-specific properties of numeral systems and plural morphology in sign languages.

Nina Feygl Semushina and Rachel Mayberry have a new paper on numeral incorporation in Russian Sign Language

Graduate student Nina Feygl Semushina and faculty member Rachel Mayberry published a paper, "Numeral Incorporation in Russian Sign Language: Phonological Constraints on Simultaneous Morphology," in Sign Language Studies, vol. 20, no. 1.

Abstract. Numeral incorporation is the simultaneous combination of a numeral and a base sign into one sign. Incorporating forms typically use the numerical handshape combined simultaneously with the movement, location, and orientation of the base lexical sign: for example, "3 months" is expressed through the incorporating form 3_MONTH. Analyses of Russian Sign Language (RSL) data collected through fieldwork in Russia show that there is no general linguistic rule for numeral incorporation in RSL (unlike in ASL, which has a one-handed numeral system). Instead, because of phonological constraints that govern the distribution of two-handed signs, incorporation of two-handed numerals in RSL depends upon the place of articulation and the hand orientation of the particular lexical sign.

Eva Wittenberg has a new paper on the acquisition of event nominals and light verb constructions

Faculty member Eva Wittenberg and Dr. Angela He (Chinese University of Hong Kong) have a new paper on the acquisition of event nominals and light verb constructions in Language & Linguistics Compass:

He, A. X., & Wittenberg, E. "The acquisition of event nominals and light verb constructions." Language and Linguistics Compass, 2019, 1–18. https://onlinelibrary.wiley.com/doi/full/10.1111/lnc3.12363

Abstract. In language acquisition, children assume that syntax and semantics reliably map onto each other, and they use these mappings to guide their inferences about novel word meanings: For instance, at the lexical level, nouns should name objects and verbs name events, and at the clausal level, syntactic arguments should match semantic roles. This review focuses on two cases where canonical mappings are broken—first, nouns that name event concepts (e.g., “a nap”) and second, light verb constructions that do not neatly map syntactic arguments onto semantic roles (e.g., “give a kiss”). We discuss the challenges involved in their acquisition, review evidence that suggests a close connection between them, and highlight outstanding questions.

Will Styler gives talk at UCLA

Faculty member Will Styler was invited to give a talk at UCLA’s Department of Linguistics on November 15th.

Title: Using Transparent Machine Learning to study Human Speech

Machine learning, the use of nuanced computer models to analyze and predict data, has a long history in speech recognition and natural language processing, but it has largely been limited to applied engineering tasks. This talk will describe two research-focused applications of transparent machine learning algorithms in the study of speech perception and production.

For speech perception, we'll examine the difficult problem of identifying acoustic cues to a complex phonetic contrast, in this case vowel nasality. By training machine learning algorithms on acoustic measurements, we can directly measure how informative each acoustic feature is to the contrast. These by-feature informativeness data were then used to create hypotheses about human cue usage and to model observed human patterns of perception, showing that the models predicted not only the primary cue listeners used but also the subtle patterns of perception arising from less informative changes.
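The general idea of scoring by-feature informativeness can be illustrated with a toy sketch: train a deliberately simple classifier on each acoustic feature alone and compare accuracies. The feature names and values below are invented for illustration; they are not the study's data, and the actual work uses full machine learning models on real acoustic measurements.

```python
# Toy sketch: score each acoustic feature by how well a one-feature
# threshold classifier separates oral (0) from nasal (1) vowel tokens.
# Feature names and values are invented for illustration only.

def best_threshold_accuracy(values, labels):
    """Accuracy of the best single-threshold classifier on one feature."""
    best = 0.0
    for t in values:
        for sign in (1, -1):  # try both "above threshold" and "below threshold"
            preds = [1 if sign * v >= sign * t else 0 for v in values]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            best = max(best, acc)
    return best

# Eight toy tokens: label 1 = nasal vowel, 0 = oral vowel.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
features = {
    "a1_p0_db":    [12.1, 10.8, 11.5, 9.9, 3.2, 4.1, 2.8, 5.0],  # separates cleanly
    "duration_ms": [110, 95, 130, 120, 105, 125, 98, 115],       # overlapping
}

informativeness = {
    name: best_threshold_accuracy(vals, labels)
    for name, vals in features.items()
}
for name, acc in sorted(informativeness.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.2f}")
```

In this toy data the nasality-related feature separates the classes perfectly while the duration feature does not, which is the kind of per-feature ranking that can then feed hypotheses about which cues listeners actually use.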

For speech production, we'll focus on data from Electromagnetic Articulography (EMA), which provides position data for the articulators with high temporal and spatial resolution, and discuss our ongoing efforts to identify and characterize pause postures (specific vocal tract configurations at prosodic boundaries; cf. Katsika et al. 2014) in the speech of seven speakers of American English. The lip aperture trajectories of more than 800 individual pauses were gold-standard annotated by a member of the research team and then subjected to principal component analysis. The resulting components were used to train a support vector machine (SVM) classifier, which achieved 96% classification accuracy in cross-validation tests, with a Cohen's kappa of 0.79 for machine-to-annotator agreement, suggesting potential gains in speed, consistency, and objective characterization of gestures.
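The agreement statistic quoted here, Cohen's kappa, corrects raw accuracy for chance agreement, which matters when one class (pauses with a pause posture) is much rarer than the other. A minimal sketch of the computation, using an invented 2x2 confusion matrix; the actual counts behind the reported figures are not given in this summary.

```python
# Sketch: Cohen's kappa for a 2x2 classifier-vs-annotator agreement table.
# The counts below are invented for illustration and chosen to give ~96%
# raw accuracy; they are not the study's real confusion matrix.

def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa from a 2x2 agreement table (classifier vs. annotator)."""
    n = tp + fp + fn + tn
    p_o = (tp + tn) / n                        # observed agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)  # chance agreement on "posture"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)   # chance agreement on "no posture"
    p_e = p_yes + p_no                         # total chance agreement
    return (p_o - p_e) / (1 - p_e)

# Invented counts over 800 annotated pauses: tp, fp, fn, tn.
kappa = cohens_kappa(85, 17, 15, 683)
accuracy = (85 + 683) / 800
print(f"accuracy = {accuracy:.2f}, kappa = {kappa:.2f}")
```

With imbalanced classes like these, kappa comes out noticeably lower than raw accuracy, because a classifier that always answered "no posture" would already agree with the annotator most of the time by chance.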

These two methods, modeling feature importance and classifying curves with transparent and interpretable machine learning, demonstrate concrete techniques that are applicable to a variety of questions in phonetics, and potentially to linguistics more generally.