Professor Sarah Creel
University of California, San Diego
Email: firstinitlastname at ucsd.edu
Office: Cog Sci 167, x4-7308
University of South Carolina
SC Honors College
BA, Music (1999)
BS, Psychology (1999)
University of Rochester
PhD, Brain and Cognitive Sciences (2005)
My research (CV)
I use a variety of methodologies to explore how children and adults learn and process complex acoustic information, especially speech, and also other types of temporally-patterned stimuli such as music. My work is currently supported by grants from the National Science Foundation and the National Institutes of Health.
Processing sound in language.
I look at how the speech signal is interpreted moment-by-moment (on-line) by examining participants’ eye movements to objects as a word unfolds over time. This methodology is a particularly nice way to examine the development of word recognition: assuming normal vision, anyone from infancy through adulthood has some capacity to execute eye movements to named objects. I also conduct learning studies in which I measure post-learning confusions between words/sounds, and the time needed to reach a particular accuracy criterion, as measures of learning difficulty. A new line of research examines relationships between a listener’s percepts and their productions, by asking how well listeners comprehend their own speech output. Stay tuned or ask me for more info!
Broadly, I’m interested in the development of word and sound recognition, and the specificity of memories for acoustic information. My work on representational specificity has investigated whether adult listeners store and use acoustic properties of words they hear in on-line recognition. The short story is that they do use acoustic properties (talker variability) in recognizing words on-line. The longer story, of course, is how they manage to do this, and how it evolves over the course of development. Some of my research suggests that children as young as 3-4 years use talker-specific detail to recognize words and to comprehend sentences more rapidly.
How is children’s knowledge about sound in language different from adults’? The prevailing account of developing sound recognition is that children are tuned to the sounds of their native language by the end of the first year of life. My work suggests a more protracted developmental time course (see Creel & Quam, 2015, TICS), wherein children continue perceptually learning their native language (and other sound patterns, like music) over a much longer time span. First, adults are vastly superior to children at learning to recognize new voices (Jiménez & Creel, 2011, BUCLD; Creel & Jiménez, 2012), an ability linked to language knowledge (Bregman & Creel, 2014). Second, ongoing work suggests that adults are better than children at a musical “word”-learning task, in which they associate short melodies with pictures (Creel & Tumlin, 2012, Cognitive Science; Creel, 2014, JEPLMC; Creel, 2016, Cognitive Science).
Auditory perception and music cognition.
My overarching goal in the realm of music perception is to uncover potentially common processes across the seemingly separate domains of language and music. For instance, certain word segmentation phenomena have nonspeech auditory analogues (see Creel, Newport, & Aslin, 2004, JEPLMC). I have employed eye tracking to explore moment-by-moment expectations about musical events (Creel & Tumlin, 2012, Cognitive Science). I also use more traditional music cognition methodologies in a quest to understand what musical knowledge is, how it is acquired (see Creel, 2011, JEPHPP; Creel, 2012, Cognitive Psychology; Creel, 2020, Music Perception; Creel, 2022, Cognition), and how this learning interacts with enjoyment of a piece of music or a musical genre.