
            This summer, I conducted research at the Penn Infant Language Center under the mentorship of Dr. Daniel Swingley. Broadly speaking, the goal of my project was to learn more about why some people are better at learning languages than others!

Humans are thought to learn speech sound categories through distributional learning, a form of unsupervised learning driven by exposure to the frequency distributions of speech sounds in one's surroundings. We modeled this process in adults using an unsupervised learning task in which subjects heard vowels that varied along a specific dimension (either formant frequency or duration) and had to discover the categories from the sound distributions themselves. In addition to the vowel category learning task, subjects completed three sound-related cognitive tasks: a speaking task, a musical chord task, and a syllable memory task. Our goal was to probe for correlations among the tasks and determine whether performance on the cognitive tasks predicted vowel category learning ability.
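As a rough illustration of the idea (not the actual experimental paradigm or stimuli, which involved real synthesized vowels), here is a toy simulation: if vowel durations are drawn from a bimodal distribution, a simple unsupervised clustering step can recover two categories with no labels at all, which is the core intuition behind distributional learning. All numbers below are invented for the example.

```python
import random
import statistics

# Hypothetical exposure set: 100 "short" vowels (~120 ms) and 100 "long"
# vowels (~240 ms). No category labels are ever provided.
random.seed(42)
durations = ([random.gauss(120, 15) for _ in range(100)] +
             [random.gauss(240, 15) for _ in range(100)])
random.shuffle(durations)

# Simple unsupervised clustering: 1-D k-means with two clusters.
centers = [min(durations), max(durations)]  # initial guesses at the extremes
for _ in range(20):
    clusters = [[], []]
    for d in durations:
        # Assign each token to the nearer cluster center.
        idx = 0 if abs(d - centers[0]) < abs(d - centers[1]) else 1
        clusters[idx].append(d)
    # Move each center to the mean of its assigned tokens.
    centers = [statistics.mean(c) for c in clusters]

print(sorted(round(c) for c in centers))  # two centers, near 120 and 240 ms
```

A unimodal distribution run through the same procedure would not yield two well-separated centers, which is why the shape of the input distribution, rather than any explicit feedback, is what signals the number of categories.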

We found that some participants did learn the categories from exposure alone. However, there were no significant correlations between performance on the additional cognitive tasks and how well subjects learned the vowel categories. Vowel learning ability therefore may not depend on the ability to imitate vowels (speaking task), general sound perception ability (musical chord task), or phonological working memory (syllable memory task). One interesting observation was that subjects resisted categorizing by duration, even though their imitations showed they could hear and remember duration differences. We hypothesize that these subjects, as native English speakers, were accustomed to categorizing vowels by their formants (as in the vowel sounds of “bat” and “bit”), since those contrasts are important in English. Because duration is not contrastive for English vowels, subjects may have assumed that vowels are always categorized by formant and never by duration.

My research experience allowed me to learn many new skills in speech synthesis, computer scripting, and data analysis. I learned how to use the Unix command line for many purposes, such as running the Sound eXchange (SoX) audio editing software. I also gained familiarity with R and Praat, using the former for statistical analysis and data visualization with ggplot2, and the latter for synthesizing speech files and annotating raw data from the speaking task.

As an intended Cognitive Science major, I'm grateful that I had the opportunity to conduct research on a project that ties together many of my interests. My research experience gave me a deeper understanding of certain topics from the computer science and linguistics courses I took freshman year. As a result of this project, I look forward to my statistics and developmental psychology courses in the fall, and I hope to complement my studies by continuing to conduct research in this field.