Speech Perception and Language Lab at Villanova University
Welcome to the Word Recognition & Auditory Perception Lab (WRAP Lab)! Our group studies how human listeners recognize speech and understand spoken language. Investigating language processing as it happens is central to our approach, and we use a combination of computational, cognitive neuroscience, and behavioral techniques to study these processes.
Find out more about our research on this site. Thanks for stopping by! — Joe Toscano
Here's what we've been up to lately
Earlier this summer, graduate student Abby Benecke was among the 2017 recipients of the Graduate Travel Award from the Psychonomic Society for her project, "Classification of English Stop Consonants: A Comparison of Multiple Models of Speech Perception." Click here to read more. Way to go, Abby!
March 23, 2017
Postdoctoral fellow Laura Getz, along with grad students Elke Nordeen and Sarah Vrabic, recently published a paper entitled "Modeling the Development of Audiovisual Cue Integration in Speech Perception."
We know that adult speech perception is generally enhanced when information is provided from multiple modalities (i.e., when you can both hear the speaker and see their lip movements). In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. How, then, do listeners learn how to process auditory and visual information as part of a unified signal? In this paper, we used a computational modeling approach to simulate the developmental time course of audiovisual speech integration. We find that this domain-general statistical learning technique provides a developmentally-plausible mechanism for understanding speech perception development.
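One common way to formalize this kind of multimodal enhancement is reliability-weighted (inverse-variance) cue combination, where the estimate from each modality is weighted by how reliable it is. The sketch below is a generic textbook illustration of that idea, not the specific statistical learning model used in the paper, and the cue values and variances are made-up numbers:

```python
def integrate_cues(auditory, visual, var_a, var_v):
    """Combine auditory and visual estimates of the same cue,
    weighting each by its reliability (inverse variance).

    Returns the combined estimate and its variance. Note that the
    combined variance is always lower than either input variance,
    which is one way to capture the multimodal benefit."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # weight on the auditory cue
    w_v = 1 - w_a                                # weight on the visual cue
    combined = w_a * auditory + w_v * visual
    combined_var = 1 / (1 / var_a + 1 / var_v)
    return combined, combined_var

# Illustrative example: a reliable auditory estimate (20, variance 4)
# and a noisier visual estimate (40, variance 16).
estimate, var = integrate_cues(20.0, 40.0, var_a=4.0, var_v=16.0)
# The combined estimate (24.0) is pulled toward the more reliable
# auditory cue, and its variance (3.2) is lower than either input.
```

On this view, the developmental question the paper addresses is how listeners come to treat the two signals as estimates of the same underlying event in the first place, so that a combination like this becomes possible.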
March 23, 2017
Grad student Abby Benecke has received a summer fellowship from the Graduate College at Villanova. Way to go, Abby!
Her proposal, "Redundancy and variability in speech: Listeners' use of token-level phonetic cues," will use computational modeling approaches to investigate what acoustic features are necessary for accurate perception of English stop consonants (/b,d,g,p,t,k/), as well as examine whether there are multiple informative cues in each individual speech sound.
December 15, 2016
Reprints of conference presentations by lab members from the Fall 2016 semester are below:
August 25, 2016
Ben Falandays, a second-year grad student in the lab, successfully defended his thesis proposal on August 25, becoming the earliest grad student in the Department's history to propose his thesis project. Congratulations, Ben! His project will use the visual-world eye-tracking paradigm to investigate the integration of low-level acoustic information into discourse processing.
August 11, 2016
WRAP Lab grad students Elke Nordeen, Sarah Vrabic, and Olivia Pereira presented posters at the recent CogSci meeting in Philadelphia. Reprints of our CogSci 2016 posters are available here:
April 22, 2016
Earlier this year, members of the WRAP Lab, Lexie Tabachnick and Joe Toscano, along with friend of the lab, Neil Bardhan, submitted an entry to this year's Flame Challenge to answer the question "What Is Sound?". The Flame Challenge is a contest organized by the Alan Alda Center for Communicating Science that challenges scientists to explain complex concepts to 11-year-old students. Written and video entries are judged by students in classrooms. Click here to see our entry.
Please contact us if you would like to learn more about our research, request a copy of a paper, are interested in joining the lab, or have any other questions. Our email address is email@example.com.
Scheduled to participate in a study? The main lab is located in Tolentine Hall, Room 231. Some of our experiments also take place in the eye-tracking lab in Tolentine 18A. If you're scheduled to participate in an experiment but aren't sure where to go, please come to the main lab and a research assistant will meet you there!