Word Recognition &
Auditory Perception Lab

Villanova University | Department of Psychological & Brain Sciences



WRAP Lab

Speech Perception and Language Lab at Villanova University

Welcome to the Word Recognition & Auditory Perception Lab (WRAP Lab)! Our group studies how human listeners recognize speech and understand spoken language. Investigating language processing as it happens is central to our approach, and we use a combination of computational, cognitive neuroscience, and behavioral techniques to study these processes.

Find out more about our research on this site. Thanks for stopping by! — Joe Toscano


NEWS & UPDATES

Here's what we've been up to lately

July 2018

Neuroimaging study reveals the time-course of speech perception

In a new paper published in Brain & Language, we used the fast optical imaging technique to study the time-course of speech perception. We show that the brain encodes sounds in terms of continuous acoustic cues at early stages of perception and rapidly begins to categorize them based on phonological differences. This technique allows us to study these responses non-invasively in human subjects. Check out the paper here.

March 2018

Studying Speech Communication Using Minecraft

How do talkers indicate information about discourse status through differences in specific acoustic cues, and how is this affected by communicative context? In a new paper published in Discourse Processes with our colleagues Andrés Buxó-Lugo and Duane Watson, we show that game-based approaches (specifically, using Minecraft) allow us to create naturalistic experiments for studying speech communication in the lab, revealing differences in the reliability of cues across communicative contexts.

January 2018

Students Present at the Society for Computation in Linguistics

Undergraduate student Anne Marie Crinnion gave a talk on how her work uses tools from graph theory, namely Steiner trees, to find networks of relevant acoustic cues for fricatives.

Grad student Abby Benecke presented a poster on her computational modeling research investigating which cues are necessary for categorizing voiced versus voiceless stop consonants. Specifically, she tested (1) whether VOT alone is sufficient and (2) how well the model performs without VOT. Click here to read more.

November 2017

New Paper on Age-Related Changes in Speech

In a paper in press at Language and Speech, Dr. Toscano and Dr. Charissa Lansing from the University of Illinois investigated how cue weights in speech perception change with age. Young adults (18-30 years old) use both voice onset time (VOT) and f0 as cues to voicing. Older adults (approx. 30-50 years old) do as well, but they rely more on f0, even though it is a less reliable cue than VOT. This shows that listeners continue to reweight acoustic cues in speech across the lifespan. Read the article here.


CONTACT INFO

Please contact us if you would like to learn more about our research, request a copy of a paper, are interested in joining the lab, or have any other questions. Our email address is wraplab@villanova.edu.

Scheduled to participate in a study? The main lab is located in Tolentine Hall, Room 231. Some of our experiments also take place in the eye-tracking lab in Tolentine 18A. If you're scheduled to participate in an experiment but aren't sure where to go, please come to the main lab and a research assistant will meet you there!

Location: 231 Tolentine Hall
Phone: +1 610-519-3887
Email: wraplab@villanova.edu
Facebook: VU WRAP Lab
Villanova University
Department of Psychological and Brain Sciences
800 E Lancaster Ave
Villanova, PA 19085