Welcome!

Welcome to the Word Recognition & Auditory Perception Lab (WRAP Lab) at Villanova! Our group studies how human listeners recognize speech and understand spoken language. Investigating language processing as it happens is central to our approach, and we use a combination of computational, cognitive neuroscience, and behavioral techniques to study these processes. Find out more about our research on this site. Thanks for stopping by! — Dr. T

CogSci 2016 Posters

Check out our CogSci 2016 posters:

Pereira, O., & Toscano, J.C. (2016, August). Auditory N1 Amplitude Varies Across Multiple Acoustic and Phonological Dimensions in Speech. Poster presented at the 38th Annual Meeting of the Cognitive Science Society, Philadelphia, PA.

Vrabic, S., Nordeen, E., & Toscano, J.C. (2016, August). Speech Perception Across the Lifespan: Using a Gaussian Mixture Model to Understand Changes in Cue Weighting Between Younger and Older Adults. Poster presented at the …

Flame Challenge Top 25

Earlier this year, WRAP Lab members Lexie Tabachnick and Joe Toscano, along with friend of the lab Neil Bardhan, submitted an entry to this year’s Flame Challenge to answer the question “What is sound?” The Flame Challenge is a contest organized by the Alan Alda Center for Communicating Science that challenges scientists to explain complex concepts to 11-year-old students. Written and video entries are judged by students in classrooms …


WRAP Lab at #SciFest

Emma Folk, Olivia Pereira, and Joe Toscano were in Washington, D.C. this past weekend to help teach kids about speech perception! We were part of the Acoustical Society’s exhibit at the #BigTopPhysics booth at the USA Science & Engineering Festival. It was a great event, and Emma and Olivia got to show off their Praat skills by re-synthesizing and auto-tuning visitors’ voices. Lots of fun with these and the other ASA demos at our …

Two students receive summer fellowships

Two WRAP Lab grad students have received summer fellowships from the Graduate College at Villanova. Congrats to Ben and Dave! Ben Falandays received a fellowship for his proposal, “How long can listeners maintain gradient acoustic information?”, which will use the visual world eye-tracking paradigm to study listeners’ interpretation of pronouns in extended discourses. We are conducting this study in collaboration with our colleague, Sarah Brown-Schmidt, at the University of Illinois …


Using naturalistic and engaging tasks to measure phonetic convergence

WRAP Lab grad student Tif Biro presented her poster, “Using naturalistic and engaging tasks to measure phonetic convergence,” at the Sociolinguistic Variation and Language Processing Conference (SVALP). The poster describes Tif’s work on phonetic convergence that we are conducting with our collaborator at Kansas, Navin Viswanathan. Link to poster. Abstract: The acoustic cues used to signal phonological contrasts vary across languages, dialects, and even between individual talkers. Voice onset time …