Joe Toscano, Ph.D.

Assistant Professor
Department of Psychology

Villanova University
800 E Lancaster Ave
Villanova, PA 19085

Office: Tolentine 344
Email: joseph.toscano@villanova.edu
Twitter: @joetosc
Google Scholar profile

Phone: 1-610-519-4755
Fax: 1-610-519-4269

Hi, my name is Joe Toscano, and I'm an assistant professor in the Department of Psychology at Villanova University, where I direct the Word Recognition and Auditory Perception (WRAP) Lab. My research focuses on questions about speech recognition and spoken language processing. Scroll down for more on my background and current work.

About

I'm originally from snowy Rochester, NY, and received my B.S. in Brain & Cognitive Sciences from the University of Rochester, where I worked with Mike Tanenhaus. I then began a nine-year journey to the Midwest, where I received my Ph.D. in Cognition and Perception from the Department of Psychology at the University of Iowa, working with Bob McMurray.

After grad school, I spent three years in Champaign-Urbana as a Beckman Postdoctoral Fellow at the University of Illinois, where I had the opportunity to work with researchers in a number of fields, from Psychology to Electrical Engineering, including Susan Garnsey, Duane Watson, Sarah Brown-Schmidt, Jont Allen, Charissa Lansing, Monica Fabiani, and Gabriele Gratton. Now I'm back in the Northeast, where I joined the faculty in Psychology at Villanova in Fall 2014.

Research

My research focuses on auditory perception and spoken language comprehension. How are we able to recognize speech so accurately, when it remains difficult to build computer systems that do so equally well? How does the ability to understand speech emerge over development, and how malleable is it in adulthood? How do listeners adapt to accents and understand talkers they have never encountered before? And how can we improve speech recognition for listeners who use assistive devices like hearing aids and cochlear implants?

To answer these questions, I use techniques that allow us to study spoken word recognition as it happens. These include cognitive neuroscience methods (ERP and optical neuroimaging) that reveal what information listeners have access to at early stages of perception, as well as eye-tracking approaches for studying lexical activation as the speech signal unfolds. These data inform models of speech perception that let us ask questions about the limits of unsupervised statistical learning and about how listeners weight acoustic cues in speech.

For more on my work, check out my lab website here.

Publications

Click here for a full list of publications.


Calendar