Gabriella Vigliocco
Ecological Language: A Multimodal Approach to Language Learning and Processing
Speaker
Gabriella Vigliocco
Gabriella is Professor of the Psychology of Language at University College London, where she leads the Language and Cognition Lab (www.language-cognition-lab.org). She received her PhD in Experimental Psychology from the University of Trieste. Before moving to UCL, she worked at the University of Arizona, the Max Planck Institute for Psycholinguistics and the University of Wisconsin. Her research focuses on the cognitive and neurobiological basis of human communication and how, through communication, humans learn about and share ideas. More specifically, she is interested in how we learn and process language in face-to-face, real-world settings, how our semantic knowledge interfaces with perception, action and emotion, and how these systems are recruited during language learning and processing. Her current efforts focus on the study of language and cognition in their ecology. She also leads an interdisciplinary PhD programme (the Ecological Brain programme) aimed at training graduate students in real-world research.
Her work is interdisciplinary, bringing together theoretical insights from psychology, linguistics, neuroscience, philosophy and computer science. She uses methods from psychology, cognitive neuroscience and computational modelling, integrating evidence from different languages and different populations (adults, children, deaf individuals using British Sign Language, as well as individuals who have developed aphasia or apraxia after brain damage).
Abstract
The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures, eye gaze and mouth movements. Yet this multimodal context is usually stripped away in experiments, as dominant paradigms tend to focus on linguistic processing alone. Here, I question this dominant paradigm and present initial evidence for why we should consider the multimodal context in our study of language learning and processing. First, I will describe how speakers use these multimodal cues in dyadic naturalistic interactions with children and adults. Then, I will provide computational evidence that (at least some of) these cues are spontaneously produced to support effective communication. Finally, I will present electrophysiological data showing how these cues are used online by comprehenders.