Decoding the neural architecture of speech comprehension
event sponsored by
Duke Institute for Brain Sciences (DIBS)
series
Cognitive, Auditory, and Neural Bases of Language and Speech
speaker
Laura Gwilliams (UCSF)
Humans understand speech with such speed and accuracy that it belies the complexity of transforming sound into meaning. The goal of my research is to develop a theoretically grounded, empirically tested, and computationally explicit account of how the brain achieves this feat. In this talk, I will first present an analytical framework, informed by machine learning and classical statistics, that allows neural signals to be decomposed into an interpretable sequence of operations. Next, using this framework, I will survey a set of magnetoencephalography studies that describe (i) what linguistic representations the brain uses to bridge between sound and meaning; (ii) how those representations are combined to form hierarchical structures (e.g. phonemes into morphemes; morphemes into words); and (iii) how information is exchanged across structures to guide comprehension both bottom-up and top-down. Overall, this body of work showcases the utility of combining theoretical linguistics, machine learning, and cognitive neuroscience to develop neurally constrained models of spoken language processing.
Please email Tyler Lee at tyler.lee@duke.edu to request the Zoom link.
Categories
Medicine, Lecture/Talk, and Research