Friday, March 27, 2015 4:00 p.m. in ETC 4.150
Professor Bharath Chandrasekaran
Department of Communication Sciences & Disorders
The University of Texas at Austin
Speech is the most important acoustic signal for humans. Our brain is able to simultaneously extract “what” is being said (referred to as lexical information) and “who” is speaking (referred to as indexical information) from the speech signal with consummate ease. In this talk, I will focus on recent studies examining experiential influences on the subcortical encoding of speech signals. I will also discuss methods that recover and classify lexical and indexical content in speech signals from the brains of listeners. This is a complex computational problem; the brain response to speech signals is noisy due to several methodological issues. In pilot experiments, we use a combination of electroencephalography (EEG) and machine learning approaches to classify speech signals. We have extensive preliminary data showing that “brain reading” of speech signals is an achievable goal. Using these studies as a scaffold, I will discuss the initial groundwork for the “predictive coding” hypothesis, which explicates how prior experience affects the neural encoding of speech signals.
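The EEG-plus-machine-learning classification approach mentioned above can be sketched in broad strokes as follows. This is a minimal illustration only: the data are synthetic stand-ins for real EEG epochs, and the dimensions, feature choices, and classifier are assumptions for the sketch, not the speaker's actual pipeline.

```python
# Illustrative sketch: classify simulated "EEG epochs" into two speech
# categories with a standard scikit-learn pipeline. The synthetic data,
# epoch dimensions, and classifier choice are assumptions, not the
# actual methods described in the talk.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 32, 100

# Two simulated stimulus classes; class 1 carries a small additive
# "evoked" deflection in a mid-epoch window, so the problem is
# learnable but noisy, loosely mimicking real EEG responses.
labels = rng.integers(0, 2, size=n_epochs)
epochs = rng.normal(size=(n_epochs, n_channels, n_samples))
epochs[labels == 1, :, 40:60] += 0.3

# Flatten each (channels x time) epoch into one feature vector.
X = epochs.reshape(n_epochs, -1)

# Standardize features, then fit a regularized linear classifier,
# scored with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy above chance (0.5) on held-out epochs is the kind of evidence that would support the claim that stimulus identity can be decoded from brain responses.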