BRAIN-funded researchers advance our understanding of language processing

A team of BRAIN-funded researchers is making progress in understanding the role of facial expressions in speech processing. These insights provide a potential roadmap for developing new treatments for patients who have difficulty understanding language.

The most common way for humans to communicate is face-to-face, which requires our brains to combine information from the voice of the person we are talking to with their facial expressions. Dr. Michael Beauchamp and his team are studying this process through their BRAIN grant, a part of the BRAIN Initiative’s program on Investigative Human Neuroscience.  Dr. Beauchamp also participates in the Research Opportunities in Humans (ROH) Consortium work group, coordinated by staff at the National Institutes of Health. Members of the ROH Consortium collaborate to identify consensus standards of practice as well as supplemental opportunities to collect and provide data for ancillary studies, and to aggregate and standardize data for dissemination among the wider scientific community.

Check out the interview below with Dr. Beauchamp to learn more about his team’s work investigating how the brain processes speech and how visual information from a talker’s face can aid in speech perception. Their research may help advance understanding of the underlying neural processes involved in speech perception and has potential therapeutic implications for patients with difficulty understanding language.

Could you briefly introduce yourself and the team of scientists that significantly contributed to this project?

My name is Michael Beauchamp and I am the Principal Investigator (PI) of the award, together with Charlie Schroeder, Ph.D. from Columbia University. Our large group of intracranial electroencephalography (iEEG) researchers, including faculty members Daniel Yoshor, M.D., Brett Foster, Ph.D., and John Magnotti, Ph.D., moved from Baylor College of Medicine to the University of Pennsylvania in 2021. It was a juggling act to coordinate such a large move, but fortunately everyone is settling in well. As for trainees, I would like to highlight two outstanding individuals. The first, Patrick Karas, was a neurosurgery resident. During the research year of his residency program, Patrick learned to collect and analyze iEEG data. Patrick was very productive, and the skills he learned during his research year enabled him to start his own research laboratory at the University of Texas Medical Branch in Galveston. The second trainee I would like to highlight is Zhengjia Wang. Zhengjia was a statistics graduate student at Rice University in Houston with no experience in neuroscience when he joined my lab. He is now a post-doctoral fellow in the group and the lead programmer on RAVE, our software suite for iEEG analysis. Zhengjia is a perfect example of how BRAIN Initiative funding has brought new talent and energy from other disciplines into neuroscience.

Could you please provide us a brief overview of the central premise of this project, perhaps including the broader context of the neuroscience clinical research that led to this project?

A key part of human existence is communicating with others. Human communication depends not only on the auditory modality (we hear others as they talk) but also on the visual modality (we closely examine their face as they talk). Faces carry information about emotional content, spatial information, and the content of speech. As we age, we lose hearing, and seeing the face of the talker can serve as a natural hearing aid. In normal aging and in clinical conditions, we would like to better leverage the visual contribution to speech perception to help patients who have difficulty understanding language.

What were the major unanswered questions your team hoped to address at the onset of the grant/project?  

We hoped to shed light on how the brain integrates auditory and visual speech information in the service of perception.

The NIH BRAIN Initiative U01 funding opportunity is designed to maximize opportunities to conduct innovative in vivo neuroscience research made available by direct access to recording from and stimulating the human brain during invasive surgical procedures. Could you describe how experiments in this project are performed, and how they differ from projects involving non-invasive recordings?

Humans are unique in their ability to use language to communicate, meaning that experimental animals are of limited utility for studying language. Before receiving BRAIN Initiative funding, my primary technique for investigating brain function was blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). While fMRI is the most powerful non-invasive technique for studying human brains, it depends on an indirect, hemodynamic signal. This means that fMRI is very slow: responses are measured in seconds, thousands of times slower than the neural signal itself. This is problematic for studying speech, which is characterized by the production and reception of many speech elements each second. The ability to directly record neural activity in the human brain has been a huge boost to our ability to understand the neural mechanisms of speech perception.

This work would not be possible without the invaluable role of volunteers. How do participants engage in the science they are helping to advance? Are there any insights your team would like to share about the engagement of participants in clinical research projects?

In one way, the experiments that we perform in patients are very similar to those that we perform in healthy adults: both groups watch and listen to people talking and report back to us what they perceive. However, the patients are unique because we can also record their brain activity with implanted electrodes. We are honored by their willingness to participate in our studies and do everything we can to make it a valuable and interesting experience for them.

What are some critical new insights into human brain function that this project and your team have facilitated?

Our work has shown that information from the talker's face can give the brain a head-start on processing speech. This is important because speech arrives so rapidly that it is a real struggle for the brain to keep up—just imagine services like Alexa or Siri—they can respond to simple phrases, but not continuous speech. Your brain must turn the incoming speech into a meaningful representation so that you can respond appropriately.

In addition to the knowledge gained, what are the broader impacts of this project on human health and the development of new therapies for disorders of the human brain?

Our studies will help us develop treatments for patients who have difficulty understanding language. In addition, the mechanisms that the brain uses to integrate very different kinds of information, such as auditory information from the voice and visual information from the face, reflect a general process that may go wrong in a variety of disorders.

Science frequently raises as many new questions as it answers. What major research directions does your team envision in your specific area of human neuroscience in the next five to ten years?

We are currently focused on two fascinating observations in language. The first is the abundant plasticity of the speech perception system. We can meet someone from another country whose accent makes them difficult to understand at first. Yet after an hour or two of conversation, we barely notice their accent and have no trouble understanding them. What are the neural processes underlying this ability? Does seeing the face of the person help? The second observation is the vast individual differences in speech perception. Even with similar hearing and vision, under noisy listening conditions some people have no trouble understanding speech while others struggle mightily. What are the brain differences underlying these perceptual differences?

Stay tuned for more research highlights on The BRAIN Blog.


[Image: black and white photo of people working on laptops at a counter-height table at the annual BRAIN meeting]