The unique human capability to produce speech enables swift communication of abstract and substantive
information. Currently, nearly two million people in the United States, and far more worldwide, suffer from
significant speech production deficits as a result of severe neuromuscular impairments due to injury or
disease. In extreme cases, individuals may be unable to speak at all. These individuals would greatly
benefit from a device that could alleviate speech deficits and enable them to communicate more naturally
and effectively. This project entitled "REvealing SPONtaneous Speech processes in Electrocorticography (RESPONSE)" will explore aspects of decoding a user’s intended speech directly from the
electrical activity of the brain and converting it to synthesized speech that could be played through a
loudspeaker in real-time to emulate natural speaking from thought. In particular, RESPONSE will uniquely
focus on decoding continuous, spontaneous speech processes to achieve a more natural and practical
communication device for the severely disabled.
RESPONSE will investigate the neural processes of spontaneous and imagined speech production using
Electrocorticography (ECoG), which measures electrical activity directly from the brain surface and covers
an area large enough to provide insights about widespread networks for speech production and
understanding, while simultaneously providing localized information for decoding nuanced aspects of the
underlying speech processes. In conjunction with in-depth analysis of the recorded neural signals, the
researchers will apply customized ECoG-based automatic speech recognition (ASR) techniques to facilitate
the analysis of the large number of phones occurring in continuous speech. Ultimately, the project aims to
define fundamental units of continuous speech production and understanding, illustrate functional
differences between these units, and demonstrate that representations of spontaneous speech can be
synthesized directly from the neural recordings.
RESPONSE is jointly funded by the BMBF (project management: DLR) in the framework of "Multilateral Cooperation in Computational Neuroscience", a joint funding initiative of the BMBF (Germany) and the NSF (USA), under award number 01GK1602 (2017-2019).
It is jointly carried out by a German-American team: the Cognitive Systems Lab at the University of Bremen and the Advanced Signal Processing in Engineering and Neuroscience Lab at Old Dominion University in Norfolk, VA, USA.
CSL contact: Dr.-Ing. Christian Herff, Prof. Dr.-Ing. Tanja Schultz