ADSPEED

Speech is the most natural and efficient form of human communication. However, neurological diseases such as amyotrophic lateral sclerosis (ALS) can impair speech production functions, severely limiting or completely silencing those affected. In recent years, research has shown that brain-computer interfaces (BCIs), user interfaces that employ neural activity for human-machine interaction, offer great potential for speech prostheses: speech processes are decoded from brain activity and the spoken words are reconstructed for playback through a loudspeaker. Before BCIs can be used in such speech prostheses, several fundamental research questions and challenges must be addressed.

In the "ADaptive Low-Latency SPEEch Decoding and synthesis using intracranial signals" (ADSPEED) project, we tackle three of these challenges to advance a technology that has so far been developed for individuals with intact speech production towards its intended target users (e.g. ALS patients). The ADSPEED project focuses on synthesizing natural speech via systems that rely on captured neural correlates of imagined speech. To accomplish this, the ADSPEED team will decode imagined speech in real time, allowing user and system to co-adapt via continuous neurofeedback. The project thus studies fundamental challenges on the way to a natural and practical communication device that generates audible speech from brain activity in real time.

In this regard, ADSPEED constitutes a German-US collaboration in the field of computational neuroscience and specifically addresses the following four research thrusts:

  1. The training of synthesis techniques even when time-aligned data between neural activities and speech are not available.
  2. The investigation of online synthesis methods, building on previous work, to decode imagined speech and enable user adaptation via continuous auditory feedback.
  3. The development of techniques regarding co-adaptation between user and system.
  4. A proof-of-concept study of a neuroprosthesis based on imagined speech via a co-adaptive online synthesis system without time-aligned data for training.
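The source does not specify how thrust 1 will be realized; one common alignment-free ingredient in this setting is dynamic time warping (DTW), which can pair each frame of a neural-feature sequence with a frame of a speech-feature sequence even when the two recordings are not time-aligned. The sketch below is purely illustrative (the function name, feature shapes, and the assumption of a shared feature space are ours, not the project's):

```python
import numpy as np

def dtw_align(neural_feats, speech_feats):
    """Dynamic time warping between two (time, dim) feature sequences.

    Returns a monotonic frame-level alignment path as a list of
    (neural_index, speech_index) pairs. For this sketch we assume both
    sequences live in a shared feature space of equal dimensionality.
    """
    n, m = len(neural_feats), len(speech_feats)
    # Pairwise Euclidean distances between all frame pairs
    cost = np.linalg.norm(
        neural_feats[:, None, :] - speech_feats[None, :, :], axis=-1)
    # Accumulated cost via the standard DTW recursion
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1])
    # Backtrack from the end to recover the warping path
    i, j = n, m
    path = [(i - 1, j - 1)]
    while i > 1 or j > 1:
        if i == 1:
            j -= 1
        elif j == 1:
            i -= 1
        else:
            _, (i, j) = min((acc[i - 1, j - 1], (i - 1, j - 1)),
                            (acc[i - 1, j], (i - 1, j)),
                            (acc[i, j - 1], (i, j - 1)))
        path.append((i - 1, j - 1))
    return path[::-1]
```

Such an alignment could, for instance, provide frame-level training targets for a synthesis model when only unaligned neural and acoustic recordings are available; it stands in for whatever method the project ultimately develops.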

ADSPEED is jointly funded by the BMBF (project management agency: DLR) within the framework of "Multilateral Cooperation in Computational Neuroscience", a joint funding initiative of the BMBF (Germany) and the NSF (USA), under award number 01GQ2003 (2021-2023).

In our research project, we work closely with the ASPEN Lab (sites.google.com/vcu.edu/aspenlab) at Virginia Commonwealth University, led by Professor Dean Krusienski.

CSL Contact: M. Sc. Miguel Angrick, Prof. Dr.-Ing. Tanja Schultz