Silent Speech Communication

The Cognitive Systems Laboratory at the University of Bremen is working on innovative Silent Speech Interfaces, which allow humans to communicate with each other by speaking silently. In March 2010, we demonstrated a prototype system for the first time at CeBIT, the world's largest IT fair. We also presented our technology at the CeBIT Vision press show on November 24, 2009.

Technology

Our technology is based on electromyography, i.e. the capturing and recording of electrical potentials that arise from muscle activity. Speech is produced by the contraction of muscles that move our articulatory apparatus. The electrical potentials generated by this muscular activity are captured by surface electrodes attached to the skin. Analyzing and processing these signals with suitable pattern matching algorithms allows us to reconstruct the corresponding movements of the articulatory muscles and to deduce what has been said. The recognized speech is output as text or synthesized as an acoustic signal. Since electromyography records muscle activity rather than acoustic signals, speech can be recognized even if it is uttered silently, without any sound production.
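
To make this processing chain more concrete, here is a minimal sketch in Python of an EMG recognition pipeline. The sampling rate, frame sizes, time-domain features, and the support vector classifier are illustrative assumptions for a toy example; they do not describe our actual system.

import numpy as np
from sklearn.svm import SVC  # any frame- or utterance-level classifier would do

# Assumed acquisition parameters (illustrative only).
SAMPLE_RATE = 2000   # Hz, surface EMG sampling rate
FRAME_LEN = 0.025    # 25 ms analysis frames
FRAME_SHIFT = 0.010  # 10 ms frame shift

def frame_features(emg, sample_rate=SAMPLE_RATE):
    """Cut one EMG channel into overlapping frames and compute simple
    time-domain features per frame: mean rectified amplitude, power,
    and zero-crossing rate. Real systems use richer, multi-channel features."""
    frame = int(FRAME_LEN * sample_rate)
    shift = int(FRAME_SHIFT * sample_rate)
    feats = []
    for start in range(0, len(emg) - frame + 1, shift):
        x = emg[start:start + frame]
        mean_abs = np.mean(np.abs(x))                   # average rectified amplitude
        power = np.mean(x ** 2)                         # frame energy
        zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)  # zero-crossing rate
        feats.append([mean_abs, power, zcr])
    return np.array(feats)

# Toy training data: two silently "spoken" words, a few recordings each.
# In a real recognizer the labels would be phones or words aligned to the EMG.
rng = np.random.default_rng(0)
recordings = {"yes": [rng.normal(0.0, 1.0, 4000) for _ in range(5)],
              "no":  [rng.normal(0.0, 0.3, 4000) for _ in range(5)]}

X, y = [], []
for label, signals in recordings.items():
    for sig in signals:
        X.append(frame_features(sig).mean(axis=0))  # utterance-level feature vector
        y.append(label)

clf = SVC().fit(np.array(X), y)

# Recognition: classify a new silent utterance and output the result as text.
test_utterance = rng.normal(0.0, 1.0, 4000)
test_vec = frame_features(test_utterance).mean(axis=0).reshape(1, -1)
print(clf.predict(test_vec)[0])

The sketch only illustrates the general chain of signal capture, feature extraction, pattern matching, and text output described above.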

For Silent Speech using brain data, please have a look at our page on brain activity modeling.

Applications

Our group has been working on this technology since 2004; a prototype was presented for the first time at CeBIT 2010, demonstrating the following applications:

  1. Silent Telephony: Silent speech recognition allows for silent communication without disturbing any bystanders.
  2. Transmitting Confidential Information: The system allows for seamless switching between silent and audibly spoken speech and thus makes it possible to transmit confidential information such as passwords and PINs safely and securely.
  3. Robust communication in adverse environments: Since electromyography relies on signals captured directly at the human body, the signal is not corrupted by ambient noise or other adverse acoustic conditions.
  4. Speaking in a foreign tongue: By feeding the output of silent speech recognition into a component that translates from one language to another, a speaker can silently utter a sentence in their native language while the listener hears the translated sentence in their own language. It appears as if the speaker had produced speech in the foreign language (a sketch of this chain follows the list).
  5. Help for disabled people: Our technology may also help people who have lost their voice due to accident or illness.
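
As a rough illustration of application 4, the following sketch shows how silent speech recognition, machine translation, and speech synthesis could be chained. All three components and their function names are placeholders assumed for this example only; they do not describe any particular implementation.

def recognize_silent_speech(emg_signal):
    """Placeholder: EMG-based silent speech recognizer returning text."""
    raise NotImplementedError

def translate(text, source_lang, target_lang):
    """Placeholder: machine translation component."""
    raise NotImplementedError

def synthesize(text, language):
    """Placeholder: speech synthesis, returning an audio buffer."""
    raise NotImplementedError

def silent_cross_lingual_utterance(emg_signal, source_lang="de", target_lang="en"):
    """The speaker mouths a sentence silently in source_lang; the listener
    receives audible speech in target_lang."""
    text = recognize_silent_speech(emg_signal)               # silent speech -> text
    translated = translate(text, source_lang, target_lang)   # text -> target language
    return synthesize(translated, target_lang)               # text -> audible speech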

Contact

Please direct any inquiries to the following people:

Relevant Publications

Below, you can find a list of our publications related to Silent Speech Communication, sorted by publication date. Two papers that provide a good introduction to and overview of the topic are Session-Independent EMG-based Speech Recognition (Michael Wand, Tanja Schultz, International Conference on Bio-inspired Systems and Signal Processing, 2011) and Modeling Coarticulation in EMG-based Continuous Speech Recognition (Tanja Schultz, Michael Wand, Speech Communication, volume 52, 2010).