Lorenz Diener (Cognitive Systems Lab, University of Bremen, Bremen, Germany), lorenz.diener@uni-bremen.de
Jose A. Gonzalez (Dept. of Languages and Computer Science, University of Malaga, Spain), jgonzalez@lcc.uma.es
Tanja Schultz (Cognitive Systems Lab, University of Bremen, Bremen, Germany), tanja.schultz@uni-bremen.de
Scope of the Special Session
Speech is a rich and complex process, and the acoustic signal is just one of the biosignals it produces. In recent years, the automatic processing of these speech-related biosignals has become an active area of research within the speech community. This special session aims to foster research on one emerging area within the field of silent speech: direct synthesis. Direct synthesis refers to the generation of speech directly from speech-related biosignals (e.g. ultrasound, EMG, EMA, PMA, lip-reading video, BCI, ...) without an intermediate recognition step. It has been made possible by recent developments in supervised machine learning techniques and the availability of high-resolution biosensors. Furthermore, the availability of low-cost computing devices has made something possible that was unthinkable 20 years ago: the generation of audible speech from speech-related biosignals in real time. With this special session, we aim to bring together researchers working on direct synthesis and related topics, to foster work towards direct synthesis toolkits and datasets, and to highlight and discuss common challenges and solutions in this emerging research area.
Papers and Presentation Form
Topics of interest for this special session include, but are not limited to:
- Speech Synthesis from Speech-Related Biosignals,
- Speech Recognition from Biosignals,
- Articulatory Synthesis,
- Acquisition of non-acoustic speech data using different modalities (e.g. Electromyography, Electroencephalography, Electrocorticography, Electromagnetic Articulography, Permanent Magnet Articulography, Ultrasound, Video, etc.),
- Silent Speech Interfaces,
- Lip Reading,
- Neural Representations of Speech and Language.
Paper submissions must conform to the format defined in the Interspeech paper preparation guidelines and detailed in the Author’s Kit, which can be found on the Interspeech web site. When submitting the paper in the Interspeech electronic paper submission system, please indicate that the paper should be included in the Special Session on Novel Paradigms for Direct Synthesis based on Speech-Related Biosignals. All submissions will take part in the normal paper review process.
We encourage you to bring demonstrations of working systems to present along with your paper. If you would like to do so, please inform the special session chairs via e-mail by June 10.
Submission opened: February 1, 2018
Abstract submission deadline: March 16, 2018 midnight GMT
Final paper submission deadline: March 23, 2018 midnight GMT
Acceptance notification: June 3, 2018
Camera-ready paper due: June 17, 2018