Workshop on Multistream Processing: From Speech Recognition to BCI

Date and Time: Wednesday, 10 September 2008, 09:00 - 13:00 (to be confirmed)
Location: University of Hamburg,
Edmund-Siemers-Allee 1,
ESA 1W, Room 121
(on the first floor of the west wing)
Keynote: Prof. Dr. José del R. Millán, IDIAP Research Institute, Switzerland
Talk title: to be announced

Associate Prof. Dr. Bo Hong, Tsinghua University, China
Talk: BCI using mental response marker along sensory streams
Topics: Tian Gan, Hamburg University
Multi-Stream Data in Bimodal Speech Recognition

Dan Zhang, Tsinghua University
Crossmodal Selective Attention

Yixuan Ku, Tsinghua University
Crossmodal Working Memory
Contact: Tian Gan
E-mail: gan(at)
Phone: +49 40 428 83 - 23 21

Dan Zhang
E-mail: d-zhang(at)

Yixuan Ku
Phone: +86 10 6279 4058 - 81

Workshop Abstract

As part of the research activities of the CINACS Summer School 2008, this workshop sets out to investigate multi-modal fusion techniques, both in traditional computer systems and in brain-computer interface (BCI) systems. This kind of system integration differs from other information-fusion problems in that BCI information, e.g. EEG signals, must be considered in addition to speech and visual information. Despite the enormous gap between the representations involved, applying traditional machine learning techniques to extract and process brain signals such as EEG is a very promising intersection between natural and artificial systems. The brain's cross-modal integration mechanisms contribute to the rapid and robust behaviour of human multi-modal communication and set it apart from any current technical solution.
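To make the idea of combining heterogeneous streams concrete, here is a minimal, purely illustrative sketch of weighted late fusion: each stream (audio, visual, EEG) is assumed to have its own classifier emitting class posteriors, and the fused decision combines them log-linearly with per-stream reliability weights. All stream names, weights, and probabilities below are invented for the example; this is not the workshop's method.

```python
import math

def fuse_streams(posteriors, weights):
    """Weighted log-linear (product-of-experts) fusion of per-stream posteriors."""
    n_classes = len(next(iter(posteriors.values())))
    scores = [
        sum(weights[s] * math.log(p[c] + 1e-12) for s, p in posteriors.items())
        for c in range(n_classes)
    ]
    # Renormalize the fused scores back into a probability distribution.
    m = max(scores)
    exp_scores = [math.exp(s - m) for s in scores]
    total = sum(exp_scores)
    return [e / total for e in exp_scores]

# Illustrative posteriors for a two-class decision:
posteriors = {
    "audio":  [0.7, 0.3],  # acoustic speech recognizer (hypothetical)
    "visual": [0.6, 0.4],  # lip-reading stream (hypothetical)
    "eeg":    [0.2, 0.8],  # BCI stream, e.g. an attention-related marker (hypothetical)
}
weights = {"audio": 1.0, "visual": 0.5, "eeg": 0.8}

print(fuse_streams(posteriors, weights))
```

A down-weighted but confident EEG stream can tip the fused decision away from what audio and video alone would suggest, which is exactly the kind of interplay the workshop questions below address.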

The basic research concept of CINACS can be summarized as: (a) extraction of biological principles and improved understanding of cross-modal integration in humans, and (b) introduction of these biological principles into artificial intelligent systems. To gain deeper insight into these principal aims, several concrete research issues will be discussed in this workshop:

  • How can useful BCI information (such as EEG signals) be robustly extracted for a traditional artificial intelligent system, with the goal of achieving better system performance?

  • How can various channels of information be combined within the context of multi-modal interaction?

  • How can machine learning models be better adapted to the processing of EEG signals?

  • We see, hear, touch, smell, and taste the outer world. So how do we combine the different sensory modalities to form a unitary perception of the environment?

  • More and more research results show that the human brain is multimodal, challenging the traditional view of strictly unimodal primary sensory areas. So how does one sensory modality influence another?

  • Are the unimodal sensory areas essentially multimodal?

  • We perform actions every day according to what we perceive from the outside world. How does sensory perception influence internal representations, and the actions derived from them?

Last Updated ( Thursday, 04 September 2008 )