Summer School 2009: "Cross-Modal Learning and Interaction"

Timetable

Week 1 (7 to 11 September 2009), Location: Room 1-312, FIT Building

Time          | Mon, Sept. 7         | Tue, Sept. 8          | Wed, Sept. 9  | Thu, Sept. 10       | Fri, Sept. 11
09:00 – 10:30 | Welcome / Course Liu | Course Engel/Maye     | Course Büchel | Course Gao/Gao/Hong | Course Zhou
Coffee break
11:00 – 12:00 | Course Liu           | Course Engel/Maye     | Course Büchel | Course Gao/Gao/Hong | Course Zhou
12:00 – 12:30 | Teaser for Poster Session, PhD students (all days)
Lunch
13:30 – 14:00 | Poster session (all days)
14:00 – 15:30 | Course Liu           | Course Engel/Maye     | Course Röder  | Course Peng         | –
Coffee break
16:00 – 18:00 | Lab visit (Medical School) | Lab visit (Information College) | Course Röder | Course Peng | –



Course Liu:

Prof. Guosong Liu

Neural Basis of Learning and Memory

Type

Introductory course: 2 lectures (4 h)

Goal

Introduction to the neural basis of learning and memory, focusing on data from animal studies.

Target audience

Students of Psychology, Neuroscience, Computer Science

Content

  1. Experimental systems for studying the neural basis of learning and memory in animals:

    • Eyeblink conditioning

    • Fear conditioning

    • Morris water maze

    • Novel object recognition test

  2. Brain regions important for memory

    • Working memory/short-term memory

      • Hippocampus

      • PFC

    • Long-term memory

      • Cortex

  3. Memory processes

    • Encoding

    • Consolidation

    • Storage

    • Retrieval

    • Reactivation

    • Reconsolidation

  4. Fear memory and fear memory extinction

  5. Memory enhancement


Background reading list:

    1. Memory system (Squire et al., 1993)

    2. Fear memory (LeDoux, 2007)

    3. Working memory (Baddeley, 1992)

    4. Hippocampus (Morris et al., 2003)

    5. Enhancement of cognition (Lee and Silva, 2009)

    6. Synaptic plasticity (Bliss and Collingridge, 1993)

    7. Computation model (O'Reilly and Rudy, 2001)


Baddeley, A. (1992). Working memory. Science 255, 556-559.

Bliss, T.V., and Collingridge, G.L. (1993). A synaptic model of memory: long-term potentiation in the hippocampus. Nature 361, 31-39.

LeDoux, J. (2007). The amygdala. Curr Biol 17, R868-874.

Lee, Y.S., and Silva, A.J. (2009). The molecular and cellular biology of enhanced cognition. Nat Rev Neurosci 10, 126-140.

Morris, R.G., Moser, E.I., Riedel, G., Martin, S.J., Sandin, J., Day, M., and O'Carroll, C. (2003). Elements of a neurobiological theory of the hippocampus: the role of activity-dependent synaptic plasticity in memory. Philos Trans R Soc Lond B Biol Sci 358, 773-786.

O'Reilly, R.C., and Rudy, J.W. (2001). Conjunctive representations in learning and memory: principles of cortical and hippocampal function. Psychological review 108, 311-345.

Squire, L.R., Knowlton, B., and Musen, G. (1993). The structure and organization of memory. Annual review of psychology 44, 453-495.


 

Course Engel/Maye:

 

Prof. Andreas K. Engel & Alexander Maye

Attention Selection: Physiology, Models and Application

Type

Introductory course: 2 lectures (4 h)

Goals

Review of recent physiological and modelling work that focusses on understanding attentional selection of sensory signals; specifically, we will address the role of neural synchrony for selection processes. Technical applications of attention models in image understanding and robot control will be introduced.

Target audience

Students of Psychology, Neuroscience, Computer Science

Content

1. Physiology

  • Physiological models of stimulus selection

  • Role of neural coherence for attentional selection

  • Attention and multisensory processing

  • Review of animal data on attention and oscillatory activity

  • Discussion of human EEG and MEG data

2. Modelling and Application

  • Computational models of attentional selection

  • Top-down vs. bottom-up attentional control

  • Attention assistance systems

  • Discussion: Do robots have/need attention?

 

 

Course Büchel:

Prof. Christian Büchel

Imaging Methodology for Learning Processes



Course Röder:

Prof. Brigitte Röder

Event-related brain potentials and their use in multisensory research

Content

This workshop introduces the neural basis of event-related potentials (ERPs), how they are measured, and how they are extracted from the electroencephalogram (EEG). Several parameters can be used to describe ERPs, which in turn are used to study perceptual-cognitive functions. Some well-known ERP components and their possible functional meaning are introduced. Moreover, the use of ERPs to study multisensory functions is illustrated. Finally, advantages and limitations of ERPs in cognitive neuroscience are discussed.
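The core of ERP extraction, averaging many time-locked epochs so that random noise cancels while the stimulus-locked component survives, can be sketched on synthetic data (all numbers below are invented for illustration, not real EEG parameters):

```python
import random

def extract_erp(epochs):
    """Average time-locked epochs: noise cancels, the ERP remains."""
    n = len(epochs)
    length = len(epochs[0])
    return [sum(e[t] for e in epochs) / n for t in range(length)]

# Hypothetical example: a fixed deflection buried in random noise.
random.seed(0)
signal = [0.0] * 40 + [5.0] * 20 + [0.0] * 40   # "component" at samples 40-60
epochs = [[s + random.gauss(0, 3) for s in signal] for _ in range(200)]

erp = extract_erp(epochs)
# Averaging 200 epochs shrinks the noise by about sqrt(200) ~ 14x,
# so the deflection around samples 40-60 stands out clearly.
```

A single epoch here is dominated by noise (standard deviation 3 against a peak of 5); only the average reveals the component, which is why ERP studies need many trials.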



Course Gao/Gao/Hong:

Prof. Shangkai Gao, Prof. Xiaorong Gao, Dr. Bo Hong

Learning in a Brain-Computer Interface


Content

In the past decade, many research groups have explored the feasibility of establishing a direct (non-muscular) communication channel between the brain and the external world by interpreting brain signals online, an approach now widely known as a brain-computer interface (BCI). The major goal of BCI research is to help disabled people, especially locked-in patients, interact with their environment; BCI has also been adopted as a new form of human-computer interaction. A BCI is not just a feed-forward translation of brain signals into control commands; rather, it involves bi-directional adaptation between the brain and the computer algorithm. A BCI paradigm that properly accounts for co-adaptation and learning on both the brain and computer sides is therefore preferred for a successful implementation.

In this course, BCI will be introduced as a composition of two learning systems: machine learning and brain learning. On the machine (computer) side, brain signals are translated into different mental states using feature extraction and pattern classification algorithms, in which static learning techniques maximize the difference among brain states and adaptive learning approaches track brain dynamics during BCI operation. On the brain side, sensory stimuli of multiple modalities and cognitive tasks across multiple levels are used to elicit discriminable brain signals, with feedback and online training ensuring better brain learning and more prominent brain signals. Accumulating evidence has shown that a well-controlled BCI learning process may lead to a positive reorganization of brain functions through cortical plasticity, which also suggests BCI as a new approach to neural rehabilitation.
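The machine-learning side described above, mapping feature vectors extracted from brain signals to mental states, can be sketched with a toy nearest-centroid classifier. Everything here (the two "motor imagery" states, the two-channel band-power features) is a hypothetical illustration, not an actual BCI system:

```python
# Minimal sketch of the "machine side" of a BCI: translate feature
# vectors (e.g. band power per channel) into mental states.

def train_centroids(samples, labels):
    """Compute one mean feature vector (centroid) per mental state."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(centroids, x):
    """Assign the state whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Hypothetical two-state example: "left" vs "right" imagery,
# separable by band power in two channels.
train_x = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
train_y = ["left", "left", "right", "right"]
model = train_centroids(train_x, train_y)
print(classify(model, [0.95, 0.15]))  # -> left
```

Real BCIs use richer features and classifiers, and, as the course description stresses, retrain the model online so it co-adapts with the user's brain; the sketch shows only the static translation step.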

 

 

Course Peng:

Prof. Kaiping Peng

Different Cognitive Models in Western and Chinese Cultures

 

 

Course Zhou:

Prof. Zhi-Hua Zhou

Multimodal Data Mining







 

 

Week 2 (14 to 16 September 2009), Location: Room 1-312, FIT Building

 


Time          | Mon, Sept. 14           | Tue, Sept. 15           | Wed, Sept. 16
09:00 – 10:30 | Course Habel/Eschenbach | Course Habel/Eschenbach | Course Chang
Coffee break
11:00 – 12:30 | Course Habel/Eschenbach | Course Menzel           | Course Chang
Lunch
14:00 – 15:30 | Course Menzel           | Course Menzel           | Finish
Coffee break
16:00 – 18:00 | Course Menzel           | Course Menzel           | –



 

Course Menzel:

 

Prof. Wolfgang Menzel

Bayesian modelling for multimodal interaction


Content

Bayesian reasoning plays an important role in evidence arbitration and decision making. The course covers different classes of probabilistic models for atomic and sequential observations.

While naive models are based on independence assumptions that are too strong to capture the relevant causal influences in many application domains, optimal approaches tend to be computationally infeasible and require extraordinarily large amounts of training data. Bayesian networks provide a good compromise, since they introduce a clear distinction between dependencies that must be modelled and those that can safely be ignored.

Bayesian networks can also include hidden variables and therefore provide a powerful mechanism for training on incomplete data. This enables the model to adapt to unknown regularities of the domain. Using such variables to model the (hidden) state of a system, the approach can be extended to sequential observations. The resulting model class, Dynamic Bayesian Networks, is a generalization of Hidden Markov Models and offers particular advantages with respect to information integration in multi-stream applications like audio-visual data processing.
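As a minimal illustration of the Bayesian evidence arbitration discussed above, the following sketch fuses per-modality likelihoods with Bayes' rule under the naive conditional-independence assumption. The audio-visual "ba"/"ga" example and all numbers are hypothetical:

```python
def fuse(prior, likelihoods):
    """Posterior over hypotheses, combining per-modality likelihoods
    under a conditional-independence (naive Bayes) assumption."""
    post = {}
    for h, p in prior.items():
        for lik in likelihoods:
            p *= lik[h]          # multiply in each modality's evidence
        post[h] = p
    z = sum(post.values())       # normalise to a proper distribution
    return {h: p / z for h, p in post.items()}

# Hypothetical audio-visual arbitration: did the speaker say "ba" or "ga"?
prior = {"ba": 0.5, "ga": 0.5}
audio = {"ba": 0.8, "ga": 0.2}    # P(acoustic evidence | word)
vision = {"ba": 0.3, "ga": 0.7}   # P(lip movement | word)

post = fuse(prior, [audio, vision])
# Unnormalised: ba = 0.5*0.8*0.3 = 0.12, ga = 0.5*0.2*0.7 = 0.07,
# so the fused decision favours "ba" despite the conflicting vision cue.
```

This is exactly the naive model the course contrasts with Bayesian networks: each modality is assumed independent given the hypothesis, which keeps the arithmetic trivial but ignores cross-modal dependencies.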


Course Habel/Eschenbach:

Prof. Christopher Habel, Dr. Carola Eschenbach

Multimodal Learning and Instruction

Type

Introductory course: 2–3 lectures (3–4.5 h)

Goal

Introduction to basic concepts of multimodal learning and instruction from an interdisciplinary point of view, integrating human-computer interaction, artificial intelligence, and psychology

Target audience

Students of Psychology, Neuroscience, Computer Science, Linguistics

Content

1. Fundamentals

  • Psychological models of learning and instruction

  • Types of modalities

    • sensory modalities: visual, auditory, …

    • (re-)presentational modalities: language, pictures / diagrams / graphs / maps / tables

2. Principles for multimodal learning

  • Principles for reducing extraneous processing

  • Principles for managing essential processing

  • Principles for fostering generative processing

3. Media for multimodal learning

  • Static multimodal documents: print-media, digital media

  • Dynamic multimodal learning material

    • animation

    • interactive media



Course Chang:

Prof. Edward Chang

Large-scale Photo Annotation Using the Collective Wisdom of Data and Users

Content

Thanks to the explosive growth of photo/video capturing devices, the number of online photos and videos is now at the scale of tens of billions. A large percentage of these photos/videos, however, cannot be reached by search engines because they lack text meta-data. To remedy this problem, we introduce our annotation pipeline, which uses a hybrid supervised and unsupervised approach to provide text annotations. Our empirical study shows that this hybrid pipeline can almost always suggest some relevant words that allow a photo to be indexed and hence searched. Irrelevant annotations can subsequently be demoted when a photo returned under a keyword is seldom clicked by users. This hybrid approach takes advantage of the huge collective wisdom of data and users to provide and improve annotations in a scalable way.
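The click-based demotion step can be sketched as re-ranking a photo's suggested keywords by smoothed click-through rate. This is an illustrative guess at the mechanism, not the actual pipeline; all names and counts are hypothetical:

```python
# Hypothetical sketch: annotations suggested for a photo are re-ranked
# by how often users click the photo when it is returned under each
# keyword; rarely clicked keywords sink to the bottom.

def rerank(annotations, impressions, clicks, smoothing=1.0):
    """Order keywords by smoothed click-through rate (Laplace smoothing
    keeps unseen keywords from getting a rate of exactly 0 or 1)."""
    def ctr(word):
        return (clicks.get(word, 0) + smoothing) / \
               (impressions.get(word, 0) + 2 * smoothing)
    return sorted(annotations, key=ctr, reverse=True)

annotations = ["beach", "sunset", "dog"]
impressions = {"beach": 100, "sunset": 90, "dog": 80}  # times shown per keyword
clicks = {"beach": 40, "sunset": 2, "dog": 30}         # times clicked

ranked = rerank(annotations, impressions, clicks)
# "sunset" is seldom clicked under this photo, so it is demoted last.
```

The smoothing constant controls how much evidence is needed before a keyword is demoted, which matters for freshly annotated photos with few impressions.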

Bio

Edward Chang joined the department of Electrical & Computer Engineering at the University of California, Santa Barbara, in September 1999. Ed received tenure in March 2003 and was promoted to full professor of Electrical Engineering in 2006. His recent research activities are in the areas of distributed data mining and its applications to rich-media data management and social-network collaborative filtering. His research group (which consists of members from Google, UC, MIT, Tsinghua, PKU, and Zheda) recently parallelized SVMs (NIPS 07), PLSA (KDD 08), Association Mining (ACM RS 08), Spectral Clustering (ECML 08), and LDA (WWW 09) (see MMDS/CIVR keynote slides for details) to run on thousands of machines for mining large-scale datasets. Ed has served on ACM (SIGMOD, KDD, MM, CIKM), VLDB, IEEE, WWW, and SIAM conference program committees, and has co-chaired several conferences including MMM, ACM MM, ICDE, and WWW.

Ed is a recipient of the IBM Faculty Partnership Award and the NSF Career Award. He has headed Google Research in China since March 2006. He received his M.S. in IEOR and M.S. in Computer Science from UC Berkeley and Stanford, respectively, and received his PhD in Electrical Engineering from Stanford University in 1999.



Last Updated: Thursday, 10 September 2009