Summer School 2013: "Cross-Modal Learning and Interaction"
CINACS - Summer School 2013



-----------------------------------------

Mon Sep 2


10:00-10:45   FIT Building, 1-415

Opening




11:00-12:00   FIT Building, 1-415

Neural Architectures for Crossmodal Learning

  • Speaker : Stefan Wermter, University of Hamburg
  • Abstract : In recent years there has been substantial interest and progress in intelligent systems and knowledge technologies based on new biomimetic processing principles for integrated knowledge-based systems. While in the past robots were successful in traditional industrial environments, new generations of hybrid intelligent agents and robotic systems are now being developed which focus on bio-inspired and cognitive capabilities, including reasoning, learning and language communication. In this talk we focus on the potential of nature-inspired, in particular hybrid and neural, representations for building new adaptive crossmodal systems. We will give an overview of some neural learning technologies, recurrent networks and robotic agents from the perspective of integrative hybrid intelligent systems for crossmodal learning, and we illustrate some recent developments in the Knowledge Technology lab.
  • References : http://www.informatik.uni-hamburg.de/WTM/publications/



14:00-17:30   FIT Building, 1-415

Mechanisms for Continual Learning in Artificial Agents

  • Speaker : Mark Ring
  • Abstract : As in most other fields of research, progress in AI occurs in stages: techniques that were novel five years ago are part of today's standard methodology. At every stage we grapple with the tasks that are currently not solvable. Machine learning attempts to facilitate this process by freeing our minds from the tedium of implementing the lowest-level details of specific cases so that we may concentrate on more general issues. Reinforcement learning can allow extremely unintelligent agents to learn tasks that we might not even know how to teach an agent using other learning techniques. Yet even with reinforcement learning, once a task is mastered, learning ends.
    In this talk I will discuss several methods for continual learning. The motivation is to build an agent that never stops learning. Skills that it struggles to learn at one point in its development become part of its standard repertoire for use later when it learns more challenging skills. A continual-learning agent builds up its abilities in layers of increasing complexity, ideally using a single mechanism at all levels.
    I will give an overview of the following methods (as time allows), each of which addresses continual learning in a different way: CHILD (which automatically builds hierarchies of skills), SERL (which autonomously distributes large behavior spaces across fixed-capacity modules), the Motmap (which, inspired by the motor cortex, uses spatial and temporal constraints to organize behaviors into related regions), and Forecasts (which use action-conditional predictions to describe an agent's expanding knowledge of its world).

-----------------------------------------

Tue Sep 3


09:00-12:00   FIT Building, 1-415

Introduction to deep learning

  • Speaker : Xiaolin Hu, Tsinghua University
  • Abstract : In this tutorial, I will first introduce the background of deep learning. Then I will focus on several typical deep learning models including deep belief network, deep auto-encoder and convolutional neural network, as well as their variants. The computational principles will be elaborated in detail. Applications in computer vision and speech recognition will also be presented. Finally, some recent advances on this topic will be discussed.
  • References : zip
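To make one of the tutorial's model families concrete, here is a minimal sketch of a single-hidden-layer auto-encoder trained by plain gradient descent. The NumPy implementation, toy data, layer sizes, learning rate, and the tanh-encoder/linear-decoder choice are illustrative assumptions, not material from the tutorial itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 8-dimensional inputs lying near a 3-d subspace,
# so a 3-unit hidden layer can reconstruct them well.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8))

n_in, n_hid = 8, 3
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)        # encoder
    return H, H @ W2 + b2           # linear decoder

def mse():
    return float(np.mean((forward(X)[1] - X) ** 2))

mse0 = mse()
lr = 0.05
for _ in range(500):
    H, Xhat = forward(X)
    err = (Xhat - X) / len(X)            # gradient of 0.5 * mean squared error
    gW2, gb2 = H.T @ err, err.sum(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)   # backpropagate through tanh
    gW1, gb1 = X.T @ dH, dH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
mse1 = mse()   # reconstruction error after training; should be below mse0
```

Stacking such auto-encoders layer by layer is one classical way to pre-train the deep architectures the tutorial discusses.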



14:00-15:30   FIT Building, 1-415

Techniques to Estimate Brain Connectivity from Measurements with Low Spatial Resolution

  • Speaker : Guido Nolte, UKE
  • Abstract : doc
    By far the most important confounder in studying brain connectivity from EEG/MEG data is the poor spatial resolution. As a consequence, data in channels as well as estimated sources are to a large extent unknown mixtures of the true source activities of interest and of other, uninteresting activities such as brain and channel noise. Consequently, estimates of brain connectivity are often strongly biased by physiologically meaningless mixtures due to the measurement process. In this talk I will give a summary of our methods to address this problem.
    We suggest exploiting the fact that the mixing, most commonly termed "volume conduction", is to an excellent approximation instantaneous, while neural interactions involve time delays well within the temporal resolution of EEG/MEG measurements. In this talk I will illustrate the problem and present techniques to address it at various levels and for various questions.
    1. It can be shown that a significant imaginary part of coherency or the cross-spectrum cannot be generated by independent sources, regardless of their number or their temporal and spatial properties. Therefore, these imaginary parts necessarily reflect true interactions as opposed to artifacts of volume conduction. Applied to real data, the imaginary part of coherency gives qualitatively new insight into brain dynamics that cannot be seen in classical coherence.
    2. We formulate essentially an "Anti-Independent-Component-Analysis" by diagonalizing, in the complex domain, the anti-symmetric, i.e. imaginary, part of cross-spectral matrices calculated at sets of frequencies. The respective mixing matrices contain the two-dimensional subspaces of pairs of interacting sources. Applied to real data, we observe that with this "Pairwise Interacting Source Analysis" (PISA) we can estimate interacting subsystems and separate these subsystems from each other.
    3. Magnetic fields and electric potentials of spatially distinct sources are typically highly overlapping and especially non-orthogonal in channel space. Hence, making respective assumptions in channel space is inadequate to separate the sources. We formulate corresponding assumptions in source space using linear inverse methods. A method termed Minimum Overlap Component Analysis (MOCA) separates the sources within a given interacting subsystem.
    4. Many methods erroneously interpret asymmetries, e.g. in signal-to-noise ratio or spectral content, as an indication of a specific direction of information flow. To estimate the direction, we construct a "Phase-Slope-Index" (PSI) from the real and imaginary parts of coherency which is strictly insensitive to pure mixtures of independent sources but highly sensitive to true information flow. For example, this measure shows that during rest with eyes closed, information at the alpha frequency flows dominantly from front to back.
    5. Finally, a test will be presented to evaluate the reliability of different methods of measuring connectivity. For this purpose surrogate data are generated with the same statistical properties as the original data but based on a superposition of independent sources. This is achieved via Independent Component Analysis (ICA) and by shifting the time series of each independent component relative to each other. Results will be shown for linear and nonlinear measures of functional connectivity.
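Points 1 and 4 can be sketched numerically. The following is a rough NumPy illustration assuming Welch-style segment averaging of the cross-spectrum; the function name, Hann windowing, segment length, and alpha-band limits are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def imag_coherency_and_psi(x, y, fs, nperseg=256, f_band=(8.0, 13.0)):
    """Estimate the imaginary part of coherency between x and y, and a
    phase-slope index (PSI) over f_band, by averaging windowed FFT
    segments. Illustrative sketch, not a reference implementation."""
    n_seg = len(x) // nperseg
    win = np.hanning(nperseg)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    sxx = syy = sxy = 0.0
    for k in range(n_seg):
        seg = slice(k * nperseg, (k + 1) * nperseg)
        fx = np.fft.rfft(win * x[seg])
        fy = np.fft.rfft(win * y[seg])
        sxx = sxx + np.abs(fx) ** 2
        syy = syy + np.abs(fy) ** 2
        sxy = sxy + fx * np.conj(fy)
    # Coherency: normalized cross-spectrum. Its imaginary part cannot be
    # produced by an instantaneous mixture of independent sources (point 1).
    coh = sxy / np.sqrt(sxx * syy)
    # PSI (point 4): slope of the coherency phase across neighbouring
    # frequency bins within the band; with this sign convention a
    # positive value suggests x drives y.
    band = (freqs >= f_band[0]) & (freqs <= f_band[1])
    c = coh[band]
    psi = float(np.imag(np.sum(np.conj(c[:-1]) * c[1:])))
    return np.imag(coh), psi

# Example: y is a delayed, slightly noisy copy of x, so x should
# appear as the driver (psi > 0).
rng = np.random.default_rng(1)
fs = 200
x = rng.normal(size=20000)
y = np.roll(x, 5) + 0.1 * rng.normal(size=20000)
imcoh, psi = imag_coherency_and_psi(x, y, fs)
```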

-----------------------------------------

Wed Sep 4


09:00-12:00   FIT Building, 1-415

Implicit cross-modal learning

  • Speaker : Qiufang Fu, Chinese Academy of Sciences
  • Abstract : An issue that continues to divide psychologists, despite decades of research, is whether people can acquire and use unconscious knowledge. The two factions of psychologists can often be separated by how they measure the conscious status of knowledge. Researchers who use objective measures of conscious knowledge, i.e. the ability to discriminate features of the world, tend to be skeptical about the existence of unconscious knowledge. Conversely, researchers who use subjective measures of conscious knowledge, i.e. the ability to report or discriminate mental states, tend to accept its existence.
    Implicit learning is the acquisition of unconscious knowledge about the structure of an environment. Although the question of whether people can learn without conscious knowledge has been widely investigated in implicit sequence learning, it remains controversial. This talk will begin with a brief introduction to implicit learning and unconscious knowledge, considering what implicit learning is and how the conscious status of knowledge can be measured in implicit learning and other tasks. Then, we will discuss the relationship between implicit sequence learning and conscious awareness, and what kind of knowledge is acquired in implicit sequence learning. Finally, we will review how culture and emotion can influence implicit sequence learning.
  • References : zip



14:00-15:30   FIT Building, 1-415

Modular Organization of Cognitive Control Network in Human Brain

  • Speaker : Xun Liu, Chinese Academy of Sciences
  • Abstract : "Everyone knows what attention is", according to William James. Attention involves selective processing of competing, and sometimes, conflicting information. The essence of this top-down modulation is cognitive control. Many behavioral and neurobiological theories have proposed different modular networks for cognitive control. I will present several recent studies that examined various stimulus-response-compatibility (SRC) paradigms that involve conflict detection and executive control, from the perspective of dimensional overlap theory. This unified framework for stimulus-response ensembles and compatibility effects helps us elucidate both shared and distinct neural network modules of cognitive control.
  • References : zip



16:00-17:30   FIT Building, 1-415

Eye movements and high-level cognition

  • Speaker : Xingshan LI, Chinese Academy of Sciences
  • Abstract : docx
    In daily life, we move our eyes about four times a second to selectively perceive the visual information that is most important for the current cognitive process. Hence, eye movements (where we look, how long we look) carry important information about high-level cognition. I will introduce three topics on the relation between eye movements and high-level cognition. First, I will introduce some background knowledge on eye movements. Second, I will introduce studies showing that eye movement strategies can be optimized to improve performance. In this part, I will also introduce one of my studies, which showed that readers can adaptively adjust their eye movement strategies to improve their perceptual efficiency (Li, Gu, Liu & Rayner, 2013). Finally, I will review some recent findings on how listeners use the linguistic information they hear to guide their eye movements so that they can process information about the same object simultaneously.
  • References : rar
    • Li, X., Gu, J. Liu, P., & Rayner, K. (2013). The advantage of word-based processing in Chinese reading: Evidence from eye movements. Journal of Experimental Psychology: Learning, Memory, and Cognition. 39(3), 879-889. doi: 10.1037/a0030337
    • Najemnik, J. & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391.
    • Rayner, K. (2009). The 35th Sir Frederick Bartlett Lecture: Eye movements and attention in reading, scene, perception, and visual search. The Quarterly Journal of Experimental Psychology, 62, 1457-1506.
    • Tanenhaus, M. K., Spivey-Knowlton, M.J., Eberhard, K. M., & Sedivy, J. C.(1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634.

-----------------------------------------

Thu Sep 5


09:00-10:30   FIT Building, 1-315

The Neural Mechanisms of Effective Learning

  • Speaker : Gui Xue, Beijing Normal University



11:00-12:00   FIT Building, 1-315

On Abduction and its role in Learning and Cognition

  • Speaker : Shushan Cai, Tsinghua University



12:00-12:30

Snapshot Presentations: Computer Science PhD Students




14:00-15:30

Poster Presentations: Computer Science PhD Students




15:30-17:00   FIT Building, 1-315

Common Ground and Multimodal Cues in Infant Communication

  • Speaker : Ulf Liszkowski, University of Hamburg
  • Abstract : A preoccupying question in related areas like psychology, philosophy of mind and language, informatics, and cognitive sciences among others is how we understand each other and attach meaning to the things we are doing. This question has been especially dominant in developmental psychology studies of children’s acquisition of words and language. More recent research shows that much of meaningful communication and understanding emerges already before language. These meaningful preverbal exchanges encompass a bi-directional understanding of communication in comprehension and production; and they rest on two key aspects that accordingly are prior to any semantically or syntactically specified meaning: a shared common ground of mutual activity; and knowledge about multimodal cues that accompany and can modify the meaning of communicative acts. In the current talk I will present recent and new experimental findings on the contribution of common ground in infants’ comprehension and production of prelinguistic communication; and on the use of multimodal cues to augment meaning, especially when there is no common ground.

-----------------------------------------

Sun Sep 8


09:30-16:00

Excursion to the Great Wall


-----------------------------------------

Mon Sep 9


09:00-10:30   FIT Building, 1-415

Neuronal oscillations and neural synchrony for multisensory integration

  • Speaker : Andreas Engel, UKE Hamburg



11:00-12:00   FIT Building, 1-415

Conscious and Unconscious Cues for Motor Behavior

  • Speaker : Volker Franz, University of Hamburg
  • Abstract : The neurosciences have provided us with striking reports of motor behavior without conscious awareness. For example, blindsight patients are able to perform certain motor actions in response to visual cues they are not aware of. This raises the question of the functions of consciousness/awareness. If, for example, all actions could be performed outside awareness, we would be tempted to assume that awareness is only an epiphenomenon, and it would be conceivable to build versatile robotic systems without the need to implement something like awareness. If, on the other hand, certain classes of actions do require awareness, we might learn something about the functions of awareness and the corresponding requirements for artificial cognitive systems. It is therefore interesting to evaluate the boundary conditions of which cues can guide actions without awareness. One interesting question is whether semantic cues can guide motor behavior outside awareness. I will focus on experimental masking paradigms investigating this question in healthy participants and will scrutinize the methodological challenges involved in such an endeavor.
  • References : Dehaene, S., Naccache, L., Le Clec'H, G., Koechlin, E., Mueller, M., Dehaene-Lambertz, G., van de Moortele, P. F., & Le Bihan, D. (1998). Imaging unconscious semantic priming. Nature, 395, 597-600.



12:00-12:30

Snapshot Presentations: UKE + Psych. PhD Students




14:00-15:30

Poster Presentations: UKE + Psych. PhD Students




16:00-17:00   FIT Building, 1-415

Development and Sensitive Phases of Multisensory Functions

  • Speaker : Brigitte Röder, University of Hamburg
  • Abstract : Recent data suggest that the development of multisensory functions follows a protracted developmental time course. One example is the spatial matching of crossmodal input, which seems to mature in middle and late childhood. Visual input seems to be essential for a number of crossmodal functions, which appear to be permanently impaired if vision is missing during the first months of life. However, some crossmodal recalibration, and thus crossmodal learning capacity, remains throughout life.

-----------------------------------------

Tue Sep 10




09:00-12:00   FIT Building, 1-415

Incremental Spoken Language Processing for More Natural Human-machine Interaction

  • Speaker : Timo Baumann, University of Hamburg
  • Abstract : Incremental processing is the processing of information in a piecemeal fashion, where parts of the information become available bit by bit. In spoken language (and essentially in all human behaviour), time is the driving factor: human language processing happens while listeners are still listening and continues while speakers are already speaking. In contrast, most speech-processing systems for interaction (i.e., spoken dialogue systems) process both input and output at the level of full utterances. This renders mid-utterance interaction difficult or impossible and forces an inflexible, ping-pong kind of interaction. I will present our general approach to incremental processing and evaluation and, time permitting, our toolkit for incremental spoken language processing, including some demonstrations of its behaviour.
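The contrast between incremental and whole-utterance processing can be illustrated with a toy pipeline in which each module passes partial results downstream as soon as they arrive, modeled here with Python generators. The module names and the word-per-chunk simplification are invented for illustration and are not the toolkit's actual API.

```python
def asr(audio_chunks):
    """Stand-in recognizer: emit one recognized word per incoming chunk."""
    for chunk in audio_chunks:
        yield chunk.strip()

def understanding(words):
    """Emit a growing partial interpretation after every word, so a
    hypothesis is already available mid-utterance."""
    partial = []
    for w in words:
        partial.append(w)
        yield " ".join(partial)

chunks = ["please ", "stop ", "the ", "robot"]
hypotheses = list(understanding(asr(chunks)))
# An incremental system can already react after "please stop", before
# the utterance is complete; a whole-utterance system sees nothing
# until the final chunk has arrived.
```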



14:00-15:30   FIT Building, 1-415

Learning in a dual-path architecture for multi-level language comprehension

  • Speaker : Wolfgang Menzel, University of Hamburg



16:00-17:30   FIT Building, 1-415

Multimodal language understanding

  • Speaker : Yang LIU, Tsinghua University

-----------------------------------------

Wed Sep 11


09:00-10:30   Medical School Room 321

Investigating multimodal attention using steady-state visual evoked potentials

  • Speaker : Dan Zhang, Tsinghua University
  • Abstract : Steady-state visual evoked potentials (SSVEPs) have attracted increasing interest among cognitive psychologists. In the field of multimodal attention, the introduction of SSVEPs has helped to clarify several important issues. Attentional modulations of SSVEPs support the idea that attention enhances the neural representation of the attended location or object while suppressing unattended events. The continuous nature of SSVEPs also reveals the time course of attentional orienting. Moreover, the neural mechanisms of multimodal attention can be investigated in more detail, with SSVEPs as an effective biomarker. In sum, recordings of SSVEPs provide a new approach for studying the neural mechanisms and functional properties of multimodal attention.



11:00-12:00   Medical School Room 321

Cortical network and representational maps

  • Speaker : Bo HONG, Tsinghua University



13:00-14:00

Lab Visit in Medical School Room 321 of Tsinghua University




14:00-15:30   Medical School Room 321

Dynamic Coding Processes of a Contextual Fear Memory in Mammalian Cortices

  • Speaker : Jisong Guan, Tsinghua University
  • Abstract : All cortical levels of sensory processing are subject to top-down influences that reshape lower-level processes with high-level integrative information. However, the neuronal sources of these behavior-related top-down influences remain unclear. By optically tracking activity-induced immediate-early gene expression in awake mice performing contextual fear conditioning tasks, we found that context-induced activities in layer II neurons within visual cortex underwent dramatic changes during the first week of repetitive training. After 2 weeks, the context-induced activities were stable and consistent across repeated exposures for over one month. Lesion of the hippocampus abolished both the contextual fear memory and the dynamics of context-induced cortical activities during the first 2 weeks, indicating the critical role of the hippocampus in cortical rewiring. Furthermore, hippocampal lesion disrupted the similarity of layer III activation patterns between adjacent training and recall trials, indicating the active role of the hippocampus in integrating sensory and visual inputs in visual cortex. Coding in visual cortex was unstable: most of the coding neurons showed reduced activity when tested one month after the last recall. In contrast, in retrosplenial cortex (RSP), which is closer to the hippocampus, contextual fear training induced a consistent activity pattern over 3 months. These results indicate two-phase coding in mammalian cortex: fast coding in the hippocampus within hours and gradual rewiring of cortical circuits over weeks, coordinated to achieve dynamic coding in sensory cortex and persistent, invariant representation in hippocampus-associated cortices such as RSP. We hypothesize that the dynamics of cortical coding, synchronized by the hippocampus, are critical for the efficient processing of external information in primary cortex and the persistent storage of experiences at higher levels.
  • References : Barlow1972.pdf, review_carl Peterson_layer2_3.pdf



16:00-17:00   Medical School Room 321

In Search of the Hebbian Assembly: Watching Cortical Circuits in Action

  • Speaker : Sen SONG, Tsinghua University
  • Abstract : What are the fundamental circuits of memory storage and computation in the brain? Donald Hebb’s influential theoretical proposal was that a group of neurons called a Hebbian assembly form the fundamental unit and cells could join those assemblies according to the Hebbian rule of firing together, wiring together. Recent advances in anatomical and imaging techniques have allowed us to get closer to test such ideas directly in live animals. I will describe three examples from my own work and recent literature. Anatomical and in vivo two-photon imaging studies have pointed to highly connected clusters of neurons reminiscent of Hebbian Assembly. Plastic changes of synapses can now be observed directly in live animals and show learning related changes. The computational roles of inhibitory neurons are starting to be revealed. Hopefully, such continued advances will allow us to directly test Hebb’s ideas and reveal how local circuits in the cortex perform computation in the near future.
  • References : structuralplasticity.pdf, chasingcellassembly.pdf

-----------------------------------------

Thu Sep 12


09:00-10:30   FIT Building, 1-415

Learning for Cross-Modal Analysis

  • Speaker : Jun ZHU, Tsinghua University



11:00-12:30   FIT Building, 1-415

Introduction to Information Retrieval

  • Speaker : Yi Zhang, University of California Santa Cruz
  • Abstract : Information retrieval (IR) systems, such as Google, Baidu, and Amazon's recommender system, help users overcome the "information overload" problem and have quickly become among the most useful tools available for managing the massive amount of information available to Web users. This talk will provide an overview of information retrieval technologies, including the history, the major techniques used by search engines, the challenges of information retrieval, and what the next generation of search and recommendation systems may look like.
  • Biography : Yi Zhang is an Associate Professor in the School of Engineering at the University of California Santa Cruz, with affiliations in the Technology Management, Computer Science, Applied Math and Statistics, and Economics departments. Her research interests are recommendation systems, information retrieval, applied machine learning, natural language processing, and computational economics. She has received various awards, including the ACM SIGIR Best Paper Award, a National Science Foundation Faculty Career Award, a Google Research Award, a Microsoft Research Award, and an IBM Research Fellowship. She has served as program co-chair for the IR track at CIKM, and as area chair and PC member for various conferences such as SIGIR, WWW, SIGKDD, and ICML. She is an associate editor for ACM Transactions on Information Systems. She has served as a consultant or technical adviser for several companies and startups. She received her B.S. from the Department of Computer Science & Technology at Tsinghua University and her M.S. and Ph.D. from the School of Computer Science at Carnegie Mellon University.
  • References : zip
    • The anatomy of a large-scale hypertextual Web search engine, Sergey Brin, Lawrence Page
    • Introduction to recommendation systems, http://dl.acm.org/citation.cfm?id=1376776
    • Frontiers, Challenges and Opportunities for Information Retrieval: Report from SWIRL 2012. James Allan, Bruce Croft, Alistair Moffat, Mark Sanderson (Eds.), pp 2-32
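One of the core techniques the talk surveys, tf-idf term weighting with cosine-similarity ranking, can be sketched in a few lines. The three-document corpus and the query below are invented examples, and this ignores refinements (stemming, stop words, smoothed idf) that real engines use.

```python
import math
from collections import Counter

docs = [
    "cross modal learning in the brain",
    "deep learning for speech recognition",
    "search engines rank documents for a query",
]
tokenized = [d.split() for d in docs]
N = len(docs)

def idf(term):
    """Inverse document frequency: rarer terms get higher weight."""
    df = sum(term in doc for doc in tokenized)
    return math.log(N / df) if df else 0.0

def tfidf_vector(tokens):
    """Sparse tf-idf vector as a dict from term to weight."""
    tf = Counter(tokens)
    return {t: (c / len(tokens)) * idf(t) for t, c in tf.items()}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

query_vec = tfidf_vector("learning for search".split())
scores = [cosine(query_vec, tfidf_vector(t)) for t in tokenized]
best = max(range(N), key=scores.__getitem__)
# The third document wins: it matches "search", the rarest and thus
# most heavily weighted query term.
```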



14:00-15:30   FIT Building, 1-415

Cross-Modal Learning in Macaque VIP

  • Speaker : Tao ZHANG, Chinese Academy of Sciences



16:00-17:30   FIT Building, 1-415

Olfactory modulation of visual perception

  • Speaker : Wen ZHOU, Chinese Academy of Sciences

-----------------------------------------

Fri Sep 13


09:00-12:00   FIT Building, 1-415

Evolutionary Data Learning

  • Speaker : Changshui Zhang, Tsinghua University



14:00-15:30   FIT Building, 1-415

Bio-Inspired Control and Learning in Cognitive Robot Systems

  • Speaker : Jianwei ZHANG, University of Hamburg
  • Abstract : In a dynamic and changing world, a robust and effective robot system must have adaptive behaviors, incrementally learnable skills and a high-level conceptual understanding of the world it inhabits. I will first show several developed platforms of intelligent service robot systems, e.g. in medical assistance, rehabilitation, home service, and edutainment, etc. I will then present the bio-inspired control of multi-joint modular robots, a multifinger hand and arm-hand systems based on artificial neural networks and reinforcement learning. Finally, I will introduce a framework for representing robot experiences, planning and learning which is used in the EU RACE project.



Last Updated ( Tuesday, 10 September 2013 )