The aim of the new Multisensory Signals and Meanings research community is to advance the understanding of human communication and interaction mediated by vision and audition. We study these topics in vision, hearing and speech with a multidisciplinary approach, using the methods of psychophysics, psycholinguistics, brain imaging and computational modeling. During the current evaluation period (2005–2010), research topics in vision include neural and perceptual interactions at early and intermediate processing levels of the visual system, the planning and control of goal-directed hand movements based on visual information, and memory for visual features and shapes. Research topics in hearing, speech and language include the role of prosody, sentence structure and reference in spoken-language processing; the interaction of linguistic and visual information in discourse comprehension; mathematical modeling of hearing; and the modeling of speech production through high-quality naturalistic speech synthesis. In the future, the emphasis will shift to studying the interaction between sensory and motor systems in the extraction of meaning, thus merging these previously separate research fields.
The doctoral students participate as full members of the group in the activities of the research community, e.g. seminars and the dissemination of research. The multidisciplinary nature of the research requires students to acquire knowledge and skills in traditionally distinct areas, ranging from signal processing and programming to the study of mental representations. Research training places a further emphasis on technical skills, so that on completing their studies, students are competent to carry out all phases of research independently, from setting up the laboratory and designing and conducting experiments to scientific publishing.
The research community combines the expertise of vision, hearing and language researchers to study information processing at different levels of the human sensory and cognitive systems. Through experimentation and modeling, it seeks to unravel how meaning emerges from the interplay between visual, auditory and motor signals in a multisensory environment.
Responsible person: Martti Vainio, Institute of Behavioural Sciences
Participation category: 4
Effective start/end date: 23/02/2011 → …