This project investigates how humans process spoken language in real time, with limited working-memory capacity and under pressure to make sense of the input before new material arrives. Our guiding assumptions are (1) that people segment the auditory speech stream into manageable chunks, and (2) that this segmentation is primarily driven by the search for meaning and is best modelled by a linguistic theory that gives priority to linearity and the syntagmatic dimension. As previous cognitive research suggests, segmentation (chunking) is a domain-general capacity that is automatically engaged when humans are exposed to a continuous stream of sensory input. Our theoretical objective is to develop linguistic theory (Linear Unit Grammar, or LUG) by combining insights and methods from linguistics, psycholinguistics, and cognitive neuroscience. We ask what kind of variability can be found in chunks across levels of language (morphology, multi-word units, discourse, etc.), types of input (e.g. natural, read-aloud, or synthesized speech), and individual behaviour.
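As a loose illustration of what chunking a transcribed speech stream might look like, the toy sketch below splits a flat transcript into chunks at pause markers. This is a hypothetical simplification for exposition only: the function name and pause marker are invented here, and LUG segmentation as described above is meaning-driven rather than pause-driven.

```python
# Toy illustration only (not the project's method): segment a flat
# transcript into chunks at pause markers. The marker "<pause>" and
# the function itself are hypothetical examples for this sketch.

def chunk_transcript(transcript, pause_marker="<pause>"):
    """Split a transcript string into chunks at each pause marker."""
    chunks = [c.strip() for c in transcript.split(pause_marker)]
    return [c for c in chunks if c]  # discard empty chunks

stream = "well I mean <pause> the thing is <pause> it just works"
print(chunk_transcript(stream))
# → ['well I mean', 'the thing is', 'it just works']
```

Even this crude pause-based segmentation yields units of roughly the size that chunking research deals with; the project's interest lies in where meaning-driven segmentation diverges from such surface cues.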
We will employ (a) linguistic methods: LUG, and analyses of clause structure, discourse, prosody (CWT), and the sublexical level; (b) behavioural methods: ChunkitApp and auditory chunking measures; and (c) neuroimaging: MEG and fMRI.
The data are spontaneous spoken English; we gather both behavioural data and brain-response data.
This is cutting-edge fundamental research with theoretical aims. It cuts across levels of language: we tap into multi-level processing, which we expect to bear on how meaningful interpretations are constructed at different levels of chunk size.
Our expected results have the potential to inform language learning and interpreting (processing skills that can be taught); they can also feed into the development of diagnostic tools insofar as deviant chunking is concerned. We also expect technological applications to benefit from our comparison of natural processing with read-aloud and synthetic speech.