Neural correlates of acoustic and semantic cues during speech segmentation in French
- Others:
- Maison des Sciences de l'Homme et de la Société Sud-Est (MSHS Sud-Est) ; Université Nice Sophia Antipolis (1965 - 2019) (UNS) ; COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-Université Pascal Paoli (UPP)-Centre National de la Recherche Scientifique (CNRS)-Université Côte d'Azur (UCA)
- Laboratoire de Psychologie et NeuroCognition (LPNC) ; Université Savoie Mont Blanc (USMB)-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)
- BCL, équipe Langage et Cognition ; Bases, Corpus, Langage (UMR 7320 - UCA / CNRS) (BCL) ; Université Nice Sophia Antipolis (1965 - 2019) (UNS) ; COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-Centre National de la Recherche Scientifique (CNRS)-Université Côte d'Azur (UCA)
Description
Natural speech is highly complex and variable. In particular, spoken language, in contrast to written language, has no clear word boundaries. Adult listeners can exploit different types of information, such as acoustic and semantic cues, to segment the continuous stream. However, the relative weight of these cues when they co-occur remains to be determined. Behavioural tasks are not conclusive on this point, as they focus participants' attention on certain sources of information and thus bias the results. Here, we examined the processing of homophonic utterances such as l'amie vs la mie (both /lami/), which contain fine acoustic differences and whose meaning changes depending on segmentation. To examine the perceptual resolution of such ambiguities when semantic information is available, we measured the online processing of sentences containing such sequences in an ERP experiment involving no active task. In the congruent condition, the semantic information matched the acoustic signal of the word amie, while in the incongruent condition the semantic information carried by the sentence and the acoustic signal led to different lexical candidates. No clear neural markers for the use of acoustic cues were found. Our results suggest a preponderant weight of semantic information over acoustic information during natural spoken sentence processing.
Additional details
- URL
- https://hal.science/hal-03916475
- URN
- urn:oai:HAL:hal-03916475v1
- Origin repository
- UNICA