AotD: Memory for Words Representing Modal Concepts

Article of the Day:

Memory for Words Representing Modal Concepts – Resource Sharing With Same-Modality Percepts Is Spontaneously Required

by Vermeulen, N., Chang, B., Mermillod, M., Pleyers, G., & Corneille, O. (2013)

(Experimental Psychology, 60(4), 293–301. DOI: 10.1027/1618-3169/a000199)

Background & Research

The idea that knowledge access is intimately related to simulations in sensory-motor systems is known as grounded cognition. On this view, conceptual representation is thought to draw on the same modal systems and resources as perception.

Indeed, there is already substantial evidence for the role of sensory systems in knowledge access and for perceptual-conceptual interaction.

In sequential tasks, the modality of the previous task appears to prime the next one. For example, it has been shown that property verification tasks (e.g., “is GRASS typically GREEN?”) are performed faster when a preceding localization task matches the modality of the property in question (here, vision). Conversely, research with multitasking designs has demonstrated that comprehension and memorization of modal material are impaired when a sensory load of the same modality is applied (more so than under a comparable contramodal load), indicating that perceptual and conceptual processing may share the same sensory-based resources.

At the same time, brain imaging studies show that grounded cognition effects appear spontaneously during word processing, even when participants are not asked to attend to a word’s modal properties. For instance, reading a word denoting a bodily action activates the same brain areas as performing that action.

In this article, the authors report two small-scale experiments (N = 48 and N = 20) designed to test the interference effects of ipsimodal and contramodal stimuli on reporting and memorizing words with a strong conceptual modality (i.e., representing concepts strongly associated with vision or hearing), in the absence of any explicit requirement to process the words semantically or to refer to their modal properties. They hypothesized that short-term and long-term memory for such (conceptual) words would require modality-specific resources and would thus be disrupted more by ipsimodal than by contramodal sensory interference.

Results & Discussion

In the first experiment, participants were asked to report two words shown in an attentional blink task, where the second word carried strong visual or auditory connotations. After target presentation but before the words were reported, a simple stimulus localization task (with semantically meaningless stimuli) was presented either visually or auditorily. As expected, when the modality of the localization task matched that of the second word’s connotations, performance on both word report and localization was poorer than in the contramodal condition.

In the second experiment, participants attempted to memorize single (again, modally charged) words while keeping in mind one or three other, semantically meaningless, visual or auditory items. Keeping in mind a single shape or sound did not interfere with word memorization in either a recall or a recognition task, in ipsimodal or contramodal conditions. With three shapes or sounds, however, there was clear interference in both tasks, but only when the modalities matched. In other words, the ipsimodal sensory load must be sufficiently large to affect recall.

In sum, the two experiments provide fresh evidence that memorizing words representing concepts predominantly related to a certain modality is disturbed more by interference in that same modality, and thus appears to spontaneously require modality-specific resources, even when participants are not asked to process the words in any semantic or conceptual way. The effects demonstrated here thus support the grounded cognition view that representing concepts (always) requires the activation of their sensory-related components.


The neat and well-planned experiments reported in this article provide further evidence that it appears nigh on impossible to detect and report a word without processing its meaning to quite a high degree, including its semantic connections and sensory relations. Processing a word just enough to detect it later already requires some (even if non-conscious) comprehension of its meaning. Furthermore, resources based on sensory-motor systems seem to be essential for processing both conceptual and perceptual information – concepts and percepts, as processed in the human brain, really are not such different beasts. Clearly, we can also (temporarily) run out of these resources quite easily when multitasking.
