In imaging, there are certain methodologies, such as cognitive subtraction. In this method, activity during a control task is subtracted from activity during an experimental task. So for example, take a word task, using a simple model of written word recognition. In a famous experiment, Petersen et al. (1988) wanted to identify brain regions involved with 1) recognizing written words, 2) saying the words, and 3) retrieving the meaning of the words. The researchers used cognitive subtraction to tease apart all the things they were testing.
So, to work out which regions are involved with recognizing written words, the researchers compared brain activity while subjects passively viewed words versus passively viewed a cross (+). The idea behind this was that the same brain regions, the same visual processing, are involved in passively viewing either stimulus. But only the experimental task involved visual word recognition, and therefore subtraction could be used to tease out the brain regions involved in that extra component.
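The logic of this subtraction can be sketched numerically. The following is a minimal illustration with invented activation values (the region names and all numbers are hypothetical, chosen only to show how shared components cancel):

```python
import numpy as np

# Hypothetical mean activation (arbitrary units) per brain region,
# indexed as [visual cortex, word-recognition region, speech region].
# All values are invented for illustration only.
fixation = np.array([2.0, 0.1, 0.1])    # control: passively viewing a cross (+)
view_words = np.array([2.0, 1.5, 0.1])  # experimental: passively viewing words

# Cognitive subtraction: control activity is subtracted from
# experimental activity. Components shared by both tasks (basic
# visual processing) cancel, leaving the component of interest
# (word recognition) as the nonzero difference.
difference = view_words - fixation
print(difference)
```

Under the method's assumptions, the visual cortex term cancels to zero and only the word-recognition region shows a residual difference.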
To work out which regions are involved in saying words, the researchers compared passively viewing written words with reading words aloud. In this comparison, both the experimental and baseline tasks involved visual processing of the word and word recognition, and therefore subtraction should cancel out these shared components, leaving the regions involved in speech production.
To work out which regions are involved in retrieving the meaning of written words, the researchers compared a verb-generation task with reading aloud. Both tasks involve seeing, recognizing, and saying a word, but only verb generation additionally requires retrieving the word's meaning.
What the researchers found was that regions of the left lateral hemisphere are involved in these processes. Recognizing written words activated bilateral sites in the visual cortex, producing speech activated the sensorimotor cortex bilaterally, and verb generation activated the left inferior frontal gyrus.
Of course, there are some issues with cognitive subtraction. Can you think of any?
For example, let’s consider the subtraction behind determining which brain regions are associated with written word recognition. The assumption was that both tasks involve visual processing, but that the experimental task adds the component of word recognition. There is, therefore, an assumption that adding an extra component does not affect the operation of earlier components in the sequence. This is referred to as the assumption of pure insertion (or pure deletion). It could be that the amount, type, etc. of visual processing that deals with written words is NOT the same as the visual processing that deals with non-linguistic visuals. The extra component added to the task has the potential to change the operation of other components in the task. That is, there could be interactions (the effect of one variable upon another) that make the imaging data ambiguous.
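This failure of pure insertion can also be sketched numerically. Continuing the earlier style of invented activation values, suppose viewing words changes the amount of visual processing itself, relative to viewing a cross, so the "shared" component no longer cancels:

```python
import numpy as np

# Hypothetical regions: [visual cortex, word-recognition region].
# All numbers are invented for illustration only.
fixation = np.array([2.0, 0.0])    # control: viewing a cross (+)
view_words = np.array([2.8, 1.5])  # visual cortex responds MORE to words
                                   # (an interaction: pure insertion violated)

diff = view_words - fixation
# The residual in the visual cortex is an interaction effect, but the
# subtraction cannot distinguish it from genuine word-recognition
# activity, which is what makes the imaging data ambiguous.
print(diff)
```

The nonzero visual-cortex difference here would wrongly be attributed to word recognition if pure insertion were assumed.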