Mitchell, Just, and colleagues (2008) joined machine learning with fMRI to produce a striking result. They started with a corpus of roughly a trillion words from published sources. From this corpus they found that words had sets of frequent associations: for the word "celery," for instance, frequent associations included "eat" and "taste." The researchers trained a computational model to learn such semantic associations from the corpus. The next stage was to take particular target words and discover which areas of the brain were activated by their frequently associated words. As shown in the diagram, the word "celery" had a set of semantic associations, each of which activated a particular set of brain areas.
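The idea of "frequent associations" can be made concrete with a toy sketch: count how often other words appear near a target word in a corpus. This is a drastic simplification (the actual study derived features from a trillion-word corpus; the tiny corpus and window size here are illustrative assumptions):

```python
from collections import Counter

# Toy corpus standing in for the trillion-word text collection (assumption).
corpus = (
    "you eat celery and taste celery "
    "people eat bread and smell bread "
    "you see a dog run and hear a dog"
).split()

def cooccurrence(target, window=2):
    """Count words appearing within `window` positions of `target`."""
    counts = Counter()
    for i, word in enumerate(corpus):
        if word == target:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

print(cooccurrence("celery"))
```

Even in this tiny corpus, "eat" and "taste" emerge as the strongest neighbors of "celery," which is the kind of statistical association the researchers' model learned at scale.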
Mitchell, Just, et al. then conducted experiments with nine participants. They chose 60 words, five from each of 12 semantic categories: animals, body parts, food, clothing, and so on. Each participant, in an fMRI machine, was asked to view each of the 60 words as a word-picture pair, and to view each pair six times. In this way, fMRI activation patterns were obtained from each of the nine participants for each of the 60 word-picture pairs. Next, the researchers chose a manageable set of semantic associations to mediate between the word-picture pairs and the brain areas they activated. This set consisted of 25 sensory-motor verbs likely to be associated with the 60 target words, including "see," "hear," "listen," "taste," "run," and "push." For each participant, a computational model was then trained to associate these 25 verbs with that participant's brain activation patterns for 58 of the original 60 word-picture pairs. The test was whether these models could predict the participant's activation patterns for the remaining two word-picture pairs. The models did so at significantly above-chance levels.
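The training-and-prediction procedure can be sketched as a linear model: each voxel's activation for a word is modeled as a weighted sum of that word's 25 semantic-feature values, with weights fit on 58 words and tested on the held-out two. The data below are synthetic and the dimensions and evaluation details are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 60, 25, 500  # illustrative sizes

# Synthetic stand-ins: semantic features per word, and "brain" responses
# generated from hidden weights plus noise (assumption, for demonstration).
features = rng.random((n_words, n_features))
true_w = rng.normal(size=(n_features, n_voxels))
activations = features @ true_w + 0.1 * rng.normal(size=(n_words, n_voxels))

# Leave-two-out: fit per-voxel weights on 58 words by least squares.
train, held_out = slice(0, 58), slice(58, 60)
w, *_ = np.linalg.lstsq(features[train], activations[train], rcond=None)
predicted = features[held_out] @ w

def cosine(a, b):
    """Cosine similarity between two activation vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Score: does each held-out image match its own predicted image more
# closely than the other word's prediction?
correct = sum(
    int(max((0, 1), key=lambda k: cosine(predicted[k],
                                         activations[held_out][i])) == i)
    for i in range(2)
)
print(correct, "of 2 held-out images matched")
```

Chance performance on this two-alternative matching task is 50 percent, which is why consistently correct matching across many leave-two-out folds counts as significantly above-chance prediction.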
Towards the end of their paper, Mitchell et al. say:
the neural encodings that represent concrete objects are at least partly shared across individuals, based on evidence that it is possible to identify which of several items a person is viewing, through only their fMRI image and a classifier model trained from other people.
So we humans not only think similar thoughts by means of concepts to which culturally shared words can point; the similarity extends to shared patterns of brain activation. If we could transmit our thoughts, telepathy might be possible after all.
Mitchell, T. M., Shinkareva, S. V., Carlson, A., Chang, K.-M., Malave, V. L., & Just, M. A. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320, 1191-1195.